TheLostSwede
News Editor
TrendForce's latest report on enterprise SSDs reveals that surging AI demand has led AI server customers to significantly increase their enterprise SSD orders over the past two quarters. Upstream suppliers have been accelerating process upgrades and planning 2YY-layer products, slated to enter mass production in 2025, in order to meet the growing demand for SSDs in AI applications.
TrendForce observes that increased orders for enterprise SSDs from AI server customers have driven contract prices for this category up by over 80% from 4Q23 to 3Q24. SSDs play a crucial role in AI development. In AI model training, SSDs primarily store model parameters, including the evolving weights and biases.
Another key application of SSDs is checkpointing: periodically saving AI model training progress so that training can resume from a specific point after an interruption. Because these functions rely heavily on fast data transfer and high write endurance, customers typically opt for 4 TB or 8 TB TLC SSDs to meet the demanding requirements of the training process.
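As a concrete illustration of the checkpointing workload described above, here is a minimal sketch in PyTorch (the report does not name a framework; the model, file name, and step count here are hypothetical). Each save is one large sequential write to the SSD, which is why write endurance matters:

```python
import torch
import torch.nn as nn

# Toy model and optimizer standing in for a large AI model; in real
# training the saved state runs to many gigabytes, which is why fast,
# high-endurance SSDs matter for checkpoint writes.
model = nn.Linear(128, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

def save_checkpoint(path, step):
    # Persist the weights/biases plus optimizer state so training can
    # resume from this exact point after an interruption.
    torch.save({
        "step": step,
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }, path)

def load_checkpoint(path):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["step"]  # step to resume training from

save_checkpoint("ckpt_step_1000.pt", step=1000)
resume_step = load_checkpoint("ckpt_step_1000.pt")
```

For large models, each checkpoint can reach hundreds of gigabytes or more, so these writes exercise a drive far harder than this toy example suggests.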
TrendForce points out that SSDs used in AI inference servers assist in adjusting and optimizing AI models during the inference process. Notably, SSDs can update data in real time to fine-tune inference model outcomes. AI inference primarily provides retrieval-augmented generation (RAG) and LLM services, and SSDs store the reference documents and knowledge bases that RAG and LLMs draw on to generate more informative responses. Additionally, as more generated output takes the form of video or images, storage requirements grow accordingly, making high-capacity TLC/QLC SSDs of 16 TB or larger the preferred choice for AI inference applications.
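To make the storage side of RAG concrete, here is a minimal, self-contained retrieval sketch in plain Python. The documents, the toy bag-of-words "embedding", and the query are all hypothetical; a real deployment would use a neural embedding model and a vector index, with the corpus and index residing on the high-capacity SSDs the report describes:

```python
from collections import Counter
import math

# Tiny corpus standing in for the reference documents and knowledge
# bases that, per the report, are kept on enterprise SSDs.
documents = [
    "Enterprise SSDs store checkpoints during AI model training.",
    "Retrieval-augmented generation fetches reference documents at inference time.",
    "QLC NAND enables very high-capacity drives for read-heavy workloads.",
]

def embed(text):
    # Toy bag-of-words "embedding"; real systems use neural embeddings,
    # but the retrieval logic is structurally the same.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# In production this index lives on SSD and is read at query time.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda d: cosine(q, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved passage would be prepended to the LLM prompt so the
# model can ground its answer in the stored reference material.
print(retrieve("how does retrieval-augmented generation work?"))
```

The point for storage is that the indexed corpus is read at query time, so RAG is a read-heavy workload, an access pattern that suits large QLC drives well.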
Growth rate of AI SSD demand exceeds 60% as suppliers accelerate development of high-capacity products
In 2024, demand in the AI server SSD market for products larger than 16 TB has increased significantly since the second quarter. With the arrival of NVIDIA's H100, H20, and H200 series products, customers have also begun to further boost their orders for 4 TB and 8 TB TLC enterprise SSDs. TrendForce estimates that this year's AI-related SSD procurement capacity will exceed 45 EB. Over the next few years, AI servers are expected to drive an average annual growth rate of over 60% in SSD demand, with AI SSD demand potentially rising from 5% of total NAND Flash consumption in 2024 to 9% in 2025.
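Taking TrendForce's figures at face value, a quick back-of-the-envelope calculation shows what the projected growth rate implies for 2025 volume. All inputs are the report's estimates, so this is illustrative rather than an independent forecast:

```python
# Back-of-the-envelope projection from the figures cited above.
# All inputs are TrendForce's estimates taken at face value.
ai_ssd_2024_eb = 45    # estimated 2024 AI-related SSD procurement (EB)
annual_growth = 0.60   # projected average annual growth rate

ai_ssd_2025_eb = ai_ssd_2024_eb * (1 + annual_growth)
print(f"Implied 2025 AI SSD demand: {ai_ssd_2025_eb:.0f} EB")  # ~72 EB
```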
AI inference servers will continue to adopt high-capacity SSD products. Suppliers have already begun accelerating process upgrades, aiming for mass production of 2YY/3XX-layer products from 1Q25 and, eventually, 120 TB enterprise SSDs.
View at TechPowerUp Main Site | Source