News Posts matching #PCIe

MSI Unveils Z890 Motherboard Lineup With MEG, MPG, MAG, and PRO Series

MSI proudly introduces the next generation of motherboards, designed to pair seamlessly with Intel's brand-new Core Ultra processors. The Z890 MEG Series, MPG Series, MAG Series, and PRO Series motherboards are designed to harness the power of Intel's latest technology, delivering enhanced AI computing efficiency compared to previous generations. These boards provide exceptional performance with advanced features, MSI innovations, premium components, and improved connectivity. But what truly sets them apart is their user-friendly DIY solutions, empowering you to take control of your computing experience. Built with gamers, creators, and all users in mind, MSI's Z890 motherboards offer the very best in computing power. At launch, five processor models are supported: the Intel Core Ultra 9 285K, Intel Core Ultra 7 265K / 265KF, and Intel Core Ultra 5 245K / 245KF.

MSI's Z890 motherboards are designed to push performance boundaries, and with a completely redesigned, more intuitive BIOS, users can now maximize motherboard performance with ease. Powered by MSI's Ultra Engine, memory design optimization enables overclocking up to 9200+ MT/s across the entire Z890 motherboard lineup. And the returning MEG Z890 UNIFY-X, built with overclocking in mind, takes memory frequencies even higher, reaching up to 9600 MT/s. The Z890 motherboards incorporate a number of powerful features like AI Boost with the ability to overclock the NPU further for AI enhancement capabilities, Game Boost, Performance Preset, enhanced XMP Memory Profiles, and Memory Try It!, allowing users to elevate their system's performance to new heights.
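
For context on those transfer rates, here is a hedged back-of-the-envelope calculation of the peak bandwidth a dual-channel DDR5 setup would reach at 9200-9600 MT/s. This is generic DDR5 arithmetic, not an MSI figure; the channel count and bus width are standard desktop assumptions.

```python
# Peak theoretical bandwidth for a dual-channel DDR5 configuration (generic
# arithmetic, not an MSI specification). Each DDR5 channel presents a 64-bit
# (8-byte) data path.

def peak_bandwidth_gbps(mt_per_s: int, channels: int = 2, bytes_per_channel: int = 8) -> float:
    """Transfers per second times bytes moved per transfer, in GB/s."""
    return mt_per_s * 1e6 * channels * bytes_per_channel / 1e9

for rate in (9200, 9600):
    print(f"{rate} MT/s -> ~{peak_bandwidth_gbps(rate):.0f} GB/s peak")  # ~147 GB/s and ~154 GB/s
```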

Astera Labs Introduces New Portfolio of Fabric Switches Purpose-Built for AI Infrastructure at Cloud-Scale

Astera Labs, Inc., a global leader in semiconductor-based connectivity solutions for AI and cloud infrastructure, today announced a new portfolio of fabric switches, including the industry's first PCIe 6 switch, built from the ground up for demanding AI workloads in accelerated computing platforms deployed at cloud-scale. The Scorpio Smart Fabric Switch portfolio is optimized for AI dataflows to deliver maximum predictable performance per watt, high reliability, easy cloud-scale deployment, reduced time-to-market, and lower total cost of ownership.

The Scorpio Smart Fabric Switch portfolio features two application-specific product lines with a multi-generational roadmap:
  • Scorpio P-Series for GPU-to-CPU/NIC/SSD PCIe 6 connectivity - architected to support mixed traffic head-node connectivity across a diverse ecosystem of PCIe hosts and endpoints.
  • Scorpio X-Series for back-end GPU clustering - architected to deliver the highest back-end GPU-to-GPU bandwidth with platform-specific customization.

Apacer Delivers Unrivaled Reliability in SSD and DRAM Solutions for Gaming Applications

Apacer is thrilled to unveil its latest advancements in data security, reliability, and integrity for casino gaming systems at the Global Gaming Expo (G2E), taking place in the Expo Hall from October 8th to 10th, 2024. We invite you to visit us at booth 5221 to explore our state-of-the-art SSD and DRAM solutions, meticulously designed to address the demanding needs of the gaming industry.

At G2E 2024, Apacer will highlight a range of innovative products tailored to the unique requirements of gaming applications:

Seasonic Launches the New PRIME PX-2200 Power Supply

After a successful product introduction during Computex 2024, Sea Sonic is now launching the sales of the new PRIME PX-2200 power supply. The Seasonic PRIME Series represents Sea Sonic's pinnacle of engineering, blending cutting-edge design with premium materials and advanced production processes. The latest models showcase digital fan control, modern interleaved PFC (Power Factor Correction), and a patented dual-transformer design running in parallel.

Seasonic's Latest Power Milestone Unveiled
The PRIME PX-2200 stands out above all other PRIME units by being capable of delivering an impressive 2200 watts of power at 80 Plus Platinum (EU 230 V) efficiency levels. Catering to a wide range of power users, this ATX 3.1 and PCIe 5.1 compatible powerhouse delivers top-quality power combined with long-term reliability for various high-performance applications.
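
As a rough illustration of what 2200 W of DC output implies at the wall, the sketch below estimates AC draw and input current. The 91% full-load efficiency is an assumed Platinum-class approximation, not a Seasonic specification; it also shows why units of this class are rated for 230 V mains.

```python
# Back-of-the-envelope wall-side estimate for a 2200 W PSU at full load.
# The efficiency figure is an assumption (Platinum-class at 100% load), not a spec.

DC_OUTPUT_W = 2200           # rated continuous DC output
ASSUMED_EFFICIENCY = 0.91    # assumption for an 80 Plus Platinum unit at full load
MAINS_VOLTAGE_V = 230        # EU mains

ac_input_w = DC_OUTPUT_W / ASSUMED_EFFICIENCY
input_current_a = ac_input_w / MAINS_VOLTAGE_V

print(f"Estimated AC input:      ~{ac_input_w:.0f} W")       # ~2418 W
print(f"Estimated input current: ~{input_current_a:.1f} A")  # ~10.5 A on a 230 V circuit
```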

Samsung Starts Mass Production of PCIe 5.0 PM9E1 SSD

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today announced it has begun mass producing the PM9E1, a PCIe 5.0 SSD with the industry's highest performance and largest capacity. Built on its in-house 5-nanometer (nm)-based controller and eighth-generation V-NAND (V8) technology, the PM9E1 will provide powerful performance and enhanced power efficiency, making it an optimal solution for on-device AI PCs. Key attributes in SSDs, including performance, storage capacity, power efficiency and security, have all been improved compared to its predecessor (PM9A1a).

"Our PM9E1 integrated with a 5 nm controller delivers industry-leading power efficiency and utmost performance validated by our key partners," said YongCheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. "In the rapidly growing on-device AI era, Samsung's PM9E1 will offer a robust foundation for global customers to effectively plan their AI portfolios."

Innodisk Unveils Advanced CXL Memory Module to Power AI Servers

Innodisk, a leading global AI solution provider, continues to push the boundaries of innovation with the launch of its cutting-edge Compute Express Link (CXL) Memory Module, which is designed to meet the rapid growth demands of AI servers and cloud data centers. As one of the few module manufacturers offering this technology, Innodisk is at the forefront of AI and high-performance computing.

The demand for AI servers is rising quickly, with these systems expected to account for approximately 65% of the server market by 2024, according to Trendforce (2024). This growth has created an urgent need for greater memory bandwidth and capacity, as AI servers now require at least 1.2 TB of memory to operate effectively. Traditional DDR memory solutions are increasingly struggling to meet these demands, especially as the number of CPU cores continues to multiply, leading to challenges such as underutilized CPU resources and increasing latency between different protocols.

ASUS Announces ESC N8-E11 AI Server with NVIDIA HGX H200

ASUS today announced the latest marvel in the groundbreaking lineup of ASUS AI servers - ESC N8-E11, featuring the intensely powerful NVIDIA HGX H200 platform. With this AI titan, ASUS has secured its first industry deal, showcasing the exceptional performance, reliability and desirability of ESC N8-E11 with HGX H200, as well as the ability of ASUS to move first and fast in creating strong, beneficial partnerships with forward-thinking organizations seeking the world's most powerful AI solutions.

Shipments of the ESC N8-E11 with NVIDIA HGX H200 are scheduled to begin in early Q4 2024, marking a new milestone in the ongoing ASUS commitment to excellence. ASUS has been actively supporting clients by assisting in the development of cooling solutions to optimize overall PUE, guaranteeing that every ESC N8-E11 unit delivers top-tier efficiency and performance - ready to power the new era of AI.

SK hynix Presents Extensive AI Memory Lineup at Expanded FMS 2024

SK hynix has returned to Santa Clara, California to present its full array of groundbreaking AI memory technologies at FMS: the Future of Memory and Storage (FMS) 2024 from August 6-8. Previously known as Flash Memory Summit, the conference changed its name to reflect its broader focus on all types of memory and storage products amid growing interest in AI. Bringing together industry leaders, customers, and IT professionals, FMS 2024 covers the latest trends and innovations shaping the memory industry.

Participating in the event under the slogan "Memory, The Power of AI," SK hynix is showcasing its outstanding memory capabilities through a keynote presentation, multiple technology sessions, and product exhibits.

NVM Express Releases NVMe 2.1 Specifications

NVM Express, Inc. today announced the release of three new specifications and eight updated specifications. This update to NVMe technology builds on the strengths of previous NVMe specifications, introducing significant new features for modern computing environments while also streamlining development and time to market.

"Beginning as a single PCIe SSD specification, NVMe technology has grown into nearly a dozen specifications, including multiple command sets, that provide pivotal support for NVMe technology across all major transports and standardize many aspects of storage," said Peter Onufryk, NVM Express Technical Workgroup Chair. "NVMe technology adoption continues to grow and has succeeded in unifying client, cloud, AI and enterprise storage around a common architecture. The future of NVMe technology is bright and we have 75 new authorized technical proposals underway."

Micron Develops Industry's First PCIe Gen 6 Data Center SSD for Ecosystem Enablement

Micron Technology, Inc., today announced it is the first to develop PCIe Gen 6 data center SSD technology for ecosystem enablement as part of a portfolio of memory and storage products to support the broad demand for AI. Addressing these demands, Raj Narasimhan, senior vice president and general manager of Micron's Compute and Networking Business Unit, will present a keynote at FMS titled, "Data is at the heart of AI: Micron memory and storage are fueling the AI revolution," on Wednesday, Aug. 7, at 11:00 a.m. Pacific time. The session will focus on how Micron's industry-leading products are impacting AI system architectures while enabling faster and more power-efficient solutions to manage vast data sets.

At FMS, Micron will demonstrate that it is the first to develop a PCIe Gen 6 SSD for ecosystem enablement, once again showcasing its storage technology leadership. By making this technology — which delivers sequential read bandwidths of over 26 GB/s — available to partners, Micron is kickstarting the PCIe Gen 6 ecosystem. This achievement builds on Micron's recent announcement of the world's fastest data center SSD, the Micron 9550, and further bolsters Micron's leadership position in AI storage.
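
For context, here is a rough sketch of how the quoted 26 GB/s sequential read compares with the raw bandwidth of a PCIe Gen 6 x4 link. This is my own arithmetic under the assumption of a x4 host interface; it ignores FLIT and protocol overhead, so the usable ceiling is somewhat lower than the raw number.

```python
# Rough context for the 26 GB/s figure (an estimate, not a Micron number):
# PCIe 6.0 signals 64 GT/s per lane, so a x4 link carries roughly
# 64 * 4 / 8 = 32 GB/s of raw bits per direction before protocol overhead.

GEN6_GT_PER_LANE = 64   # PCIe 6.0 transfer rate per lane
LANES = 4               # assumption: a typical x4 data-center SSD link

raw_gb_per_s = GEN6_GT_PER_LANE * LANES / 8   # ~32 GB/s per direction, raw
demonstrated = 26.0                           # GB/s sequential read quoted above

print(f"Raw x4 link bandwidth:   ~{raw_gb_per_s:.0f} GB/s per direction")
print(f"Demonstrated read speed uses ~{demonstrated / raw_gb_per_s:.0%} of that raw figure")
```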

Silicon Motion Launches Power Efficient PCIe Gen 5 SSD Controller

Silicon Motion Technology Corporation, a global leader in designing and marketing NAND flash controllers for solid-state storage devices, today announced the SM2508, the most power-efficient PCIe Gen 5 NVMe 2.0 client SSD controller for AI PCs and gaming consoles. It is the world's first PCIe Gen 5 client SSD controller built on TSMC's 6 nm EUV process, offering a 50% reduction in power consumption compared to competing offerings built on 12 nm processes. With less than 7 W power consumption for the entire SSD, it delivers 1.7x better power efficiency than PCIe Gen 4 SSDs and up to 70% better than current competing PCIe Gen 5 offerings on the market. Silicon Motion will be showcasing its SM2508-based SSD design and other innovations during the Future of Memory and Storage event from Aug. 6 to 8 at booth #315.

Silicon Motion's SM2508 is a high-performance, low-power PCIe Gen 5 x4 NVMe 2.0 SSD controller designed for AI-capable PC notebooks. It supports eight NAND channels at up to 3,600 MT/s per channel, delivering sequential read and write speeds of up to 14.5 GB/s and 13.6 GB/s respectively, and random performance of up to 2.5M IOPS, providing up to 2x higher performance than PCIe Gen 4 products. The SM2508 maximizes PCIe Gen 5 performance with an impressive power consumption of approximately 3 W. It features Silicon Motion's proprietary 8th-generation NANDXtend technology, which includes an on-disk training algorithm designed to reduce ECC timing. This enhancement boosts performance and maximizes power efficiency while ensuring compatibility with the latest 3D TLC/QLC NAND technologies, enabling higher data density and meeting the evolving demands of next-generation AI PCs.
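
A quick sanity check of those headline figures, using my own arithmetic rather than Silicon Motion's data: NAND channels use an 8-bit bus, so MT/s maps roughly to MB/s per channel, and the host link rather than the NAND array ends up as the sequential bottleneck.

```python
# Hedged sanity check of the SM2508 headline numbers (generic arithmetic, not vendor data).

CHANNELS = 8
NAND_MT_PER_S = 3600                                   # per-channel interface speed quoted above
nand_aggregate_gbps = CHANNELS * NAND_MT_PER_S / 1000  # ~28.8 GB/s of raw NAND-side bandwidth

# PCIe 5.0 x4 host link: 32 GT/s * 4 lanes / 8 bits ~= 16 GB/s per direction (before overhead)
pcie5_x4_gbps = 32 * 4 / 8

quoted_seq_read = 14.5                                 # GB/s from the announcement
print(f"Aggregate NAND-side bandwidth: ~{nand_aggregate_gbps:.1f} GB/s")
print(f"PCIe 5.0 x4 link ceiling:      ~{pcie5_x4_gbps:.1f} GB/s per direction")
print(f"Quoted sequential read:         {quoted_seq_read} GB/s (just under the link ceiling)")
```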

Pineboards Launches AI Bundle Hailo 8L Raspberry Pi HAT+ with NVMe SSD Support

It feels like only a few days have passed since we announced the HatDrive! Nano, and there is much more in the pipeline that we're excited to share with you! Today, though, we have our Pineboards AI Bundle (Hailo 8L) to whet your ever-hungry AI appetites, and we think you're going to love it.

Combining an M.2 2280 M-Key NVMe connection with an M.2 2230 A/E-Key connection pre-loaded with a Hailo-8L on a bottom-mounted Raspberry Pi 5 HAT enables you to get your AI fix whilst also being able to boot and make use of fast NVMe storage. This builds on the success of our ever-popular Raspberry Pi 5 AI HAT and Google Coral combinations, but massively bumps the processing power, enabling you to do so much more!

Micron Announces Volume Production of Ninth-Generation NAND Flash Technology

Micron Technology, Inc., announced today that it is shipping ninth-generation (G9) TLC NAND in SSDs, making it the first in the industry to achieve this milestone. Micron G9 NAND features the industry's highest transfer speed of 3.6 GB/s, delivering unsurpassed bandwidth for reading and writing data. The new NAND enables best-in-class performance for artificial intelligence (AI) and other data-intensive use cases from personal devices and edge servers to enterprise and cloud data centers.

"The shipment of Micron G9 NAND is a testament to Micron's prowess in process technology and design innovations," said Scott DeBoer, executive vice president of Technology and Products at Micron. "Micron G9 NAND is up to 73% denser than competitive technologies in the market today, allowing for more compact and efficient storage solutions that benefit both consumers and businesses."

Alphawave Semi Launches Industry's First 3nm UCIe IP with TSMC CoWoS Packaging

Alphawave Semi, a global leader in high-speed connectivity and compute silicon for the world's technology infrastructure, has announced the industry's first successful 3 nm silicon bring-up of Universal Chiplet Interconnect Express (UCIe) Die-to-Die (D2D) IP with TSMC's Chip-on-Wafer-on-Substrate (CoWoS) advanced packaging technology.

The complete PHY and Controller subsystem was developed in collaboration with TSMC and targets applications such as hyperscaler, high-performance computing (HPC) and artificial intelligence (AI).

Micron Introduces 9550 NVMe Data Center SSD

Micron Technology, Inc., today announced availability of the Micron 9550 NVMe SSD - the world's fastest data center SSD and industry leader in AI workload performance and power efficiency. The Micron 9550 SSD showcases Micron's deep expertise and innovation by integrating its own controller, NAND, DRAM and firmware into one world-class product. This integrated solution enables class-leading performance, power efficiency and security features for data center operators.

The Micron 9550 SSD delivers best-in-class performance with 14.0 GB/s sequential reads and 10.0 GB/s sequential writes to provide up to 67% better performance over similar competitive SSDs and enables industry-leading performance for demanding workloads such as AI. In addition, its random reads of 3,300 KIOPS are up to 35% better and random writes of 400 KIOPS are up to 33% better than competitive offerings.

OpenAI in Talks with Broadcom About Developing Custom AI Chips to Power Next Generation Models

According to The Information, OpenAI is reportedly in talks with Broadcom about developing a custom AI accelerator to power OpenAI's growing demand for high-performance solutions. Broadcom is a fabless chip designer known for a wide range of silicon solutions spanning networking, PCIe, SSD controllers, and PHYs, all the way up to custom ASICs. The latter is what OpenAI wants to focus on, but all of the aforementioned IP developed by Broadcom is of use in a data center. Should OpenAI decide to use Broadcom's solutions, the fabless silicon designer offers a complete vertical stack of products: inter-system communication using various protocols such as PCIe, system-to-system communication using Ethernet networking with Broadcom Tomahawk 6 and future revisions, alongside storage solutions and many other complementary elements of a data center.

As a company skilled in making various IPs, Broadcom also makes ASIC solutions for other companies and has assisted Google in making its Tensor Processing Unit (TPU), which is now in its sixth generation. Google TPUs are massively successful, as Google deploys millions of them and provides AI solutions to billions of users across the globe. Now, OpenAI wants to be part of the AI chip game, and Broadcom could come to the rescue with its already-established AI success and various other data center componentry, helping build a custom AI accelerator to power OpenAI's infrastructure for the next generation of AI models. With each new AI model released by OpenAI, compute demand spikes by several orders of magnitude, and having an AI accelerator that exactly matches its needs will help the company move faster and run even bigger AI models.

Tenstorrent Launches Next Generation Wormhole-based Developer Kits and Workstations

Tenstorrent is launching its next-generation Wormhole chip in PCIe cards and workstations designed for developers interested in scalable multi-chip development using Tenstorrent's powerful open-source software stacks.

These Wormhole-based cards and systems are now available for immediate order on tenstorrent.com:
  • Wormhole n150, powered by a single processor
  • Wormhole n300, powered by two processors
  • TT-LoudBox, a developer workstation powered by four Wormhole n300s (eight processors)

Qualitas Semiconductor Develops First In-House PCIe 6.0 PHY IP

Qualitas Semiconductor Co., Ltd. has developed a new PCIe 6.0 PHY IP, marking a significant advance in computer interconnect technology. This new product, created using an advanced 5 nm process, is designed to meet the high-speed data transfer needs of the AI era. Qualitas' PCIe PHY IP, built on 5 nm FinFET CMOS technology, consists of a hard-macro PMA and a PCS compliant with the PCIe Base 6.0 specification.

The PCIe 6.0 PHY IP can achieve transmission speeds up to 64 GT/s per lane. When using all 16 lanes, it can transfer data at rates up to 256 GB/s. These speeds make it well-suited for data centers and self-driving car technologies, where rapid data processing is essential. Qualitas achieved this performance by implementing 100G PAM4 signaling technology. Highlighting the importance of the new IP, Qualitas CEO Dr. Duho Kim signaled the company's intent to continue pushing boundaries in semiconductor technology.
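
The 256 GB/s headline can be reproduced from the per-lane rate; the sketch below is my own arithmetic and assumes the figure counts traffic in both directions of a x16 link, before protocol overhead.

```python
# Reproducing the 256 GB/s figure from the per-lane rate (own arithmetic, not a
# Qualitas number): 64 GT/s per lane is 64 Gb/s of raw signaling per direction.

gt_per_lane = 64
lanes = 16

per_direction_gbps = gt_per_lane * lanes / 8   # 128 GB/s each way, before overhead
aggregate_gbps = per_direction_gbps * 2        # ~256 GB/s counting both directions

print(per_direction_gbps, aggregate_gbps)      # 128.0 256.0
```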

NVIDIA GeForce RTX 50 Series "Blackwell" TDPs Leaked, All Powered by 16-Pin Connector

In the preparation season for NVIDIA's upcoming GeForce RTX 50 Series of GPUs, codenamed "Blackwell," one power supply manufacturer has accidentally leaked the power configurations of all SKUs. Seasonic operates its own power supply wattage calculator, allowing users to configure their systems online and get power supply recommendations, which means its database is often populated with upcoming CPU/GPU SKUs to accommodate the massive variety of components. This time it lists the upcoming GeForce RTX 50 series, from the RTX 5050 all the way up to the top RTX 5090 GPU. Starting with the GeForce RTX 5050, this SKU is expected to carry a 100 W TDP. Its bigger brother, the RTX 5060, bumps the TDP to 170 W, 55 W higher than the previous-generation "Ada Lovelace" RTX 4060.

The GeForce RTX 5070, with a 220 W TDP, is in the middle of the stack, featuring a 20 W increase over the Ada generation. For higher-end SKUs, NVIDIA prepared the GeForce RTX 5080 and RTX 5090, with 350 W and 500 W TDP, respectively. This also represents a jump in TDP from Ada generation with an increase of 30 W for RTX 5080 and 50 W for RTX 5090. Interestingly, this time NVIDIA wants to unify the power connection system of the entire family with a 16-pin 12V-2x6 connector but with an updated PCIe 6.0 CEM specification. The increase in power requirements for the "Blackwell" generation across the SKUs is interesting, and we are eager to see if the performance gains are enough to balance efficiency.
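
To put the leaked figures in context, below is a minimal sketch of how a wattage-calculator-style recommendation could be derived from them. The headroom factor and the non-GPU power figures are illustrative assumptions, not Seasonic's actual methodology.

```python
# Minimal wattage-calculator-style estimate using the leaked TDPs quoted above.
# The CPU/system figures and the 1.4x headroom are assumptions for illustration only.

LEAKED_GPU_TDP_W = {
    "RTX 5050": 100,
    "RTX 5060": 170,
    "RTX 5070": 220,
    "RTX 5080": 350,
    "RTX 5090": 500,
}

def recommend_psu_w(gpu: str, cpu_tdp_w: int = 125, rest_of_system_w: int = 100,
                    headroom: float = 1.4) -> int:
    """Sum component power, apply headroom, round up to the next 50 W step."""
    total = (LEAKED_GPU_TDP_W[gpu] + cpu_tdp_w + rest_of_system_w) * headroom
    return int(-(-total // 50) * 50)

for gpu in LEAKED_GPU_TDP_W:
    print(gpu, recommend_psu_w(gpu), "W")   # e.g. RTX 5090 -> 1050 W with these assumptions
```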

Panmnesia Uses CXL Protocol to Expand GPU Memory with Add-in DRAM Card or Even SSD

South Korean startup Panmnesia has unveiled an interesting solution to address the memory limitations of modern GPUs. The company has developed a low-latency Compute Express Link (CXL) IP that could help expand GPU memory with an external add-in card. Current GPU-accelerated applications in AI and HPC are constrained by the set amount of memory built into GPUs. With data sizes growing by 3x yearly, GPU networks must keep getting larger just to fit the application in local memory, benefiting latency and token generation. Panmnesia's proposed approach to fix this leverages the CXL protocol to expand GPU memory capacity using PCIe-connected DRAM or even SSDs. The company has overcome significant technical hurdles, including the absence of CXL logic fabric in GPUs and the limitations of existing unified virtual memory (UVM) systems.

At the heart of Panmnesia's solution is a CXL 3.1-compliant root complex with multiple root ports and a host bridge featuring a host-managed device memory (HDM) decoder. This sophisticated system effectively tricks the GPU's memory subsystem into treating PCIe-connected memory as native system memory. Extensive testing has demonstrated impressive results. Panmnesia's CXL solution, CXL-Opt, achieved two-digit-nanosecond round-trip latency, significantly outperforming both UVM and earlier CXL prototypes. In GPU kernel execution tests, CXL-Opt showed execution times up to 3.22 times faster than UVM. Older CXL memory extenders recorded around 250 nanoseconds of round-trip latency, while CXL-Opt potentially achieves less than 80 nanoseconds. As is usual with CXL, the concern is that memory pools add latency and degrade performance, while CXL extenders also add to the cost model. However, Panmnesia's CXL-Opt could find a use case, and we are waiting to see if anyone adopts it in their infrastructure.
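
As a toy illustration of why that round-trip number matters, the sketch below applies the two quoted latencies to a batch of expander-served accesses. The access count and the assumption that the accesses are fully serialized (dependent) are mine, not Panmnesia benchmark data.

```python
# Toy model: time spent on memory accesses served by the CXL expander, if they
# were fully serialized. Access count is an assumption; latencies are the figures
# quoted above, taken at face value.

ROUND_TRIP_NS = {
    "prior CXL expander": 250,   # ~250 ns round trip
    "Panmnesia CXL-Opt":  80,    # "less than 80 ns" round trip
}

external_accesses = 1_000_000    # assumption: dependent accesses hitting expanded memory

for name, ns in ROUND_TRIP_NS.items():
    total_ms = external_accesses * ns / 1e6
    print(f"{name}: ~{total_ms:.0f} ms for {external_accesses:,} serialized accesses")
```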

Intel Demonstrates First Fully Integrated Optical IO Chiplet

Intel Corporation has achieved a revolutionary milestone in integrated photonics technology for high-speed data transmission. At the Optical Fiber Communication Conference (OFC) 2024, Intel's Integrated Photonics Solutions (IPS) Group demonstrated the industry's most advanced and first-ever fully integrated optical compute interconnect (OCI) chiplet co-packaged with an Intel CPU and running live data. Intel's OCI chiplet represents a leap forward in high-bandwidth interconnect by enabling co-packaged optical input/output (I/O) in emerging AI infrastructure for data centers and high performance computing (HPC) applications.

"The ever-increasing movement of data from server to server is straining the capabilities of today's data center infrastructure, and current solutions are rapidly approaching the practical limits of electrical I/O performance. However, Intel's groundbreaking achievement empowers customers to seamlessly integrate co-packaged silicon photonics interconnect solutions into next-generation compute systems. Our OCI chiplet boosts bandwidth, reduces power consumption and increases reach, enabling ML workload acceleration that promises to revolutionize high-performance AI infrastructure," said Thomas Liljeberg, senior director, Product Management and Strategy, Integrated Photonics Solutions (IPS) Group.

Realtek is Aiming to Make 5 Gbps Ethernet Switches More Affordable with New Platform

At Computex, Realtek was showing off a new 5 Gbps switch platform which is set to bring much more affordable high-speed Ethernet switches to the consumer market. At the core of the new switch platform sits Realtek's RTL9303, an eight-port 10 Gbps switch controller. This was released a few years ago as a low-cost 10 Gbps switch IC, but as it still required third-party PHYs, it never really took off. The RTL9303 is built around an 800 MHz MIPS 34Kc CPU and supports up to 1 GB of DDR3 RAM as well as 64 MB of SPI NOR Flash for the firmware.

When combined with Realtek's RTL8251B 5 Gbps PHY, the end result is a comparatively low-cost 5 Gbps switch. According to AnandTech, Realtek is expecting a US$25 price per port, which is only about $10 more per port than your typical 2.5 Gbps switch today, even though some are as little as US$10 per port. When combined with a Realtek RTL8126 PCIe-based 5 Gbps NIC, which retails from around US$30, 5 Gbps Ethernet looks like a very sensible option in terms of price/performance. Admittedly, 2.5 Gbps Ethernet cards can be had for as little as $13, but they started out at a higher price point than what 5 Gbps NICs are already selling for. Meanwhile, 10 Gbps NICs are still stuck at around US$80-90, with switches in most cases costing at least US$45 per port, but often a lot more. 5 Gbps Ethernet also has the advantage of being able to operate on CAT 5e cabling at up to 60 metres and CAT 6 cabling at up to 100 metres, which means there's no need to replace older cabling to benefit from it.
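
Working through that price/performance claim with the rough street prices quoted above (switch cost per port plus one NIC per endpoint; these are the article's ballpark numbers, and real prices will vary):

```python
# Cost per connected endpoint and per Gbps, using the rough figures quoted above.

options = {
    # name: (switch USD per port, NIC USD, link speed in Gbps)
    "2.5 GbE":                (15, 13, 2.5),
    "5 GbE (Realtek target)": (25, 30, 5),
    "10 GbE":                 (45, 85, 10),
}

for name, (port_usd, nic_usd, gbps) in options.items():
    total = port_usd + nic_usd
    print(f"{name}: ${total} per endpoint, ~${total / gbps:.1f} per Gbps")
# 2.5 GbE ~ $11.2/Gbps, 5 GbE ~ $11.0/Gbps, 10 GbE ~ $13.0/Gbps with these numbers
```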

Zephyr Unveils ITX-sized Sakura Blizzard RTX 4070 Graphics Card

PC enthusiasts who crave powerful graphics in compact systems have a new option from Zephyr. The Chinese manufacturer has launched the world's first ITX-sized GeForce RTX 4070 graphics card. Dubbed the Sakura Blizzard, this GPU packs NVIDIA's AD104-250 chip and 12 GB of GDDR6X memory into a footprint of just 172 x 123 x 42 mm. While slightly taller than standard PCIe brackets, the two-slot cooler should fit most Mini-ITX cases. The card's cute pink shroud and solitary cooling fan give it a unique aesthetic. But don't let the pink looks fool you - Zephyr claims this compact powerhouse can keep the GPU and memory up to 10°C cooler than some dual-fan RTX 4070 designs, which needs to be taken with a grain of salt. Thermal testing videos show the fan spinning at 2,400 RPM to maintain GPU temperatures around 73°C under load in a 25°C room. Meanwhile, synthetic benchmarks reportedly demonstrate no performance compromises versus full-sized RTX 4070 implementations.

Zephyr's initial production run has already sold out in China. However, a second batch is slated for mid-July availability to meet the apparently high demand for small form factor RTX 40-series GPUs. The launch comes just weeks after NVIDIA unveiled new "SFF-ready" design guidelines at Computex 2024. As the power-hungry RTX 40 lineup hit the market, many voiced concerns over the cards' ever-growing dimensions. NVIDIA's renewed SFF PC focus signals that options like the Sakura Blizzard could become more common. For space-constrained enthusiasts, having top-tier GPU muscle in a properly-cooled Mini-ITX card is a big win. Zephyr's ITX-sized RTX 4070 shows powerful things can come in small packages, and we hope more manufacturers follow this philosophy.

Western Digital Introduces New Enterprise AI Storage Solutions and AI Data Cycle Framework

Fueling the next wave of AI innovation, Western Digital today introduced a six-stage AI Data Cycle framework that defines the optimal storage mix for AI workloads at scale. This framework will help customers plan and develop advanced storage infrastructures to maximize their AI investments, improve efficiency, and reduce the total cost of ownership (TCO) of their AI workflows. AI models operate in a continuous loop of data consumption and generation - processing text, images, audio and video among other data types while simultaneously producing new unique data. As AI technologies become more advanced, data storage systems must deliver the capacity and performance to support the computational loads and speeds required for large, sophisticated models while managing immense volumes of data. Western Digital has strategically aligned its Flash and HDD product and technology roadmaps to the storage requirements of each critical stage of the cycle, and today introduced a new industry-leading, high-performance PCIe Gen 5 SSD to support AI training and inference; a high-capacity 64 TB SSD for fast AI data lakes; and the world's highest capacity ePMR, UltraSMR 32 TB HDD for cost-effective storage at scale.

"There's no doubt that Generative AI is the next transformational technology, and storage is a critical enabler. The implications for storage are expected to be significant as the role of storage, and access to data, influences the speed, efficiency and accuracy of AI Models, especially as larger and higher-quality data sets become more prevalent," said Ed Burns, Research Director at IDC. "As a leader in Flash and HDD, Western Digital has an opportunity to benefit in this growing AI landscape with its strong market position and broad portfolio, which meets a variety of needs within the different AI data cycle stages."

Sparkle Presents Streamer 4K60 Video Capture Card at Computex 2024

At Computex 2024, Sparkle has unveiled its latest product aimed at streamers, the Streamer 4K60. This innovative PCIe video capture card is engineered to cater to the demands of high-quality video production. Sparkle has seamlessly integrated many advanced features into this remarkable device, ensuring that content creators and streamers can unleash their creativity without compromise. The Streamer 4K60 boasts the ability to capture ultra-high-definition video at a smooth 60 frames per second in 4K resolution or even 120 frames per second in 1080p resolution. Its dual HDMI input support facilitates video passthrough and enables users to explore the realms of Picture-in-Picture (PiP) and cooperative streaming functionalities. Furthermore, this cutting-edge device effortlessly combines HDR10 support, a robust metal cover design for durability, and comprehensive software compatibility with popular platforms like OBS and XSplit. For connection to the PC, it requires a PCIe 2.0 x1 connector. For video passthrough and video input, HDMI 2.1 is used. The card is priced at 299 USD and will be available in late July or early August.
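
One detail worth a back-of-the-envelope check is the PCIe 2.0 x1 host interface: an uncompressed 4K60 stream is larger than that link can carry, which suggests the card encodes or compresses the video on-board before it crosses the bus. The pixel format below is an assumption on my part, not a Sparkle specification.

```python
# Can an uncompressed 4K60 stream fit through PCIe 2.0 x1? (Own estimate, not a spec.)

width, height, fps = 3840, 2160, 60
bytes_per_pixel = 1.5          # assumption: 8-bit 4:2:0 chroma subsampling

raw_mb_per_s = width * height * fps * bytes_per_pixel / 1e6   # ~746 MB/s
pcie2_x1_mb_per_s = 500        # ~500 MB/s usable per direction (5 GT/s, 8b/10b encoding)

print(f"Uncompressed 4K60 (8-bit 4:2:0): ~{raw_mb_per_s:.0f} MB/s")
print(f"PCIe 2.0 x1 budget:              ~{pcie2_x1_mb_per_s} MB/s")
# The raw stream exceeds the link, so on-board compression/encoding is implied.
```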