News Posts matching #ASIC


Penetration Rate of Ice Lake CPUs in Server Market Expected to Surpass 30% by Year's End as x86 Architecture Remains Dominant, Says TrendForce

While the server industry transitions to the latest generation of processors based on the x86 platform, the Intel Ice Lake and AMD Milan CPUs entered mass production earlier this year and were shipped to certain customers, such as North American CSPs and telecommunication companies, at a low volume in 1Q21, according to TrendForce's latest investigations. These processors are expected to begin seeing widespread adoption in the server market in 3Q21. TrendForce believes that Ice Lake represents a step-up in computing performance from the previous generation due to its higher scalability and support for more memory channels. Meanwhile, the new normal that emerged in the post-pandemic era is expected to drive clients in the server sector to partially migrate to the Ice Lake platform, whose share in the server market is expected to surpass 30% in 4Q21.

TrendForce: Enterprise SSD Contract Prices Likely to Increase by 15% QoQ for 3Q21 Due to High SSD Demand and Short Supply of Upstream IC Components

The ramp-up of the Intel Ice Lake and AMD Milan processors is expected to not only propel growth in server shipments for two consecutive quarters from 2Q21 to 3Q21, but also drive up the share of high-density products in North American hyperscalers' enterprise SSD purchases, according to TrendForce's latest investigations. In China, procurement activities by domestic hyperscalers Alibaba and ByteDance are expected to increase on a quarterly basis as well. With the labor force gradually returning to physical offices, enterprises are now placing an increasing number of IT equipment orders, including servers, compared to 1H21. Hence, global enterprise SSD procurement capacity is expected to increase by 7% QoQ in 3Q21. Ongoing shortages in foundry capacities, however, have led to the supply of SSD components lagging behind demand. At the same time, enterprise SSD suppliers are aggressively raising the share of high-density products in their offerings in an attempt to optimize their product lines' profitability. Taking these factors into account, TrendForce expects contract prices of enterprise SSDs to undergo a staggering 15% QoQ increase for 3Q21.

New Intel XPU Innovations Target HPC and AI

At the 2021 International Supercomputing Conference (ISC), Intel is showcasing how the company is extending its lead in high performance computing (HPC) with a range of technology disclosures, partnerships and customer adoptions. Intel processors are the most widely deployed compute architecture in the world's supercomputers, enabling global medical discoveries and scientific breakthroughs. Intel is announcing advances in its Xeon processor for HPC and AI as well as innovations in memory, software, exascale-class storage, and networking technologies for a range of HPC use cases.

"To maximize HPC performance we must leverage all the computer resources and technology advancements available to us," said Trish Damkroger, vice president and general manager of High Performance Computing at Intel. "Intel is the driving force behind the industry's move toward exascale computing, and the advancements we're delivering with our CPUs, XPUs, oneAPI Toolkits, exascale-class DAOS storage, and high-speed networking are pushing us closer toward that realization."

Seagate Introduces Groundbreaking Exos CORVAULT Hardware-Based Self-Healing Block Storage System

Seagate, a world leader in data storage infrastructure solutions, launched a uniquely intelligent category of mass-capacity storage designed to streamline data management and reduce human intervention for macro edge and data center environments. The new Exos CORVAULT high-density storage system offers SAN-level performance built on Seagate's breakthrough storage architecture that combines the sixth generation VelosCT ASIC, ADAPT erasure code data protection, and Autonomous Drive Regeneration.

Designed on the Seagate Exos 4U106 12 Gb/s platform, CORVAULT offers "five nines" availability (99.999%) helping to deliver consistently high reliability. The maximum-density 4U chassis accommodates 106 drives in only seven inches (18 cm) of rack space. It is tuned to maximize drive performance by protecting against vibrational and acoustic interference, heat, and power irregularities.

Bosch Unveils One Billion Euro Chip Manufacturing Facility in Germany

Robert Bosch GmbH, commonly known simply as Bosch, has today unveiled the result of the company's biggest investment ever: a one billion Euro manufacturing facility, which roughly translates to 1.2 billion US Dollars. The manufacturing plant is located in Dresden, Germany, and it aims to supply the leading self-driving automobile companies with chips that are in great demand. As the main goal for the plant is to manufacture chips for the automotive industry, this new 7,200 m² Dresden facility is supposed to provide car makers with Application-Specific Integrated Circuits (ASICs) for power management and tasks such as triggering the automatic braking system of cars.

The one billion Euro facility was funded in part by the European Union investment scheme, which contributed as much as 200 million Euros ($243 million). The plan is to begin manufacturing chips for power tools as early as July and to start production of automotive chips in September. All of the chips will be manufactured on 300 mm wafers, which offer a major improvement in output over the 200 and 150 mm wafers currently used by Bosch. The opening of this facility will surely help with the global chip shortages, which have even hit the automotive sector.
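The jump to 300 mm wafers matters because the number of chips per wafer scales roughly with wafer area. A quick back-of-the-envelope sketch (ignoring edge loss and yield differences):

```python
import math

def wafer_area_mm2(diameter_mm):
    """Wafer area; usable chip count scales roughly with this (edge loss ignored)."""
    return math.pi * (diameter_mm / 2) ** 2

gain_vs_200 = wafer_area_mm2(300) / wafer_area_mm2(200)  # 2.25x the area of 200 mm
gain_vs_150 = wafer_area_mm2(300) / wafer_area_mm2(150)  # 4.0x the area of 150 mm
```

In practice the per-chip gain is a bit higher than the raw area ratio, since larger wafers waste proportionally less area at the edge.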

Marvell Launches Industry's First 1.6T Ethernet PHY with 100G PAM4 I/Os in 5nm

Marvell today introduced the industry's first 1.6T Ethernet PHY with 100G PAM4 electrical input/outputs (I/Os) in 5nm. The demand for increased bandwidth in the data center to support massive data growth is driving the transition to 1.6T (Terabits per second) in the Ethernet backbone. 100G serial I/Os play a critical role in the cloud infrastructure to help move data across compute, networking and storage in a power-efficient manner. The new Marvell Alaska C PHY is designed to accelerate the transition to 100G serial interconnects and doubles the bandwidth speeds of the previous generation of PHYs to bring scalability for performance-critical cloud workloads and applications such as artificial intelligence and machine learning.

Marvell's 1.6T Ethernet PHY solution, the 88X93160, enables next-generation 100G serial-based 400G and 800G Ethernet links for high-density switches. The doubling of the signaling rate creates signal integrity challenges, driving the need for retimer devices for high port count switch designs. It's critical that retimer and gearboxes used for these applications are extremely power efficient. Implemented in the latest 5nm node, the Marvell 800GbE PHY provides a 40% savings in I/O power compared to existing 50G PAM4 based I/Os.

TSMC to Execute Bitmain's Orders for 5nm Crypto-Mining ASICs from Q3-2021

TSMC will be manufacturing next-generation 5 nm ASICs for Bitmain. The company designs purpose-built machines for mining crypto-currency, using ASICs. DigiTimes reports that 5 nm volume production could kick off from Q3-2021. Bitmain's latest Antminer ASIC-based mining machines announced last month were purported to be up to 32 times faster than a GeForce RTX 3080 at mining Ethereum. Recent history has shown that whenever ASICs catch up to or beat GPUs at mining, prices of GPUs tend to drop. With no 5 nm GPUs on the horizon for Q3-2021, one can reasonably expect market pressure from crypto-miners to drop off when Antminers gain traction.

Xilinx Reports Fiscal Fourth Quarter and Fiscal Year 2021 Results

Xilinx, Inc. (Nasdaq: XLNX), the leader in adaptive computing, today announced record revenues of $851 million for the fiscal fourth quarter, up 6% over the previous quarter and an increase of 13% year over year. Fiscal 2021 revenues were $3.15 billion, largely flat from the prior fiscal year. GAAP net income for the fiscal fourth quarter was $188 million, or $0.75 per diluted share. Non-GAAP net income for the quarter was $204 million, or $0.82 per diluted share. GAAP net income for fiscal year 2021 was $647 million, or $2.62 per diluted share. Non-GAAP net income for fiscal year 2021 was $762 million, or $3.08 per diluted share.

Additional fourth quarter of fiscal year 2021 comparisons are provided in the charts below. "We are pleased with our fourth quarter results as we delivered record revenues and double-digit year-over-year growth in the midst of a challenging supply chain environment," said Victor Peng, Xilinx president and CEO. "Xilinx saw further improvement in demand across a majority of our diversified end markets with key strength in our Wireless, Data Center and Automotive markets, the pillars of our growth strategy. Our teams have executed well and we remain focused on continuing to meet customers' critical needs."

Team Group Announces T-CREATE EXPERT NVMe SSD with Extreme 12,000 TBW Endurance

In recent years, the cryptocurrency market has been gaining a great deal of attention, leading to a continuous surge in global mining. Chia, which started trading in May, is one of the new types of cryptocurrencies. Its mining method differs from that of previous cryptocurrencies, which use GPUs and ASICs to complete calculations and earn profits. The extremely durable EXPERT PCIe SSD, developed by TEAMGROUP's creator sub-brand T-CREATE, is the best choice for the environmentally-friendly "storage capacity mining" that Chia promotes.

The Chia Network utilizes a consensus algorithm called "Proof of Space and Time." A Chia farmer's potential yield is directly proportional to their amount of storage space: if you want to earn higher profits, you need more hard drive space. This approach ensures that no one will design special-purpose hardware (ASICs) for mining it, and storage capacity is relatively unrelated to power consumption. Chia Network is therefore a new "green" currency system. If you want to join the mining community utilizing this environmentally-friendly model, the T-CREATE EXPERT PCIe SSD can help you get the greatest results. It features spectacular TBW values of up to 12,000 TB, making it the perfect tool for supporting the intense write cycles required by the mining process.
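The "yield proportional to space" point can be illustrated with a toy model. This is not the actual Chia protocol; the blocks-per-day and per-block reward figures used below are assumptions for illustration only:

```python
# Toy model of Chia "Proof of Space" economics: expected rewards scale
# linearly with your share of the network's total plotted space.
# ASSUMPTIONS: 4,608 blocks/day and a 2.0 XCH reward are illustrative values.
def expected_daily_xch(my_plots_tib, network_space_eib,
                       blocks_per_day=4608, reward_xch=2.0):
    network_tib = network_space_eib * 1024 * 1024  # EiB -> TiB
    share = my_plots_tib / network_tib             # your fraction of net space
    return share * blocks_per_day * reward_xch

# Sanity check: a farmer owning the entire network wins every block.
full_network = expected_daily_xch(1024 * 1024, 1)  # 1 EiB of plots, 1 EiB net
```

The key property shown here is linearity: doubling plotted space doubles expected rewards, with no advantage from faster compute, which is why purpose-built ASICs offer no edge.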

YouTube Updates Server Infrastructure With Custom ASICs for Video Transcoding

Video streaming looks a bit like magic. The uploader sends a video to the platform in one resolution and encoding format, while each viewer requests the video in the resolution and encoding format suited to the device it is streamed on. YouTube knows this best, as it is the world's largest video platform, with over 2 billion users visiting each month. That places a massive load on the server infrastructure at Google's data centers that host the service. Around 500 hours' worth of video content is uploaded to the platform every minute, and regular hardware is no longer enough to handle everything.
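To put that upload figure in perspective, a quick back-of-the-envelope calculation:

```python
# Scale of YouTube's ingest, derived from the ~500 hours/minute figure above.
hours_per_minute = 500
hours_per_day = hours_per_minute * 60 * 24      # 720,000 hours of video per day
years_per_day = hours_per_day / (24 * 365)      # ~82 years of footage every day
```

Every one of those uploads must then be transcoded into multiple resolutions and codecs, multiplying the compute load well beyond the raw ingest numbers.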

That is why YouTube has developed custom chips, ASICs called VCUs, or Video (trans)Coding Units. Transcoding is a major workload in Google's data centers: each video must be adapted to the streaming platform and the desired specifications, and doing that on regular hardware is a problem. By using ASIC devices such as VCUs, Google can keep up with demand and deliver the best possible quality. Codenamed Argos, the chip delivers a 20-33x improvement in efficiency compared to a regular server platform. In data centers, the VCU is implemented as a regular PCIe card, with two chips under the heatsinks.

Commodore 64 Modded To Mine Bitcoin

We saw the modified Nintendo Game Boy last month, which could crank out Bitcoin hashes at a blistering 0.8 hashes per second, or ~125 trillion times slower than a modern Bitcoin ASIC miner. If you are searching for something a bit more modest than the Game Boy, take a look at the Commodore 64, which has been modded to achieve a Bitcoin mining rate of 0.3 hashes per second. The Commodore 64 was released by Commodore in 1982, featuring the MOS Technology 6510 processor clocked at 1.023 MHz and paired with 64 KB RAM and 20 KB ROM.

While the Commodore currently falls behind the Game Boy, there is hope on the horizon, with the creator of the program claiming that a 10x performance improvement to over 3 hashes per second is possible by rewriting the code in machine language. The Commodore 64 can be further upgraded with the SuperCPU upgrade, which boosts mining speeds to over 60 hashes per second, completely destroying the Game Boy but still falling vastly short of the latest ASIC miners at ~18,000,000,000,000 hashes per second. Obviously, this demonstration was not meant as a practical application, but it is interesting to see how cryptocurrency mining can be implemented on older hardware, and it highlights the amazing rate of technological advancement we have seen over the last 40 years.
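Whatever the hash rate, these retro machines all run the same basic loop: double-SHA-256 a candidate block header until the result falls below a target. A toy sketch of that loop in Python (simplified leading-zero-bits difficulty check, not a real miner):

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes block headers with two rounds of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, difficulty_bits: int, max_nonce: int = 1_000_000):
    """Brute-force a nonce until the hash has `difficulty_bits` leading zero bits.
    Real miners compare against a full 256-bit target; this is a toy version."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = double_sha256(header_prefix + struct.pack("<I", nonce))
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    return None, None

# 16 zero bits (~65k expected tries) is trivial on a PC; at the C64's 0.3 H/s
# even this toy difficulty would take days, and real Bitcoin difficulty is
# astronomically higher.
nonce, digest = mine(b"demo-block-header", 16)
```

The gulf between 0.3 H/s and an ASIC's ~18 TH/s is purely a matter of how fast this loop can be executed, which is exactly what dedicated silicon optimizes.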

Bitmain Releases Antminer E9 Ethereum ASIC With Performance of 32 RTX 3080 Cards

Bitmain has recently announced its most powerful Ethereum miner yet, the E9, with a performance of 3 GH/s, as the price of Ethereum reaches all-time highs. The Chinese manufacturer advertises that this is equivalent to 32 NVIDIA RTX 3080 cards while coming in with significantly less power consumption and likely a lower price. The Antminer E9 achieves its 3 GH/s mining speed with a power consumption of just 2556 W, which gives it an efficiency of 0.85 J/MH and would make it one of the most efficient Ethereum miners available. While the ASIC appears to offer significant advantages, it is unlikely to meet the global demand for Ethereum miners and is unlikely to affect global GPU shortages. Bitmain did not announce specific pricing or availability information for the Antminer E9 ASIC.
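The quoted figures check out with simple arithmetic. Note the per-card RTX 3080 hash rate below is an assumed ~94 MH/s for illustration, not a number from Bitmain:

```python
hash_rate_mhs = 3_000                    # Antminer E9: 3 GH/s = 3,000 MH/s
power_w = 2_556                          # rated power draw
efficiency = power_w / hash_rate_mhs     # ~0.852 J/MH, matching the quoted 0.85
gpu_equiv = hash_rate_mhs / 94           # ~32 cards, assuming ~94 MH/s per RTX 3080
```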

Tenstorrent Selects SiFive Intelligence X280 for Next-Generation AI Processors

SiFive, Inc., the industry leader in RISC-V processors and silicon solutions, today announced that Tenstorrent, an AI semiconductor and software start-up developing next-generation computers, will license the new SiFive Intelligence X280 processor in its AI training and inference processor. SiFive will deliver more details of its SiFive Intelligence initiative including the SiFive Intelligence X280 processor at the Linley Spring Processor Conference on April 23rd.

Tenstorrent's novel approach to inference and training effectively and efficiently accommodates the exponential growth in the size of machine learning models while offering best-in-class performance.

Nintendo Game Boy Modded to Mine Bitcoin

Nintendo's Game Boy handheld console was launched in 1989, making it 32 years old. Widely regarded as the icon of handheld gaming, it sold in the millions and has been copied countless times. Now, with some spare time and a crazy idea, the console has been modified to mine the Bitcoin cryptocurrency. Yes, you are reading that right: an 8-bit console is mining the biggest and most valuable cryptocurrency. An electronics enthusiast named "stacksmashing" set himself a difficult task - to prove that the console can mine some Bitcoin, at any possible rate. And he has managed to prove it is possible, although with some modifications.

Given that the console lacks any connectivity options to the outside world due to its age, the modder had to use SPI (Serial Peripheral Interface) to connect the Game Boy to a Raspberry Pi, which had the task of connecting the Game Boy to the internet to mine some Bitcoin. Using the custom 8-bit Sharp LR35902 processor running at 4.19 MHz, the console is naturally not very powerful. Thus, it cannot do any meaningful mining, and comparing it to modern mining ASICs is just silly. However, it is an interesting proof of concept and some good engineering fun. For more information, please check out the YouTube video here.

Google Hires Intel Veteran Uri Frank To Lead Datacenter SoC Development

Google has recently hired Intel veteran Uri Frank as VP of Engineering for its newly created server chip design division. The new division will develop custom Systems on Chip for use in Google datacenters to gain higher performance and use less power by integrating hardware and software. Google has considerable experience in hardware development, starting with its Tensor Processing Unit in 2015, its Video Processing Units in 2018, and its first open-source silicon root-of-trust project in 2019. Google has also developed custom hardware solutions for SSDs, HDDs, network switches, and network interface cards in collaboration with external partners.

Google hopes to reduce latency and improve bandwidth between different components by integrating them all into custom SoCs, improving power consumption and cost compared to individual ASICs on a motherboard. The development of these custom SoCs will be a long process, with Google planning to hire hundreds of SoC engineers, so it will be a few years before we begin to see them deployed. This move is consistent with rivals Amazon Web Services and Microsoft Azure, which are both also developing custom server chips for their datacenters. Google will continue to purchase existing products where it is more practical to do so and hopes to create an ecosystem that will benefit the entire industry.

NVIDIA GeForce RTX 3070 Ti and RTX 3080 Ti Alleged Memory Specs and ASIC Codes Surface

An add-in card partner source shared with VideoCardz some juicy details about a pair of upcoming high-end GeForce RTX 30-series "Ampere" graphics cards. Called the GeForce RTX 3070 Ti and GeForce RTX 3080 Ti, the two aim to restore NVIDIA's competitiveness against the likes of AMD's recent Radeon RX 6000 series GPUs. It looks like NVIDIA doesn't want to play the memory size game just yet, despite giving the RTX 3060 12 GB of it.

The GeForce RTX 3070 Ti appears to max out the GA104 silicon and carries the ASIC code "GA104-400-A#." The current RTX 3070 enables all but one of the TPCs on the GA104, working out to 5,888 CUDA cores. The new RTX 3070 Ti probably maxes out the GA104 at its full CUDA core count of 6,144. The more substantial upgrade, however, is memory. The card ditches 14 Gbps GDDR6 for faster GDDR6X memory of an unknown speed - probably higher than 16 Gbps. The memory size remains 8 GB across a 256-bit bus.
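The core-count math follows from Ampere's published per-SM resources (128 FP32 CUDA cores per SM, two SMs per TPC, 24 TPCs on the full GA104 die):

```python
cores_per_sm = 128           # Ampere: 128 FP32 CUDA cores per SM
sms_per_tpc = 2
full_tpcs = 24               # GA104 full die: 24 TPCs = 48 SMs
rtx_3070 = (full_tpcs - 1) * sms_per_tpc * cores_per_sm     # one TPC disabled
rtx_3070_ti = full_tpcs * sms_per_tpc * cores_per_sm        # full die
```

This gives 5,888 cores for the RTX 3070 and 6,144 for a fully enabled die, matching the figures above.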

Intel and DARPA Develop Secure Structured ASIC Chips Made in the US

Intel and the U.S. Defense Advanced Research Projects Agency (DARPA) today announced a three-year partnership to advance the development of domestically manufactured structured Application Specific Integrated Circuit (ASIC) platforms. The Structured Array Hardware for Automatically Realized Applications (SAHARA) partnership enables the design of custom chips that include state-of-the-art security countermeasure technologies. A reliable, secure, domestic source of leading-edge semiconductors remains critical to the U.S.

"We are combining our most advanced Intel eASIC structured ASIC technology with state-of-the-art data interface chiplets and enhanced security protection, and it's all being made within the U.S. from beginning to end. This will enable defense and commercial electronics systems developers to rapidly develop and deploy custom chips based on Intel's advanced 10 nm semiconductor process," said José Roberto Alvarez, senior director, CTO Office, Intel Programmable Solutions Group.

Seagate Designs RISC-V Cores to Power Data Mobility and Trustworthiness

Seagate Technology plc announced that it has designed two processors based on the open RISC-V instruction set architecture (ISA). One of the open standards-enabled cores is designed for high performance and the other is area-optimized. The high-performance processor has already been built with RISC-V-enabled silicon and demonstrated as functional in hard disk drives (HDDs). The area-optimized core has been designed and is in the process of being built.

Because both processors offer RISC-V security features, the benefits add up to more robust edge-to-cloud data trustworthiness, security, and mobility—all essential in the era when so much data is on the move. The announcement, made today at the virtual RISC-V Summit 2020, is the first public report on the results of Seagate's several years of collaboration with RISC-V International.

BittWare Launches IA-840F with Intel Agilex FPGA and Support for oneAPI

BittWare, a Molex company, today unveiled the IA-840F, the company's first Intel Agilex-based FPGA card designed to deliver significant performance-per-watt improvements for next-generation data center, networking and edge compute workloads. Agilex FPGAs deliver up to 40% higher performance or up to 40% lower power, depending on application requirements. BittWare maximized I/O features using the Agilex chip's unique tiling architecture with dual QSFP-DDs (4× 100G), PCIe Gen4 x16, and three MCIO expansion ports for diverse applications. BittWare also announced support for Intel oneAPI, which enables an abstracted development flow for dramatically simplified code re-use across multiple architectures.

"Modern data center workloads are incredibly diverse, requiring customers to implement a mix of scalar, vector, matrix and spatial architectures," said Craig Petrie, vice president of marketing for BittWare. "The IA-840F ensures that customers can quickly and easily exploit the advanced features of the Intel Agilex FPGA. For those customers who prefer to develop FPGA applications at an abstracted level, we are including support for oneAPI. This new unified software programming environment allows customers to program the Agilex FPGA from a single code base with native high-level language performance across architectures."

Intel Sells Its Power Management Chip Business, Enpirion, to Mediatek for $85 Million

Intel continues to "shed fat" from its business portfolio. After last year's sale of its smartphone modem chip business to Apple, the company is now parting ways with its power management circuitry division, Enpirion, and offloading it to Richtek, a division of Taiwan's MediaTek. The sale price of $85 million is a drop in the bucket for Intel's overall bottom line, so this is not a way for the company to cash in some desperately needed money - all of Intel's troubles in the wake of its semiconductor manufacturing issues and AMD's market resurgence pale in comparison to its revenues.

Rather, this looks like a company streamlining its R&D expenditures and focusing on execution in the markets Intel sees as most important for today and tomorrow. Enpirion builds power management chips for FPGA circuits, SoCs, CPUs, and ASICs, and will now serve to bolster MediaTek's SoC business while allowing the Taiwanese company to expand and diversify its business portfolio, even as Intel focuses on its core competencies.

Marvell Announces Industry's First 112G 5nm SerDes Solution for Scaling Cloud Data Center Infrastructure

Marvell today unveiled the industry's first 112G 5 nm SerDes solution that has been validated in hardware. The DSP-based SerDes boasts industry-leading performance, power and area, helping to propel 112G as the interconnect of choice for next generation 5G, enterprise, and cloud data center infrastructure. Marvell has recently secured a new custom ASIC design win customer that will embed this new IP to build next generation top-of-rack (ToR) and spine switches for leading hyperscale data centers around the world. The Marvell 5 nm SerDes solution doubles the bandwidth of current systems based on 56G while enabling the deployment of 112G I/Os in many exciting new applications, including network and data center switching, network traffic management, machine learning training and inference, and application-specific accelerators.

Today's news, which comes on the heels of the company's announcement with TSMC of its 5 nm portfolio, further strengthens Marvell's leading data infrastructure offerings in the industry's most advanced process geometry. The 112G 5 nm SerDes solution is part of Marvell's industry-leading IP portfolio that addresses the full spectrum of infrastructure requirements and includes processor subsystems, encryption engines, system-on-chip fabrics, chip-to-chip interconnects, and a variety of physical layer interfaces.

NVIDIA Readies New GeForce RTX 30-series SKU Positioned Between RTX 3070 and RTX 3080

Possibly unsure of the GeForce RTX 3070 tackling AMD's Radeon RX 6000 series parts, NVIDIA is designing a new RTX 30-series SKU positioned between the RTX 3070 and RTX 3080. This is not a 16 GB variant of the RTX 3070, but rather a new SKU based on the 8 nm "GA102" silicon, according to a reliable source with NVIDIA leaks, kopite7kimi. The SKU is based on the GA102 with the ASIC code "GA102-150-KD-A1." The silicon is configured with 7,424 CUDA cores across 58 streaming multiprocessors (29 TPCs), 232 tensor cores, 232 TMUs, 58 RT cores, and an unknown number of ROPs. According to kopite7kimi, the card is configured with a 320-bit wide memory interface, although it's not known if this is conventional GDDR6, like the RTX 3070 has, or faster GDDR6X, like that on the RTX 3080.
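The leaked unit counts are self-consistent with Ampere's per-SM resources, and the 320-bit bus lets us bracket memory bandwidth for the two candidate memory types (the 19 Gbps GDDR6X figure is the RTX 3080's speed, used here as an assumption for the unknown configuration):

```python
sms = 58
cuda_cores   = sms * 128        # 7,424 CUDA cores (128 FP32 cores per Ampere SM)
tensor_cores = sms * 4          # 232 tensor cores (4 per SM)
tmus         = sms * 4          # 232 TMUs (4 per SM)

bus_bits = 320
bw_gddr6  = bus_bits // 8 * 14  # 560 GB/s if 14 Gbps GDDR6 (the RTX 3070's type)
bw_gddr6x = bus_bits // 8 * 19  # 760 GB/s if 19 Gbps GDDR6X (the RTX 3080's speed)
```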

NVIDIA recently "cancelled" a future 16 GB variant of the RTX 3070 and a 20 GB variant of the RTX 3080, which is possibly the company calibrating its response to the Radeon RX 6000 series. We theorize that doubling the memory amounts may not have hit the desired cost-performance targets, and the company probably believes the competitive outlook of the RTX 3080 10 GB is secure. This explains the need for a SKU with performance halfway between that of the RTX 3070 and RTX 3080. As for pricing, with the RTX 3070 positioned at $500 and the RTX 3080 at $700, the new SKU could be priced somewhere in between. AMD's RDNA2-based Radeon RX 6000 series GPUs are expected to feature DirectX 12 Ultimate logo compliance, meaning that there is a level playing field between AMD and NVIDIA in the performance segment.

Intel Partners with Heidelberg University Computing Center to Establish oneAPI Academic Center of Excellence

Intel and Heidelberg University Computing Center (URZ) today announced that they have established a oneAPI Academic Center of Excellence (CoE) at URZ. The newly established CoE aims to further develop Intel's oneAPI standard and enable it to work on AMD GPUs. This may come as a surprise; however, Intel believes the technology should work on a wide range of processors, no matter the vendor. Heterogeneous hardware programming is the main goal here. In a Twitter thread, an Intel employee specified that Intel has also been working with Arm and NVIDIA to bring Data-Parallel C++ (DPC++), the core of oneAPI, to those vendors as well. That should bring this universal programming model to every device and every platform, which is the goal of heterogeneous programming - whether you need to program a CPU, a GPU, or some other ASIC, it is covered by a single API, specifically oneAPI.

URZ's work as a oneAPI CoE will add advanced DPC++ capabilities into hipSYCL, which supports systems based on AMD GPUs, NVIDIA GPUs, and CPUs. New DPC++ extensions are part of the SYCL 2020 provisional specification, which brings features such as unified shared memory to hipSYCL and the platforms it supports - furthering the promise of oneAPI application support across system architectures and vendors.

NVIDIA GeForce RTX 3090 "CEO Edition" Rears its Head, Most Likely a Fake

Social media is abuzz with a screengrab of a regional webpage of the NVIDIA website purporting a "GeForce RTX 3090 CEO Edition" graphics card. Positioned a notch above the "Founders Edition," this $3,499 card, if even remotely real, could be a limited edition product. The screengrab references "48 GB of G6X" memory. We're not sure how this is even possible. The RTX 3090 already uses 8 Gbit GDDR6X chips, piggybacking two chips per 32-bit memory channel, unless Micron has done the unthinkable by launching a 16 Gbit G6X chip within 2020. Frankly, we're very interested to see how the next-gen Quadro RTX even achieves its alleged 48 GB of GDDR6.
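The skepticism about 48 GB is easy to check with memory-layout arithmetic: on GA102's full 384-bit bus in the two-chips-per-channel ("clamshell") layout the RTX 3090 already uses, 48 GB requires 16 Gbit chips:

```python
bus_bits = 384                   # GA102's full memory interface
channels = bus_bits // 32        # 12 independent 32-bit channels
chips = channels * 2             # clamshell: two chips per channel = 24 chips
capacity_48 = chips * 16 // 8    # 24 x 16 Gbit chips = 48 GB
capacity_24 = chips * 8 // 8     # 24 x 8 Gbit chips = 24 GB (the RTX 3090)
```

So the alleged spec stands or falls on whether 16 Gbit GDDR6X chips exist, which is exactly the doubt raised above.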

That aside, the alloy frame now comes with a gold finish. We wonder if memory and a fancy trim is all that NVIDIA is asking the extra 2 Grand for, or if it even maxed out the "GA102" ASIC (there are two more TPCs left to unlock). As for the name "CEO Edition," there have been instances of tech CEOs flexing their vanity on limited edition products. Limited edition Ryzen and Radeon products, for example, bear the AMD CEO's signature. So the name "CEO Edition" by itself isn't implausible. Just not with these specs, and not this price.

Rambus Advances HBM2E Performance to 4.0 Gbps for AI/ML Training Applications

Rambus Inc. (NASDAQ: RMBS), a premier silicon IP and chip provider making data faster and safer, today announced it has achieved a record 4 Gbps performance with the Rambus HBM2E memory interface solution consisting of a fully-integrated PHY and controller. Paired with the industry's fastest HBM2E DRAM from SK hynix operating at 3.6 Gbps, the solution can deliver 460 GB/s of bandwidth from a single HBM2E device. This performance meets the terabyte-scale bandwidth needs of accelerators targeting the most demanding AI/ML training and high-performance computing (HPC) applications.
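The ~460 GB/s per-device figure follows directly from HBM2E's 1024-bit stack interface:

```python
interface_bits = 1024                                # data pins per HBM2E stack
gbps_per_pin = 3.6                                   # SK hynix HBM2E data rate
bandwidth_gbs = interface_bits * gbps_per_pin / 8    # 460.8 GB/s per stack
```

Multiple stacks on one accelerator then multiply this figure, which is how such designs reach terabyte-per-second aggregate bandwidth.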

"With this achievement by Rambus, designers of AI and HPC systems can now implement systems using the world's fastest HBM2E DRAM running at 3.6 Gbps from SK hynix," said Uksong Kang, vice president of product planning at SK hynix. "In July, we announced full-scale mass-production of HBM2E for state-of-the-art computing applications demanding the highest bandwidth available."