News Posts matching #EPYC


GIGABYTE Announces G242-Z11 HPC Node with PCIe 4.0

GIGABYTE Technology, an industry leader in high-performance servers and workstations, today announced the launch of the GIGABYTE G242-Z11 with PCIe 4.0, which adds to an already extensive line of G242 series servers designed for AI, deep learning, data analytics, and scientific computing. High-speed interfaces such as Ethernet, InfiniBand, and PCI Express rely on fast data transfer, and PCIe 3.0 can pose a bottleneck in some servers. The AMD EPYC family of processors brings PCIe Gen 4.0, which keeps high-bandwidth applications from being bottlenecked. The 2nd Gen AMD EPYC 7002 processors added PCIe Gen 4.0 support, and GIGABYTE continues to evolve its server line to accommodate the latest technology.

The G242-Z11 caters to the capabilities of the 2nd Gen AMD EPYC 7002 series processors. It is built around a single AMD EPYC processor, including even the new 280 W 64-core (128-thread) AMD EPYC 7H12. Besides a high core count, the 7002 series offers 128 PCIe lanes and natively supports PCIe Gen 4.0, which doubles the speed and bandwidth of PCIe 3.0: 16 GT/s per lane, for a total bandwidth of 64 GB/s on an x16 link. As for memory, the G242-Z11 supports 8-channel DDR4 with room for up to 8 DIMMs. In this 1-DIMM-per-channel configuration, it can support up to 2 TB of memory at speeds up to 3200 MHz.
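Those headline figures can be sanity-checked with quick arithmetic. A small sketch follows; it assumes the 64 GB/s figure counts both directions of an x16 link, and that the 2 TB ceiling comes from 256 GB modules in all eight slots (an inference, not a GIGABYTE-stated module size):

```python
# PCIe 4.0 signaling: 16 GT/s per lane, 128b/130b encoding
def pcie4_bandwidth_gbs(lanes: int, bidirectional: bool = True) -> float:
    """Approximate PCIe 4.0 payload bandwidth in GB/s."""
    one_way = lanes * 16 * (128 / 130) / 8  # GT/s -> GB/s per direction
    return one_way * (2 if bidirectional else 1)

# An x16 link: ~31.5 GB/s each way, ~63 GB/s total (quoted as 64 GB/s)
print(round(pcie4_bandwidth_gbs(16), 1))

# Memory ceiling: 8 channels x 1 DIMM per channel x 256 GB modules
print(8 * 1 * 256, "GB")  # 2048 GB = 2 TB
```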

Intel Ice Lake-SP Processors Get Benchmarked Against AMD EPYC Rome

Intel is preparing to launch its next generation of server processors, and next in line is the 10 nm Ice Lake-SP CPU. Featuring Sunny Cove cores and up to 28 cores per socket, the CPU is set to bring big improvements over the previous generation of server products, Cascade Lake. Today, thanks to the sharp eye of TUM_APISAK, we have a new benchmark of the Ice Lake-SP platform compared against AMD's EPYC Rome offerings. The latest GeekBench 4 listing shows an engineering sample of an unknown Ice Lake-SP model with 28 cores, 56 threads, a base frequency of 1.5 GHz, and a boost of 3.19 GHz.

This model was put in a dual-socket configuration totaling 56 cores and 112 threads, against a single 64-core AMD EPYC 7742 Rome CPU. The dual-socket Intel configuration scored 3424 points in the single-threaded test, where the AMD configuration scored a notably higher 4398 points. The lower score on Intel's part is possibly due to lower clocks, which should improve in the final product, as this is only an engineering sample. In the multi-threaded test, the Intel configuration scored 38079 points, while the AMD EPYC system did worse at 35492 points. The reason for Intel's lead here is unknown; however, it shows that Ice Lake-SP has some potential.
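The relative gaps in those scores are quick to work out:

```python
# GeekBench 4 scores quoted above
intel_st, amd_st = 3424, 4398    # single-threaded
intel_mt, amd_mt = 38079, 35492  # multi-threaded

# AMD's single-thread lead over the Ice Lake-SP engineering sample
print(f"{(amd_st / intel_st - 1) * 100:.1f}%")  # 28.4%

# Intel's multi-thread lead, despite fielding 8 fewer cores in total
print(f"{(intel_mt / amd_mt - 1) * 100:.1f}%")  # 7.3%
```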

AMD Confirms "Zen 4" on 5nm, Other Interesting Tidbits from Q2-2020 Earnings Call

AMD late Tuesday released its Q2-2020 financial results, which saw the company rake in revenue of $1.93 billion for the quarter and clock a 26 percent YoY revenue growth. In both its corporate presentation targeted at financial analysts and its post-results conference call, AMD revealed a handful of interesting bits looking into the near future. Much of AMD's presentation focused on reassuring investors that [unlike Intel] it is promising a stable and predictable roadmap, that nothing on that roadmap has changed, and that it intends to execute everything on time. "Over the past couple of quarters what we've seen is that they see our performance/capability. You can count on us for a consistent roadmap. Milan point important for us, will ensure it ships later this year. Already started engaging people on Zen4/5nm. We feel customers are very open. We feel well positioned," said president and CEO Dr Lisa Su.

For starters, there was yet another confirmation from the CEO that the company will launch the "Zen 3" CPU microarchitecture across both the consumer and data-center segments before year-end, which means both Ryzen and EPYC "Milan" products based on "Zen 3." Also confirmed were the introduction of the RDNA2 graphics architecture across consumer graphics segments and the debut of the CDNA scalar compute architecture. The company has started shipping semi-custom SoCs to both Microsoft and Sony, so they can manufacture their next-generation Xbox Series X and PlayStation 5 game consoles in volume for the Holiday shopping season. Semi-custom shipments could contribute significantly to the company's Q3-2020 earnings. CDNA won't play a big role in 2020 for AMD, but there will be more opportunities for the datacenter GPU lineup in 2021, according to the company. CDNA2 debuts next year.

AMD Reports Second Quarter 2020 Financial Results

AMD (NASDAQ:AMD) today announced revenue for the second quarter of 2020 of $1.93 billion, operating income of $173 million, net income of $157 million and diluted earnings per share of $0.13. On a non-GAAP basis, operating income was $233 million, net income was $216 million and diluted earnings per share was $0.18. "We delivered strong second quarter results, led by record notebook and server processor sales as Ryzen and EPYC revenue more than doubled from a year ago," said Dr. Lisa Su, AMD president and CEO. "Despite some macroeconomic uncertainty, we are raising our full-year revenue outlook as we enter our next phase of growth driven by the acceleration of our business in multiple markets."

Linux Performance of AMD Rome vs Intel Cascade Lake, 1 Year On

Michael Larabel over at Phoronix posted an extremely comprehensive analysis of the performance differential between AMD's Rome-based EPYC and Intel's Cascade Lake Xeons one year after release. The battery of tests, comprising more than 116 benchmark results, pits a Xeon Platinum 8280 2P system against an EPYC 7742 2P one. Both systems were benchmarked under the Ubuntu 19.04 release, chosen as the "one year ago" baseline, and again under the newer Linux software stack (Ubuntu 20.10 daily + GCC 10 + Linux 5.8).

The benchmark conclusions are interesting. For one, Intel gained more ground than AMD over the course of the year, with the Xeon platform gaining 6% performance across releases while AMD's EPYC gained just 4% over the same period. Even so, AMD's system remains an average of 14% faster across all tests than the Intel platform, which speaks to AMD's silicon superiority. Check some benchmark results below, but follow the source link for the full rundown.
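As a rough sketch of how those percentages compose (treating the reported averages as uniform multipliers, which glosses over per-test variation):

```python
# Figures quoted above
xeon_gain = 1.06     # Xeon software-stack speedup over the year
epyc_gain = 1.04     # EPYC speedup over the same period
amd_lead_now = 1.14  # EPYC 7742 2P lead over Xeon 8280 2P today

# Back out the approximate lead a year ago: the gains compound,
# so AMD's lead narrowed by the ratio of Intel's gain to AMD's gain
amd_lead_then = amd_lead_now * xeon_gain / epyc_gain
print(f"~{(amd_lead_then - 1) * 100:.0f}% faster a year ago")  # ~16%
```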

Advanced Security Features of AMD EPYC Processors Enable New Google Cloud Confidential Computing Portfolio

AMD and Google Cloud today announced the beta availability of Confidential Virtual Machines (VMs) for Google Compute Engine powered by 2nd Gen AMD EPYC processors, taking advantage of the processors' advanced security features. The first product in the Google Cloud Confidential Computing portfolio, Confidential VMs enable customers for the first time to encrypt data in use, while it is being processed, and not just at rest or in transit. Based on the N2D family of VMs for Google Compute Engine, Confidential VMs provide customers high-performance processing for the most demanding computational tasks and enable encryption for even the most sensitive data in the cloud while it is being processed.

"At Google Cloud, we believe the future of cloud computing will increasingly shift to private, encrypted services where users can be confident that the confidentiality of their data is always under their control. To help customers in making that transition, we've created Confidential VMs, the first product in our Google Cloud Confidential Computing portfolio," said Vint Cerf, vice president and chief internet evangelist, Google. "By using advanced security technology in the AMD EPYC processors, we've created a breakthrough technology that allows customers to encrypt their data in the cloud while it's being processed and unlock computing scenarios that had previously not been possible."

AMD 64-core EPYC "Milan" Based on "Zen 3" Could Ship with 3.00 GHz Clocks

AMD's 3rd generation EPYC line of enterprise processors, which leverages the "Zen 3" microarchitecture, could innovate in two directions - increasing performance by doing away with the CCX (compute complex) multi-core topology, and taking advantage of a newer/refined 7 nm-class node to increase clock speeds. Igor's Lab decoded as many as three OPNs of the upcoming 3rd gen EPYC series, including a 64-core/128-thread part that ships with a frequency of 3.00 GHz. The top 2nd gen EPYC 64-core part, the 7662, ships with a 2.00 GHz base frequency, 3.30 GHz boost, and 225 W TDP. AMD is expected to unveil its "Zen 3" microarchitecture within 2020.

AMD Ryzen Threadripper PRO 3995WX Processor Pictured: 8-channel DDR4

Here is the first picture of the Ryzen Threadripper PRO 3995WX processor, part of AMD's HEDT/workstation processor launch for this year. The picture surfaced briefly on the ChipHell forums before being picked up by HXL (@9550pro). This processor is designed to compete with Intel Xeon W series processors, such as the W-3175X, and hence sits a segment above even the "normal" Threadripper series led by the 64-core/128-thread Threadripper 3990X. Besides certain features exclusive to Ryzen PRO series processors, the killer feature of the 3995WX is a menacing 8-channel DDR4 memory interface that can handle up to 2 TB of ECC memory.

The Threadripper PRO 3995WX is expected to have mostly identical I/O to the 64-core EPYC 7662 processor. As a Ryzen-branded chip, it could feature higher clock speeds than its EPYC counterpart. To enable its 8-channel memory, the processor could come with a new socket, likely sWRX8, and the AMD WRX80 chipset, although it wouldn't surprise us if these processors have some form of inter-compatibility with sTRX4 and TRX40 (at limited memory bandwidth and PCIe capabilities, of course). Sources tell VideoCardz that AMD could announce the Ryzen Threadripper PRO series as early as July 14, 2020.

As CERN Plans LHC Expansion, AMD Powers Latest Science Feats

AMD has entered a strategic partnership with the European Organization for Nuclear Research (CERN), which sees the company poised to have its EPYC processors powering the latest and greatest when it comes to man-made incursions into the secrets of the universe. AMD's 2nd Gen EPYC 7742 processors are already being deployed in CERN's current Large Hadron Collider (LHC), the world's largest particle accelerator. The LHC has already given us discoveries as important as the Higgs boson - a fundamental particle that has given profound insight into the workings of the universe according to the Standard Model, and whose discovery garnered the 2013 Nobel Prize in physics.

The current LHC is a 17-mile-long (27 km) underground ring of superconducting magnets housed in a pipe-like structure, or cryostat, which is cooled to temperatures just above absolute zero. Particle collisions in the LHC generate some 40 TB/s of data that has to be stored, analyzed, and stripped of its irrelevant components to yield usable data (all in the name of science). Even as AMD's EPYC lineup is already being used to this effect in the current LHC, CERN has recently announced plans to back a €20 bn investment in a second-generation collider. The Future Circular Collider (FCC), as it is tentatively called, will be four times the size (over 100 km long) and six times more powerful than the LHC. And you can rest assured that all that data will still need to be processed, at a rate likely to increase in proportion to the power of the Future Circular Collider. Whether AMD will be the chosen partner for the hardware needed for this task remains unclear, but the fact that AMD's products are already being used in the current LHC could spell a very relevant outcome for AMD's financials in the future. Not to mention the earned bragging rights on account of their hardware being used for science's most extraordinary feats.
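To see why aggressive filtering is unavoidable, here is a back-of-the-envelope calculation on the quoted 40 TB/s detector output (the per-day figure is our extrapolation, assuming a sustained rate):

```python
RATE_TB_PER_S = 40       # detector data rate quoted above
SECONDS_PER_DAY = 86_400

# Raw volume if nothing were discarded
tb_per_day = RATE_TB_PER_S * SECONDS_PER_DAY
print(f"{tb_per_day:,} TB/day, i.e. ~{tb_per_day / 1_000_000:.1f} exabytes per day")
```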

AMD EPYC Scores New Supercomputing and High-Performance Cloud Computing System Wins

AMD today announced multiple new high-performance computing wins for AMD EPYC processors, including that the seventh fastest supercomputer in the world and four of the 50 highest-performance systems on the bi-annual TOP500 list are now powered by AMD. Momentum for AMD EPYC processors in advanced science and health research continues to grow with new installations at Indiana University, Purdue University and CERN as well as high-performance computing (HPC) cloud instances from Amazon Web Services, Google, and Oracle Cloud.

"The leading HPC institutions are increasingly leveraging the power of 2nd Gen AMD EPYC processors to enable cutting-edge research that addresses the world's greatest challenges," said Forrest Norrod, senior vice president and general manager, data center and embedded systems group, AMD. "Our AMD EPYC CPUs, Radeon Instinct accelerators and open software programming environment are helping to advance the industry towards exascale-class computing, and we are proud to strengthen the global HPC ecosystem through our support of the top supercomputing clusters and cloud computing environments."

GIGABYTE Introduces a Broad Portfolio of G-series Servers Powered by NVIDIA A100 PCIe

GIGABYTE, an industry leader in high-performance servers and workstations, announced the validation plan for its G-series servers. Following today's NVIDIA A100 PCIe GPU announcement, GIGABYTE has completed compatibility validation of the G481-HA0 / G292-Z40 and added the NVIDIA A100 to the support list for these two servers. The remaining G-series servers will be divided into two waves to complete their respective compatibility tests soon. At the same time, GIGABYTE also launched a new G492 series server based on the AMD EPYC 7002 processor family, which provides PCIe Gen4 support for up to 10 NVIDIA A100 PCIe GPUs. The G492 is the server with the highest computing power for AI model training on the market today. GIGABYTE will offer two SKUs for the G492: the G492-Z50 will be at a more approachable price point, whereas the G492-Z51 will be geared towards higher performance.

The G492 is GIGABYTE's second-generation 4U G-series server. Building on the first-generation G481 (Intel architecture) / G482 (AMD architecture) servers, it further optimizes the user-friendly design and scalability. In addition to supporting two 280 W 2nd Gen AMD EPYC 7002 processors, its 32 DDR4 memory slots support up to 8 TB of memory at data rates of 3200 MHz. The G492 has built-in PCIe Gen4 switches, which provide more PCIe Gen4 lanes. PCIe Gen4 has twice the I/O performance of PCIe Gen3 and fully enables the computing power of the NVIDIA A100 Tensor Core GPU; it can also be applied to PCIe storage, providing a native storage upgrade path for the G492.

TYAN Brings the Latest Server Advancements at its 2020 Server Solutions Online Exhibition

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, is showcasing its latest lineup of HPC, storage, cloud and embedded platforms powered by 2nd Gen AMD EPYC 7002 series processors and 2nd Gen Intel Xeon Scalable processors at the TYAN server solutions online exhibition.

"With over 30 years of experience offering state-of-the-art server platforms and server motherboards, TYAN has been recognized by large-scale data center customers and server channels," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. "Combining the latest innovations from our partners like Intel and AMD, TYAN's server building block offerings enable customers to capture market opportunities precisely."

ASUS Announces ESC4000A-E10 GPGPU Server with NVIDIA A100 Tensor Core GPUs

ASUSTeK, the leading IT company in server systems, server motherboards and workstations, today announced the new NVIDIA A100-powered server - the ESC4000A-E10 - built to accelerate and optimize data centers for high utilization and low total cost of ownership with PCIe Gen 4 expansion, OCP 3.0 networking, faster compute and better GPU performance. ASUS continues building a strong partnership with NVIDIA to deliver unprecedented acceleration and flexibility to power the world's highest-performing elastic data centers for AI, data analytics, and HPC applications.

The ASUS ESC4000A-E10 is a 2U server powered by the AMD EPYC 7002 series processors, which deliver up to 2x the performance and 4x the floating-point capability in a single socket versus the previous 7001 generation. Targeted at AI, HPC and VDI applications in data center or enterprise environments that require powerful CPU cores, support for more GPUs, and faster transmission speeds, the ESC4000A-E10 focuses on delivering GPU-optimized performance with support for up to four double-slot high-performance GPUs or eight single-slot GPUs, including the latest NVIDIA Ampere architecture cards as well as V100, Tesla, and Quadro GPUs. This also benefits virtualization, consolidating GPU resources into a shared pool so users can utilize them more efficiently.

AMD EPYC Processors Ecosystem Continues to Grow with Integration into New NVIDIA DGX A100

AMD today announced that the NVIDIA DGX A100, the third generation of the world's most advanced AI system, is the latest high-performance computing system featuring 2nd Gen AMD EPYC processors. Delivering 5 petaflops of AI performance, the elastic architecture of the NVIDIA DGX A100 enables enterprises to accelerate diverse AI workloads such as data analytics, training, and inference.

The NVIDIA DGX A100 leverages the high-performance capabilities, 128 cores, DDR4-3200 memory, and PCIe 4 support of two AMD EPYC 7742 processors running at speeds up to 3.4 GHz. The 2nd Gen AMD EPYC processor is the first and only current x86-architecture server processor that supports PCIe 4, providing leadership high-bandwidth I/O that's critical for high-performance computing and for connections between the CPU and other devices like GPUs.

2nd Gen AMD EPYC Processors Now Delivering More Computing Power to Amazon Web Services Customers

AMD today announced that 2nd Gen AMD EPYC processor powered Amazon Elastic Compute Cloud (EC2) C5a instances are now generally available in the AWS U.S. East, AWS U.S. West, AWS Europe and AWS Asia Pacific regions.

Powered by 2nd Gen AMD EPYC processors running at frequencies up to 3.3 GHz, the Amazon EC2 C5a instances are the sixth instance family at AWS powered by AMD EPYC processors. By using the 2nd Gen AMD EPYC processor, the C5a instances deliver leadership x86 price-performance for a broad set of compute-intensive workloads including batch processing, distributed analytics, data transformations, log analytics and web applications.

AMD CEO Lisa Su Tops Earnings as Highest Paid CEO in The S&P 500

Lisa Su of Advanced Micro Devices has become the world's highest-paid CEO, according to a recent survey on CEO compensation from The Associated Press. Lisa Su's pay package was valued at $58.5 million after some extremely impressive company performance over her five years as CEO, on the back of the wild success of EPYC, Ryzen, and Radeon. The package comprised a base salary of $1 million, a performance bonus of $1.2 million, and $56 million in stock awards. This makes Lisa Su the first woman to top the list as highest-paid CEO, and one of only 20 women on it, versus 309 men.

AMD COVID-19 HPC Fund Donates 7 Petaflops of Compute Power to Researchers

AMD and technology partner Penguin Computing Inc., a division of SMART Global Holdings, Inc, today announced that New York University (NYU), Massachusetts Institute of Technology (MIT) and Rice University are the first universities named to receive complete AMD-powered, high-performance computing systems from the AMD HPC Fund for COVID-19 research. AMD also announced it will contribute a cloud-based system powered by AMD EPYC processors and AMD Radeon Instinct accelerators located on-site at Penguin Computing, providing remote supercomputing capabilities for selected researchers around the world. Combined, the donated systems will provide researchers with more than seven petaflops of compute power that can be applied to fight COVID-19.

"High performance computing technology plays a critical role in modern viral research, deepening our understanding of how specific viruses work and ultimately accelerating the development of potential therapeutics and vaccines," said Lisa Su, president and CEO, AMD. "AMD and our technology partners are proud to provide researchers around the world with these new systems that will increase the computing capability available to fight COVID-19 and support future medical research."

Distant Blips on the AMD Roadmap Surface: Rembrandt and Raphael

Several future AMD processor codenames across various computing segments surfaced courtesy of an Expreview leak that is largely aligned with information from Komachi Ensaka. It does not account for the "Matisse Refresh" allegedly coming out in June-July as three gaming-focused Ryzen socket AM4 desktop processors, but the roadmap from 2H-2020 through 2022 surfaces many codenames. To begin with, the second half of 2020 promises to be as action-packed as last year's 7/7 mega launch. Over in the graphics business, the company is expected to debut its DirectX 12 Ultimate-compliant RDNA2 client graphics and its first CDNA architecture-based compute accelerators. Much of the processor launch cycle is based around the new "Zen 3" microarchitecture.

The server platform debuting in the second half of 2020 is codenamed "Genesis SP3." This will be the final processor architecture for the SP3-class enterprise sockets, as it has DDR4 and PCI-Express gen 4.0 I/O. The EPYC server processor is codenamed "Milan," and combines "Zen 3" chiplets along with an sIOD. EPYC Embedded (FP6 package) processors are codenamed "Grey Hawk."

GIGABYTE Announces HPC Systems Powered by NVIDIA A100 Tensor Core GPUs

GIGABYTE, a supplier of high-performance computing (HPC) systems, today disclosed four NVIDIA HGX A100 platforms under development. These platforms will be available with NVIDIA A100 Tensor Core GPUs. The NVIDIA A100 is the first elastic, multi-instance GPU that unifies training, inference, HPC, and analytics. The four products include G262 series servers that can hold four NVIDIA A100 GPUs and G492 series servers that can provide eight A100 GPUs. Each series also comes in two models, supporting either the 3rd generation Intel Xeon Scalable processors or the 2nd generation AMD EPYC processors. The NVIDIA HGX A100 platform is a key element of the NVIDIA accelerated data center concept, which brings huge parallel computing power to customers and thereby helps them accelerate their digital transformation.

With GPU acceleration becoming the mainstream technology in today's data centers, scientists, researchers and engineers are committed to using GPU-accelerated HPC and artificial intelligence (AI) to meet the important challenges of the current world. The NVIDIA accelerated data center concept, including GIGABYTE high-performance servers with NVIDIA NVSwitch, NVIDIA NVLink, and NVIDIA A100 GPUs, will provide the GPU computing power required for different computing scales. The NVIDIA accelerated data center also features NVIDIA Mellanox HDR InfiniBand high-speed networking and NVIDIA Magnum IO software that supports GPUDirect RDMA and GPUDirect Storage.

AMD 2nd Gen EPYC Processors Set to Power Oracle Cloud Infrastructure Compute E3 Platform

Today, AMD announced that 2nd Gen AMD EPYC processors are powering the Oracle Cloud Infrastructure Compute E3 platform, bringing a new level of high-performance computing to Oracle Cloud. Using the AMD EPYC 7742 processor, the Oracle Cloud "E3 standard" and bare metal compute instances are available today and leverage key features of the 2nd Gen AMD EPYC processors, including class-leading memory bandwidth and the highest core count for an x86 data center processor. These features make the Oracle Cloud E3 platform well suited for both general-purpose and high-bandwidth workloads such as big data analytics, memory-intensive workloads and Oracle business applications.

AMD Reports First Quarter 2020 Financial Results

AMD today announced revenue for the first quarter of 2020 of $1.79 billion, operating income of $177 million, net income of $162 million and diluted earnings per share of $0.14. On a non-GAAP basis, operating income was $236 million, net income was $222 million and diluted earnings per share was $0.18.

"We executed well in the first quarter, navigating the challenging environment to deliver 40 percent year-over-year revenue growth and significant gross margin expansion driven by our Ryzen and EPYC processors," said Dr. Lisa Su, AMD president and CEO. "While we expect some uncertainty in the near-term demand environment, our financial foundation is solid and our strong product portfolio positions us well across a diverse set of resilient end markets. We remain focused on strong business execution while ensuring the safety of our employees and supporting our customers, partners and communities. Our strategy and long-term growth plans are unchanged."

AMD "Matisse" and "Rome" IO Controller Dies Mapped Out

Here are the first detailed die maps of the I/O controller dies of AMD's "Matisse" and "Rome" multi-chip modules, which make up the company's 3rd generation Ryzen and 2nd generation EPYC processor families, respectively, drawn by PC enthusiast and VLSI engineer "Nemez" aka @GPUsAreMagic on Twitter, with underlying die shots by Fritzchens Fritz. The die maps of the "Matisse" cIOD in particular give us fascinating insights into how AMD designed the die to serve both as a cIOD and as an external FCH (AMD X570 and TRX40 chipsets). At the heart of both chips' design is the use of highly configurable SerDes (serializer/deserializers) that can work as PCIe, SATA, USB 3, or other high-bandwidth serial interfaces, through a network of fabric switches and PHYs. This is how motherboard designers are able to configure the chipsets for the I/O they want in their specific board designs.

The "Matisse" cIOD has two x16 SerDes controllers and an I/O root hub, along with two configurable x16 SerDes PHYs, while the "Rome" sIOD has four times as many SerDes controllers, along with eight times as many PHYs. The "Castle Peak" IOD (3rd gen Ryzen Threadripper) is the "Rome" sIOD with half the SerDes resources disabled, along with half as many memory controllers and PHYs, limiting it to 4-channel DDR4. The "Matisse" cIOD features two IFOP (Infinity Fabric over Package) links, wiring out to the two "Zen 2" CCDs (chiplets) on the MCM, while the "Rome" sIOD features eight such IFOP interfaces for up to eight CCDs, along with IFIS (Infinity Fabric Inter-Socket) links for 2P motherboards. Infinity Fabric internally connects all components on both IOD dies. Both dies are built on the 12 nm FinFET (12LP) silicon fabrication node at GlobalFoundries.
[Die maps: "Matisse" cIOD and "Rome" sIOD]
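The configurable-SerDes idea can be illustrated with a toy model. This is a sketch of the concept only; the class, protocol names, and lane counts below are illustrative and not AMD's actual configuration interface:

```python
from dataclasses import dataclass, field
from enum import Enum

class Proto(Enum):
    PCIE = "PCIe"
    SATA = "SATA"
    USB3 = "USB 3"

@dataclass
class SerDesPool:
    """Hypothetical model of a pool of configurable SerDes lanes."""
    total_lanes: int
    assignments: dict = field(default_factory=dict)

    def assign(self, name: str, proto: Proto, lanes: int) -> None:
        # Each lane can serve exactly one protocol at a time
        used = sum(l for _, l in self.assignments.values())
        if used + lanes > self.total_lanes:
            raise ValueError("not enough free lanes")
        self.assignments[name] = (proto, lanes)

# A board designer carving up one x16 block of chipset lanes
pool = SerDesPool(total_lanes=16)
pool.assign("m2_slot", Proto.PCIE, 4)
pool.assign("sata_ports", Proto.SATA, 8)
pool.assign("usb_hub", Proto.USB3, 4)
```

This is why two boards using the same X570 chipset can expose different mixes of M.2 slots, SATA ports, and USB: each design simply assigns the shared lane pool differently.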

TYAN Updates Transport HX Barebones with New AMD EPYC 7002 Series Processors

TYAN, an industry-leading server platform design manufacturer and MiTAC Computing Technology Corporation subsidiary, today announced the availability of server motherboards and server systems based on the high-frequency AMD EPYC 7F32, AMD EPYC 7F52 and AMD EPYC 7F72 processors. TYAN's HPC and storage server platforms continue to offer exceptional performance to datacenter customers.

"Leveraging AMD's innovation in 7 nm process technology, PCIe 4.0 I/O, and an embedded security architecture, TYAN's 2nd Gen AMD EPYC processor-based platforms are designed to address the most demanding challenges facing the datacenter," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. "Adding the new AMD EPYC 7002 Series processors to TYAN server platforms enables us to provide new capabilities to our customers and partners."

x86 Lacks Innovation, Arm is Catching up. Enough to Replace the Giant?

Intel's x86 processor architecture has been the dominant CPU instruction set for decades, ever since IBM decided to put the Intel 8086 microprocessor into its first Personal Computer. Later, in 2006, Apple decided to replace the PowerPC-based processors in its Macintosh computers with Intel chips, too. This was the time when x86 became the only option for the masses to use and develop all their software on. While mobile phones and embedded devices are mostly Arm today, x86 is clearly still the dominant ISA (Instruction Set Architecture) for desktop computers, with both Intel and AMD producing processors for it. Those processors go inside millions of PCs that are used every day. Today I would like to share my thoughts on the demise of the x86 platform and how it might vanish in favor of the RISC-based Arm architecture.

Both AMD and Intel as producers, and millions of companies as consumers, have invested heavily in the x86 architecture, so why would x86 ever go extinct if "it just works"? The answer is that it doesn't just work.

AMD Financial Analyst Day 2020 Live Blog

AMD Financial Analyst Day presents an opportunity for AMD to talk straight with the finance industry about the company's current financial health, and to offer a taste of what's to come. Guidance and product teasers made during this event are usually very accurate due to the nature of the audience. In this live blog, we will post information from Financial Analyst Day 2020 as it unfolds.
20:59 UTC: The event has started as of 1 PM PST. CEO Dr Lisa Su takes the stage.