
Intel Equipped to Lead Industry to Era of Exascale Computing

btarunr

Editor & Senior Moderator
Staff member
At the International Supercomputing Conference (ISC), Kirk Skaugen, Intel Corporation vice president and general manager of the Data Center Group, outlined the company's vision to achieve ExaFLOP/s performance by the end of this decade. An ExaFLOP/s is a quintillion (10^18) computer operations per second, hundreds of times more than today's fastest supercomputers deliver.
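
For a quick sense of scale, here is the arithmetic behind "hundreds of times", using Tianhe-1A's published Linpack result of roughly 2.57 PFLOP/s purely as a reference point (a back-of-the-envelope sketch, not an Intel figure):

#include <stdio.h>

int main(void)
{
    double exaflops = 1.0e18;   /* 1 ExaFLOP/s: a quintillion operations per second  */
    double tianhe1a = 2.57e15;  /* ~2.57 PFLOP/s, Tianhe-1A's published Linpack mark */

    printf("1 ExaFLOP/s is roughly %.0fx today's fastest system\n",
           exaflops / tianhe1a);                       /* ~389x */
    return 0;
}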

Reaching exascale levels of performance in the future will not only require the combined efforts of industry and governments, but also approaches being pioneered by the Intel Many Integrated Core (Intel MIC) Architecture, according to Skaugen. Managing the explosive growth in the amount of data shared across the Internet, finding solutions to climate change, managing the growing costs of accessing resources such as oil and gas, and a multitude of other challenges require increased amounts of computing resources that only increasingly high-performing supercomputers can address.



"While Intel Xeon processors are the clear architecture of choice for the current TOP500 list of supercomputers, Intel is further expanding its focus on high-performance computing by enabling the industry for the next frontier with our Many Integrated Core architecture for petascale and future exascale workloads," said Skaugen. "Intel is uniquely equipped with unparalleled manufacturing technologies, new architecture innovations and a familiar software programming environment that will bring us closer to this exciting exascale goal."

Paving the Way to Exaflop Performance
Intel's relentless pursuit of Moore's Law -- doubling the transistor density on microprocessors roughly every 2 years to increase functionality and performance while decreasing costs -- combined with an innovative, highly efficient software programming model and extreme system scalability were noted by Skaugen as key ingredients for crossing the threshold of petascale computing into a new era of exascale computing. With this increase in performance, though, comes a significant increase in power consumption.
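
As a rough illustration of what "doubling roughly every 2 years" compounds to over the rest of the decade (a simplification of Moore's observation, not a product roadmap):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Density factor after N years if density doubles every ~2 years: 2^(N/2). */
    for (int years = 2; years <= 10; years += 2)
        printf("%2d years -> ~%.0fx transistor density\n",
               years, pow(2.0, years / 2.0));
    return 0;
}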

As an example, if today's fastest supercomputer in China, the Tianhe-1A, were scaled to achieve exascale performance, it would require more than 1.6 GW of power - an amount large enough to supply electricity to 2 million homes - presenting a serious energy efficiency challenge.
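
The 1.6 GW figure can be sanity-checked by linearly scaling Tianhe-1A's published numbers, roughly 2.57 PFLOP/s of Linpack performance at about 4.04 MW; linear scaling is a simplifying assumption, but it lands in the same ballpark:

#include <stdio.h>

int main(void)
{
    double target  = 1.0e18;    /* 1 ExaFLOP/s                       */
    double rmax    = 2.57e15;   /* Tianhe-1A Linpack, ~2.57 PFLOP/s  */
    double power_w = 4.04e6;    /* Tianhe-1A power draw, ~4.04 MW    */

    double scaled_w = power_w * (target / rmax);       /* ~1.57e9 W  */
    printf("scaled power: ~%.2f GW\n", scaled_w / 1e9);
    printf("share per home across 2 million homes: ~%.0f W\n",
           scaled_w / 2.0e6);                          /* ~786 W     */
    return 0;
}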

To address this challenge, Intel and European researchers have established three European labs with three main goals: to create a sustained partner presence in Europe; take advantage of the growing relevance of European high-performance computing (HPC) research; and exponentially grow capabilities in computational science, engineering and strategic computing. One of the technical goals of these labs is to create simulation applications that begin to address the energy efficiency challenges of moving to exascale performance.

Skaugen said there is the potential for tremendous growth of the HPC market. While supercomputers from the 1980s delivered GigaFLOP/s (billions of floating point operations per second) performance, today's most powerful machines have increased this value by several million times. This, in turn, has increased the demand for processors used in supercomputing. By 2013 Intel expects the top 100 supercomputers in the world to use one million processors. By 2015 this number is expected to double, and is forecasted to reach 8 million units by the end of the decade. The performance of the TOP500 #1 system is estimated to reach 100 PetaFLOP/s in 2015 and break the barrier of 1 ExaFLOP/s in 2018. By the end of the decade the fastest system on Earth is forecasted to be able to provide performance of more than 4 ExaFLOP/s.
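
For context, simple compound-growth arithmetic on the quoted forecasts gives the following approximate year-on-year rates (nothing beyond the figures above goes into this):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Processors in the top 100 systems: 1M (2013) -> 2M (2015) -> 8M (2020). */
    printf("processors, 2013-2015: ~%.2fx per year\n", pow(2.0 / 1.0, 1.0 / 2.0)); /* ~1.41x */
    printf("processors, 2015-2020: ~%.2fx per year\n", pow(8.0 / 2.0, 1.0 / 5.0)); /* ~1.32x */

    /* #1-system performance: 100 PFLOP/s (2015) -> 1 EFLOP/s (2018) -> 4 EFLOP/s (2020). */
    printf("performance, 2015-2018: ~%.2fx per year\n", pow(10.0, 1.0 / 3.0));     /* ~2.15x */
    printf("performance, 2018-2020: ~%.2fx per year\n", pow(4.0, 1.0 / 2.0));      /*  2.00x */
    return 0;
}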

Intel MIC Architecture Software Development Momentum
The Intel MIC architecture is a key addition to the company's existing products, including Intel Xeon processors, and is expected to help lead the industry into the era of exascale computing. The first Intel MIC product, codenamed "Knights Corner," is planned for production on Intel's 22-nanometer process technology, which features innovative 3-D Tri-Gate transistors. Intel is currently shipping Intel MIC software development platforms, codenamed "Knights Ferry," to select development partners.

At ISC, Intel and some of its partners including Forschungszentrum Juelich, Leibniz Supercomputing Centre (LRZ), CERN and Korea Institute of Science and Technology Information (KISTI) showed early results of their work with the "Knights Ferry" platform. The demonstrations showed how Intel MIC architecture delivers both performance and software programmability advantages.

"The programming model advantage of Intel MIC architecture enabled us to quickly scale our applications running on Intel Xeon processors to the Knights Ferry Software Development Platform," said Prof. Arndt Bode of the Leibniz Supercomputing Centre. "This workload was originally developed and optimized for Intel Xeon processors but due to the familiarity of the programming model we could optimize the code for the Intel MIC architecture within hours and also achieved over 650 GFLOPS of performance."

Intel also showed server and workstation platforms from SGI, Dell, HP, IBM, Colfax and Supermicro, all of which are working with Intel to plan products based on "Knights Corner." "SGI recognizes the significance of inter-processor communications, power, density and usability when architecting for exascale," said SGI CTO Dr. Eng Lim Goh. "The Intel MIC products will satisfy all four of these priorities, especially with their anticipated increase in compute density coupled with familiar X86 programming environment."

TOP500 Supercomputers
The 37th edition of the TOP500 list, announced at ISC, shows that Intel continues to be a force in high-performance computing, with 387 systems, or more than 77 percent, powered by Intel processors. Of all new entries to the list in 2011, Intel-powered systems accounted for close to 88 percent. More than half of these new additions are based on the latest 32 nm Intel Xeon 5600 series processors, which alone now power more than 35 percent of all systems on the TOP500 list, three times as many as a year ago.

The semi-annual TOP500 list of supercomputers is the work of Hans Meuer of the University of Mannheim, Erich Strohmaier and Horst Simon of the U.S. Department of Energy's National Energy Research Scientific Computing Center (NERSC), and Jack Dongarra of the University of Tennessee.

View at TechPowerUp Main Site
 
I am thinking of what science could use this exascale computing :wtf:
 
Is that a radeon card with an intel sticker :laugh:
 
I am thinking of what science could use this exascale computing :wtf:

I read a depressing news article about the state of the oceans (and the horrific effect it'll have on the entire planet). http://www.bbc.co.uk/news/science-environment-13796479

My point is that science would gain greatly from such computational power, but unless we as a race can learn to mitigate our effect on our planet (CO2 emissions driving ocean acidification, overfishing, toxic pollution, etc.), our scientific achievements will be a legacy read by nobody.
 
I got excited to see somebody else getting into discrete graphics, but nope :ohwell:
 
by the end of this decade

Good thing for them is the fact that this decade just barely started...
 
Can't wait 2 be playing 4-D solitaire on one of those things
 
Is this Larabee's offspring? I see 32 cores in there with what seems like shared L2 cache in between some of them
 
That die shot looks like a 32-core, if I'm not mistaken?

Also wondering if that PCI-E engineering sample is real or just a mock-up, that's frikken awesome.
 
so it's many P4 cpus in a gpu package which cant play games gpus normally can?

if anybody can convince people to buy this, it's INTEL.
 
Is this Larabee's offspring?
I believe it is. When Intel scrapped it as a consumer dedicated-graphics competitor, they said they would still be using the architecture they researched for Larrabee, and this is what's resulted.

Would have preferred that research to have become a consumer product; imagine the bitcoins a Larrabee platform could have mined! :roll: :cry:
 
I wonder if this will be able to compete with AMD's "Graphics Core Next" GPU which is supposed to be heavily developed with compute in mind.
 
Is this Larabee's offspring? I see 32 cores in there with what seems like shared L2 cache in between some of them

I do believe so. Intel said it was not "killing" Larrabee; I guess this is what they turned it into.
 
Back in September:

While not many specifics are known about the Knights Corner chip, the Knights Ferry servers used to power the Wolfenstein tech demo had chips with 32 x86 cores clocked at 1.2GHz, capable of processing four threads per core – allowing it to handle 128 threads. Four of these were used in the Wolfenstein demo.

http://www.youtube.com/watch?v=XVZDH15TRro

Newer units will have more cores (>50), possibly at a higher clockspeed, doubling the performance of the units used in the Wolf demo.
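
A quick check of the thread math in those figures (the core and thread counts are the ones quoted above; the rest is plain arithmetic):

#include <stdio.h>

int main(void)
{
    int cores = 32, threads_per_core = 4, cards = 4;   /* Knights Ferry figures quoted above */

    int per_chip = cores * threads_per_core;           /* 32 * 4 = 128 threads */
    printf("threads per chip: %d\n", per_chip);
    printf("threads across the %d-card Wolfenstein demo: %d\n",
           cards, cards * per_chip);                   /* 512 threads */

    /* Knights Corner is only quoted as ">50 cores"; at the same 4 threads/core
     * that would mean at least 200 hardware threads per chip. */
    printf("threads at 50+ cores: >= %d\n", 50 * threads_per_core);
    return 0;
}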
 
Intel is supposed to be setting the pace, not trying to catch up! I thought it was the Larrabee discrete graphics card/HPC card... it would have been nice. I was like, suck on that AMD & Nvidia, but oh noooo.

Will we be able to afford this MIC thing?
Will it game? (When I say game, I mean will it be able to crash Nvidia & AMD?)
If not, what the hell...


We want a discrete card from Intel.
Where is Larrabee?

They should try and absorb Nvidia!
 
so it's many P4 cpus in a gpu package which cant play games gpus normally can?

if anybody can convince people to buy this, it's INTEL.

Sigh... :mad: FYI, some people actually WORK for a living and have the need for monstrous computing power (way beyond what you can imagine). There's actually serious work that could be completed by using this equipment. Not everything in this damn world is fun and games. :shadedshu
 
how would this compare to nvidia tesla?
 
I am thinking of what science could use this exascale computing :wtf:
3D virtual porn. :roll:


I do believe so. Intel said it was not "killing" Larrabee; I guess this is what they turned it into.
Larrabee was supposed to be a graphics card first and a high-performance computing project second. Larrabee the graphics card was killed because Intel basically wanted to make an OGL and DX library for x86 and the card simply couldn't be competitive with AMD/NVIDIA (Larrabee's flexibility meant it wouldn't be as efficient as a graphics card). This is the HPC version. I doubt we'll ever see one of these HPC cards with a DVI port on it so, in that regard, Larrabee the graphics card is done for.


how would this compare to nvidia tesla?
CUDA vs x86. x86 = much better and virtually anyone can program for it without having to learn much new. x86 is also far more powerful in that it can handle logic processing much better than CUDA.
 
3D virtual porn. :roll:



Larrabee was supposed to be a graphics card first and a high-performance computing project second. Larrabee the graphics card was killed because Intel basically wanted to make an OGL and DX library for x86 and the card simply couldn't be competitive with AMD/NVIDIA (Larrabee's flexibility meant it wouldn't be as efficient as a graphics card). This is the HPC version. I doubt we'll ever see one of these HPC cards with a DVI port on it so, in that regard, Larrabee the graphics card is done for.



CUDA vs x86. x86 = much better and virtually anyone can program for it without having to learn much new. x86 is also far more powerful in that it can handle logic processing much better than CUDA.

So that would mean... that this could be a threat to AMD and Nvidia? It only makes sense for Intel to spit out something like a GPU. They need to get both CPU and GPU down if they want to keep up with what AMD is doing and planning for the next 5 years or so.
 
NVIDIA, yes; AMD, not so much. NVIDIA has invested a lot in CUDA, but AMD has really only invested enough in Streams to make it work. This could literally kill the viability of CUDA and Streams, not only because it offers more performance but also because it is substantially easier to code for.

Intel isn't concerned much about GPUs because they have the CPU market cornered and, as long as they have the CPU market, they'll also have the IGP market too. Most people aren't going to buy an AMD processor because it has a better IGP.

AMD and NVIDIA won't see any competition in the foreseeable future from Intel in the discrete GPU market.
 
Larrabee was supposed to be a graphics card first and a high-performance computing project second. Larrabee the graphics card was killed because Intel basically wanted to make an OGL and DX library for x86 and the card simply couldn't be competitive with AMD/NVIDIA (Larrabee's flexibility meant it wouldn't be as efficient as a graphics card). This is the HPC version. I doubt we'll ever see one of these HPC cards with a DVI port on it so, in that regard, Larrabee the graphics card is done for.

You're just stating the obvious though, which is mostly all in the OP. The OP is all about HPC, and not much else.
 