
AMD 7nm EPYC "Rome" CPUs in Upcoming Finnish Supercomputer, 200,000 Cores Total

crazyeyesreaper

Not a Moderator
Staff member
Joined
Mar 25, 2009
During the next year and a half, the Finnish IT Center for Science (CSC) will be purchasing a new supercomputer in two phases. The first phase consists of Atos' air-cooled BullSequana X400 cluster, which uses Intel's Cascade Lake Xeon processors along with Mellanox HDR InfiniBand for a theoretical peak of 2 petaflops. System memory per node will range from 96 GB up to 1.5 TB, and the entire system will also receive a 4.9 PB Lustre parallel file system from DDN. Furthermore, a separate partition of phase one will be dedicated to AI research and will feature 320 NVLink-connected NVIDIA V100 GPUs configured in 4-GPU nodes, with peak performance expected to reach 2.5 petaflops. Phase one will be brought online at some point in the summer of 2019.

Where things get interesting is in phase two, which is set for completion during the spring of 2020. Atos will be building CSC a liquid-cooled, HDR-connected BullSequana XH2000 supercomputer configured with 200,000 AMD EPYC "Rome" CPU cores, which for the mathematicians out there works out to 3,125 64-core AMD EPYC processors. Of course, all that x86 muscle will require a great deal of system memory; as such, each node will be equipped with 256 GB for good measure. Storage will consist of an 8 PB Lustre parallel file system, again provided by DDN. Overall, phase two will increase computing capacity by 6.4 petaflops (peak). With deals like this already being signed, it would appear AMD's next-generation EPYC processors are shaping up nicely, considering Intel has had this market cornered for nearly a decade.
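The quoted figures are easy to sanity-check. A back-of-the-envelope sketch, assuming one 64-core CPU per 256 GB node (which the article implies but doesn't state outright):

```python
# Sanity check of the phase-two numbers quoted above.
total_cores = 200_000
cores_per_cpu = 64            # AMD EPYC "Rome" top SKU
mem_per_node_gb = 256

cpus = total_cores // cores_per_cpu
print(cpus)                   # 3125, matching the article

# Assuming one CPU per node (an assumption), aggregate RAM works out to:
total_ram_tb = cpus * mem_per_node_gb / 1000
print(total_ram_tb)           # 800.0 TB across the partition
```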



When both phases are complete, the entire system will be capable of 11 petaflops of theoretical performance, an increase of over five times what Finnish scientists currently have available. The system will be used by numerous agencies and universities for studies in fields such as astrophysics, drug development, nanoscience, and AI research. All that said, performance like this doesn't come cheap, with Finland investing €37 million ($41.8 million) in its endeavor to upgrade and update its high-performance computing infrastructure.

View at TechPowerUp Main Site
 
LOL, they'll want to chuck cascade lake into an actual lake once they get those power bills.
 
LOL, they'll want to chuck cascade lake into an actual lake once they get those power bills.
Our country is already called "the country of thousands of lakes", one more won't hurt :laugh:
 
Good times ahead for AMD.
 
So.. are they finnished with intel then?
 
@Axaion
So.. are they finnished with intel then?
On one hand they get "3 legs": (after reading the article) the 1st and 2nd legs are Intel CPUs with some NVIDIA GPUs, and those two make up their 1st phase; so on and so forth, their second phase, the "3rd leg", comprises AMD CPUs, altogether abundantly "sprinkled" with copious amounts of RAM and storage.

The other being: pun intended?
So.. are they finnished with intel then?
LE: spelling
 
excuse my ignorance but what the f-ck does one do with such a machine.. assuming its all being put together for some practical purpose..

trog
 
assuming its all being put together for some practical purpose..
No practical use whatsoever. They did it just to irritate people on tech forums and such...
 
But can it run Crysis ?
 
excuse my ignorance but what the f-ck does one do with such a machine.. assuming its all being put together for some practical purpose..

trog


Extreme dataset computing: for example, modeling the actual physical interactions between a medicine and a cancer cell, trying to find what makes it work, so they can see if another atom in its place might bind better to a protein or activate an enzyme more fully.

It's hugely time-consuming, which is why we want quantum computing to become a reality instead of the pet project it currently is.
 
excuse my ignorance but what the f-ck does one do with such a machine.. assuming its all being put together for some practical purpose..

trog

Research. For example, if you work on a college project, you can ask for some computational power from the supercomputer associated with that college if you need it. It's like being able to "rent" a part of the computer to do whatever you need.
 
excuse my ignorance but what the f-ck does one do with such a machine.. assuming its all being put together for some practical purpose..

trog
To hit 144FPS in minesweeper, I think.
 
excuse my ignorance but what the f-ck does one do with such a machine.. assuming its all being put together for some practical purpose..

trog

It's so obvious. Just to put the specs on their forum account... :rolleyes:
 
It's hugely time-consuming, which is why we want quantum computing to become a reality instead of the pet project it currently is.

Quantum is expensive to run in the first place, and at this scale it isn't even available.
 
LOL, they'll want to chuck cascade lake into an actual lake once they get those power bills.

It's a supercomputer, power bills are the last thing on their mind.
 
It's a supercomputer, power bills are the last thing on their mind.

They most surely are not.

These supercomputers tend to cost as much in power bills over a few years as the cost of building them in the 1st place: it's why server chips have much lower clocks VS regular desktops because it helps tremendously with the power bills.
 
It's a supercomputer, power bills are the last thing on their mind.

Some of these centers use as much power as a small factory. It's not negligible.
 
Some of these centers use as much power as a small factory. It's not negligible.

While not negligible, my point was more that it's about performance over consumption in these applications. Otherwise they wouldn't build them in the first place.

The high consumption was actually part of my point.

it's why server chips have much lower clocks VS regular desktops because it helps tremendously with the power bills.

I thought that had more to do with data integrity, but will concede I could be wrong here.
 
I think performance per watt is a significant consideration, as is cooling the entire configuration. Just think of the difference that even 5 W more TDP per socket would add up to in a 3,125-socket system: that's 15,000+ more watts of cooling required.
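That arithmetic checks out; a trivial sketch of how per-socket TDP differences scale across the whole machine:

```python
# Scaling a small per-socket TDP delta across every socket in the system.
sockets = 3125                # 200,000 cores / 64 cores per CPU
extra_tdp_w = 5               # hypothetical 5 W more per socket

extra_heat_w = sockets * extra_tdp_w
print(extra_heat_w)           # 15625 W of extra heat to cool, i.e. "15,000+"
```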
 
I thought that had more to do with data integrity, but will concede I could be wrong here.

Both, but it really is essential these machines run within a certain power envelope or the cost of ownership will skyrocket. There is a balance to be struck: if you project a supercomputer to run for X years, you might as well just add more racks instead of dealing with all the disadvantages of high clocks/perf per 1U. Extra floor area is extremely cheap anyway, considering where these centers are built.

Another aspect is probably the bottlenecking that occurs; storage and RAM are much more important in this space, so there's no point oversaturating anything. And on top of all that, there are limits to what can be fitted under the IHS before you straight-up burn a hole in your server, plus yield issues. High-clocking many-core chips are progressively harder to make; it's the whole reason EPYC and TR are so amazing: they cut that yield risk by splitting the design across multiple smaller dies.
 