
Next-generation Intel Xeon Scalable Processors to Deliver Breakthrough Platform Performance with up to 56 Processor Cores

Joined
Oct 27, 2009
Messages
1,133 (0.21/day)
Location
Republic of Texas
System Name [H]arbringer
Processor 4x 61XX ES @3.5Ghz (48cores)
Motherboard SM GL
Cooling 3x xspc rx360, rx240, 4x DT G34 snipers, D5 pump.
Memory 16x gskill DDR3 1600 cas6 2gb
Video Card(s) blah bigadv folder no gfx needed
Storage 32GB Sammy SSD
Display(s) headless
Case Xigmatek Elysium (whats left of it)
Audio Device(s) yawn
Power Supply Antec 1200w HCP
Software Ubuntu 10.10
Benchmark Scores http://valid.canardpc.com/show_oc.php?id=1780855 http://www.hwbot.org/submission/2158678 http://ww
No, it is 128 PER CPU. AMD confirms it on their website. You have a total of 256 with 2 EPYC CPUs. Though, finding a way to USE all 256, that'll require lots of hardware (but I'm sure some server users will find a way to use that many). Plus, for second gen, it's 128 PCIe 4.0 lanes per CPU. That's yummy. Also, Intel's 56-core CPU is soldered, meaning you have to buy the motherboard; you can't swap the CPU in case something happens. Even Intel is making custom cooling solutions for it, depending on the U size of the server chassis, whereas EPYC can be used in many more places.

No just no.... read before writing and learn.
A couple of notes:
Rome boards are designed with the expectation of 250 W per socket, whether for Milan or for turbo headroom; reviews will tell.
There are 128 lanes of PCIe 4.0 per CPU. When configured in dual-CPU mode, half the lanes are repurposed as XGMI links: x16 links running a more efficient protocol that gives lower latency and higher bandwidth.

Server makers can opt to use 3 instead of 4 XGMI links, freeing a possible extra 32 lanes, but that sacrifices inter-socket bandwidth while increasing the need for it. I think it's a bad play, as 128 PCIe 4.0 lanes is a shitton of bandwidth... (See the lane math sketched below.)
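To make the lane math concrete, here's a rough sketch using the figures from this thread (128 lanes per CPU, x16 per XGMI link); it's an illustration of the arithmetic, not something pulled from AMD docs:

```python
# Rough lane math for a 2P EPYC system, using the figures from this thread.
# Assumption: 128 SerDes lanes per CPU, and each XGMI link consumes an x16
# lane group on each of the two sockets.

LANES_PER_CPU = 128
XGMI_LINK_WIDTH = 16  # each XGMI link is an x16 link

def usable_pcie_lanes_2p(xgmi_links: int) -> int:
    """Usable PCIe lanes in a dual-socket system for a given XGMI link count."""
    raw = 2 * LANES_PER_CPU                        # 256 raw lanes total
    return raw - 2 * xgmi_links * XGMI_LINK_WIDTH  # each link eats x16 on both sockets

for links in (4, 3, 2):
    print(f"{links} XGMI links -> {usable_pcie_lanes_2p(links)} usable PCIe lanes")
# 4 links -> 128 lanes (the Naples-style default)
# 3 links -> 160 lanes (trades inter-socket bandwidth for I/O)
# 2 links -> 192 lanes (claimed possible, not seen in the wild)
```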

The Intel 9200 is BGA, and boards and chips have to be bought from Intel; it's a $200k sort of play without RAM... and almost no one is buying first gen. It draws too much power, and there is no differentiation to be had between vendors... it's just not a good thing. Intel has sort of listened and made a gen 2, with Cooper Lake being socketed and upgradable to Ice Lake.

Comparing the 9200 and Rome is not useful, as the 9200 is not really in the market. Intel having 96 PCIe 3.0 lanes vs. 128-160 PCIe 4.0 lanes is just an insane bandwidth difference. As far as server config is concerned, I expect many single-proc Rome servers, and most dual-proc systems to be configured with 3 XGMI links.

Intel will most likely retain a single-threaded performance advantage in the server realm, but will be dominated in anything that can use the insane number of threads AMD is offering.

As far as what Keller is working on... he is VP of SoC and is working on die stacking and other vertical, highly integrated density gains...
He's claiming 50x density improvements over 10 nm, and it is "virtually working already".
Amendments on power are coming, along with more detailed reviews of power usage.
225 W is the official top SKU; I see Gigabyte allowing cTDP up to 240 W.

What we do know is that dual 64c uses less power than dual 28c by a healthy margin, and one 64c is about all it takes to match or better dual 28c.

The 2020 "competition" is a socketed version of the 9200, so the BGA will no longer be an issue; power probably still will be, or it won't be very competitive.
Currently, on an AMD-unoptimized path (not even using AVX2, which Rome supports) versus AVX-512 on Intel, a dual 8280 (2x $10k chips) will match a 2x $7k Rome setup; give Rome AVX2 and that will never happen.
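For a sense of why the vector code path matters that much, here's a back-of-the-envelope sketch; the 2-FMA-units-per-core figure is my assumption, and sustained numbers in real code will be lower:

```python
# Peak FP64 throughput per core per cycle, to show why the vector code path
# matters as much as the core count. Assumes 2 FMA units per core (my
# assumption) and counts an FMA as 2 FLOPs.

def peak_fp64_per_cycle(vector_bits: int, fma_units: int = 2) -> int:
    doubles_per_vector = vector_bits // 64
    return doubles_per_vector * 2 * fma_units  # x2 because FMA = mul + add

print("128-bit (the unoptimized path):", peak_fp64_per_cycle(128))  # 8
print("256-bit AVX2 (Rome supports):  ", peak_fp64_per_cycle(256))  # 16
print("512-bit AVX-512 (Intel only):  ", peak_fp64_per_cycle(512))  # 32
```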


56 cores for $10,000... 64 cores for $7,000? Yeah, no-brainer.

No no no, tech: it's $10k for 28c... these 56c chips are $20-40k each, and you have to have 2 of them soldered down on an Intel board...
Intel is going to have to offer 80%+ discounts to sell chips.
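Putting rough numbers on that; the prices here are the figures thrown around in this thread, and the 56c price is a midpoint of the $20-40k guess, not an official list price:

```python
# Quick $/core sanity check on the list prices from this thread.

chips = {
    "Intel 28c (~$10k)":          (28, 10_000),
    "Intel 56c ($20-40k, ~$30k)": (56, 30_000),
    "AMD Rome 64c (~$7k)":        (64, 7_000),
}

for name, (cores, price) in chips.items():
    print(f"{name}: ${price / cores:,.0f} per core")
# ~$357, ~$536, and ~$109 per core respectively: Intel's 56c part needs
# roughly an 80% cut to land near Rome's $/core.
```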
 
Joined
Feb 26, 2016
Messages
546 (0.18/day)
Location
Texas
System Name O-Clock
Processor Intel Core i9-9900K @ 52x/49x 8c8t
Motherboard ASUS Maximus XI Gene
Cooling Corsair H170i Elite Cappelix w/ NF-A14 iPPC IP67 fans
Memory 2x16GB G.Skill TridentZ @3900 MHz CL16
Video Card(s) EVGA RTX 2080 Ti XC Black
Storage Samsung 983 ZET 960GB, 2x WD SN850X 4TB
Display(s) Asus VG259QM
Case Corsair 900D
Audio Device(s) beyerdynamic DT 990 600Ω, Asus SupremeFX Hi-Fi 5.25", Elgato Wave 1
Power Supply EVGA 1600 T2 w/ NF-A14 iPPC IP67 fan
Mouse Logitech G403 Wireless (PMW3366)
Keyboard Logitech G910 Stickerbombed
Software Windows 10 Pro 64 bit
Benchmark Scores https://hwbot.org/search/submissions/permalink?userId=92615&cpuId=5773
No just no.... read before writing and learn.

Yep, you're right, I need to learn...

No, you don't. With the same configuration as Naples you get 128 lanes for 2 CPUs. This is the default configuration. With additional configuration allowed for system builders you can have 128-160 lanes for 2 CPUs.
sure.

Y'all are acting like you know everything, so tell me why this particular CPU says it supports 128 lanes? https://www.amd.com/en/products/cpu/amd-epyc-7551p

Don't even tell me "dual socket"; it's a P CPU. Clearly my glasses are working.
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
Y'all are acting like you know everything, so tell me why this particular CPU says it supports 128 lanes? https://www.amd.com/en/products/cpu/amd-epyc-7551p
Don't even tell me "dual socket"; it's a P CPU. Clearly my glasses are working.
Yes, single-socket EPYC has 128 lanes.
The context of my comment was dual socket, and it is not as simple as dual socket having 2x 128 lanes. Dual-socket Naples has 128 lanes. Dual-socket Rome can have more: 128 is the default, but OEMs can configure it to 160-192 lanes.
 
FFS, no one ever said 1 CPU didn't have 128 PCIe lanes... just that a 2P system doesn't have 256... Now go back and read why that is so, rather than shouting off Mt. Stupid.
 
FFS, no one ever said 1 CPU didn't have 128 PCIe lanes... just that a 2P system doesn't have 256... Now go back and read why that is so, rather than shouting off Mt. Stupid.
A 2P system has 256 PCIe lanes from the CPUs. I never said 256 usable PCIe lanes.
 
Joined
Oct 30, 2008
Messages
1,901 (0.34/day)
Processor 5930K
Motherboard MSI X99 SLI
Cooling WATER
Memory 16GB DDR4 2132
Video Card(s) EVGAY 2070 SUPER
Storage SEVERAL SSD'S
Display(s) Catleap/Yamakasi 2560X1440
Case D Frame MINI drilled out
Audio Device(s) onboard
Power Supply Corsair TX750
Mouse DEATH ADDER
Keyboard Razer Black Widow Tournament
Software W10HB
Benchmark Scores PhIlLyChEeSeStEaK
And not one person that posted will ever even see this CPU, SMMH! OK, back to why AMD CPUs won't overclock...................:clap:
 
A 2P system has 256 PCIe lanes from the CPUs. I never said 256 usable PCIe lanes.
No, your earlier posts, which have already been quoted, make clear you did not understand the architecture one bit, so don't bother trying to change them. Here, I will quote again for you.
Berfs1 said:
No, it is 128 PER CPU. AMD confirms it on their website. You have a total of 256 with 2 EPYC CPUs. Though, finding a way to USE all 256, that'll require lots of hardware (but I'm sure some server users will find a way to use that many). Plus, for second gen, it's 128 PCIe 4.0 lanes per CPU. That's yummy.

See, you thought you could use all 256 lanes, and you were quite emphatic about 1+1=2 for a very annoyingly long time, instead of reading the replies explaining that there are no dedicated interconnects and that it is an amazingly scalable, programmable fabric. Half the lanes of Naples point at each other in 4 XGMI links, and with the doubling of per-link bandwidth in Rome, you can free 1 XGMI link for 32 more lanes (one person said 2, but I have not seen that in the wild, nor do I think it's a good idea). So: 128 lanes for 2P with 4 XGMI links, and 160 for 2P with 3 XGMI links.

You had correct information that 1 CPU has 128 lanes, and no one disagreed with it. But you kept pointing at it when people told you that a 2P system didn't have 256 usable. I am glad we are on the same page now, though.

And not one person that posted will ever even see this CPU, SMMH! OK, back to why AMD CPUs won't overclock...................:clap:
I actually have a 7601 that I am playing with, and I will get Rome, though probably not the 64c, as I don't need that many threads; I need clocks.
Fun fact: all EPYCs are unlocked. :) ...though the MSRs for Rome have not been discovered yet, nor has an updated BKDG (BIOS and Kernel Developer's Guide).
Also, an unfun note... many gen-1 EPYC boards don't support Rome due to the same fucking cheapness issue as desktop: too small a soldered ROM chip.
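For anyone curious how those MSRs get poked once they are documented, here's a minimal Linux sketch; the register address below is a placeholder, since, as noted above, the Rome-specific MSRs aren't public yet:

```python
# Minimal sketch of reading an MSR on Linux, which is how these knobs get
# explored once documentation lands. Needs root and the 'msr' kernel module
# (modprobe msr). The register address used below is a PLACEHOLDER.

import os
import struct

def rdmsr(cpu: int, reg: int) -> int:
    """Read a 64-bit model-specific register on one logical CPU."""
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
    try:
        return struct.unpack("<Q", os.pread(fd, 8, reg))[0]  # 8 bytes at offset=reg
    finally:
        os.close(fd)

if __name__ == "__main__":
    print(hex(rdmsr(0, 0xC0010000)))  # placeholder address, illustration only
```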
 