
MSI Drops First Hint of AMD Increasing AM4 CPU Core Counts

Joined
Feb 18, 2005
Messages
5,239 (0.75/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
I reckon we'll see at most 12c/24t parts on AM4, as anything higher would cannibalise Threadripper. Plus it would make the chips even more expensive to produce.
 
Joined
Mar 21, 2016
Messages
2,190 (0.75/day)
The IPC thing with Intel has always been talked about, but never actually proven. They only gained performance from ramping up clocks; just look at the Core i7-6700 and i7-7700. All the performance difference came from higher clocks, not IPC.
Memory speed bumps must have made an impact too. That's why most enthusiasts run faster memory than the officially supported speeds.
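The official-vs-enthusiast speed gap is easy to put numbers on: peak DDR4 bandwidth is just transfer rate times bus width times channel count. A quick sketch (the module speeds are illustrative examples, not tied to any specific platform in this thread):

```python
def ddr4_bandwidth_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Peak theoretical bandwidth in GB/s.

    mt_per_s:  megatransfers per second (the number in "DDR4-3200")
    bus_bytes: a 64-bit channel moves 8 bytes per transfer
    """
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

# JEDEC DDR4-2133 vs. a common enthusiast DDR4-3200 kit, dual channel:
print(ddr4_bandwidth_gbs(2133))  # ~34.1 GB/s
print(ddr4_bandwidth_gbs(3200))  # ~51.2 GB/s
```

Real sustained bandwidth is lower, but the ratio between speed grades holds, which is why the faster kits show up in benchmarks.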
 
Joined
Aug 6, 2017
Messages
7,412 (3.07/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
AMD delivered a bigger IPC increase between 1st and 2nd gen Ryzen than Intel did across its past three generations, despite Zen and Zen+ being physically the same chip. I'm hopeful.
I think it's more a result of reviewers using faster RAM in 2018 Ryzen reviews than they did back in the 2017 Ryzen 1 reviews. That's a huge improvement for Ryzen 2 over Ryzen 1 and also brings Ryzen closer to Intel's performance. Intel CPUs sit on a ring bus, so there's little latency; AMD uses CCXs. That's why using 3200 CL14 memory, like TPU did in the 2700X vs. 8700 test, usually means a slightly bigger performance gain for AMD than for Intel. When you test both on budget 2400/2666 CL16 sticks, the gap usually grows the other way, favoring Intel.
 
Joined
Aug 13, 2010
Messages
5,369 (1.08/day)
A good improvement I can see in such a move would be a 6C/12T APU with Navi in it. That could be one hell of a 7nm powerhouse.
 
Joined
May 24, 2007
Messages
1,101 (0.18/day)
Location
Florida
System Name Blackwidow/
Processor Ryzen 5950x / Threadripper 3960x
Motherboard Asus x570 Crosshair viii impact/ Asus Zenith ii Extreme
Cooling Ek 240Aio/Custom watercooling
Memory 32gb ddr4 3600MHZ Crucial Ballistix / 32gb ddr4 3600MHZ G.Skill TridentZ Royal
Video Card(s) MSI RX 6900xt/ XFX 6800xt
Storage WD SN850 1TB boot / Samsung 970 evo+ 1tb boot, 6tb WD SN750
Display(s) Sony A80J / Dual LG 27gl850
Case Cooler Master NR200P/ 011 Dynamic XL
Audio Device(s) On board/ Soundblaster ZXR
Power Supply Corsair SF750w/ Seasonic Prime Titanium 1000w
Mouse Razer Viper Ultimate wireless/ Logitech G Pro X Superlight
Keyboard Logitech G915 TKL/ Logitech G915 Wireless
Software Win 10 Pro
A good improvement I can see in such a move would be a 6C/12T APU with Navi in it. That could be one hell of a 7nm powerhouse.
Think about the notebook/mobile segment with such a product, or a console.
 
Joined
Oct 2, 2015
Messages
2,984 (0.97/day)
Location
Argentina
System Name Ciel
Processor AMD Ryzen R5 5600X
Motherboard Asus Tuf Gaming B550 Plus
Cooling ID-Cooling 224-XT Basic
Memory 2x 16GB Kingston Fury 3600MHz@3933MHz
Video Card(s) Gainward Ghost 3060 Ti 8GB + Sapphire Pulse RX 6600 8GB
Storage NVMe Kingston KC3000 2TB + NVMe Toshiba KBG40ZNT256G + HDD WD 4TB
Display(s) Gigabyte G27Q + AOC 19'
Case Cougar MX410 Mesh-G
Audio Device(s) Kingston HyperX Cloud Stinger Core 7.1 Wireless PC
Power Supply Aerocool KCAS-500W
Mouse Logitech G203
Keyboard VSG Alnilam
Software Windows 11 x64
They need to design it so the CPU always works preferentially within a single CCX as much as possible (if they aren't already doing that), to avoid communication between separate CCXs, which is slower than communication within the same CCX.
I think that's the OS's fault.
 
Joined
Aug 6, 2017
Messages
7,412 (3.07/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
Think about the notebook/mobile segment with such a product, or a console.
A Ryzen APU with Navi would basically be an Xbox inside a PC.

I think that's the OS's fault.
Win 10 was never designed to work with CCX CPUs in the first place. AMD usually comes up with stuff that provides more raw performance: their GPUs have more SPs and TFLOPS, their CPUs have more cores. That performance often gets lost in many tasks, though, since for it to pay off you need compatible software. Not the fault of the OS, not the fault of AMD; it just requires adoption time because it's very different.
 
Joined
May 2, 2017
Messages
7,762 (3.09/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Joined
Oct 2, 2004
Messages
13,791 (1.94/day)
Memory speed bumps as well had to have made a impact too. That's why most enthusiasts aren't using the speeds they officially support.

Isn't that always the case? On X58 it was 1333 MHz if I remember correctly, and I was running 1600 MHz RAM. On X99 it's 2133 MHz, later bumped to 2400 MHz IIRC, and I'm running 2666 MHz RAM. We usually run faster memory than specified.
 
Joined
Aug 16, 2016
Messages
1,025 (0.37/day)
Location
Croatistan
System Name 1.21 gigawatts!
Processor Intel Core i7 6700K
Motherboard MSI Z170A Krait Gaming 3X
Cooling Be Quiet! Shadow Rock Slim with Arctic MX-4
Memory 16GB G.Skill Ripjaws V DDR4 3000 MHz
Video Card(s) Palit GTX 1080 Game Rock
Storage Mushkin Triactor 240GB + Toshiba X300 4TB + Team L3 EVO 480GB
Display(s) Philips 237E7QDSB/00 23" FHD AH-IPS
Case Aerocool Aero-1000 white + 4 Arctic F12 PWM Rev.2 fans
Audio Device(s) Onboard Audio Boost 3 with Nahimic Audio Enhancer
Power Supply FSP Hydro G 650W
Mouse Cougar 700M eSports white
Keyboard E-Blue Cobra II
Software Windows 8.1 Pro x64
Benchmark Scores Cinebench R15: 948 (stock) / 1044 (4,7 GHz) FarCry 5 1080p Ultra: min 100, avg 116, max 133 FPS
A better solution would be to improve IPC and slightly increase clocks (e.g. an 8C/16T Ryzen at 3.8 GHz base / 4.5 GHz turbo). Intel has been gaining performance almost exclusively from higher clocks ever since Skylake, with practically no IPC improvement.
 
Joined
May 2, 2017
Messages
7,762 (3.09/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
A better solution would be to improve IPC and slightly increase clocks (e.g. an 8C/16T Ryzen at 3.8 GHz base / 4.5 GHz turbo). Intel has been gaining performance almost exclusively from higher clocks ever since Skylake, with practically no IPC improvement.
Yep. Luckily, everything AMD has said since the launch of Ryzen points towards there being noticeable IPC improvements (the "low hanging fruit" quote in particular) coming in short order, and the move away from low-power processes (GF 14nm) to high-speed ones (12nm to a certain degree, 7nm significantly more) more suited for desktop/high-performance parts should help boost clocks even beyond the 1st-to-2nd gen increase.

While I wouldn't mind pushing the maximum amount of cores on the mainstream platform even further (the option for a 12-core doesn't hurt anyone), the gains are mostly fictional at this point. My GF's TR 1920X workstation crushes my R5 1600X gaming build in Adobe Premiere, but mine is just as fast (or faster) in everyday tasks and gaming. Software (and games in particular) really needs to branch out and utilize more cores (and more CPU resources in general - games barely require more CPU power now than 10 years ago, while GPU utilization has skyrocketed), and increasing core counts on CPUs doesn't really get you anything if that increase in utilization doesn't arrive early in the 3-4-year lifespan of the average enthusiast CPU. der8auer made a good point about this in a recent video - game developers need to start looking into what they can do with the current crop of really, really powerful CPUs.
 
Joined
Jan 8, 2017
Messages
8,810 (3.35/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
They need to design it so the CPU always works preferentially within a single CCX as much as possible (if they aren't already doing that).

There is no point in doing that; Zen isn't a heterogeneous architecture. Better/faster cache will sort this out: a CPU never talks directly to system memory, but goes through each cache level, and only if the instruction/data isn't found there does it access main memory.

Ryzen 2 has lower cache latency, and as a result memory I/O is improved across the board.
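The cache walk described above has a standard back-of-the-envelope model: average memory access time (AMAT), where each level's latency is paid by the fraction of accesses that reach it. A sketch with made-up hit rates and latencies (illustrative numbers, not measured Zen figures) shows why shaving L3 latency helps even though DRAM itself is unchanged:

```python
def amat(levels, dram_ns):
    """Average memory access time for a cache hierarchy.

    levels: [(hit_rate, latency_ns), ...] ordered L1 -> L3.
    Every access that reaches a level pays its latency; misses fall
    through to the next level, and finally to DRAM.
    """
    total, reach = 0.0, 1.0
    for hit_rate, latency in levels:
        total += reach * latency   # fraction of accesses paying this level
        reach *= 1.0 - hit_rate    # the rest miss down to the next level
    return total + reach * dram_ns

# Lowering L3 latency from 14 ns to 10 ns with everything else fixed:
baseline = amat([(0.95, 1.0), (0.80, 4.0), (0.70, 14.0)], 80.0)  # ~1.58 ns
improved = amat([(0.95, 1.0), (0.80, 4.0), (0.70, 10.0)], 80.0)  # ~1.54 ns
```

Small as the delta looks, it applies to every memory access a core makes, which is how across-the-board cache latency cuts show up as general I/O gains.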
 
Joined
Oct 2, 2004
Messages
13,791 (1.94/day)
There is no point in doing that; Zen isn't a heterogeneous architecture. Better/faster cache will sort this out: a CPU never talks directly to system memory, but goes through each cache level, and only if the instruction/data isn't found there does it access main memory.

Ryzen 2 has lower cache latency, and as a result memory I/O is improved across the board.

I wasn't talking about system memory. I was talking about preferential communication within a single CCX whenever possible, so that apps/games don't use two cores from one CCX and two from another. It's best if they use all the cores from the same CCX and only go into another once all of that CCX's cores are in use (currently a CCX holds 4 cores).
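On Linux you can already approximate this by hand with CPU affinity, pinning a latency-sensitive process to one CCX's cores so the scheduler can't spread it across complexes. A minimal sketch, assuming (hypothetically) that logical CPUs 0-3 belong to CCX 0; real numbering depends on SMT and topology, so check `lscpu` or `lstopo` first:

```python
import os

def pin_to_ccx(ccx_index: int, cores_per_ccx: int = 4) -> set:
    """Restrict the current process to one CCX's cores (Linux only).

    Assumes CCX N owns logical CPUs [N*cores_per_ccx, (N+1)*cores_per_ccx);
    that mapping is an assumption, not something the OS guarantees.
    """
    first = ccx_index * cores_per_ccx
    wanted = set(range(first, first + cores_per_ccx))
    available = os.sched_getaffinity(0)
    # Intersect with what actually exists so this also runs on small machines.
    mask = (wanted & available) or {min(available)}
    os.sched_setaffinity(0, mask)
    return os.sched_getaffinity(0)

print(sorted(pin_to_ccx(0)))  # e.g. [0, 1, 2, 3] on an 8-core part
```

The same effect is available without code via `taskset -c 0-3 ./game`; the point of OS/scheduler awareness is doing this automatically.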
 
Joined
Aug 6, 2017
Messages
7,412 (3.07/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
The fact that it's homogeneous doesn't mean that performing tasks within one CCX isn't better.
 
Joined
Jan 8, 2017
Messages
8,810 (3.35/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
I wasn't talking about system memory. I was talking about preferential communication within a single CCX whenever possible, so that apps/games don't use two cores from one CCX and two from another. It's best if they use all the cores from the same CCX and only go into another once all of that CCX's cores are in use (currently a CCX holds 4 cores).

What you are talking about has everything to do with cache and general memory I/O performance; that's why I mentioned it. Faster connections between the distinct L3 cache regions, and not using them as victim caches, would fix that deficiency. It would also be a much simpler solution than complex scheduling, which might require complicated hardware blocks that occupy die space that could otherwise be used for something else.
 
Deleted member 178884 (Guest)
10 cores minimum, for certain; that would probably be the 2800X. They will drop it upon the Coffee Lake refresh release.
 
Joined
Oct 22, 2014
Messages
13,210 (3.84/day)
Location
Sunshine Coast
System Name Black Box
Processor Intel Xeon E3-1260L v5
Motherboard MSI E3 KRAIT Gaming v5
Cooling Tt tower + 120mm Tt fan
Memory G.Skill 16GB 3600 C18
Video Card(s) Asus GTX 970 Mini
Storage Kingston A2000 512Gb NVME
Display(s) AOC 24" Freesync 1m.s. 75Hz
Case Corsair 450D High Air Flow.
Audio Device(s) No need.
Power Supply FSP Aurum 650W
Mouse Yes
Keyboard Of course
Software W10 Pro 64 bit
A better solution would be to improve IPC and slightly increase clocks (e.g. an 8C/16T Ryzen at 3.8 GHz base / 4.5 GHz turbo). Intel has been gaining performance almost exclusively from higher clocks ever since Skylake, with practically no IPC improvement.
Intel WAS gaining performance … it seems the mitigations that are now required have pared a lot of that back.
Perhaps Intel should have put in the hard yards and done real work to improve their IPC, not used underhanded tactics to make their product APPEAR faster.
 
Joined
Dec 14, 2013
Messages
2,598 (0.69/day)
Location
Alabama
Processor Ryzen 2700X
Motherboard X470 Tachi Ultimate
Cooling Scythe Big Shuriken 3
Memory C.R.S.
Video Card(s) Radeon VII
Software Win 7
Benchmark Scores Never high enough
Why is no one talking about how incredibly cringey the video is?!

:twitch:

Cheesy video about a board made of cheap-n-cheesy components.....
Yeah, not surprised here. :ohwell:
Next time I need a new MSI board I'll grab a jar of whiz cheese and dump it into the case.

I know some love MSI and that's fine; even Asus has their fair share of crap at times, and admittedly they too have been slipping as of late.
I've still had a MUCH better experience with Asus boards than with anything I've ever had from MSI, in both what they could do and how long they lasted.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.25/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Intel WAS gaining performance … it seems the mitigations that are now required have pared a lot of that back.
Perhaps Intel should have put in the hard yards and done real work to improve their IPC, not used underhanded tactics to make their product APPEAR faster.

Optimizing an architecture is nothing more than "underhanded" tricks to make the product faster. That is what branch prediction was: a great way to optimize architectures. That's why pretty much every processor maker uses it in one form or another.

The reason Intel was hit so badly by the security issues is that they relied on it the most, and that's because they have had the most time to optimize a single architecture. Because let's face it, Intel has been doing nothing but optimizing the same architecture since Sandy Bridge (arguably Nehalem).
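To make "branch prediction" concrete: most predictors boil down to small saturating counters that learn a branch's recent behaviour. A toy simulation of the classic 2-bit scheme (a sketch of the idea, not any shipping design) shows why loop branches are nearly free while unpredictable ones are not:

```python
def two_bit_predictor(outcomes):
    """Accuracy of a single 2-bit saturating counter on a branch history.

    States 0-1 predict not-taken, 2-3 predict taken; each actual outcome
    nudges the counter one step toward what really happened.
    """
    state, correct = 2, 0
    for taken in outcomes:
        predicted_taken = state >= 2
        correct += predicted_taken == taken
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return correct / len(outcomes)

loop_branch = ([True] * 9 + [False]) * 100  # taken 9x, then exits: ~90% right
alternating = [True, False] * 500           # pathological pattern: ~50% right
print(two_bit_predictor(loop_branch), two_bit_predictor(alternating))
```

Speculative execution builds on exactly this kind of guess, which is why the Spectre-class fixes that restrict speculation claw back some of the performance it bought.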
 
Joined
Dec 28, 2012
Messages
3,475 (0.85/day)
System Name Skunkworks
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software openSUSE tumbleweed/Mint 21.2
Yep. Luckily, everything AMD has said since the launch of Ryzen points towards there being noticeable IPC improvements (the "low hanging fruit" quote in particular) coming in short order, and the move away from low-power processes (GF 14nm) to high-speed ones (12nm to a certain degree, 7nm significantly more) more suited for desktop/high-performance parts should help boost clocks even beyond the 1st-to-2nd gen increase.

While I wouldn't mind pushing the maximum amount of cores on the mainstream platform even further (the option for a 12-core doesn't hurt anyone), the gains are mostly fictional at this point. My GF's TR 1920X workstation crushes my R5 1600X gaming build in Adobe Premiere, but mine is just as fast (or faster) in everyday tasks and gaming. Software (and games in particular) really needs to branch out and utilize more cores (and more CPU resources in general - games barely require more CPU power now than 10 years ago, while GPU utilization has skyrocketed), and increasing core counts on CPUs doesn't really get you anything if that increase in utilization doesn't arrive early in the 3-4-year lifespan of the average enthusiast CPU. der8auer made a good point about this in a recent video - game developers need to start looking into what they can do with the current crop of really, really powerful CPUs.
Games are already doing that. Look at Battlefield: it happily gobbles up as much CPU hardware as you throw at it... in multiplayer.

In singleplayer, the game really only loads 2 or 3 cores to any significant degree.

The problem is that games are naturally more single-thread oriented. Some parts, like multiplayer, can benefit from more cores, but if you are expecting single-player or low-player-count multiplayer games to effectively use 5+ threads, you are going to be disappointed. The reason CPU requirements haven't shot up is simple: there is no need for them. Most games are script-heavy, and current CPUs are already good enough for those tasks. Graphics are much easier to push higher (and more demanding) year over year.

This is why IPC is just as important as MOAR CORES: some things simply will not be able to take advantage of 8+ cores and will need that single-core performance.
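The "some things can't use 8+ cores" point is Amdahl's law: speedup is capped by the serial fraction no matter how many cores you add. A quick sketch with illustrative parallel fractions:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when only parallel_fraction of the work scales with cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A game loop that is half serial barely benefits from 8 cores...
print(amdahl_speedup(0.50, 8))   # ~1.78x
# ...while a 95%-parallel workload (rendering, encoding) scales much better.
print(amdahl_speedup(0.95, 8))   # ~5.93x
```

For the half-serial case, even infinite cores cap out at 2x, which is exactly why single-core performance still matters.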
 
Joined
Dec 10, 2015
Messages
545 (0.18/day)
Location
Here
System Name Skypas
Processor Intel Core i7-6700
Motherboard Asus H170 Pro Gaming
Cooling Cooler Master Hyper 212X Turbo
Memory Corsair Vengeance LPX 16GB
Video Card(s) MSI GTX 1060 Gaming X 6GB
Storage Corsair Neutron GTX 120GB + WD Blue 1TB
Display(s) LG 22EA63V
Case Corsair Carbide 400Q
Power Supply Seasonic SS-460FL2 w/ Deepcool XFan 120
Mouse Logitech B100
Keyboard Corsair Vengeance K70
Software Windows 10 Pro (to be replaced by 2025)
Not surprising if AMD actually brings 12 cores to AM4; their server division needs the core increase to offer more options, and by trickling it down they also expand the consumer product range.

It's a win-win situation.
 
Joined
May 2, 2017
Messages
7,762 (3.09/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Games are already doing that. Look at Battlefield: it happily gobbles up as much CPU hardware as you throw at it... in multiplayer.

In singleplayer, the game really only loads 2 or 3 cores to any significant degree.

The problem is that games are naturally more single-thread oriented. Some parts, like multiplayer, can benefit from more cores, but if you are expecting single-player or low-player-count multiplayer games to effectively use 5+ threads, you are going to be disappointed. The reason CPU requirements haven't shot up is simple: there is no need for them. Most games are script-heavy, and current CPUs are already good enough for those tasks. Graphics are much easier to push higher (and more demanding) year over year.

This is why IPC is just as important as MOAR CORES: some things simply will not be able to take advantage of 8+ cores and will need that single-core performance.
You're not entirely wrong, but I don't completely agree with you either. What you're describing is the current state of AAA game development and the system load of the features present in these games. What I'm saying is that it's about time early development resources are reallocated from developing new ways of melting your GPU (which has been the key focus for a decade or more) to finding new uses for the abundant CPU power in modern PCs. Sure, CPUs are worse than GPUs for graphics, physics and lighting. Probably for spatial audio too. But is that really all there is? What about improving in-game AI? Making game worlds and NPCs more dynamic in various ways? Making player-to-world interactions more complex, deeper and more significant? That's just stuff I can come up with off the top of my head in two minutes. I'd bet a team of game or engine developers could find quite a lot to spend CPU power on that would tangibly improve game experiences in single-player. It's there for the taking, they just need to find interesting stuff to do with it.

Of course, this runs the risk of breaking the game for people with weak CPUs - scaling graphics is easy and generally accepted ("my GPU is crap so the game doesn't look good, but at least I can play"), while scaling AI or other non-graphical features is far more challenging. "Sorry, your CPU is too slow, so now the AI is really dumb and there are all these nifty/cool/fun things you can no longer do" won't fly with a lot of gamers. I'm willing to bet that's where the focus on improving graphics and little else comes from, and will continue to come from for a while still.
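The "scalable AI" idea above can be sketched as per-NPC updates fanned out over a worker pool, with an AI level-of-detail knob as the fallback for weak CPUs. Everything here (the npc dicts, plan_route, the "full"/"low" knob) is hypothetical illustration, not any engine's real API:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def plan_route(npc):
    # Stand-in for an expensive per-NPC AI step (pathfinding, goal planning).
    return f"goto:{npc['goal']}"

def update_npc(npc, detail):
    """One NPC 'think' tick. detail is the AI level-of-detail knob:
    weak CPUs reuse the old plan instead of replanning every tick."""
    if detail == "full":
        npc["plan"] = plan_route(npc)
    else:
        npc["plan"] = npc.get("plan", "idle")
    return npc

def ai_tick(npcs, detail="full", workers=None):
    # Fan per-NPC updates out over a pool sized to the machine's cores.
    # (In CPython, threads only help if the heavy work releases the GIL;
    # a real engine would use native job systems instead.)
    with ThreadPoolExecutor(max_workers=workers or os.cpu_count()) as pool:
        return list(pool.map(lambda n: update_npc(n, detail), npcs))

npcs = [{"goal": f"wp{i}"} for i in range(8)]
ai_tick(npcs)           # full AI on a fast CPU
ai_tick(npcs, "low")    # degraded-but-playable AI on a slow one
```

The design tension the post describes lives in that `detail` parameter: graphics LOD degrades gracefully, while AI LOD visibly changes gameplay.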
 
Joined
Jun 1, 2011
Messages
3,811 (0.82/day)
Location
in a van down by the river
Processor faster than yours
Motherboard better than yours
Cooling cooler than yours
Memory smarter than yours
Video Card(s) better performance than yours
Storage stronger than yours
Display(s) bigger than yous
Case fancier than yours
Audio Device(s) clearer than yours
Power Supply more powerful than yours
Mouse lighter than yours
Keyboard less clicky than yours
Benchmark Scores up yours
Wake me when they finally break 185 points in the Cinebench single-thread test.

 
Joined
Sep 17, 2014
Messages
20,692 (5.96/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
You're not entirely wrong, but I don't completely agree with you either. What you're describing is the current state of AAA game development and the system load of the features present in these games. What I'm saying is that it's about time early development resources are reallocated from developing new ways of melting your GPU (which has been the key focus for a decade or more) to finding new uses for the abundant CPU power in modern PCs. Sure, CPUs are worse than GPUs for graphics, physics and lighting. Probably for spatial audio too. But is that really all there is? What about improving in-game AI? Making game worlds and NPCs more dynamic in various ways? Making player-to-world interactions more complex, deeper and more significant? That's just stuff I can come up with off the top of my head in two minutes. I'd bet a team of game or engine developers could find quite a lot to spend CPU power on that would tangibly improve game experiences in single-player. It's there for the taking, they just need to find interesting stuff to do with it.

Of course, this runs the risk of breaking the game for people with weak CPUs - scaling graphics is easy and generally accepted ("my GPU is crap so the game doesn't look good, but at least I can play"), while scaling AI or other non-graphical features is far more challenging. "Sorry, your CPU is too slow, so now the AI is really dumb and there are all these nifty/cool/fun things you can no longer do" won't fly with a lot of gamers. I'm willing to bet that's where the focus on improving graphics and little else comes from, and will continue to come from for a while still.

You're right, and examples like Star Swarm and Ashes are early attempts at that. Not very good ones in terms of a 'game', but... nice tech demos. The APIs are there for this now. I think the main thing we're waiting for is mass adoption, because such games will run like a PITA on anything that doesn't use most feature levels of DX12 or Vulkan. There is still not a single killer app to push those APIs forward, while they really do need it, or this will easily take 2-3 more years.

As for AI: writing a good AI in fact doesn't take all that much CPU. Look at UT '99 for good examples of that; those bots were insane. The main thing a good AI requires is expert knowledge and control of game mechanics, combined with knowledge of how players play and act. Ironically, the best AI that doesn't 'cheat' or completely overpower the player in every situation is one that also makes mistakes and acts upon player interaction rather than pre-coded behaviour. And for that, we now have big data and deep/machine learning, but that is still at a super-early-adopter stage... and the fun thing about that is that it's done on... GPU.

AMD delivered a bigger IPC increase between 1st and 2nd gen Ryzen than Intel did across its past three generations, despite Zen and Zen+ being physically the same chip. I'm hopeful.

I will be highly surprised if AMD manages to structurally surpass Intel's IPC. They already do it in specific workloads, but that is not enough. Only when they get past Intel's IPC on all fronts will I buy the Intel bash of 'they're just sitting on Skylake'. I'm more of a believer in the idea that all the fruit has been picked by now for x86, and any kind of improvement requires a radically different approach altogether. GPUs are suffering a similar fate, by the way: the main source of improvement there is node shrinks, dedicated resources for specific tasks, clock bumps, and 'going faster or wider' (HBM, GDDR6, etc.). I also view that as the main reason GPU makers are pushing things like ray tracing, VR and higher-res support; they are really scouring the land for new USPs.

Realistically, the only low-hanging fruit in CPU land right now IS adding cores.
 