Wednesday, July 14th 2021

AMD Zen 4 Desktop Processors Likely Limited to 16 Cores, 170 W TDP

We have recently seen several reputable rumors indicating that AMD's Zen 4 Raphael desktop processors will be limited to 16 cores across two compute dies (CCDs). There were earlier rumors of a 24-core model with three compute dies, but that now seems unlikely. While core counts won't increase, some SKUs may see a TDP increase up to 170 W, which should offer some performance uplift. AMD is expected to debut its 5 nm Zen 4 Raphael desktop processors in 2022 with support for PCIe 5.0 and DDR5. The processors will move to the new AM5 LGA1718 socket and will compete with Intel's Alder Lake-S successor, Raptor Lake, which could feature 24 cores.
Source: @patrickschur_

75 Comments on AMD Zen 4 Desktop Processors Likely Limited to 16 Cores, 170 W TDP

#26
Tartaros
sam_86314AMD pulling an Intel here?

Sounds like they're satisfied with their place in the market and have switched to stagnation mode.
You really need more than 8 cores on your daily home and gaming machine? Holy fuck.
#27
TumbleGeorge
Hmm, the only reason Zen 4 Raphael would be limited to 16 cores is probably the iGPU taking up space in the I/O chiplet. There are (or were) rumors and leaks that all mainstream AMD processors will have an iGPU.
#28
phanbuey
sam_86314AMD pulling an Intel here?

Sounds like they're satisfied with their place in the market and have switched to stagnation mode.
Yes they are... just like they did last time when they were ahead.

We aren't going to see 32 cores on the desktop until Apple or Intel start creeping back up.
#29
kruk
MusselsI can see a 170W TDP 16 core 'extreme, water cooled only' top end part to try and claim the king of the hill in reviews and media, with the rest being a lot more power efficient and better suited for actual gamers
Yes, this is probably it. Further leaks from Patrick Schur:


Also, regarding the news: they should focus on single-core performance and efficiency for the first gen on AM5 and increase the core count in the second/third gen. At the moment, I simply can't see how pushing more cores would benefit the average consumer. If you need more cores, why not go Threadripper?
#30
Punkenjoy
I think the key here is the power envelope and the intended buyers of these CPUs. AMD already offers up to 64 cores with Threadripper, so you could say they limit consumer desktops to 16 cores, but that is more than enough for most people.

A key selling point for desktop CPUs right now is still gaming performance. I suppose AMD wants to use 2 CCDs (2x8 cores) and give them more power to reach higher clocks, rather than adding another chiplet that would eat a significant portion of the power envelope and prevent the other cores from hitting higher frequencies.

Single-threaded performance is still the key metric everyone needs to chase, no matter what people say here. The higher it is, the more performance you get from each core you add. You always have to balance getting single-threaded performance as good as possible against scaling out to more cores.

Also, no matter what architecture or ISA you are on (x86-64, ARM, RISC-V), the key is to feed the big cores. Two channels of DDR5 (or even its four 32-bit sub-channels, 2x2x32) might not be enough to feed another full CCD. A larger cache might help, but it won't fix the problem as a whole.
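To put rough numbers on that feeding problem, here is a minimal back-of-the-envelope sketch in Python; the DDR5-5200 speed and the dual-channel width are my own assumptions for illustration, not figures from the leak:

```python
# Back-of-the-envelope DRAM bandwidth per core for a dual-channel DDR5 desktop.
# Assumed figures (not from the article): DDR5-5200, 2 x 64-bit channels.
transfers_per_sec = 5200e6   # 5200 MT/s
bytes_per_transfer = 8       # one 64-bit channel moves 8 bytes per transfer
channels = 2

peak_bw_gb_s = transfers_per_sec * bytes_per_transfer * channels / 1e9

for cores in (16, 24):
    print(f"{cores} cores: ~{peak_bw_gb_s / cores:.1f} GB/s per core "
          f"out of ~{peak_bw_gb_s:.0f} GB/s total")
```

Under those assumptions, going from 16 to 24 cores cuts the per-core share of peak bandwidth from roughly 5.2 GB/s to about 3.5 GB/s, which is the crux of the argument above.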

Also, AMD benefits a lot from its chiplet architecture, but there are drawbacks. I suppose they would have released smaller CPUs with Zen 2 (like 10-12 cores), but they had to go to 2 full CCDs. The next step is 24, which is another big jump.


We will see. I'm pretty sure that if this becomes a problem they will adapt, but it might end up being something to keep the crown rather than something widely available. AMD already has to give the best chips to the *950 and *900 CPUs to stay within the AM4 power envelope. They would have to do crazy binning to do the same thing with 3 chiplets instead of 2.

But as someone who primarily needs performance for gaming, I would be more interested in an 8-core with 96 MB of L3 than a 24-core with the same amount of L3 cache. Even more so if that 8-core runs at a higher frequency (or is simply available to buy).
#31
Makaveli
TheinsanegamerNLots of people here want to bag on intel for "muh quad cores" without realizing that 6+ cores were available in HEDT and, more importantly, they offered 0 performance improvement.
You are right, 6 cores were available in HEDT.

I had a 6-core i7-970, but you are incorrect in saying it offered no performance improvement. That CPU can still play most recent BF games with a decent GPU thanks to the extra cores, which an i7-920 cannot. Anything that can use those extra cores will benefit, including HandBrake, compression applications, etc.
#32
moproblems99
sam_86314AMD pulling an Intel here?

Sounds like they're satisfied with their place in the market and have switched to stagnation mode.
I thought only single core mattered and moar cores were for fools?
#33
TheUn4seen
Be warned, this is a rant. With an inquisitive part at the end.
What is going on with CPUs? After the Pentium 4 stupidity (I know, NetBurst was not bad in some scenarios, but for consumers it was mostly not great), then Bloomfield being the horrible space heater it was, not to mention Bulldozer, which was actually one of the worst CPUs I ever owned, we were finally going in a good direction: TDP going down, efficiency going up, and we got to a point where even the top-end consumer CPUs were reasonable for home use - I know several people who still use slightly overclocked 6700K/7700K/8700K chips with something like an RTX 3060/3070 with no disturbing bottlenecking. Nowadays it's somehow acceptable to sell a 170 W, 16-core monstrosity as a consumer product, so we're back to the days of pumping more power and creating more heat just to get a higher number on a cardboard box. I know there are people who want this, and I'm sure AMD did their due diligence in market research, so: who are you people, and what do you use such consumer hardware for? This is honest curiosity, not condescension.

I know there is e-peen enlargement, also known as "bragging rights" (or conspicuous consumption, otherwise known as the Veblen effect if you're more academically inclined), which has probably been the main reason for high-end consumer hardware's existence since before home computers were a thing. If you need performance for work you don't fart around, you buy big-boy stuff and write it off your taxes as a business expense; that's where your Xeons, Threadrippers and such live. I know people with multi-CPU servers with Tesla cards in their basements, but they use them to earn money or do scientific research. But a 170 W 16-core consumer CPU? To do what? Games can't even saturate eight cores properly without the GPU becoming the limiting factor, and consumer software is just not that demanding. What, you do time-critical building modeling with Revit as a hobby or something? I'd like to again stress that this is honest curiosity. I personally have a Xeon/Quadro machine at home, but I use it to do actual work and earn money, and that's not consumer hardware, hence my curiosity regarding the viability of such products in a consumer setting. Maybe I'm just oblivious to a fundamental change in consumer usage patterns and people really, really need to have six hundred browser windows opened at once, or a full suite of Autodesk software is now a must-have for everyone.
#34
Richards
TSMC 5 nm is not that different from 7 nm... that power consumption is big.
#35
maze100
sam_86314AMD pulling an Intel here?

Sounds like they're satisfied with their place in the market and have switched to stagnation mode.
The difference between a 1800X (Zen 1 8-core) and a 5800X (Zen 3 8-core, even without the optional V-Cache the architecture supports) is night and day; AMD improved ST performance by more than 50% and gaming performance by 2x-3x.

Intel Sandy Bridge to Skylake (three newer architectures) is nowhere near that.
#36
Vayra86
TheUn4seenBe warned, this is a rant. With an inquisitive part at the end.
What is going on with CPUs? Since the Pentium 4 stupidity (I know, NetBurst was not bad in some scenarios, but for consumers it was mostly not great.) and then Bloomfield being a horrible space heater it was, not to mention Bulldozer which was actually one of the worst CPUs I ever owned, after all this we were going in a good direction with TDP going down, efficiency going up and got to a point where even the top end consumer CPUs were reasonable for home use - I know several people who still use slightly overclocked 6700k/7700k/8700k with something like RTX 3060/3070 with no disturbing bottlenecking. Nowadays it's somehow acceptable to sell a 170W, 16 core monstrosity as a consumer product, so we're back to the days of pumping more power and creating more heat just to get a higher number on a cardboard box. I know there are people who want this, I'm sure AMD did their due diligence in market research, so: Who are you people and what do you use such consumer hardware for? This is honest curiosity, not condescension.

I know there is e-peen enlargement, also known as "bragging rights" (or conspicuous consumption known otherwise as a Veblen effect if you're more academically inclined), which is probably the main reason for high-end consumer hardware existence since before home computers were a thing. If you need performance for work you don't fart around, buy big boy stuff and write it off your taxes as a business expense, that's where your Xeons, Threadrippers and such live. I know people with multi-CPU servers with Tesla cards in their basements, but they use them to earn money or do scientific research, but a 170W 16 core consumer CPU? To do what, games can't even saturate eight cores properly without GPU becoming the limiting factor and consumer software is just not that demanding. What, you do time-critical building modeling with Revit as a hobby or something? I'd like to again stress the fact that his is honest curiosity. I personally have a Xeon/Quadro machine at home, but I use it do do actual work and earn money, and that's not consumer hardware, hence my curiosity regarding the viability of such products in a consumer setting. Maybe I'm just oblivious to a fundamental change in consumer usage patterns and people really, really need to have six hundred browser windows opened at once or a full suite of Autodesk software is now a must-have for everyone.
Dunno, I think the streamer and creator groups can now easily work without having to move to workstations, and with that bar being so low, the market has exploded. Many a 'pro' gamer or young creator is easy to talk into 'more is better'. And the more mainstream this becomes, the more demand you get for bigger and better, even to the point of the nonsensical; look at top-end GPU land and it's the same thing.

And yes, they can use those vacant threads. Browser, stream(s), some transcoding, and a bunch of other stuff will likely be running concurrently.
#37
MxPhenom 216
ASIC Engineer
ZoneDymoJust because they arn't pushing more cores that means they stagnated?

Not to mention that threadripper exists?
This.

If they release chips with more cores on the mainstream platform, that's getting into TR territory and will totally cannibalize their own product line.
#38
maxfly
TheUn4seenBe warned, this is a rant. With an inquisitive part at the end.
What is going on with CPUs? Since the Pentium 4 stupidity (I know, NetBurst was not bad in some scenarios, but for consumers it was mostly not great.) and then Bloomfield being a horrible space heater it was, not to mention Bulldozer which was actually one of the worst CPUs I ever owned, after all this we were going in a good direction with TDP going down, efficiency going up and got to a point where even the top end consumer CPUs were reasonable for home use - I know several people who still use slightly overclocked 6700k/7700k/8700k with something like RTX 3060/3070 with no disturbing bottlenecking. Nowadays it's somehow acceptable to sell a 170W, 16 core monstrosity as a consumer product, so we're back to the days of pumping more power and creating more heat just to get a higher number on a cardboard box. I know there are people who want this, I'm sure AMD did their due diligence in market research, so: Who are you people and what do you use such consumer hardware for? This is honest curiosity, not condescension.

I know there is e-peen enlargement, also known as "bragging rights" (or conspicuous consumption known otherwise as a Veblen effect if you're more academically inclined), which is probably the main reason for high-end consumer hardware existence since before home computers were a thing. If you need performance for work you don't fart around, buy big boy stuff and write it off your taxes as a business expense, that's where your Xeons, Threadrippers and such live. I know people with multi-CPU servers with Tesla cards in their basements, but they use them to earn money or do scientific research, but a 170W 16 core consumer CPU? To do what, games can't even saturate eight cores properly without GPU becoming the limiting factor and consumer software is just not that demanding. What, you do time-critical building modeling with Revit as a hobby or something? I'd like to again stress the fact that his is honest curiosity. I personally have a Xeon/Quadro machine at home, but I use it do do actual work and earn money, and that's not consumer hardware, hence my curiosity regarding the viability of such products in a consumer setting. Maybe I'm just oblivious to a fundamental change in consumer usage patterns and people really, really need to have six hundred browser windows opened at once or a full suite of Autodesk software is now a must-have for everyone.
More than anything, it's cost. It's far cheaper to stay in the consumer realm with a 5950X system vs. a Threadripper rig.
#39
TechLurker
TheUn4seenBe warned, this is a rant. With an inquisitive part at the end.
What is going on with CPUs? -Snipped for space- Nowadays it's somehow acceptable to sell a 170W, 16 core monstrosity as a consumer product, so we're back to the days of pumping more power and creating more heat just to get a higher number on a cardboard box. I know there are people who want this, I'm sure AMD did their due diligence in market research, so: Who are you people and what do you use such consumer hardware for? This is honest curiosity, not condescension.

-Snipped for space- What, you do time-critical building modeling with Revit as a hobby or something? I'd like to again stress the fact that his is honest curiosity. I personally have a Xeon/Quadro machine at home, but I use it do do actual work and earn money, and that's not consumer hardware, hence my curiosity regarding the viability of such products in a consumer setting. Maybe I'm just oblivious to a fundamental change in consumer usage patterns and people really, really need to have six hundred browser windows opened at once or a full suite of Autodesk software is now a must-have for everyone.
I believe Jayz or Linus touched upon this once or twice, but part of it is that the increased core count and performance have made real-time or recorded HD, semi-professional video and photo editing much more accessible (notably for small businesses that can't afford, or don't need, prosumer-level hardware and software), and more easily enable aspiring "content creators" who game, livestream, and encode all at once on one PC. For livestreaming + PC gaming especially, both of them mentioned how doing a livestream used to take two PCs, one configured for the encoding/recording/editing/streaming and the other dedicated solely to gaming, which was priced out of range for individuals until AMD blew open the core counts (heck, 8 cores was already good enough, depending on the title, and was one of the bigger selling points of the 1700X and 1800X).

And now with gaming studios taking advantage of more cores, scaling mostly up to 8, with a rare few offering options up to 16, the bigger livestreamers and content creators who make their living off this are starting to buy 12- and 16-core CPUs, both to future-proof their rigs for a few years and to keep reserve cores for the real-time editing/encoding and other software they run. Sure, GPU makers have helped offload some of the encoding from the CPU to the GPU (NVENC notably comes to mind), but not everyone uses GPU encoding, for one reason or another.

I myself am not familiar with all that goes into Livestreaming so I don't know the full extent of CPU reqs for a high-end, do-everything rig, but I have a passing familiarity with using consumer level hardware for "small, local business" ventures, since one of my friends uses a 5950X + 6700XT desktop for their 3-man photography and videography side-business. As they're the one mostly doing the editing and post-processing for photos and videos taken by the group, they like not having to wait for extended periods trying to scrub through 4k video footage or wait just as long for sampling and testing applied effects to large photos.

A less-used, but noted case was one Linus made; being able to take a high-end CPU + multi GPU combo, and split its cores to separate workstations. While Linus meme'd using a TR or EPYC CPU and multiple GPUs, he mentioned the viability of using say, a 16c CPU split 4 ways into "4c terminals" in an environment that doesn't need much more than CPU processing and basic graphics (provided by the terminal's basic display output), or a 16c CPU + 2 GPU rig split into 2 "8c+GPU" terminals.
#40
TumbleGeorge
RichardsTsmc 5nm is not that different to 7nm.. that power consumption is big
Did you ever learn physics? It's not enough just to count numbers to have the right opinion on this case, I think :rolleyes:
But wait, there's more: to make 7 nm and 5 nm, TSMC uses different machine models from ASML. LoL :)
#41
R0H1T
phanbueyWe aren't going to see 32 cores on the desktop until Apple or Intel start creeping back up.
Then I'm sure you'll probably have to wait a decade, if not more :laugh:

Just an FYI, 32-core mainstream chips should become a reality come Zen 5/6 on TSMC 3 nm.

AMD doesn't need to pull an Intel, and I doubt they'll do so given the competitive landscape today, not to mention the real threat of ARM eating x86 for breakfast, lunch & dinner everywhere. If ARM had been even remotely competitive back in 2005 (on PC), then you can bet every dollar on the crypto exchanges that Intel would either have released 8-core chips in the early 2010s or have gone bankrupt by today!
#42
sam_86314
TartarosYou really need more than 8 cores on your daily home and gaming machine? Holy fuck.
2006: You really need more than two cores on your daily home and gaming machine?

1981: You really need more than 640K of memory on your daily home and gaming machine?

What I'm saying is we need to keep an eye on what AMD is or isn't doing. Sure, eight cores may be overkill nowadays, but what about in a few years?

Intel perpetuated the idea that four cores were plenty from at least 2007 all the way up until Zen happened in 2017. Also notice that when Zen came out, all of a sudden, "regular" programs could take advantage of it.

I hate Intel as much as the next guy, but AMD isn't perfect either. They're both companies, and when they've gotten far enough ahead of their competitors, they'll slow down innovation until we get another Intel from Sandy Bridge to Kaby Lake situation.

The price increase Zen 3 received was the first red flag for me (but that didn't stop me from buying it!).
#43
Punkenjoy
sam_863142006: You really need more than two cores on your daily home and gaming machine?

1981: You really need more than 640K of memory on your daily home and gaming machine?

What I'm saying is we need to keep an eye on what AMD is or isn't doing. Sure, eight cores may be overkill nowadays, but what about in a few years?

Intel perpetuated the idea that four cores were plenty from at least 2007 all the way up until Zen happened in 2017. Also notice that when Zen came out, all of a sudden, "regular" programs could take advantage of it.

I hate Intel as much as the next guy, but AMD isn't perfect either. They're both companies, and when they've gotten far enough ahead of their competitors, they'll slow down innovation until we get another Intel from Sandy Bridge to Kaby Lake situation.

The price increase Zen 3 received was the first red flag for me (but that didn't stop me from buying it!).
Those two quotes can't be compared. There is no law that limits how much data is worth storing.

But there is a rule about multicore/multithread scaling: Amdahl's law en.wikipedia.org/wiki/Amdahl%27s_law

The thing is, a lot of what can easily be parallelized can be done by the GPU, or by larger cores with wider SIMD. So for desktop users, there won't be much benefit to having a very large number of cores.

It has been shown in the past and it will be shown again in the future: you are better off with fewer, faster cores than with more, slower cores. At least on desktop. On servers, depending on the workload, it might be another story.
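As a rough illustration of that "fewer, faster cores" point, here is a minimal Amdahl's-law sketch in Python; the 90% parallel fraction and the 25% clock advantage are assumed numbers, purely hypothetical:

```python
# Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N), where p is the parallel fraction.
def amdahl_speedup(p: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

p = 0.90                # assumed fraction of the workload that parallelizes
clock_advantage = 1.25  # assumed: the 16-core part sustains ~25% higher clocks

fast_16 = amdahl_speedup(p, 16) * clock_advantage
slow_24 = amdahl_speedup(p, 24)
print(f"16 faster cores: {fast_16:.1f}x  vs  24 slower cores: {slow_24:.1f}x")
```

Under these made-up numbers the 16 faster cores come out ahead (about 8.0x vs 7.3x), and the gap only widens as the serial fraction grows.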

In the end, all of that is a side show; it does not really matter. The only thing that really matters is how much performance you get from the chip. If someone could release a 1-core CPU that blows everything we currently have out of the water, who would really care?
#44
The red spirit
eidairaman1The same was said about 8 cores in 2012
And nothing has meaningfully changed. If you are a gamer, 4 cores will be utilized well, 6 not so well, and 8 are useless. Loads of software still can't parallelize properly either. An 8-core today is still generally pointless, unless you are 100% sure that your software utilizes it.
#45
dragontamer5788
PunkenjoyBut there is a rules about multicore/multithread scaling. Amdahl law en.wikipedia.org/wiki/Amdahl's_law
There's a 2nd law: en.wikipedia.org/wiki/Gustafson's_law

When you can't scale any higher, you make the work harder. Back in the '00s, why would you ever go above 768p? But today we are doing 1080p regularly and enthusiasts are all on 4K. With roughly nine times the pixels of 720p, you suddenly have ~9x more work, and computers can scale to ~9x the size.

4K seems to be the limit for how far resolution reasonably goes (at least for now). But there's also ray tracing coming up: instead of making more pixels, we're simply making each pixel much harder to compute. I expect Gustafson's law to continue to apply until people are satisfied with computer-generated graphics (and considering how much money Disney spends on rendering farms... you might be surprised at how much compute power is needed to get a "good cartoony" look like Wreck-It Ralph, let alone CGI Thanos in the movies).

Video game graphics are still very far from what people want. The amount of work done per frame can continue to increase according to Gustafson's law for the foreseeable future. More realistic shadows, even in non-realistic games (e.g. Minecraft with ray tracing or Quake with ray tracing), wow people.

--------------

For those who aren't in the know: Amdahl's law roughly states that a given task can only be parallelized to a certain amount. For example: if you're looking forward 1-move in Chess, the maximum parallelization you can get is roughly 20 (there are only ~20 moves available at any given position).

Therefore, when the "chess problem" is described as "improve the speed of looking 1 move ahead", you run into Amdahl's law. You literally can't get more than a 20x speedup (one core looking at each of the 20 moves available).

But... Gustafson's law is the opposite, and it is the secret to scaling. If the "chess problem" is described as "improve the number of positions you look at within 30 seconds", that is Gustafson's law. If you have 400 cores, you have those 400 cores analyze 2 moves ahead (20x for the 1st move, 20x that for the 2nd move). All 400 cores will have work to do for the whole time period. When you get 8000 cores, you have them look 3 moves ahead (20x20x20 positions), and now you can look at 8000 positions in the given timeframe. Etc., etc. As you can see, "increasing the work done" leads to successful scaling.
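A tiny Python sketch of that chess framing; the ~20-moves-per-position branching factor is the post's own rough figure, not an exact rule:

```python
import math

BRANCHING = 20  # rough number of legal moves per chess position

# Amdahl framing: "search 1 move ahead as fast as possible" caps useful parallelism at ~20 cores.
# Gustafson framing: as the core count grows, deepen the search so every core stays busy.
for cores in (20, 400, 8000):
    depth = round(math.log(cores, BRANCHING))
    positions = BRANCHING ** depth
    print(f"{cores:>4} cores -> search ~{depth} moves deep "
          f"(~{positions} positions in the same time window)")
```

Same time budget, bigger problem: that is the Gustafson view in one loop.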

----------

Some problems are Amdahl style (render this 768p image as fast as possible). Some other problems are Gustafson's (render as many pixels as you can within 1/60th of a second). The truth for any given problem lies somewhere in between the two laws.
#46
The red spirit
TheUn4seenBe warned, this is a rant. With an inquisitive part at the end.
What is going on with CPUs? Since the Pentium 4 stupidity (I know, NetBurst was not bad in some scenarios, but for consumers it was mostly not great.) and then Bloomfield being a horrible space heater it was, not to mention Bulldozer which was actually one of the worst CPUs I ever owned, after all this we were going in a good direction with TDP going down, efficiency going up and got to a point where even the top end consumer CPUs were reasonable for home use - I know several people who still use slightly overclocked 6700k/7700k/8700k with something like RTX 3060/3070 with no disturbing bottlenecking. Nowadays it's somehow acceptable to sell a 170W, 16 core monstrosity as a consumer product, so we're back to the days of pumping more power and creating more heat just to get a higher number on a cardboard box. I know there are people who want this, I'm sure AMD did their due diligence in market research, so: Who are you people and what do you use such consumer hardware for? This is honest curiosity, not condescension.

I know there is e-peen enlargement, also known as "bragging rights" (or conspicuous consumption known otherwise as a Veblen effect if you're more academically inclined), which is probably the main reason for high-end consumer hardware existence since before home computers were a thing. If you need performance for work you don't fart around, buy big boy stuff and write it off your taxes as a business expense, that's where your Xeons, Threadrippers and such live. I know people with multi-CPU servers with Tesla cards in their basements, but they use them to earn money or do scientific research, but a 170W 16 core consumer CPU? To do what, games can't even saturate eight cores properly without GPU becoming the limiting factor and consumer software is just not that demanding. What, you do time-critical building modeling with Revit as a hobby or something? I'd like to again stress the fact that his is honest curiosity. I personally have a Xeon/Quadro machine at home, but I use it do do actual work and earn money, and that's not consumer hardware, hence my curiosity regarding the viability of such products in a consumer setting. Maybe I'm just oblivious to a fundamental change in consumer usage patterns and people really, really need to have six hundred browser windows opened at once or a full suite of Autodesk software is now a must-have for everyone.
I'm pretty sure that 170 W part is just the basic part with the PPT jacked up. Really pointless chip.
#47
TumbleGeorge
PunkenjoyBut there is a rules about multicore/multithread scaling. Amdahl law
This is temporary. Theoretically, Amdahl is right... for now.
Newton is also right, and his elaborations on gravity have not been repealed. However, many other scientists lived and worked after him, including Einstein, and we now have an improved view of the laws of nature. Just as an example: if we relied only on Newton's formulas, we would not have accurate global positioning, and maybe not even good navigation in space; for that you also need more modern physics. It is possible that a genius has already been born who, without disproving Amdahl, will simply write other rules and laws that are also valid and less restrictive.
#48
TheinsanegamerN
MakaveliYou are right 6 cores was available in HDET.

I had a 6 core i7-970 however you are incorrect when saying it offered no performance improvement. That cpu can still play most recent BF games with a decent gpu due to the extra cores a i7-920 cannot. Anything that can use those extra cores will including handbrake, any compression applications etc.
Were the most recent BF games benchmarked in 2010? I don't think so. Unless you came back from the future, nobody was doing so. For games of 2010, the 970 was no faster, and oftentimes was a bit slower due to lower clock speeds than the consumer quad cores. Most consumer software of the time could not use more than 4 cores, or even more than 2 cores.

And while your 970 can still play games, I'd bet good money that a current-gen i3 quad core would absolutely smoke it. Single-core performance has come a long way in the last decade, as has multicore performance. One can look up benchmarks for an i3-10320 and see it is ranked slightly higher than an i7-980.
MusselsI said the core count hadnt changed, i literally said slow increases per generation... Kentsfield to haswell was 2006 to 2013 - and what, 5 generations?

And this is what we got



Then the change zen managed in 2 gens... (including a 4 core equivalent and then GASP there is no zen 3 4 core chip! how dare they progress!)


(images look different, got them from different sources, but both passmark)
You suggested that the fundamental change for Intel, performance-wise, was due to DRAM speed increases instead of core changes. That you yourself have proven incorrect, as both Haswell and the Core 2 Quad can use DDR3. You said it, not me.
sam_863142006: You really need more than two cores on your daily home and gaming machine?

1981: You really need more than 640K of memory on your daily home and gaming machine?

What I'm saying is we need to keep an eye on what AMD is or isn't doing. Sure, eight cores may be overkill nowadays, but what about in a few years?

Intel perpetuated the idea that four cores were plenty from at least 2007 all the way up until Zen happened in 2017. Also notice that when Zen came out, all of a sudden, "regular" programs could take advantage of it.

I hate Intel as much as the next guy, but AMD isn't perfect either. They're both companies, and when they've gotten far enough ahead of their competitors, they'll slow down innovation until we get another Intel from Sandy Bridge to Kaby Lake situation.

The price increase Zen 3 received was the first red flag for me (but that didn't stop me from buying it!).
Well, when 8 cores finally isn't enough, AMD already has 12 and 16 cores available for you! And by that time, AMD will likely have parts with more than 16 cores available.

Funny how when the 2000 series came out and still featured 8 cores, nobody was losing their mind over AMD "stagnating", yet after the same amount of time at 16 cores, suddenly AMD is worse than Intel. This would be like people complaining that Intel was stagnating right after Sandy Bridge came out because it was still 4 cores, when the vast majority of games were still single-threaded. By the time it's relevant, this AM4/AM5 platform will be ancient history.
#49
GoldenX
TheDeeGeeWhat 11th gen disaster?

My 11700 uses 40-45 watts during gaming.
Try a K model with that new "last ditch single core boost" enabled: easily over 200 W.
Regular models like the vanilla 11700 are fine; the moment Intel tries to force 14 nm to be more than it is, you get CPUs that can't be cooled with a 360 mm radiator.
#50
Tartaros
sam_863142006: You really need more than two cores on your daily home and gaming machine?

1981: You really need more than 640K of memory on your daily home and gaming machine?

What I'm saying is we need to keep an eye on what AMD is or isn't doing. Sure, eight cores may be overkill nowadays, but what about in a few years?
Things advance when consumers and makers find there is the need and the know-how to advance; sometimes it's need, sometimes know-how, sometimes consumers push and sometimes makers do. Your examples are off because daily software hasn't caught up to what an 8-core can offer, and there is always the top-of-the-line 16-core and the Threadripper line if you really need that many cores; we are talking about more down-to-earth CPUs here. Why would we need more cores to watch videos or browse social media? And not every piece of software can or should scale across multiple threads.

2006 was already ripe for multithreaded software, and it still took like 2 or 3 years for that to be taken advantage of. As for the 640K memory limit, it is not even close to the situation we are in now; we are far from actively exploiting 8 cores for daily tasks or even gaming.