
AMD Zen 5 "Strix Point" Processors Rumored To Feature big.LITTLE Core Design

Joined
Oct 10, 2009
Messages
786 (0.15/day)
Location
Madrid, Spain
System Name Rectangulote
Processor Core I9-9900KF
Motherboard Asus TUF Z390M
Cooling Alphacool Eisbaer Aurora 280 + Eisblock RTX 3090 RE + 2 x 240 ST30
Memory 32 GB DDR4 3600mhz CL16 Crucial Ballistix
Video Card(s) KFA2 RTX 3090 SG
Storage WD Blue 3D 2TB + 2 x WD Black SN750 1TB
Display(s) 2 x Asus ROG Swift PG278QR / Samsung Q60R
Case Corsair 5000D Airflow
Audio Device(s) Evga Nu Audio + Sennheiser HD599SE + Trust GTX 258
Power Supply Corsair RMX850
Mouse Razer Naga Wireless Pro / Logitech MX Master
Keyboard Keychron K4 / Dierya DK61 Pro
Software Windows 11 Pro
No, there are two reasons why they are doing it. First, to unify their code base between their two OSs, iOS and macOS. And second, because they can design a better chip than Intel and they are sick of being dragged down by Intel's internal issues. Talk shit about Apple if you want, but they design solid hardware and the M1 is a very impressive CPU. They probably could have worked with AMD as a partner for desktop CPUs if it wasn't for the first point.
There have been intentions of moving general-purpose CPUs to RISC, and of making everything in-house, for quite a long time, and iOS has been developing into a unified platform for years now. They would have switched to their M1 even if Intel had delivered; thinking they would have changed to AMD at some point is just wishful thinking. They could have done that during the best scenario AMD has had, which is the last three years, and they didn't.

The RISC boat sailed a long time ago for Apple.
 
Joined
Jan 28, 2021
Messages
845 (0.72/day)
There have been intentions of moving general-purpose CPUs to RISC, and of making everything in-house, for quite a long time, and iOS has been developing into a unified platform for years now. They would have switched to their M1 even if Intel had delivered; thinking they would have changed to AMD at some point is just wishful thinking. They could have done that during the best scenario AMD has had, which is the last three years, and they didn't.

The RISC boat sailed a long time ago for Apple.
Oh, yeah, I totally agree it wouldn't have made any sense for them to adopt AMD when they already have so much invested in a different ISA. I don't think they ever wanted to ditch PowerPC (a RISC design) in the first place, but back then IBM was having issues of its own, Apple didn't have a CPU design of their own, and AMD was probably too small, so they had little choice but to go with Intel.

Maybe in a parallel universe where Apple's own silicon isn't as strong as it is, or where the M1 wasn't ready, it could have happened and would have been interesting to see, but it would have been a stop-gap effort at most.
 
Joined
Oct 10, 2009
Messages
786 (0.15/day)
Location
Madrid, Spain
System Name Rectangulote
Processor Core I9-9900KF
Motherboard Asus TUF Z390M
Cooling Alphacool Eisbaer Aurora 280 + Eisblock RTX 3090 RE + 2 x 240 ST30
Memory 32 GB DDR4 3600mhz CL16 Crucial Ballistix
Video Card(s) KFA2 RTX 3090 SG
Storage WD Blue 3D 2TB + 2 x WD Black SN750 1TB
Display(s) 2 x Asus ROG Swift PG278QR / Samsung Q60R
Case Corsair 5000D Airflow
Audio Device(s) Evga Nu Audio + Sennheiser HD599SE + Trust GTX 258
Power Supply Corsair RMX850
Mouse Razer Naga Wireless Pro / Logitech MX Master
Keyboard Keychron K4 / Dierya DK61 Pro
Software Windows 11 Pro
Oh, yeah, I totally agree it wouldn't have made any sense for them to adopt AMD when they already have so much invested in a different ISA. I don't think they ever wanted to ditch PowerPC (a RISC design) in the first place, but back then IBM was having issues of its own, Apple didn't have a CPU design of their own, and AMD was probably too small, so they had little choice but to go with Intel.

Maybe in a parallel universe where Apple's own silicon isn't as strong as it is, or where the M1 wasn't ready, it could have happened and would have been interesting to see, but it would have been a stop-gap effort at most.
Rather than being too small, the issue was that they didn't have a unified platform like Intel had at the time. Intel was some years into the Centrino push they started with the Pentium M and offered everything a computer needed with little hassle, while AMD was doing the old thing of manufacturing CPUs and some chipsets that weren't very good; ironically, the best AMD chipsets at the time came from Nvidia. They also didn't offer an I/O solution, storage support was dodgy, mobile GPUs were still a separate thing, and even today they don't offer networking. Intel did all of that in one package, so it's obvious they were the best choice for a swift transition.

And there was also Intel's resurrection with the Core Duo, while AMD was on its downward spiral into Bulldozer and the ATI acquisition was hurting too. But if it had happened in the Pentium III/Athlon or Pentium 4/Athlon XP and 64 era, Apple would almost certainly have gone with AMD. Even today Intel is not in a full NetBurst situation; those were hellish years for them.
 
Joined
Jan 3, 2021
Messages
2,657 (2.21/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
So the general issue is that Windows (and Linux / Android) don't really have much to gain from little cores yet. What kind of thread/process should go on a little core? It's a complicated question: it seems that big cores are more efficient at higher GHz, while little cores really benefit if you downclock them to 200MHz or slower.

The scheduler has to not only pick the threads/processes that go on big vs little cores, but also pick what GHz (or MHz) to run the processor at. There's a bit of a "race to idle" problem. If you spend 1000ms running a task on a little core, you would probably use more energy than if you had run the task on a big core for 100ms and then slept for 900ms afterwards.

But if the sleep is interrupted after 500ms, your calculus changes yet again. So you need to predict the future and know when to turn on big cores (or turn them off), and same thing with when to clock little cores up or down.
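
To put toy numbers on that race-to-idle trade-off, here is a minimal sketch in C; every power figure is invented for illustration, not measured from any real core.

/* Toy race-to-idle comparison. Every power figure here is an invented
 * placeholder, not a measurement of any real CPU. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical numbers: a big core draws 4 W while active, a little
     * core 1 W, and the package idles at 0.2 W. */
    const double P_BIG = 4.0, P_LITTLE = 1.0, P_IDLE = 0.2;

    /* Same task over a 1-second window: 1000 ms on the little core,
     * or 100 ms on the big core followed by 900 ms of sleep. */
    double e_little = P_LITTLE * 1.000;                /* 1.00 J */
    double e_big    = P_BIG * 0.100 + P_IDLE * 0.900;  /* 0.58 J */

    printf("little core: %.2f J, big core + sleep: %.2f J\n",
           e_little, e_big);

    /* If idle power were higher, or the big core needed far more than
     * 4x the power for its 10x speed-up, the result would flip, which
     * is why the scheduler has to predict the workload. */
    return 0;
}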

---------

Answering these scheduling problems adequately for sizable gains in efficiency is not very easy. It's a lot easier when you have all the same kind of core: the answer in that case is simple. Just estimate the power usage per core and keep the clock as low as reasonable.
True, and it gets more complicated when you combine cores with different capabilities, like the presence or absence of multithreading or AVX-512.

When you need to predict the future, you use predictors based on statistics from the recent past, like branch predictors. For scheduling, I can imagine a solution that's based on both HW and SW. There would have to be some dedicated hardware on the CPU that collects statistics about program execution: for example, how much time is spent executing/emulating AVX, how much time is spent waiting for I/O or memory while the core is gobbling up power, or how much time is spent waiting because the other thread on the same core is using some shared resource. The scheduler would then use these statistics to determine whether the execution is optimal and move it to another core if it isn't.
The executable code itself could contain some metadata, provided by the compiler or manually, for a whole DLL/library or at a finer granularity, and the scheduler would use that data as a hint when picking the best core for that code.
As it's based on statistics, it would be called "AI scheduler", of course.
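
Sketching the idea in code, assuming an entirely hypothetical statistics block and made-up thresholds, a policy like this is roughly what the scheduler would end up evaluating:

/* Hypothetical per-thread statistics, as they might be collected by
 * dedicated counters on the CPU. Field names, thresholds and the policy
 * are invented for illustration only. */
#include <stdint.h>
#include <stdio.h>

struct thread_stats {
    uint64_t cycles_retiring;     /* cycles spent doing useful work   */
    uint64_t cycles_mem_stalled;  /* cycles waiting on memory or I/O  */
    uint64_t cycles_emul_vector;  /* cycles emulating wide vector ops */
};

enum core_class { CORE_BIG, CORE_LITTLE };

/* Toy policy: threads that mostly wait go to a little core; threads that
 * mostly retire instructions (or lean on wide vectors) go to a big core. */
static enum core_class pick_core(const struct thread_stats *s)
{
    uint64_t total = s->cycles_retiring + s->cycles_mem_stalled;
    if (total == 0)
        return CORE_LITTLE;                 /* no history yet: start small */
    if (s->cycles_emul_vector * 10 > total)
        return CORE_BIG;                    /* >10% emulated vector work   */
    double stall_ratio = (double)s->cycles_mem_stalled / (double)total;
    return stall_ratio > 0.6 ? CORE_LITTLE : CORE_BIG;
}

int main(void)
{
    struct thread_stats io_bound = { 2000, 8000, 0 };   /* mostly waiting */
    printf("io-bound thread -> %s core\n",
           pick_core(&io_bound) == CORE_BIG ? "big" : "little");
    return 0;
}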
 
Joined
Sep 14, 2017
Messages
610 (0.25/day)
What if the OS sees patterns of usage for certain background tasks and services, and makes sure those run on the little cores more often, as they don't need high performance but rather consistency?
 
Joined
Oct 10, 2009
Messages
786 (0.15/day)
Location
Madrid, Spain
System Name Rectangulote
Processor Core I9-9900KF
Motherboard Asus TUF Z390M
Cooling Alphacool Eisbaer Aurora 280 + Eisblock RTX 3090 RE + 2 x 240 ST30
Memory 32 GB DDR4 3600mhz CL16 Crucial Ballistix
Video Card(s) KFA2 RTX 3090 SG
Storage WD Blue 3D 2TB + 2 x WD Black SN750 1TB
Display(s) 2 x Asus ROG Swift PG278QR / Samsung Q60R
Case Corsair 5000D Airflow
Audio Device(s) Evga Nu Audio + Sennheiser HD599SE + Trust GTX 258
Power Supply Corsair RMX850
Mouse Razer Naga Wireless Pro / Logitech MX Master
Keyboard Keychron K4 / Dierya DK61 Pro
Software Windows 11 Pro
True, and it gets more complicated when you combine cores with different capabilities, like the presence or absence of multithreading or AVX-512.

When you need to predict the future, you use predictors based on statistics from the recent past, like branch predictors. For scheduling, I can imagine a solution that's based on both HW and SW. There would have to be some dedicated hardware on the CPU that collects statistics about program execution: for example, how much time is spent executing/emulating AVX, how much time is spent waiting for I/O or memory while the core is gobbling up power, or how much time is spent waiting because the other thread on the same core is using some shared resource. The scheduler would then use these statistics to determine whether the execution is optimal and move it to another core if it isn't.
The executable code itself could contain some metadata, provided by the compiler or manually, for a whole DLL/library or at a finer granularity, and the scheduler would use that data as a hint when picking the best core for that code.
As it's based on statistics, it would be called "AI scheduler", of course.
I'm not versed in these matters, so maybe you can help me understand this: does it depend only on the OS maker, or can the software programmer influence how their program takes advantage of the little cores, like identifying them and telling the program to use the little ones? And if they can, in a situation where the software requires more horsepower, can the programmer tell the software to switch to a better core on the fly?
 

Deleted member 205776

Guest
Nah.
  1. The strix (plural striges or strixes), in the mythology of classical antiquity, was a bird of ill omen, the product of metamorphosis, that fed on human flesh and blood. It also referred to witches and related malevolent folkloric beings.
Sounds like AMD is about to get medieval on Intel's ass, lol.
Yeah, they ran them into the ground so much they decided to also destroy them in the dumbass pricing department
 
Joined
Jan 28, 2021
Messages
845 (0.72/day)
Yeah, they ran them into the ground so much they decided to also destroy them in the dumbass pricing department
Not sure what that means. There isn't anyone in the industry that can hold a candle to Intel's pricing over the last 10+ years.
 

Deleted member 205776

Guest
Not sure what that means. There isn't anyone in the industry that can hold a candle to Intel's pricing over the last 10+ years.
They showed them who's boss when it comes to CPUs and market disruption; now they've shown them, and will continue to show them (unless the blue boys do something about it), who's boss when it comes to dumb pricing.
 
Joined
Jan 28, 2021
Messages
845 (0.72/day)
Yeah, they ran them into the ground so much they decided to also destroy them in the dumbass pricing department
Not sure what that means. There isn't anyone in the industry that can hold a candle to Intel's pricing over the last 10 years.
I'm not versed in these matters, so maybe you can help me understand this: does it depend only on the OS maker, or can the software programmer influence how their program takes advantage of the little cores, like identifying them and telling the program to use the little ones? And if they can, in a situation where the software requires more horsepower, can the programmer tell the software to switch to a better core on the fly?

I'm no expert either, but I know there are, and should be, optimizations done at both the OS and the software level.

I know certain game engines would completely fall apart on Bulldozer if the game wasn't made aware of the clustered nature of its threads. I'm sure something similar will have to be done with big/little cores, as you can't have a rendering thread, for example, go from a high-performance big core to a little one with half the performance without drastic performance issues.
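
As a rough illustration, one crude way code can tell fast cores from slow ones today, at least on Linux, is to compare each logical CPU's advertised maximum frequency in sysfs. This is only a sketch; platforms differ in what they expose, and a real engine would also use the OS's own topology APIs.

/* Crude heterogeneous-core detection on Linux: read each logical CPU's
 * advertised maximum frequency from sysfs and flag the fastest ones. */
#include <stdio.h>

int main(void)
{
    long freqs[256] = {0}, max_khz = 0;
    int ncpu = 0;

    for (; ncpu < 256; ncpu++) {
        char path[128];
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpufreq/cpuinfo_max_freq",
                 ncpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                                    /* no more CPUs */
        if (fscanf(f, "%ld", &freqs[ncpu]) == 1 && freqs[ncpu] > max_khz)
            max_khz = freqs[ncpu];
        fclose(f);
    }

    for (int i = 0; i < ncpu; i++)
        printf("cpu%d: %ld kHz (%s)\n", i, freqs[i],
               freqs[i] == max_khz ? "fast" : "slower");
    return 0;
}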

They showed them who's boss when it comes to CPUs and market disruption; now they've shown them, and will continue to show them (unless the blue boys do something about it), who's boss when it comes to dumb pricing.
Yeah, AMD could go lower for sure, but that's not how you run a company. Competition is weak, but performance is pretty much in line with price, and especially when you consider that Intel has nothing in their stack that can do what AMD's top-end parts do, nothing about AMD's pricing is abusive. The same cannot be said for Intel over the years, with halo HEDT CPUs costing several times more than the desktop parts of the same family with no performance to justify it.
 

Deleted member 205776

Guest
Yeah, competition is weak, but performance is pretty much in line with price.
Not really, when I can get an 11400F and a capable cheap B560 board for the price of a single 5600X and have comparable performance. I'm going to find it hard to recommend Zen 3 CPUs to people with their pricing and availability, when there are 10850Ks going for like $350. And that's why I want Alder Lake to be good, so that AMD comes back to its senses. They're clearly taking advantage of that dominant market position while they can.
 
Joined
Jan 28, 2021
Messages
845 (0.72/day)
Not really, when I can get an 11400F and a capable cheap B560 board for the price of a single 5600X and have comparable performance. I'm going to find it hard to recommend Zen 3 CPUs to people with their pricing and availability, when there are 10850Ks going for like $350. And that's why I want Alder Lake to be good, so that AMD comes back to its senses. They're clearly taking advantage of that dominant market position while they can.
Yeah, the 11400/11600 are good deals, but competing on price is what you do when you don't have the technology lead. The other thing to keep in mind is that AMD is supply-limited and will have no problem selling every CPU they make. Also, TSMC yields are really good on 7nm, so most of what is being produced ends up as fully enabled 8-core dies, which further limits the supply of 6-core 5600X CPUs.

So yeah... AMD is being sensible.
 
Joined
Jan 3, 2021
Messages
2,657 (2.21/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
I'm not versed in these matters, so maybe you can help me understand this: does it depend only on the OS maker, or can the software programmer influence how their program takes advantage of the little cores, like identifying them and telling the program to use the little ones? And if they can, in a situation where the software requires more horsepower, can the programmer tell the software to switch to a better core on the fly?
Hey, I'm just waving a napkin with some ideas handwritten on both sides. I don't think that the application programmer should have absolute control over these details. The program needs to run efficiently on a wide range of processors with various numbers of big and little cores, the user can choose one of several power plans, and the programmer can't predict any of that. But the programmer could include some additional info that says, for example, "this DLL/this procedure usually doesn't benefit at all from multithreading". The scheduler would use this, as well as other information (the statistics I mentioned before, the number of cores, total CPU load, power plan, process priority, etc.) to choose the most appropriate core.
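
As for what a programmer can already do today: the bluntest existing tool is affinity, i.e. restricting a thread to an explicit set of logical CPUs, rather than the hint-based scheme above. A minimal Linux sketch, assuming purely for illustration that CPUs 4-7 are the little cores on a given machine:

/* Minimal Linux sketch: pin the current thread to a chosen set of logical
 * CPUs. Assumes, for illustration only, that CPUs 4-7 are the little
 * cores on this machine; real code would have to discover that first. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 4; cpu <= 7; cpu++)        /* hypothetical little-core IDs */
        CPU_SET(cpu, &set);

    /* Restrict this thread to those cores; the kernel still picks clocks,
     * but can no longer migrate the thread to a big core. */
    int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (err != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
    else
        printf("thread restricted to CPUs 4-7\n");
    return 0;
}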

Which box? Haswell is already long overdue.
Only if Intel decides to reverse the flow of time, which at this point isn't completely out of the question, and isn't illegal under Moore's Law. But then you'd need to wait for Comet Lake, Coffee Lake Refresh, Coffee Lake, Kaby Lake, Skylake and Broadwell to come first. All of them still on 14 nm, mind you.
 
Joined
Apr 24, 2020
Messages
2,559 (1.76/day)
What if the OS sees patterns of usage for certain background tasks and services, and makes sure those run on the little cores more often, as they don't need high performance but rather consistency?
Most background tasks are like 'check I/O for virus signatures', which only runs when you read or write to disk.

What this really looks like is a 5ms wait (for the hard drive to respond) and then 4096 bytes (one sector of the hard drive) to scan. A millisecond is very slow for a computer: at 4 GHz, one cycle is 0.25 nanoseconds, and 5ms is 5,000,000 nanoseconds, or 20 million cycles.

Even if done on a slow 200MHz little core, this kind of background task almost certainly will be 'run and sleep'. Sleeping faster (by executing on a big core) could very well be the more efficient decision.
 
Joined
Oct 22, 2014
Messages
13,210 (3.81/day)
Location
Sunshine Coast
System Name Black Box
Processor Intel Xeon E3-1260L v5
Motherboard MSI E3 KRAIT Gaming v5
Cooling Tt tower + 120mm Tt fan
Memory G.Skill 16GB 3600 C18
Video Card(s) Asus GTX 970 Mini
Storage Kingston A2000 512Gb NVME
Display(s) AOC 24" Freesync 1m.s. 75Hz
Case Corsair 450D High Air Flow.
Audio Device(s) No need.
Power Supply FSP Aurum 650W
Mouse Yes
Keyboard Of course
Software W10 Pro 64 bit
Some of the commenters are hypocrites. That's why I made a meme that describes it well ;)
I'm glad you said some.
If AMD adopt the same methods of implementation, then the meme is relevant, but if their path differs from Intel's, then it's not.
So basically it's not what they do, but how they do it that matters.
Oh and welcome.
 
Joined
Apr 15, 2021
Messages
847 (0.77/day)
They showed them who's boss when it comes to CPUs and market disruption; now they've shown them, and will continue to show them (unless the blue boys do something about it), who's boss when it comes to dumb pricing.
What were those blue characters supposed to be, some kind of grotesque humanoid version of hatless Smurfs? Melancholy paves the way for Intel? Just my opinion, but I think Intel could've chosen a better color.
 
Joined
May 3, 2018
Messages
2,281 (1.05/day)
I thought AMD had some arrangement with ARM in the past, so I would not be surprised to see ARM architecture cores for the little ones. Maybe they will resurrect the concept behind Project Skybridge from around 2015.

Anyway, does this story imply there will be no more 12- and 16-core desktop parts, since it says all Zen 5 will be APU designs?
 
Joined
Jan 28, 2021
Messages
845 (0.72/day)
I thought AMD had some arrangement with ARM in the past, so I would not be surprised to see ARM architecture cores for the little ones. Maybe they will resurrect the concept behind Project Skybridge from around 2015.

Anyway, does this story imply there will be no more 12- and 16-core desktop parts, since it says all Zen 5 will be APU designs?
Zen is x86; you can't just throw different ISAs around in the same CPU and expect it to work, as was previously pointed out. AMD was working on something ARM-based before all the resources got dumped into Zen, and I think they licensed the ARM ISA, so it was a from-scratch core similar to what Apple is doing. So yeah, maybe they'll pick it back up, but it won't be a Zen CPU.

And this is Zen 5, supposedly on 3nm. At that point a 16-core APU will be nothing, particularly if they use multiple chiplets.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.19/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Why would you even want to have two clusters with entirely different ISAs?

Purely a theoretical thing, because of the way the world is moving, with Apple causing a shift over to ARM.

ARM hardware is a lot more power-efficient, so what if the OS could run on 15W of ARM hardware while the x86 cores slept?
 
Joined
Jan 28, 2021
Messages
845 (0.72/day)
Purely a theoretical thing, because of the way the world is moving, with Apple causing a shift over to ARM.

ARM hardware is a lot more power-efficient, so what if the OS could run on 15W of ARM hardware while the x86 cores slept?
You don't have the slightest idea of how CPUs execute code. You can't mix and match ISAs in one CPU simply because you think one is better at one thing or another (which is false, btw). You might as well say you want a CPU that runs on 8 big steampunk mechanical-horse cores and 16 little gnomes-with-slide-rules cores; it makes about the same level of sense...
 
Joined
May 31, 2016
Messages
4,324 (1.50/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 16GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Why would AMD use big.LITTLE? They still have the power-efficiency advantage and their cores are pretty fast.
 
Joined
Oct 22, 2014
Messages
13,210 (3.81/day)
Location
Sunshine Coast
System Name Black Box
Processor Intel Xeon E3-1260L v5
Motherboard MSI E3 KRAIT Gaming v5
Cooling Tt tower + 120mm Tt fan
Memory G.Skill 16GB 3600 C18
Video Card(s) Asus GTX 970 Mini
Storage Kingston A2000 512Gb NVME
Display(s) AOC 24" Freesync 1m.s. 75Hz
Case Corsair 450D High Air Flow.
Audio Device(s) No need.
Power Supply FSP Aurum 650W
Mouse Yes
Keyboard Of course
Software W10 Pro 64 bit
Purely a theoretical thing, because of the way the world is moving, with Apple causing a shift over to ARM.

ARM hardware is a lot more power-efficient, so what if the OS could run on 15W of ARM hardware while the x86 cores slept?
More likely to use 15W x86 cores in association with full-sized x86 cores.
 
Joined
Apr 24, 2020
Messages
2,559 (1.76/day)
Why would AMD use big.LITTLE? They still have the power-efficiency advantage and their cores are pretty fast.

Big cores are surprisingly energy-efficient these days. LITTLE cores manage to win in some applications, however: low and slow. In particular, at very low clocks (like 200MHz) and much lower voltages, with non-CPU-heavy tasks. If the CPU is the bottleneck, you probably want the big core. DDR4 uses a good chunk of power, as do the L3, L2, and L1 caches and the memory controller; as long as there's work to do, a bigger core can beat LITTLE cores in efficiency.

If you're doing render farms or other "big" tasks, a bigger core at a lower frequency (think EPYC) is the best bet. But if you're streaming data from a hard drive out through a NIC to the Internet... LITTLE cores probably win (very low CPU requirements). "Schedulers aren't smart enough" to make these decisions. Heck, I don't think anyone is really smart enough to figure out the problem right now.
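
To make that concrete with invented numbers (none of these are measurements), here's the kind of back-of-the-envelope comparison involved: a CPU-bound job where the big core's speed pays for its power, versus an I/O-bound stream where the little core's low active power wins.

/* Toy energy-per-job comparison for a big vs a little core. All figures
 * are invented placeholders, not measurements of real hardware. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical: the big core is 3x faster but draws 2.5x the power. */
    const double P_BIG = 2.5, P_LITTLE = 1.0;   /* watts while active        */
    const double T_BIG = 1.0, T_LITTLE = 3.0;   /* seconds per CPU-bound job */

    /* CPU-bound (render-farm style): the big core finishes sooner, so it
     * can still use less energy per job despite the higher power draw. */
    printf("CPU-bound:  big %.2f J   little %.2f J\n",
           P_BIG * T_BIG, P_LITTLE * T_LITTLE);          /* 2.50 vs 3.00 */

    /* I/O-bound (streaming): wall time is 10 s either way, the CPU is
     * only about 5% busy, and the rest is spent near idle. */
    const double P_IDLE = 0.2, WALL = 10.0, BUSY = 0.05;
    printf("I/O-bound:  big %.2f J   little %.2f J\n",
           (P_BIG * BUSY + P_IDLE * (1.0 - BUSY)) * WALL,
           (P_LITTLE * BUSY + P_IDLE * (1.0 - BUSY)) * WALL);
    return 0;
}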
 