
Intel Alder Lake-S Processor with 16c/32t (Hybrid) Spotted on SANDRA Database

Multithreaded, how about CB20 or Blender? Intel probably just wants the benchmark wins with this one, seeing how AMD is demolishing them in mindshare and in everything other than gaming.

But in a 16 cores vs 16 cores scenario, AMD would still beat them. It would bring Intel closer to the top of the chart, but still nowhere near AMD for stuff like CB20 or Blender.
 
Yes, and that's likely what they're aiming for, at least till they can get the performance "leadership" crown back in the desktop space, which btw is not a guarantee even on 7nm or 5nm a few years down the line. Think about it: this product (8c/16c) will never make it to tablets and isn't much use in laptops either, so this is a desktop product for sure. How close they can get to AMD is anyone's guess at this point. But you could argue it's better than a 12c radiator smoking 300W at 5.5 GHz and only slightly beating, say, the 5900X at 4K or 8K by 5-10 fps.
 
The question is under what sort of scenarios would that be really useful. Again, in a phone it makes sense because the large cores often throttle heavily under most conditions so if you need multi-threaded performance the small cores offer a decent boost.
Well, it would be nice if you could use the low-power cores for your browser or music player while the fast cores crunch numbers or run a game. It would cut back on context switching/cache thrashing.

That's about the only useful thing I can think of.
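You can approximate the "background apps on small cores" idea today with CPU affinity. A minimal sketch using Linux's affinity API (note: the core numbering below is purely an assumption; how a hybrid chip enumerates its big and small cores is up to the OS and firmware):

```python
import os

# Assumed layout for an 8+8 hybrid part: performance cores as CPUs 0-7,
# efficiency cores as CPUs 8-15. Real enumeration is OS/firmware dependent.
P_CORES = set(range(0, 8))
E_CORES = set(range(8, 16))

def pin_to(cores, pid=0):
    """Restrict a process (pid=0 means the calling process) to the given
    CPU set, intersected with the CPUs this machine actually has."""
    available = os.sched_getaffinity(0)
    target = (cores & available) or available  # fall back if cores don't exist here
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

# e.g. keep a music player or browser helper process off the big cores:
pin_to(E_CORES)
```

An OS scheduler that actually understands the core topology would do this dynamically per thread; manual affinity pinning is the blunt version of the same idea.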
 
But in a 16 cores vs 16 cores scenario, AMD would still beat them. It would bring Intel closer to the top of the chart, but still nowhere near AMD for stuff like CB20 or Blender.

If they can get back to being the kings of IPC, they would be faster in games, Intel 16 cores vs AMD 16 cores. I still think it's a smart move from them, especially for people who don't know processor stuff as well as we do.
 
This isn't so much about idle or sleep - current desktop processors can do this very well at low power as demonstrated in this thread already.

This is about light workloads - simple web browsing, MS Office, light application usage. These can all be done on a weaker processor without affecting user experience. If something needs more power, then switch to a powerful core and ramp up the clock until it's done.

The theory is that the weaker core will use less power than the bigger core, even if the bigger core was in a lower clock state because the workload is light.

Plenty of big.LITTLE graphs around that show these curves and power usage.
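Those curves fall out of the usual dynamic-power relation, P ≈ C·V²·f: a small core has far less switching capacitance and can run at a lower voltage, so it can undercut even a downclocked big core. A toy illustration (every constant here is invented for illustration, not a measurement of any real chip):

```python
# Toy model of dynamic CPU power: P ~ C * V^2 * f.
# The capacitance/voltage/frequency numbers are made-up illustrative values.
def dynamic_power(cap, volts, freq_ghz):
    return cap * volts ** 2 * freq_ghz

big_downclocked = dynamic_power(cap=2.0, volts=0.80, freq_ghz=1.2)  # big core, low clock state
small_core      = dynamic_power(cap=0.6, volts=0.75, freq_ghz=1.5)  # small core, native clock

# The small core handles the light workload at a *higher* clock while
# still drawing less power -- the big.LITTLE argument in a nutshell.
print(f"big core downclocked: {big_downclocked:.2f}, small core: {small_core:.2f}")
```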

BUT - on a desktop, do you care about a few watts of power? No.

So maybe this is for laptops (and quiet SFF), and it's to improve office-style workloads there. In fact there are supposed to be SKUs with 0, 2, 4 large cores and the 8 atom cores.

Desktops will get a different product, or they'll get this at higher clocks so that Intel can sell it as 16C against AMD's Zen 4, which will be 16 real cores, but will consumers know?
 
If they can get back to being the kings of IPC, they would be faster in games, Intel 16 cores vs AMD 16 cores. I still think it's a smart move from them, especially for people who don't know processor stuff as well as we do.
Even if they get back to being the kings of "IPC", they'll have to do a lot more to take the absolute performance crown in the desktop space! What did Zen 2 show vs SKL? That absolute clock speeds don't matter at the top end with 8c-64c chips; you're never going to cool them in a reasonable manner. Secondly, SMT: Intel has yet to match AMD's implementation, and till they get close, any IPC advantages will mostly be negated in MT tasks.

Lastly, and arguably the crown jewel of AMD's CPU team: IF, yup, the "glue" which Intel derided so much and yet is now desperate to get a half-decent version of :slap:

Intel needs more cores + IPC + better SMT (better HT) to beat AMD decisively in the desktop space. I don't see them pulling this off anytime soon, with the physics of Si reaching its limits.
 
This big.little trash from Intel is a move to try to claw their SMT performance back. AMD's SMT is much superior to Intel's in the existing Zen 2 uArch (EPYC/TR and Ryzen), and with the upcoming Zen 3, AMD is going to wipe the floor with the 10900K, going by the rumors, to be really honest. Intel's RKL is not going to have 10C/20T either, so ST performance is the last trick they have left (at least the ring bus scales); still, pushing 14nm++ to 5 GHz+ is going to eat a lot of power on the 11th-gen processors, even if the uncore is on 10nm.

And nobody cares about their desktop processors idling; wtf is that? Small x86 cores for what? I can use ThrottleStop to set profiles and be done with it if I really want the CPU to run slower. I think it's just a move by Intel to deal with the Ryzen U series and the ARM processors on the Windows platform, and to keep OEMs like Dell/HP/Lenovo on board (along with the SMT), for their thin-and-light BGA use-and-throw garbage machines. Those laptops are higher in shipments, I guess, considering the enterprise contracts and the user base.

I just hope AMD puts a silver bullet into this BS from Intel. The AotS benchmark leak at 4K didn't show massive improvements vs the Intel chip, and that's a concern, because with RKL Intel might get their "gaming" performance crown back. I never wanted AMD to destroy Intel badly, because of my personal gripes: processors already maxed out of the box, leaving nothing for the user, plus the complications around their new technologies, like IF clock, memory and clock behavior (like GPUs without rock-solid stable clocks), their CPU monitoring tools, etc. But Intel's pathetic moves, like EOLing Z390 for no reason, and Z590 still having DMI 3.0 for RKL while only now gaining parity with the chipset bus (due to X570), are a big fcking shame, esp. when PCIe 4.0 is available direct from the CPU on RKL.

Just a few more days, Zen 3 will be unleashed. Followed by Threadripper and EPYC Milan.
 
Good for you, but 95% of everyday users would actually like to have a processor that idles at 10 watts rather than 40 watts and saves a few dollars on the power bills. The power-saving implications for large-scale computing like data centers are a big deal.

As far as customers go, power users like yourself are a small drip in a large pond.



The world is ending! Everyone grab your tin foil hats!
Can anyone post a more stellar definition of cluelessness?

If you care about saving $10 a year, you shouldn't own a PC in the first place, since you have much bigger problems in your life.
If "computers" in large-scale data centres are idling (i.e. making use of this technology), something is very, very wrong as well.

Maybe you shouldn't post when you not only lack any knowledge about the subject, but also any common sense.
 
They already idle at single-digit wattage. I am absolutely convinced 95% of users wouldn't care at all about a processor that idles at 1 or 2 watts less. They probably don't even have a clue what that power consumption is to begin with.

You got this 100% backwards, only someone really knowledgeable like a power user would even think of or measure these things.



Data centers run at full blast most of the time, in fact their goal is to maximize usage as much as possible. If your CPU nodes are idling, you're losing money. Again, this is a product for consumers, not for companies.

BS on the claim that data centres don't care about their power requirements. Gargantuan power bills, and in case you haven't noticed, because you live in the USA, climate change is running amok. Plenty of big tech firms are doing everything they can to reduce their power consumption irrespective of what their idiot-in-chief says.
 
BS on the claim that data centres don't care about their power requirements. Gargantuan power bills, and in case you haven't noticed, because you live in the USA, climate change is running amok. Plenty of big tech firms are doing everything they can to reduce their power consumption irrespective of what their idiot-in-chief says.
Well, if you're a data center and have numerous CPUs idling around, you're doing it wrong ;)
 