
Intel 12th Gen Core Alder Lake to Launch Alongside Next-Gen Windows This Halloween

Joined
Aug 20, 2007
Messages
20,787 (3.41/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage 2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64
not going to lie... this sounds like a disaster waiting to happen. Why in the world would you allow thread migration to things with different ISA support...
 
Joined
Aug 14, 2009
Messages
216 (0.04/day)
Location
Denmark
System Name Bongfjaes
Processor AMD 3700x
Motherboard Asus Crosshair VII Hero
Cooling Dark Rock Pro 4
Memory 2x8GB G.Skill FlareX 3200MT/s CL14
Video Card(s) GTX 970
Storage Adata SX8200 Pro 1TB + Lots of spinning rust
Display(s) Viewsonic VX2268wm
Case Fractal Design R6
Audio Device(s) Creative SoundBlaster AE-5
Power Supply Seasonic TTR-1000
Mouse Pro Intellimouse
Keyboard SteelKeys 6G
Oh, how I can't wait for them to mess up the UI even more, with even more telemetry on top! Probably locked down a tad more, too.

I wonder if Open Shell will support it..
 
Joined
Dec 17, 2020
Messages
118 (0.10/day)
Make no mistake. This design decision is not for us. It's for mobile users, tablets, and other users on battery power. Here's hoping AMD has the sense to make actual desktop chips for desktop users. Or rather keep making them.
Hard to see where little-core chips would fit on a Threadripper.

The OS was mainly the problem with previous concepts like dual-core CPUs, Bulldozer, multi-NUMA Threadripper, or Lakefield. Where Linux was able to adapt relatively quickly, Microsoft failed across the board, badly on the consumer OS and not so badly on the enterprise OS.
I hope MS did their homework and that Intel did not put too much trust in MS.
I would hope that Turbo Boost Max 3.0 allows Intel users to assign cores to software. It does now.
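The "assign cores to software" idea can already be done from user space today. Here's a minimal sketch using Python's `os.sched_setaffinity` on Linux; the choice of core IDs 0-3 is purely an assumption for illustration (on a real hybrid CPU you would first look up which logical CPUs map to the P-cores, e.g. via lscpu):

```python
import os

# Pin the current process to a preferred set of logical CPUs.
# Core IDs 0-3 are an assumption for illustration only.
preferred_cores = {0, 1, 2, 3}

available = os.sched_getaffinity(0)   # CPUs this process may currently run on
target = preferred_cores & available  # never request CPUs we don't have
if target:
    os.sched_setaffinity(0, target)   # pid 0 = the calling process

print(os.sched_getaffinity(0))
```

Windows exposes the same capability through SetThreadAffinityMask, and tools like Process Lasso wrap it in a UI.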
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
not going to lie... this sounds like a disaster waiting to happen.
Perhaps, or at least a bumpy road. There are so many pitfalls that I would be surprised if they got everything right on the first try.

Considering MS' track record, who would want to be their "beta testers" for the first 6-12 months?
 
Joined
Sep 3, 2010
Messages
3,527 (0.71/day)
Location
Netherlands
System Name desktop | Odroid N2+ |
Processor AMD Ryzen 5 3600 | Amlogic S922X |
Motherboard Gigabyte B550M DS3H |Odroid N2+ |
Cooling Inter-Tech Argus SU-200, 3x Arctic P12 case fans | stock heatsink + fan |
Memory Gskill Aegis DDR4 32GB | 4 GB DDR4 |
Video Card(s) Sapphire Pulse RX 6600 (8GB) | Arm Mali G52 |
Storage SK Hynix SSD 240GB, Samsung 840 EVO 250 GB, Toshiba DT01ACA100 1T | Samsung 850 Evo 500GB |
Display(s) AOC G2260VWQ6 | LG 24MT57D |
Case Asus Prime 201 | Stock case (black version) |
Audio Device(s) integrated
Power Supply BeQuiet! Pure Power 11 400W | 12v barrel jack |
Mouse Logitech G500 |Steelseries Rival 300
Keyboard Qpad MK-50 (Cherry MX brown)| Blaze Keyboard
Software Windows 10, Various Linux distros | Gentoo Linux
[..] if "Windows 11" is going to be re-made from the ground up to be optimized for the big.LITTLE approach [..]

It does not have to be. Yes, the scheduler would need to be rewritten but that "just" means partially rewriting the kernel.
 
Joined
Feb 20, 2020
Messages
9,340 (6.12/day)
Location
Louisiana
System Name Ghetto Rigs z490|x99|Acer 17 Nitro 7840hs/ 5600c40-2x16/ 4060/ 1tb acer stock m.2/ 4tb sn850x
Processor 10900k w/Optimus Foundation | 5930k w/Black Noctua D15
Motherboard z490 Maximus XII Apex | x99 Sabertooth
Cooling oCool D5 res-combo/280 GTX/ Optimus Foundation/ gpu water block | Blk D15
Memory Trident-Z Royal 4000c16 2x16gb | Trident-Z 3200c14 4x8gb
Video Card(s) Titan Xp-water | evga 980ti gaming-w/ air
Storage 970evo+500gb & sn850x 4tb | 860 pro 256gb | Acer m.2 1tb/ sn850x 4tb| Many2.5" sata's ssd 3.5hdd's
Display(s) 1-AOC G2460PG 24"G-Sync 144Hz/ 2nd 1-ASUS VG248QE 24"/ 3rd LG 43" series
Case D450 | Cherry Entertainment center on Test bench
Audio Device(s) Built in Realtek x2 with 2-Insignia 2.0 sound bars & 1-LG sound bar
Power Supply EVGA 1000P2 with APC AX1500 | 850P2 with CyberPower-GX1325U
Mouse Redragon 901 Perdition x3
Keyboard G710+x3
Software Win-7 pro x3 and win-10 & 11pro x3
Benchmark Scores Are in the benchmark section
Hi,
Trick or treat seems appropriate for anything from either of them, but reality is always trickery lol
 

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
12,945 (2.60/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV 1950Mhz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | Asus 24" IPS (portrait mode)
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Razer Huntsman Tournament Edition
Software Windows 11 Pro 64-Bit
Make no mistake. This design decision is not for us. It's for mobile users, tablets, and other users on battery power. Here's hoping AMD has the sense to make actual desktop chips for desktop users. Or rather keep making them.
Though there is already talk AMD could go the same route. About 2 weeks ago TPU had a news article on it.
 
Joined
May 8, 2020
Messages
578 (0.40/day)
System Name Mini efficient rig.
Processor R9 3900, @4ghz -0.05v offset. 110W peak.
Motherboard Gigabyte B450M DS3H, bios f41 pcie 4.0 unlocked.
Cooling some server blower @1500rpm
Memory 2x16GB oem Samsung D-Die. 3200MHz
Video Card(s) RX 6600 Pulse w/conductonaut @65C hotspot
Storage 1x 128gb nvme Samsung 950 Pro - 4x 1tb sata Hitachi 2.5" hdds
Display(s) Samsung C24RG50FQI
Case Jonsbo C2 (almost itx sized)
Audio Device(s) integrated Realtek crap
Power Supply Seasonic SSR-750FX
Mouse Logitech G502
Keyboard Redragon K539 brown switches
Software Windows 7 Ultimate SP1 + Windows 10 21H2 LTSC (patched).
Benchmark Scores Cinebench: R15 3050 pts, R20 7000 pts, R23 17800 pts, r2024 1050 pts.
Maan i would love to upgrade, cuz i am on Win7 and desperately waiting for Win11. So far Win10 was such a crappy os.
 
Joined
Jun 1, 2021
Messages
194 (0.18/day)
I haven't seen the specs of how Windows intend to do scheduling of these new hybrid designs, but pretty much any user application (especially games) would have to run on high-performance cores all the time. Games are heavily synchronized, and even if a low load thread is causing delays, it may cause serious stutter, or in some cases even cause the game to glitch.
At the same time, there's some benefit if the OS ends up interrupting the high-priority game threads less, delegating the lower-priority ones to the little cores instead. It might also help with cache pressure on the big cores, as there's no chance of some background thread running on them and evicting important data into the higher-level caches.
Probably not a big improvement, but nevertheless, it's not as bad as you claim it to be.
And not everything in games is synchronized.
Network threads, for example, are very likely not to be, with a little core being enough to handle most network tasks.
Nor are a lot of tasks, like graphics. Ever seen assets pooping up?

Why?
HEDT/high-end workstation and server CPUs will not have a hybrid design, so do you think Microsoft wants to sabotage the performance of power users and servers?
If AMD feels "forced" to do it, it's because of marketing.
He said for mobile, which is certainly true, as high efficiency cores will mean that mobile will enjoy a much better experience with higher battery life.
Windows' scheduling for big/little cores would only work when you have those cores, obviously, so I have absolutely no clue what you're talking about when you say it would 'sabotage the performance'.

Where is the spec for this?
How would the OS detect the "type" of an application? (And how would a programmer set a flag(?) to make it run differently?)
There's some hardware in the chips to help with that. How it works, though, hasn't been revealed and might never be.
However, this problem is already known and already has some solutions. Linux has supported big.LITTLE in its scheduler for a long time, and the information the OS has about its tasks is enough to build some heuristics from.
Very likely the hardware Intel added is just that: more task statistics to guide the OS into making the right choice.
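As a toy illustration of "heuristics from task statistics", here is the rough shape such a placement policy could take. The `Task` fields and the thresholds are invented for the sketch; real schedulers (Linux EAS, Intel's hardware hints) use far richer signals:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    avg_utilization: float   # fraction of a core used recently (0.0-1.0)
    wakeups_per_sec: float   # how often the task becomes runnable

def pick_core_type(task: Task) -> str:
    # Busy or latency-sensitive tasks go to big cores; mostly-idle
    # background tasks are good little-core candidates. Thresholds
    # are made up for illustration.
    if task.avg_utilization > 0.25 or task.wakeups_per_sec > 500:
        return "big"
    return "little"

tasks = [
    Task("game_render", 0.90, 240),
    Task("telemetry", 0.01, 2),
    Task("audio_mixer", 0.05, 1000),  # low load but latency-critical
]
for t in tasks:
    print(t.name, "->", pick_core_type(t))
```

The point of the wakeup-rate term is exactly the audio-mixer case: a thread can be nearly idle on average yet still belong on a fast core.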
The small cores will be very slow, and if the details we've seen are right, they will share L2 cache, which means high latency. They will be totally unsuitable for even an "older" game.
They might actually be quite decent. The previous generation of little cores, Tremont, turbos to 3.3 GHz, and the rumours are that Gracemont will have similar IPC to Skylake.
They may very well turbo higher than Tremont, being an enhanced version with a higher TDP on desktop, etc. Though 3.3 GHz is already on par with the original Skylake; the i5-6500, for instance, had an all-core turbo of 3.3 GHz.
The L2 is shared among a cluster of 4 small cores, not all of them. Plus, it's 2 MB of L2, so basically 512 KB/core. Very likely it's multi-ported.

Obviously they aren't going to be high-power, but if Intel manages to get them to turbo to 3.5+ GHz or so, it would be more than good enough and could be pretty similar to an i7 from 2015~2016 (original Skylake). If they manage to make them clock better, then well, that's good.
The minimum would likely be around an i5-6500, considering that Tremont is already pretty close to it and the new Atom architecture might bring it to IPC parity, or close enough.

That is, assuming the rumors are sort of true. If they hold no truth whatsoever, then we have no clue.
I sure hope AMD would be smarter and skip it altogether. I would rather have more cache.
Little cores would be a pretty small part of the die anyway. The L3 cache already takes most of the die, so that isn't an issue.
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
At the same time, there's some benefit if the OS ends up interrupting the high-priority game threads less, delegating the lower-priority ones to the little cores instead. It might also help with cache pressure on the big cores, as there's no chance of some background thread running on them and evicting important data into the higher-level caches.
Probably not a big improvement, but nevertheless, it's not as bad as you claim it to be.
That totally depends on how Windows intends to detect which threads can be moved to the slow cores. If, for instance, these cores are only used for low-priority background threads, then the implications will be very small, but so will the efficiency gains. There are usually thousands of these threads, but they only add up to a few percent of a single core's load on average.
But in any user application, and especially games, all threads will have medium or high priority, even if the load is low. So just using statistics to determine where to run a thread risks causing serious latencies.

And not everything in games is synchronized.
Network threads, for example, are very likely not to be, with a little core being enough to handle most network tasks.
Nor are a lot of tasks, like graphics.
Most things in a game are synchronized, some with the game simulation, some with rendering, etc. If one thread causes delays, it will have cascading effects, ultimately causing increased frame times (stutter) or, even worse, delays to the game tick, which may cause game-breaking bugs.
Networking may or may not be a big deal, it will at least risk having higher latencies.
Graphics are super sensitive. Most people will be able to spot small fluctuations in frame times.

Ever seen assets pooping up?
I see fairly little poop on my screens ;)
Asset popping is mostly a result of "poor" engine design, since many engines rely on feedback from the GPU to determine which higher-detail textures or meshes to load, which will inevitably lead to several frames of latency. This is of course more noticeable if the asset loading is slower, but it's still there no matter how fast your SSD and CPU may be. The only proper way to solve this is to pre-cache assets, which a well-tailored engine can easily do, but the GPU will not be able to predict this.
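The pre-caching idea can be sketched very simply: instead of reacting to GPU feedback (several frames late), the engine predicts which assets will be needed and loads them ahead of time. The names and the distance-based prediction rule below are invented for illustration:

```python
def predict_needed_assets(player_pos, world, radius=2):
    """Assets within `radius` grid cells of the player will likely be visible soon."""
    return {
        asset
        for (x, y), asset in world.items()
        if abs(x - player_pos[0]) <= radius and abs(y - player_pos[1]) <= radius
    }

def precache(cache, needed, load):
    for asset in needed:
        if asset not in cache:           # only load what we don't have yet
            cache[asset] = load(asset)   # in a real engine: async disk I/O

# Tiny toy world: grid position -> high-detail asset name.
world = {(0, 0): "rock_hi", (1, 0): "tree_hi", (9, 9): "castle_hi"}
cache = {}
precache(cache, predict_needed_assets((0, 0), world), load=lambda a: f"<{a}>")
print(sorted(cache))  # castle_hi is far away, so it is not loaded yet
```

A real engine would drive the prediction from camera movement and scene structure rather than a grid, but the principle is the same: the engine knows where the player is heading before the GPU ever samples a too-low mip.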

There's some hardware in the chips to help with that. How it works, though, hasn't been revealed and might never be.
However, this problem is already known and already has some solutions. Linux has supported big.LITTLE in its scheduler for a long time, and the information the OS has about its tasks is enough to build some heuristics from.
Very likely the hardware Intel added is just that: more task statistics to guide the OS into making the right choice.
Don't forget that most apps on Android are laggy anyway, and it's impossible for the end user to know what causes the individual cases of stutter or misinterpreted user input.
So I wouldn't say this is a good case study showing that hybrid CPUs work well.
Heuristics help the average case but do little for the worst case, and the worst case is usually what causes latency.

They might actually be quite decent. The previous generation of little cores, Tremont, turbos to 3.3 GHz, and the rumours are that Gracemont will have similar IPC to Skylake.

Obviously they aren't going to be high-power, but if Intel manages to get them to turbo to 3.5+ GHz or so, it would be more than good enough and could be pretty similar to an i7 from 2015~2016 (original Skylake).
There is a very key detail that you are missing. Even if the small cores have IPC comparable to Skylake, it's important to understand that IPC does not equate to performance, especially when multiple cores may be sharing resources. If they are sharing L2, the real-world impact will vary a lot, especially since the L2 is very closely tied to the pipeline, so any delays here are far more costly than a delay in, e.g., the L3.
 
Joined
Jun 1, 2021
Messages
194 (0.18/day)
That totally depends on how Windows intends to detect which threads can be moved to the slow cores. If, for instance, these cores are only used for low-priority background threads, then the implications will be very small, but so will the efficiency gains. There are usually thousands of these threads, but they only add up to a few percent of a single core's load on average.
The issue isn't just the load; having a lot of context switches for the threads is slow, and again, it can cause the CPU to evict important (to the game) cache data from L1/L2 (or maybe L3) into the higher levels of the hierarchy. In the end, it can cause big latency hits for games.
But in any user application, and especially games, all threads will have medium or high priority, even if the load is low. So just using statistics to determine where to run a thread risks causing serious latencies.
Most user tasks aren't highly demanding either.
Most things in a game are synchronized, some with the game simulation, some with rendering, etc. If one thread causes delays, it will have cascading effects, ultimately causing increased frame times (stutter) or, even worse, delays to the game tick, which may cause game-breaking bugs.
That's for the game logic/simulation side of things, really. Yes, that could cause issues if not synchronized; many things, however, wouldn't be.
Networking may or may not be a big deal, it will at least risk having higher latencies.
Physical-medium latency for networking is much higher than any latency a little core would likely add.
Graphics are super sensitive. Most people will be able to spot small fluctuations in frame times.
Frame-time fluctuations aren't only CPU-dependent and might be an issue with the GPU too.
I see fairly little poop on my screens ;)
Asset popping is mostly a result of "poor" engine design, since many engines rely on feedback from the GPU to determine which higher-detail textures or meshes to load, which will inevitably lead to several frames of latency. This is of course more noticeable if the asset loading is slower, but it's still there no matter how fast your SSD and CPU may be. The only proper way to solve this is to pre-cache assets, which a well-tailored engine can easily do, but the GPU will not be able to predict this.
Still not synchronized. The main game loop won't have a mutex/semaphore waiting for assets to load.
Volatile memory is a finite resource, and there's less of it than HDDs/SSDs hold; even a 'well tailored engine' might not have all the assets it needs cached.
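The "main loop won't wait on a mutex" point can be sketched with a future: the loop requests an asset asynchronously and keeps rendering with a placeholder until the load completes, never blocking on it. All names here are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def load_asset(name):          # stands in for slow disk I/O
    time.sleep(0.05)
    return f"<hi-res {name}>"

pool = ThreadPoolExecutor(max_workers=1)   # loader thread; could sit on a little core
pending = pool.submit(load_asset, "castle")

frames = []
for _ in range(200):                       # drastically simplified main loop
    # Never block: use the real asset only once the load has finished.
    asset = pending.result() if pending.done() else "<placeholder>"
    frames.append(asset)
    time.sleep(0.001)                      # pretend to render a frame

pool.shutdown()
print(frames[0], "...", frames[-1])
```

Early frames show the placeholder; later frames pick up the high-res asset once the background load finishes, and the loop's frame time is never tied to the disk.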
Don't forget that most apps on Android are laggy anyway, and it's impossible for the end user to know what causes the individual cases of stutter or misinterpreted user input.
So I wouldn't say this is a good case study showing that hybrid CPUs work well.
I'm not talking about Android specifically. Android is the worst-case scenario, since its architecture is made to support a vast array of devices, with each OEM providing the HALs needed to support them. Audio, for example, used to have unacceptably high latency because of those abstractions; see this article: https://superpowered.com/androidaudiopathlatency
Linux, however, is doing a good job at it. It has energy-aware scheduling and a lot of related features and patches.
Heuristics help the average case but do little for the worst case, and the worst case is usually what causes latency.
It all depends. If the scheduler has a flag that says 'this task cannot be put onto a little core', then it shouldn't hit the worst-case scenario. That was of course just an example; we don't know exactly how the scheduler, and the hardware Intel will put into the chip to help it, actually work.
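The hypothetical "never on a little core" flag could look something like this in a placement function: a hard per-task constraint that overrides the usual utilization heuristic. The flag name, core IDs, and thresholds are all invented for the sketch:

```python
BIG_CORES = [0, 1, 2, 3]       # assumed core IDs, for illustration only
LITTLE_CORES = [4, 5, 6, 7]

def place(task):
    """Return the set of cores a task may run on."""
    if task.get("no_little"):               # hard constraint set by the app
        return BIG_CORES
    if task.get("utilization", 0.0) < 0.1:  # background-ish work
        return LITTLE_CORES
    return BIG_CORES                        # default: don't risk latency

# A latency-critical but mostly idle thread stays off the little cores...
print(place({"name": "game_tick", "no_little": True, "utilization": 0.02}))
# ...while an equally idle thread without the flag is a little-core candidate.
print(place({"name": "updater", "utilization": 0.02}))
```

This is roughly the shape of real mechanisms like Linux's cpusets and affinity masks, which already let software express "keep me on these cores" regardless of what the heuristics would pick.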
There is a very key detail that you are missing. Even if the small cores have IPC comparable to Skylake, it's important to understand that IPC does not equate to performance, especially when multiple cores may be sharing resources. If they are sharing L2, the real-world impact will vary a lot, especially since the L2 is very closely tied to the pipeline, so any delays here are far more costly than a delay in, e.g., the L3.
Of course a delay in L2 is way more costly than one in L3; they have pretty different latencies. And no, the L2 generally isn't super tied to the pipeline; L1I and L1D are more tied to it.
Now, about sharing resources: yes, that's true. But also keep in mind that the L2 is very big, at 2 MB per 4 Gracemont cores, more than what Skylake (and its refreshes like Comet Lake) had per core, which was 256 KB/core. I find it unlikely that this would cause an issue: assuming all 4 cores are competing for resources, they still have 512 KB each. The worrying part isn't resource starvation, as there's more than enough to feed 4 little cores; it's how a big L2 slice like that behaves in terms of latency. It will obviously be considerably slower than a 512 KB L2, but then, these aren't high-performance cores, so it might not end up being noticeable.
Another thing that slightly reduces the strain on the L2 is that each little core has 96 KB of L1: 64 KB of L1I and 32 KB of L1D.

Anyway, we can't know for sure until Intel releases Alder Lake; this is all unsupported speculation until then. And it all depends on how good 10 nm ESF ends up being and how they clock those little cores. Knowing Intel, they might do it pretty aggressively for desktop parts, so 3.6 GHz or more could happen, or they could just clock it like Tremont and keep it at 3.3 GHz. Nobody knows.
 
Joined
Nov 15, 2016
Messages
454 (0.17/day)
System Name Sillicon Nightmares
Processor Intel i7 9700KF 5ghz (5.1ghz 4 core load, no avx offset), 4.7ghz ring, 1.412vcore 1.3vcio 1.264vcsa
Motherboard Asus Z390 Strix F
Cooling DEEPCOOL Gamer Storm CAPTAIN 360
Memory 2x8GB G.Skill Trident Z RGB (B-Die) 3600 14-14-14-28 1t, tRFC 220 tREFI 65535, tFAW 16, 1.545vddq
Video Card(s) ASUS GTX 1060 Strix 6GB XOC, Core: 2202-2240, Vcore: 1.075v, Mem: 9818mhz (Sillicon Lottery Jackpot)
Storage Samsung 840 EVO 1TB SSD, WD Blue 1TB, Seagate 3TB, Samsung 970 Evo Plus 512GB
Display(s) BenQ XL2430 1080p 144HZ + (2) Samsung SyncMaster 913v 1280x1024 75HZ + A Shitty TV For Movies
Case Deepcool Genome ROG Edition
Audio Device(s) Bunta Sniff Speakers From The Tip Edition With Extra Kenwoods
Power Supply Corsair AX860i/Cable Mod Cables
Mouse Logitech G602 Spilled Beer Edition
Keyboard Dell KB4021
Software Windows 10 x64
Benchmark Scores 13543 Firestrike (3dmark.com/fs/22336777) 601 points CPU-Z ST 37.4ns AIDA Memory
Is this for real? He's the opposite of reliable.
He leaked this, he leaked Zen 3, he leaked RDNA2, and he leaked PCB shots of DG2. Can I try your drugs? Mine suck.
 
Joined
Nov 4, 2019
Messages
6 (0.00/day)
System Name Archelon
Processor Ryzen 1600AF (4.0Ghz)
Motherboard X570M PRO4
Cooling Noctua NH-U14D
Memory Trident Z Neo 32gb (3200Mhz)
Video Card(s) RTX 4000
Storage WD Blue 2.0TB M.2
Case TJ08B-E
Is this for real? He's the opposite of reliable.

Thank you! This is exactly what I was thinking. If I recall right, his strategy is just to fling as much crap against the wall as possible, and if any of it sticks, he's heralded as some prophet.
 
Joined
Sep 15, 2011
Messages
6,471 (1.40/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
Maan i would love to upgrade, cuz i am on Win7 and desperately waiting for Win11. So far Win10 was such a crappy os.
Feel free to elaborate? How come Win10 is a crappy OS compared to 7? In what way?
 