
SiSoftware Compiles Early Performance Preview of the Intel Core i9-12900K


I'm not saying it can't, but Intel has some catching up to do to beat the 5950X.
 
Well that's kind of the point ~ it's (mostly) useless unless mobos support it & even then it could be sketchy if not implemented properly!

This one supported both DDR2 and DDR3 (not both at once though)
I believe AL's DDR4/DDR5 support is more about flexibility for MoBo manufacturers: until CAS latencies come down, DDR5 may be a hard sell for some enthusiasts, so a manufacturer can just make DDR4 and DDR5 variants of their boards.
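To put some rough numbers on the latency point, here's a quick sketch of absolute CAS latency in nanoseconds (the kits and timings are assumed round figures for illustration, not anything quoted in this thread):

[CODE]
# Rough sketch: absolute CAS latency in nanoseconds from data rate and CL.
def cas_latency_ns(data_rate_mts, cl):
    # CL is counted in memory clock cycles; the memory clock is half the
    # transfer rate, so one cycle lasts 2000 / data_rate nanoseconds.
    return cl * 2000 / data_rate_mts

# Illustrative kits (assumed timings, not measurements):
for name, rate, cl in [("DDR4-3200 CL16", 3200, 16),
                       ("DDR5-4800 CL40", 4800, 40),
                       ("DDR5-6400 CL32", 6400, 32)]:
    print(f"{name}: {cas_latency_ns(rate, cl):.1f} ns")
# DDR4-3200 CL16: 10.0 ns / DDR5-4800 CL40: 16.7 ns / DDR5-6400 CL32: 10.0 ns
[/CODE]

So in absolute nanoseconds, launch-era DDR5 starts out behind decent DDR4 and only catches up once speeds rise and timings tighten, which is exactly why offering DDR4 board variants still makes sense for now.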
 
I feel people should not expect Intel to have a miracle in the form of Alder Lake that will allow them to pull away from the competition significantly. While it's got 16 cores in the form of 8 perf and 8 eff cores, that is not the same as 16 perf cores. Under loads that favor multithreading well, the big.LITTLE config may not perform as well. They are not competing with 16 Bulldozer cores. The Zen3 cores are quite potent and easily beat the Skylake-based cores.
 
I feel people should not expect Intel to have a miracle in the form of Alder Lake that will allow them to pull away from the competition significantly. While it's got 16 cores in the form of 8 perf and 8 eff cores, that is not the same as 16 perf cores. Under loads that favor multithreading well, the big.LITTLE config may not perform as well. They are not competing with 16 Bulldozer cores. The Zen3 cores are quite potent and easily beat the Skylake-based cores.
All they really need to do is have better single-threaded performance and beat AMD's 12c in multithreaded, IF they price it accordingly.
Then the AMD 16c becomes a tradeoff rather than a clear winner.

Rocket Lake was competitive in single-threaded and lost heavily in multi, and used more power...
Alder Lake has to compete against Zen3+ and its added ~15% FPS gains.
 
All they really need to do is have better single-threaded performance and beat AMD's 12c in multithreaded, IF they price it accordingly.
Then the AMD 16c becomes a tradeoff rather than a clear winner.

Rocket Lake was competitive in single-threaded and lost heavily in multi, and used more power...
Alder Lake has to compete against Zen3+ and its added ~15% FPS gains.
Leaked prices would indicate that Intel expects the 12900K to land somewhere between the 5900X and 5950X. If it was going to be faster, it would be more expensive.
 
This one supported both DDR2 and DDR3 (not both at once though)
I believe AL's DDR4/DDR5 support is more about flexibility for MoBo manufacturers: until CAS latencies come down, DDR5 may be a hard sell for some enthusiasts, so a manufacturer can just make DDR4 and DDR5 variants of their boards.
Mine was a P45-8D Memory Lover (with a Q9550), still working normally the last time I checked, with an almost identical layout with DDR2 & 3.
The box says "The World's First 8-DIMM mobo", with a bonus Global Star CPU cooler. I like it a lot :D
 
Leaked prices would indicate that Intel expects the 12900K to land somewhere between the 5900X and 5950X. If it was going to be faster, it would be more expensive.
Perhaps. Didn't the 11-series pretty much see a price cut from MSRP almost right out of the gate?
 
In all my years, I can't recall many boards supporting 2 kinds of memory. Granted, I might have missed something! The last time I can recall was when SDRAM came out over 25 years ago. I had a board that supported both RAM and SDRAM, though you could only use one or the other. Usually the dual support means the CPU will support both, but it's the vendor's choice which one to implement. I think Apollo Lake also had DDR3/DDR4 support, but no boards supported both. It would probably be a really complex design, too.

I had a Foxconn board back during the Core 2 Duo/Quad days that supported DDR2 and DDR3; it had two DIMM slots of each.
 
I'm quite certain this was run only on a single cluster (that is, 8 cores). Either the efficiency or the performance ones, but I would be inclined to think that these were run on the little cores.
 
Suddenly people are up in arms about Sandra, which has been around and in use for almost 25 years. There's plenty of information about what each benchmark entails available on their website if you actually want to find out.

Here's a screenshot of the whole article since it has been taken down, just in case more people want to claim things like it isn't optimized for Intel's hybrid architecture, or that the results are invalid because it's running on Windows 10, or whatever other justification they want to come up with beyond "the product isn't out yet."

Thank You for that!

Considering Alder Lake was touted as the next coming of Jesus
It wasn't. But the results do show that there is merit to the design on x86.

until CAS latencies come down, DDR5 may be a hard sell for some enthusiasts
This. As with every new generation of memory, it takes time for the industry to refine its offerings to surpass the previous generation.
 
The company I work for is very close to Intel, and everyone hates them. The devs, the devops, the guys from the datacenters, and even the business side are starting to argue that the very high power consumption isn't worth the subsidies Intel gives us.
 
I'm quite certain this was run only on a single cluster (that is, 8 cores). Either the efficiency or the performance ones, but I would be inclined to think that these were run on the little cores.
Please read the provided link to the screen grab of the original source that I added; it explains a bit more.
 
Leaked prices would indicate that Intel expects the 12900K to land somewhere between the 5900X and 5950X. If it was going to be faster, it would be more expensive.

idk, it depends entirely on what Intel wants to do. If they want to win some mindshare back they might want to be faster yet cheaper than the 5900X; I mean, it is "only" 8 big cores, so I would think it would be a power move as well.
 
I just hope ADL is going to make up for the Rocket Lake disaster in gaming performance and add its own gain on top, making it around 40% faster in games vs Comet Lake, as it should be. Benchmarks don't matter anymore; the Rocket Lake launch illustrated very well how irrelevant all of those benchmarks are for gaming CPUs, with 20% gains left and right and then 0% in actual games. Generational gains are just not across the board anymore; it will now be very common to see a new CPU getting massive gains in some benchmarks and then no gains in others.
 
Alder Lake has to compete against Zen3+ and its added ~15% FPS gains.
I just hope ADL is going to make up for the Rocket Lake disaster in gaming performance and add its own gain on top, making it around 40% faster in games vs Comet Lake, as it should be. Benchmarks don't matter; the Rocket Lake launch illustrated very well how irrelevant all of those benchmarks are for gaming CPUs, with 20% gains left and right and then 0% in actual games.
To you both:
Zen3+'s 15% FPS gains were when running the CPUs at a lower clock speed.
FPS gains flatten out when the CPU is no longer the bottleneck; with most current games that happens a little over 4 GHz for Skylake-family CPUs. This is also the reason why Rocket Lake showed little gains in gaming over Skylake. So until games become more CPU-demanding, we should expect Alder Lake and other new CPUs to show minimal gains in games, despite having much faster cores.
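A toy model of that flattening, if it helps: treat frame time as roughly the longer of the CPU's per-frame work and the GPU's per-frame work (the millisecond figures are made up purely for illustration):

[CODE]
def fps(cpu_ms, gpu_ms):
    # Whichever side is slower sets the frame time; the other one waits.
    return 1000 / max(cpu_ms, gpu_ms)

gpu_ms = 8.0  # assume the GPU needs 8 ms per frame at the chosen settings
for cpu_ms in (12.0, 9.0, 7.0, 5.0):  # progressively faster CPUs
    print(f"CPU {cpu_ms} ms -> {fps(cpu_ms, gpu_ms):.0f} FPS")
# 83, 111, 125, 125: once the CPU gets under the GPU's 8 ms, FPS stops scaling.
[/CODE]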
 
To you both:
Zen3+'s 15% FPS gains were when running the CPUs at a lower clock speed.
FPS gains flatten out when the CPU is no longer the bottleneck; with most current games that happens a little over 4 GHz for Skylake-family CPUs. This is also the reason why Rocket Lake showed little gains in gaming over Skylake. So until games become more CPU-demanding, we should expect Alder Lake and other new CPUs to show minimal gains in games, despite having much faster cores.
I've heard that before. You are making up imaginary bottlenecks.

The reason why RKL didn't bring gains was because it was an afterthought backport that got better cores and no other upgrades. The reason why AMD is able to make big gen-on-gen gains, go beyond the imaginary bottleneck that you are implying, and beat Comet Lake by a big margin in some games with their 5000 series CPUs, is because their generational updates are comprehensive, and they are going to do that again with Zen4. Alder Lake is a comprehensive upgrade and overhaul this time around as well, and it should not have the same problems as Rocket Lake. It is the RKL arch bottlenecking itself, not games bottlenecking it.
 
I've heard that before. You are making up imaginary bottlenecks.

The reason why RKL didn't bring gains was because it was an afterthought backport that got better cores and no other upgrades. The reason why AMD is able to make big gen-on-gen gains, go beyond the imaginary bottleneck that you are implying, and beat Comet Lake by a big margin in some games with their 5000 series CPUs, is because their generational updates are comprehensive, and they are going to do that again with Zen4. Alder Lake is a comprehensive upgrade and overhaul this time around as well, and it should not have the same problems as Rocket Lake. It is the RKL arch bottlenecking itself, not games bottlenecking it.


You must write for CNN.

It's a known fact that at higher resolutions CPUs aren't the bottleneck for FPS. It's the GPU that is.

We have entered the era where only scientific work, benchmarks, and poorly optimized software are the reasons CPUs aren't "fast enough".
 
You must write for CNN.

It's a known fact that at higher resolutions CPUs aren't the bottleneck for FPS. It's the GPU that is.

We have entered the era where only scientific work, benchmarks, and poorly optimized software are the reasons CPUs aren't "fast enough".

This is a very basic overgeneralization that was created for the sake of easier explanation and then turned into a myth over time. Resolution doesn't have anything to do with CPU bottleneck. The behavior of each game individually is what matters. If, say, a 10900K is able to pull only 45 FPS in one scene because of a CPU bottleneck (for example the game uses only one core), then it will have the same 45 FPS at 1080p, 1440p, 2160p or any other resolution of the same aspect ratio (aspect ratio affects performance in CPU-bound scenarios), as long as your GPU can handle it. Saying that the CPU simply isn't a bottleneck at higher resolution, period, is a very basic misunderstanding of how things work, somewhat bizarre for someone who has been posting 2 posts a day on a tech forum for the last 16 years. What have you been doing all that time?

Again, I understand why this generalization is used when talking to people who are just getting into this, for example, but to bring something like that up in a more advanced discussion is rather embarrassing.
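If it helps, here's the same kind of toy frame-time model applied to that 45 FPS example, with the CPU-side cost held fixed and only an assumed GPU cost growing with resolution:

[CODE]
cpu_ms = 1000 / 45  # a scene that is CPU-limited to ~45 FPS
# Assumed GPU frame times per resolution, purely for illustration:
for res, gpu_ms in [("1080p", 6.0), ("1440p", 10.0), ("2160p", 20.0)]:
    print(f"{res}: {1000 / max(cpu_ms, gpu_ms):.0f} FPS")
# 45, 45, 45 - as long as the GPU keeps up, resolution doesn't move the cap.
[/CODE]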
 
This is a very basic overgeneralization that was created for the sake of easier explanation and then turned into a myth over time. Resolution doesn't have anything to do with CPU bottleneck. The behavior of each game individually is what matters. If, say, a 10900K is able to pull only 45 FPS in one scene because of a CPU bottleneck (for example the game uses only one core), then it will have the same 45 FPS at 1080p, 1440p, 2160p or any other resolution of the same aspect ratio (aspect ratio affects performance in CPU-bound scenarios), as long as your GPU can handle it. Saying that the CPU simply isn't a bottleneck at higher resolution, period, is a very basic misunderstanding of how things work, somewhat bizarre for someone who has been posting 2 posts a day on a tech forum for the last 16 years. What have you been doing all that time?

Again, I understand why this generalization is used when talking to people who are just getting into this, for example, but to bring something like that up in a more advanced discussion is rather embarrassing.


Durrrrrr......




"On popular demand from comments over the past several CPU reviews, we are including game tests at 720p (1280x720 pixels) resolution. All games from our CPU test suite are put through 720p using a RTX 2080 Ti graphics card and Ultra settings. This low resolution serves to highlight theoretical CPU performance because games are extremely CPU-limited at this resolution. Of course, nobody buys a PC with an RTX 2080 Ti to game at 720p, but the results are of academic value because a CPU that can't do 144 frames per second at 720p will never reach that mark at higher resolutions. So, these numbers could interest high-refresh-rate gaming PC builders with fast 120 Hz and 144 Hz monitors. Our 720p tests hence serve as synthetic tests in that they are not real world (720p isn't a real-world PC-gaming resolution anymore) even though the game tests themselves are not synthetic (they're real games, not 3D benchmarks)."

The architecture differences in out-of-order execution between AMD and Intel are so close that cache hits and cache size seem to be the determining factor. Neither is going to be able to pull off a huge win unless they have some never-before-seen act of pre-determined calculation so cache hits are always 100%, or they find a way to reduce latency to 0 ns. Either is impossible, so it's down to cache and latency to cache.
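A back-of-the-envelope way to see why hit rate and latency to cache dominate is the standard average-memory-access-time formula; the numbers below are assumed round figures, not measurements of any actual chip:

[CODE]
def amat(hit_ns, miss_rate, miss_penalty_ns):
    # Average memory access time = hit time + miss rate * miss penalty.
    return hit_ns + miss_rate * miss_penalty_ns

# Treat the last-level cache as a single level with DRAM as the miss penalty.
print(amat(10, 0.05, 80))  # 14.0 ns
print(amat(10, 0.02, 80))  # 11.6 ns
# Cutting the miss rate from 5% to 2% drops average latency by ~17% without
# touching the execution engine at all.
[/CODE]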


What have I been doing for 16 years? Building servers for medical offices, working, having kids that have grown into teens able to drive, not that it's any of your fucking business. I have also been paying attention to the evolution and revolution of tech. You are merely getting your fingers wet; let me know when you wake up at 6 AM the first time you fall asleep at an on-site install waiting for a RAID array to rebuild.
 
I've heard that before. You are making up imaginary bottlenecks.
That's nonsense.
FPS is a measure of GPU performance, not CPU performance. And when the CPU is fast enough to fully saturate the GPU, the GPU becomes the bottleneck. This should be elementary knowledge, even to those without a CS degree.
If someone released a CPU with 10x faster cores tomorrow, it would not change the performance in most current games.

The reason why RKL didn't bring gains was because it was an afterthought backport that got better cores and no other upgrades.
That's an absurd statement.
Rocket Lake does in fact have higher single threaded performance. What the developers thought and felt during the development is irrelevant.

The reason why AMD is able to make big gen-on-gen gains, go beyond the imaginary bottleneck that you are implying, and beat Comet Lake by a big margin in some games with their 5000 series CPUs, is because their generational updates are comprehensive, and they are going to do that again with Zen4. Alder Lake is a comprehensive upgrade and overhaul this time around as well, and it should not have the same problems as Rocket Lake. It is the RKL arch bottlenecking itself, not games bottlenecking it.
Clearly you are not familiar with Sunny Cove's design, or other CPU designs for that matter.
Both Sunny Cove and Golden Cove offer 19% IPC gains over their predecessor. The issues with Rocket Lake are tied to the inferior production node, not the underlying architecture.
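For what it's worth, compounding those two claimed ~19% per-clock steps is roughly where the "~40% over Comet Lake" expectation earlier in the thread comes from, assuming clocks and the rest of the platform hold up:

[CODE]
# Skylake -> Sunny/Cypress Cove -> Golden Cove, per-clock, per the claimed gains:
gain = 1.19 * 1.19
print(f"{(gain - 1) * 100:.0f}%")  # ~42%
[/CODE]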

This is a very basic overgeneralization that was created for the sake of easier explanation and then turned into a myth over time. Resolution doesn't have anything to do with CPU bottleneck.
CPU overhead is mostly linear with frame rate.
When the resolution is lower, the GPU bottleneck becomes less and the CPU bottleneck greater since the frame rate increases. With most games today you have to run them at 720p and/or low details with an absurdly high frame rate to show a significant difference between CPUs. This becomes a pointless and futile effort when actual gamers will not run the hardware under such conditions.
 
Durrrrrr......




"On popular demand from comments over the past several CPU reviews, we are including game tests at 720p (1280x720 pixels) resolution. All games from our CPU test suite are put through 720p using a RTX 2080 Ti graphics card and Ultra settings. This low resolution serves to highlight theoretical CPU performance because games are extremely CPU-limited at this resolution. Of course, nobody buys a PC with an RTX 2080 Ti to game at 720p, but the results are of academic value because a CPU that can't do 144 frames per second at 720p will never reach that mark at higher resolutions. So, these numbers could interest high-refresh-rate gaming PC builders with fast 120 Hz and 144 Hz monitors. Our 720p tests hence serve as synthetic tests in that they are not real world (720p isn't a real-world PC-gaming resolution anymore) even though the game tests themselves are not synthetic (they're real games, not 3D benchmarks)."

The architecture differences in out-of-order execution between AMD and Intel are so close that cache hits and cache size seem to be the determining factor. Neither is going to be able to pull off a huge win unless they have some never-before-seen act of pre-determined calculation so cache hits are always 100%, or they find a way to reduce latency to 0 ns. Either is impossible, so it's down to cache and latency to cache.

This test only proves my point: it tests the maximum framerate that CPUs can hit in the tested games and clearly states that those CPUs are never going to reach higher FPS than that, regardless of resolution, which makes the statement that "CPU is no longer a bottleneck at higher resolutions" obviously incorrect. How often that actually happens in practice is another matter entirely, because you will be GPU-bound in many games; that is where this oversimplification is coming from, but it is still objectively incorrect.

What have I been doing for 16 years? Building servers for medical offices, working, having kids that have grown into teens able to drive, not that it's any of your fucking business. I have also been paying attention to the evolution and revolution of tech. You are merely getting your fingers wet; let me know when you wake up at 6 AM the first time you fall asleep at an on-site install waiting for a RAID array to rebuild.

That's an unnecessary overreaction. Obviously I didn't ask about your personal life; I assumed that goes without saying. All I asked was how it is that you don't understand the behavior of games and are still throwing around oversimplified and incorrect principles despite almost 2 decades of experience around PCs and presumably games, if you enter a discussion about that specifically. There is no need for your personal information. Maybe I shouldn't have said that, but it was you who started the discussion with "CNN", which I assume is a US TV news channel, so that was a huge insult right off the bat.

That's nonsense.
FPS is a measure of GPU performance, not CPU performance. And when the CPU is fast enough to fully saturate the GPU, the GPU becomes the bottleneck. This should be elementary knowledge, even to those without a CS degree.
If someone released a CPU with 10x faster cores tomorrow, it would not change the performance in most current games.

CPU overhead is mostly linear with frame rate.
When the resolution is lower, the GPU bottleneck becomes less and the CPU bottleneck greater since the frame rate increases. With most games today you have to run them at 720p and/or low details with an absurdly high frame rate to show a significant difference between CPUs. This becomes a pointless and futile effort when actual gamers will not run the hardware under such conditions.

And yet again you are creating imaginary bottlenecks and going by the child logic of "you are going to be GPU-bound anyway so there is no point in having a faster CPU", as if a few of the latest AAA games were all that you have ever seen and played. I don't really see a point in replying further; you are just flat out wasting my time at this point.
 
This is a very basic overgeneralization
No, his assessment was spot-on.
that was created for the sake of easier explanation and then turned into a myth over time.
No, this is reality. Being the owner of several PCs with a wide range of CPUs, I can confirm definitively that over the last 12 years CPUs have reached a point where most are more than enough to do most general computing tasks efficiently. Anything 4 cores and up will get the job done quickly, but even most dual cores do well. And for the last 6 years, there are no examples of mainstream (6-core+) CPUs that can't do any task asked of them and do it in a speedy fashion. At this point in time, CPU makers are competing more against themselves than they are trying to meet the demands of software tasks.
you are just flat out wasting my time at this point.
No, you did that to yourself...
 
No, his assessment was spot-on.

No, this is reality. Being the owner of several PCs with a wide range of CPUs, I can confirm definitively that over the last 12 years CPUs have reached a point where most are more than enough to do most general computing tasks efficiently. Anything 4 cores and up will get the job done quickly, but even most dual cores do well. And for the last 6 years, there are no examples of mainstream (6-core+) CPUs that can't do any task asked of them and do it in a speedy fashion. At this point in time, CPU makers are competing more against themselves than they are trying to meet the demands of software tasks.
You are making the same mistake as he is, looking at this with child logic and through your subjective experience. The reality is that resolution has nothing to do with CPU bottleneck, and the point of CPU bottleneck for a given game and CPU combination is the same for every resolution of the same aspect ratio. That is all I have been saying, and you keep bringing in your subjective, technically incorrect, oversimplified versions of that, or trying to bring in software other than what was originally discussed: games and only games.

No, you did that to yourself...
That much is true, but I am stuck here for a few hours anyway, so what to do... :p What I really meant by that, though, is that the discussion is normally supposed to go somewhere, but if we cannot get past entry-level oversimplifications of something as basic as CPU bottlenecking in games, then it won't.

Nevermind I guess.
 