SiSoftware Compiles Early Performance Preview of the Intel Core i9-12900K

You are making the same mistake he is, looking at this with child logic
LOL@"child logic" :rolleyes: Way to keep it classy there. :slap:
through your subjective experience.
No, this is an experience LOTS of people are having. Just because YOU have not experienced it doesn't mean that it is not being experienced by others. Try to be less self-centered, eh.
Reality is that resolution has nothing to do with CPU bottleneck
Yes, it does. And MASSIVE amounts of benchmarking prove that point. It is VERY common knowledge. Your ignorance of reality does not alter reality. Context much?
that is all I have been saying, and you keep bringing your subjective, technically incorrect, oversimplified versions
Oh that's adorable. That "you keep bringing" statement directly implies I have repeatedly responded to you. I responded to you once before you made that statement. That does not qualify as repeated effort.

That much is true, but I am stuck for a few hours anyway, so what to do...
Ahh, so you admit you're trolling everyone. Now where is that button.. CLICK!
 
All the current graphics APIs (OpenGL, Vulkan, Direct3D) work by submitting a queue of operations for the GPU to process, and the rendering thread will wait until the GPU is ready to accept more. If the CPU is not fast enough to keep up with the GPU, we call it CPU bottlenecked, and the GPU will spend a lot of cycles idling, waiting for more work to do. When the CPU is fully able to keep the GPU saturated, the workload is GPU bottlenecked, and this is what we want, since you then get the full scaling potential of your graphics hardware. That's about as well as I can explain it without diving into technical details and code examples, but I hope most of you get the point.
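For anyone who does want a code-flavored illustration anyway, here's a minimal sketch of the idea (not real API code; all the millisecond costs are invented): frame time is roughly the maximum of the CPU submission time and the GPU render time, so a faster GPU only helps while the CPU can keep it fed.

Code:
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical per-frame costs in milliseconds (invented numbers).
    double cpu_ms = 8.0;                       // CPU time to build and submit the command queue
    double gpu_options[] = { 16.0, 8.0, 4.0 }; // render times for increasingly fast GPUs

    for (double gpu_ms : gpu_options) {
        // The slower stage dictates the frame time: if the CPU is slower,
        // the GPU idles (CPU bottleneck); if the GPU is slower, the CPU waits.
        double frame_ms = std::max(cpu_ms, gpu_ms);
        std::printf("GPU %5.1f ms -> %6.1f fps (%s bottleneck)\n",
                    gpu_ms, 1000.0 / frame_ms,
                    cpu_ms >= gpu_ms ? "CPU" : "GPU");
    }
    return 0;
}

In this toy model, upgrading from the 16 ms GPU to the 8 ms GPU doubles the frame rate, but the 4 ms GPU adds nothing, because the 8 ms CPU has become the bottleneck.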

Currently an i5-11600K or a Ryzen 5 5600X is able to keep an RTX 3090 saturated in most games, and that's actually a good spot to be in, so gamers can put as much money into their graphics card as they can. So unless new and more demanding games arrive soon, a lot of people will be disappointed when Alder Lake arrives, despite it being a very performant CPU architecture. It will probably show great gains in most workloads except gaming, and that's not a bad thing; ideally, games shouldn't be CPU bottlenecked at all. But with pretty much everything else, including web browsing, office work and general responsiveness, Alder Lake is likely to provide noticeable improvements. Considering how bloated everything is getting these days, this should be exciting. I'm surprised to see how laggy even a simple spreadsheet in MS Office has gotten, not to mention the CPU load of basic web pages.
 
Behave!
And stop the insults and arguing.
Move on.

Quotes from the Guidelines:
Posting in a thread
Be polite and constructive; if you have nothing nice to say, then don't say anything at all.
This includes trolling, continuous use of bad language (i.e. cussing), flaming, baiting, retaliatory comments, system feature abuse, and insulting others.
Do not get involved in any off-topic banter or arguments. Please report and avoid them instead.

Thank You, and Have a Good Morning.
 
Hopefully when a new CPU-Z leak arrives (tomorrow?), the comments will be more objective.

Suddenly people are up in arms about Sandra, which has been around and in use for almost 25 years. There's plenty of information about what each benchmark entails available on their website if you actually want to find out.

Here's a screenshot of the whole article since it has been taken down, just in case more people want to claim things like it isn't optimized for Intel's hybrid architecture, or that the results are invalid because it's running on Windows 10, or whatever other justification they want to come up with beyond "the product isn't out yet."

I can only judge this benchmark by using the results of the 3 already-known CPUs. Well, in the first 2 slides of a multicore-oriented test, the 11900K easily beats the 10-core 10900K and the 8+8-core 12900K, and in the second slide especially, it performs better even compared to the 12-core 5900X. Most of us would expect different test results for the 10900K, 5900X and 11900K.
 
Hopefully when a new CPU-Z leak arrives (tomorrow?), the comments will be more objective.
You mean this one?

CPU-Z is not as trustworthy as SiSoft imho and this screenshot even less so.
 
Behave!
And stop the insults and arguing.
Move on. Thank You, and Have a Good Morning.
Yes, you are right all the way. Arguing or getting smart is most certainly not productive; informed productivity is what tech channels should be all about. As to the subject under discussion: it's not over until it's over, and probably not until November, when all the much-contested (AMD & Intel) data is in and has been regurgitated a few times over here on the tech channels. For me, right now the most important thing to know is what the Intel stock will look like if they come out the clear winner in this particular race over essentially hairline performance differences. Sheer product availability will also play its part. Interesting debate times ahead, and with Win 11 in tow, at least the tech channels will have something real to talk about besides AIO, memory, SSD and headphone upgrades, etc.
 
You mean this one?

CPU-Z is not as trustworthy as SiSoft imho and this screenshot even less so.
Yes, and I'm afraid we'll see comments like in this thread again. Personally, I consider CPU-Z a simple and reliable benchmark (except maybe for RL, which could not fully convert its increased IPC into real-world gains, for several reasons).
 
Yes, and I'm afraid we'll see comments like in this thread again. Personally, I consider CPU-Z a simple and reliable benchmark (except maybe for RL, which could not fully convert its increased IPC into real-world gains, for several reasons).
CPU-Z a reliable benchmark... Nope.
I have seen a 5950X with PBO and Curve Optimizer at 5.2 GHz hit 725, but honestly, on which planet can a 5600X hit 790? We are not talking about an OC with some kind of exotic cooling; that number is with a 280 mm water cooler.
 

Attachments
  • Screenshot 2021-09-26 170754.png (129 KB)
The CPUZ benchmark is fine. It is but one among many useful metrics that can be used to gauge performance. Let's stop with the crapping on it, which is not the purpose of this thread.
 
The CPUZ benchmark is fine. It is but one among many useful metrics that can be used to gauge performance. Let's stop with the crapping on it, which is not the purpose of this thread.

Yeah back to the fanboi war. I need something to watch while I eat my popcorn.

I hope Intel returns to competition... and reduces power usage to sane levels... and doesn't use Win 11 scheduler and compiler BS to do it.
I want price competition and performance competition. Then the consumer wins.
 
CPU-Z a reliable benchmark... Nope.
I have seen a 5950X with PBO and Curve Optimizer at 5.2 GHz hit 725, but honestly, on which planet can a 5600X hit 790? We are not talking about an OC with some kind of exotic cooling; that number is with a 280 mm water cooler.
This must be a 280x280 water cooler...
Seriously, if the result is real, you should contact TPU so they can find and fix the bug.
 
I hope Intel returns to competition... and reduces power usage to sane levels... and doesn't use Win 11 scheduler and compiler BS to do it.
I don’t think most of those will be true. I think they will return to performance competition, but I’m afraid power consumption is here to stay, and any ADL SKU that uses both core types will absolutely depend on the scheduler in Win11. I hope reviews comprehensively test the CPU on both Win10 and Win11 so that we can see just how important the new scheduler will be on multi-threaded workloads. In many multi-threaded workloads, the threads jump around from core to core. Imagine playing a game where the efficiency cores start getting tasks inadvertently. I don’t see how Windows 10 will avoid this unless the scheduler is backported.

It reminds me of Bulldozer, where the 8-core CPU was an 8 ALU / 4 FPU design. The problem was that 2 ALUs were joined to each FPU and Windows didn't know what to do, so the CPU often underperformed due to resource mismanagement. Lakefield was a precursor to Alder Lake, and its 1+4 big.LITTLE configuration performed quite terribly. You can bet Intel and MS worked closely together to get Windows 11 working right, and it's no coincidence that Windows 11 went from "just recently announced" to launching ahead of Alder Lake.
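Until then, applications can in principle work around it by hand. Here's a rough Windows sketch of keeping a latency-sensitive thread off the E-cores by restricting its affinity mask; the assumption that logical processors 0-7 are the P-cores is purely illustrative, and real code would have to query the topology (e.g. via GetLogicalProcessorInformationEx) instead of hardcoding it.

Code:
#include <windows.h>
#include <cstdio>

int main() {
    // Illustrative assumption: logical processors 0-7 are the P-cores.
    // A real application must query the CPU topology instead of hardcoding this.
    DWORD_PTR p_core_mask = 0xFF; // bits 0..7 set

    // Restrict the current thread to the assumed P-cores.
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), p_core_mask);
    if (previous == 0) {
        std::printf("SetThreadAffinityMask failed: %lu\n", (unsigned long)GetLastError());
        return 1;
    }
    std::printf("Thread now limited to mask 0x%llx (was 0x%llx)\n",
                (unsigned long long)p_core_mask, (unsigned long long)previous);
    // ... run the latency-sensitive work here ...
    return 0;
}

That's exactly the kind of manual babysitting a good scheduler should make unnecessary, which is why the Win11 scheduler matters so much for these hybrid parts.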
 
Why would they contact TPU? Do you think TPU makes CPUZ? They do not; W1zzard makes GPUZ, but that's not the same utility...
Correct, but the meaning remains the same.
 
Yes, and I'm afraid we'll see again comments like in this thread. Personally I consider CPU-Z one simple and reliable benchmark (maybe except for RL, which could not totally convert increased IPC to real world gains for several reasons).
I remember when 1st-gen Zen Ryzen was a beast on this bench; then they updated it and the scores dropped a LOT... let's wait and see, folks
 
I remember when 1st-gen Zen Ryzen was a beast on this bench; then they updated it and the scores dropped a LOT... let's wait and see, folks
Agreed.
 
Wait, what? Are you saying CPUZ and GPUZ are the same?
No, what I meant is if the result is real, the developer should be informed, so the bug can be found and fixed.
 
I remember when 1st-gen Zen Ryzen was a beast on this bench; then they updated it and the scores dropped a LOT... let's wait and see, folks
No, what I meant is if the result is real, the developer should be informed, so the bug can be found and fixed.
To you both:
Several synthetic benchmarks changed after Zen launched, and they changed because the developers decided to change the weighting of the benchmark scores. I would assume they run the same code across different CPUs (otherwise a direct comparison would be pointless), which would mean it can't be a software bug. I'm fairly sure they changed the weighting because these benchmarks made Zen/Zen 2 CPUs look way better than real-world benchmarks reflected.

This exposes one of the fundamental problems of synthetic benchmarking: there is really no fair way to do it, especially if you want to generate some kind of total score for the CPU. There will always be disagreements on how to weight different workloads to create a total score. In reality, synthetic benchmarks are only interesting for theoretical discussions, and no one should base their purchasing decisions on them. What you should look for is real-world benchmarks matching your needs, and if those can't be found, a weighted average of real-world benchmarks.
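To make the weighting problem concrete, here is a toy example (all numbers invented): the same two sub-scores crown opposite winners depending on which weights the benchmark author happens to pick.

Code:
#include <cstdio>

int main() {
    // Invented sub-scores for two hypothetical CPUs.
    double a_int = 100.0, a_fp = 200.0; // CPU A: weaker integer, stronger FP
    double b_int = 150.0, b_fp = 140.0; // CPU B: stronger integer, weaker FP

    // Integer-heavy weighting (60/40): CPU B "wins".
    double a1 = 0.6 * a_int + 0.4 * a_fp; // 140
    double b1 = 0.6 * b_int + 0.4 * b_fp; // 146

    // FP-heavy weighting (40/60): CPU A "wins".
    double a2 = 0.4 * a_int + 0.6 * a_fp; // 160
    double b2 = 0.4 * b_int + 0.6 * b_fp; // 144

    std::printf("Int-heavy weights: A=%.0f B=%.0f\n", a1, b1);
    std::printf("FP-heavy weights:  A=%.0f B=%.0f\n", a2, b2);
    return 0;
}

Neither weighting is wrong; they simply answer different questions, which is why a single total score can never be fair to every workload mix.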

-----

Doing good benchmarking is actually fairly hard. Developers who try to optimize code face this challenge regularly, usually to see which code changes perform better, not to see which hardware is better. This usually means isolating small changes and running them through millions of iterations to exaggerate a difference enough to make it measurable, then combining a bunch of these small changes into something that may make a whole algorithm, or a whole application, 20-50% faster.
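A minimal sketch of that iteration trick (the work() function and iteration count are placeholders; a real harness would also pin the thread, warm up caches, and repeat runs to reject noise):

Code:
#include <chrono>
#include <cstdio>

// Placeholder for the small piece of code under test (hypothetical).
static double work(double x) {
    return x * 1.0000001 + 0.5;
}

int main() {
    const long long iterations = 100000000; // exaggerate a tiny cost into measurable time
    double acc = 1.0;

    auto start = std::chrono::steady_clock::now();
    for (long long i = 0; i < iterations; ++i)
        acc = work(acc); // serial dependency stops the compiler from skipping the loop
    auto stop = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(stop - start).count();
    // Print the result so dead-code elimination cannot remove the work.
    std::printf("%.3f ns/iteration (result: %g)\n", ns / iterations, acc);
    return 0;
}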

I find it very fascinating to see how much it matters to write optimized code on newer architectures. I have some 3D math code that I've been using and improving for many years, and I've seen how my optimized implementations keep getting proportionally faster than their baseline counterparts on newer architectures, e.g. Sandy Bridge -> Haswell -> Skylake. And it's obvious to me that the benefits of writing good code are growing, not shrinking, with faster and more superscalar architectures. So even though faster CPU front-ends help extract more from existing code, increasingly superscalar architectures can extract even more parallelization from better code. The other day I was comparing an optimization across a lot of code that got ~5-10% extra performance on Skylake. Then I tested it on Zen 3 and got similar improvements vs. unoptimized code, except in one edge case where I got something like 100% extra performance, and this from just a tiny change. Examples like this make me more excited than ever to see what Golden Cove (Alder Lake) and Zen 4 bring to the table. We are nowhere near the end of what performance we can extract per thread, and the upcoming architectures in the next 10 years or so should bring exciting performance improvements.
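A classic example of the kind of change that rewards wider, more superscalar cores (a hedged sketch; the function names are mine, and the actual speedup depends on the compiler flags and the CPU): splitting a reduction into independent accumulators breaks the serial dependency chain, so the core can keep several additions in flight at once.

Code:
#include <cstddef>
#include <cstdio>
#include <vector>

// Baseline: each addition depends on the previous one, so throughput
// is limited by the latency of a single FP add.
double sum_serial(const std::vector<double>& v) {
    double s = 0.0;
    for (double x : v) s += x;
    return s;
}

// Optimized: four independent accumulator chains let a superscalar core
// overlap additions; wider cores can extract more of this parallelism.
double sum_unrolled(const std::vector<double>& v) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    std::size_t i = 0, n = v.size();
    for (; i + 4 <= n; i += 4) {
        s0 += v[i]; s1 += v[i + 1]; s2 += v[i + 2]; s3 += v[i + 3];
    }
    for (; i < n; ++i) s0 += v[i]; // leftover elements
    return (s0 + s1) + (s2 + s3);
}

int main() {
    std::vector<double> v(1 << 20, 1.0);
    std::printf("serial:   %.0f\nunrolled: %.0f\n", sum_serial(v), sum_unrolled(v));
    return 0;
}

Both functions compute the same sum, but the second gives later, wider cores more independent work per cycle, which is exactly the pattern I keep seeing pay off more on each new architecture.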
 
Suddenly people are up in arms about Sandra, which has been around and in use for almost 25 years. There's plenty of information about what each benchmark entails available on their website if you actually want to find out.

Here's a screenshot of the whole article since it has been taken down, just in case more people want to claim things like it isn't optimized for Intel's hybrid architecture, or that the results are invalid because it's running on Windows 10, or whatever other justification they want to come up with beyond "the product isn't out yet."

SiSoft Sandra does not represent the performance of a PC well. It's more inclined toward HPC, serving as a set of basic common metrics. If you have been following SiSoft Sandra, I think you know this. For years almost no one has used SiSoft Sandra in an MSDT review.

The problem is that they actually compiled their data and presented it as a product comparison. It's as if someone searched a disinfectant chemistry database and started giving out injection advice. Again, it's okay to have useless data; it's not okay to pretend it is useful. I'm glad they took down that article.

I read the original article you posted. It's pretty interesting to see that they acknowledge the presence of a hardware AES accelerator while they try to analyze something out of the cryptographic data. The writer knew very well this analysis was going nowhere.
 