
Intel Core i5-3570K vs. i7-3770K Ivy Bridge

Let's not derail this thread by arguing, please. Take it to PMs or agree to disagree and move on.
 
I second Crysis 3. I've never seen a game get the CPU that hot; it's like stress testing. I think they're dumping a full-on physics engine on it.
 
I second Crysis 3. I've never seen a game get the CPU that hot; it's like stress testing. I think they're dumping a full-on physics engine on it.

Indeed, I was surprised too. All the cores were running flat out, on all kinds of CPUs. And it's also very well optimized for AMD; it runs really smoothly on the top Visheras, finally reaching Intel-level performance in a game.
 
WTF, those temperatures are mental. Are you using the stock coolers?

I have two 3570Ks in my office, and they both reach 50 degrees at full load, in NZXT Phantoms, with stock heatsinks and fans.
 
WTF, those temperatures are mental. Are you using the stock coolers?

I have two 3570Ks in my office, and they both reach 50 degrees at full load, in NZXT Phantoms, with stock heatsinks and fans.


I guess the room temperature matters. The cooling setup isn't well documented, because there's no info about the room temperature at the time of the review, nor about the cooler used.

The room temperature is probably also high, and you can't get below ambient, or at least not with air coolers.
 
For future reference I would like to see something like this: it shows min. and avg.; it would be nice to see max. too.
[attached: two screenshots of FPS charts showing min. and avg. framerates]
 
I would really like it if you included ARMA 2 OA in the benchmarks. This game is quite demanding ;)

ARMA 2 OA benches would be interesting, though I don't think it can use more than two cores.
 
Nice review. My delidded 3570K is the best thing ever... at 4.5 GHz it never goes above 60 °C on water. They desperately need to be delidded.
 
Nice review. My delidded 3570K is the best thing ever... at 4.5 GHz it never goes above 60 °C on water. They desperately need to be delidded.

My 3820 at 4.625 GHz never goes above 60 °C on air.
 
For future reference I would like to see something like this: it shows min. and avg.; it would be nice to see max. too.

I don't think that's necessary. I mean, look at the chart: they're in step with each other for every proc. There's no new information being revealed there.
 
For future reference I would like to see something like this: it shows min. and avg.; it would be nice to see max. too.

Minimum is for serious gaming and for enthusiasts, and average is kind of self-explanatory, but why would you need max? If there is a point in the bench scenario where the camera just looks right into the ground/wall/sky/etc., you would get ridiculously high numbers that way, which would say nothing about performance.

P.S.: Funny, I still have the letters somewhere from when I had to convince Tom's authors to include min FPS in their tests, and the replies about how much they didn't want to do it at the beginning. Satisfying memories :)
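To illustrate the point (a minimal sketch of my own, with made-up frametimes, not data from the review): min and avg FPS fall out of a per-frame frametime log, while max FPS is set by whatever the cheapest frame happened to be.

import statistics

# Hypothetical per-frame render times in milliseconds (e.g. from a FRAPS log)
frametimes_ms = [16.2, 17.1, 15.8, 33.5, 16.0, 9.1, 16.4]

fps_per_frame = [1000.0 / t for t in frametimes_ms]

# Average FPS from the mean frametime (averaging per-frame FPS would overweight fast frames)
print("avg fps: %.1f" % (1000.0 / statistics.mean(frametimes_ms)))
print("min fps: %.1f" % min(fps_per_frame))  # slowest frame: the stutter you actually feel
print("max fps: %.1f" % max(fps_per_frame))  # cheapest frame, e.g. staring at the sky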
 
Well, just a thought, but if we want to compare CPU power, why not bench using a powerful GPU at the lowest resolution and graphics settings, like using a GTX 680 at 1024x768? This would make sure that the game never runs out of GPU power, so the benchmark result is limited entirely by CPU power.

And the idea of comparing the L3 cache difference by disabling HT on the 3770K is quite good too :)
 
Well, just a thought, but if we want to compare CPU power, why not bench using a powerful GPU at the lowest resolution and graphics settings, like using a GTX 680 at 1024x768? This would make sure that the game never runs out of GPU power, so the benchmark result is limited entirely by CPU power.

The problem is that it would make the test similar to synthetic benchmarks, because powerful systems run games at 140-180 FPS in 1024x768, so it mostly doesn't matter whether one CPU does 150 FPS and the other achieves 185. You simply don't need that kind of framerate with current display technologies.
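A toy model of that argument (my own sketch; all the costs are invented): each frame takes roughly as long as the slower of the CPU and the GPU, so a CPU gap that is obvious at 1024x768 disappears once the GPU cost dominates at 1920x1080.

def fps(cpu_ms, gpu_ms):
    # The frame is done when the slower of the two sides is done
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_fast, cpu_slow = 5.4, 6.7   # hypothetical per-frame CPU costs of two chips
gpu_low, gpu_1080 = 3.0, 14.0   # hypothetical GPU costs at 1024x768 vs 1920x1080

print(fps(cpu_fast, gpu_low), fps(cpu_slow, gpu_low))    # ~185 vs ~149: CPU-bound, big gap
print(fps(cpu_fast, gpu_1080), fps(cpu_slow, gpu_1080))  # ~71 vs ~71: GPU-bound, gap vanishes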
 
WTF, those temperatures are mental. Are you using the stock coolers?

I have two 3570Ks in my office, and they both reach 50 degrees at full load, in NZXT Phantoms, with stock heatsinks and fans.

In my case, 70 degrees was the max at full load at 5.0 GHz and 1.394 V; normally it sat at 63 degrees, and 70 was the max for just a moment. At default clocks I got 48 degrees max at full load. On top of that, I still need some tuning :)
 
Sorry, but I'm not entirely sure what the point of this review even is... It appears to be a clock-for-clock comparison, which of course the processor with the larger L3 cache and Hyper-Threading is going to win (given the same architecture). What would have made it interesting to me would be to also turn off the i7's Hyper-Threading and add those results to the mix as well. At least that way you could see the benefit of going from the i5 @ 4.5 GHz to the i7 (no HT @ 4.5 GHz) with its larger L3, and from there the percentage difference when you turn HT on. Otherwise: the more expensive processor is faster.

Edit: Don't get me wrong, nice review and all, and I'd like to see more processor reviews on TPU; however, these results I can get just about anywhere.
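For what it's worth, here's the arithmetic that three-way comparison would enable (a sketch with invented scores; the real numbers would have to come from such a review):

# Hypothetical benchmark scores, all at the same 4.5 GHz clock
i5       = 100.0  # i5-3570K: 6 MB L3, no HT
i7_no_ht = 103.0  # i7-3770K, HT disabled: isolates the extra 2 MB of L3
i7_ht    = 118.0  # i7-3770K, HT enabled: larger L3 plus Hyper-Threading

l3_gain = (i7_no_ht / i5 - 1) * 100       # contribution of the larger cache
ht_gain = (i7_ht / i7_no_ht - 1) * 100    # contribution of Hyper-Threading
print("L3: +%.1f%%, HT: +%.1f%%" % (l3_gain, ht_gain))  # L3: +3.0%, HT: +14.6%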

Although a new reviewer is bound to make several mistakes, I too fail to see the reason for this review/comparison. Perhaps if it had been released when it should have been (a long time ago) it would make some sense; now it just doesn't (or at least I can't see it). Good try nevertheless, so good luck with your future reviews :)
 
Thanks for the nice review.

I have a couple of suggestions for the future:
1. The gaming tests are a complete fail. The selected resolution is way too high (1920x1080). At that resolution the GPU limits the framerate; that's why all CPU frequencies show the same framerate. Next time I want to see something like this:
NFS Most Wanted
Dirt Showdown
It's in Czech, but the important thing is that they test at 1280x1024, where the framerate is limited by the CPU and not by the GPU.

2. Include games that are more CPU intensive: Civilization V, StarCraft II, Arma 2, etc.
 
Although a new reviewer is bound to make several mistakes, I too fail to see the reason for this review/comparison. Perhaps if it had been released when it should have been (a long time ago) it would make some sense; now it just doesn't (or at least I can't see it). Good try nevertheless, so good luck with your future reviews :)

It was just to work on the testing methodology and gain some additional experience.
Don't worry, I will review processors earlier next time ;)
 
1. The gaming tests are a complete fail. The selected resolution is way too high (1920x1080). At that resolution the GPU limits the framerate; that's why all CPU frequencies show the same framerate. Next time I want to see something like this:

This is the best feature of the gaming review. People may like seeing big differences brought on by low-res game benchmarks, but they're utterly useless for us. I'd even like to see it done on 1440p displays. I'd like numbers that can actually be applied to the real world. When you think about it, it's a shame that a lot of people have probably run out and bought new CPUs because of some exaggerated low-res test and then ended up gaining only a frame or two at their native res.
 
This is the best feature of the gaming review. People may like seeing big differences brought on by low-res game benchmarks, but they're utterly useless for us. I'd even like to see it done on 1440p displays. I'd like numbers that can actually be applied to the real world. When you think about it, it's a shame that a lot of people have probably run out and bought new CPUs because of some exaggerated low-res test and then ended up gaining only a frame or two at their native res.

Are we testing CPUs or GPUs? If you were testing at 1440p, then the gaming section would need only one sentence: all games are limited by the GPU; it doesn't matter which CPU you have.

If you still want to test at 1920x1080, you would need to test with dual high-end graphics cards (CrossFire/SLI).
 
Every CPU is different, and due to the TIM issue the difference is even bigger.
I used a Prolimatech Megahalems with a Sunon 120 mm x 38 mm fan :)

That's for sure.

The i5-3570K I mentioned earlier, with a Gemini S524:
[attached: photo of the Asus H77 ITX build with the Gemini S524]


It has never seen temps higher than 63 degrees :D

I know, it's a totally different setup and unscientific testing, but in its current real-world configuration I can run Prime95 and it'll stay at a steady 3.6 GHz (four-core Turbo speed) for hours.
 
If you still want to test at 1920x1080, you would need to test with dual high-end graphics cards (CrossFire/SLI).

You would add more CPU overhead in the graphics drivers, for graphics power that isn't needed, to tests that are already CPU-sensitive by design?
That's not necessary, because of how these games work.
When there is a huge number of dynamic objects on screen, the CPU has to make a draw call to the GPU for every object, every frame.
When the number of these objects gets over, say, 10k, the GPU gets flooded with a huge number of simple draw calls.
For the GPU, that's much less efficient than a small number of complex-geometry draw calls.
In these cases the framebuffer resolution has a much smaller effect on frame rate than the huge number of draw calls and the time spent on physics/animation. Games are all about the balance of static and dynamic geometry.
To test this with different screen resolutions in GTA4, go to Times Square and push the vehicle count and object view distance to the max.
This old demo is also good for testing a CPU-bound scenario with a single render thread: http://www.geeks3d.com/20100629/test-opengl-geometry-instancing-geforce-gtx-480-vs-radeon-hd-5870/2/
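Roughly, the draw-call argument looks like this (a toy model of my own; every cost is invented): once draw-call submission dominates the CPU side of the frame, raising the resolution barely moves the framerate.

def frame_ms(n_draw_calls, cost_per_call_us=3.0, other_cpu_ms=4.0, gpu_ms=6.0):
    # CPU side: draw-call submission plus physics/animation; the frame waits on the slower side
    cpu_ms = n_draw_calls * cost_per_call_us / 1000.0 + other_cpu_ms
    return max(cpu_ms, gpu_ms)

print(1000 / frame_ms(500))                  # few calls: GPU-bound, ~167 fps
print(1000 / frame_ms(10_000))               # 10k calls: CPU-bound, ~29 fps
print(1000 / frame_ms(10_000, gpu_ms=12.0))  # double the GPU cost (higher res): still ~29 fps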
 
Not really a CPU benchmark, but GpuTest 0.2.0 is a great cross-platform stress test, i.e. FurMark for Linux!
http://www.geeks3d.com/20121113/gpu...gl-benchmark-furmark-lands-on-linux-and-os-x/

More importantly, this FurMark test is slightly less taxing than FurMark 10.4, meaning the throttling that happens with FurMark does not happen with this version...
There's also GiMark, which loads the GPU at about 80%, and TessMark at about 105%...
 
Are we testing CPUs or GPUs? If you were testing at 1440p, then the gaming section would need only one sentence: all games are limited by the GPU; it doesn't matter which CPU you have.

If you still want to test at 1920x1080, you would need to test with dual high-end graphics cards (CrossFire/SLI).

No, sorry.


:banghead:


I mean, yes, you do have a point, and if you simply wanted to compare useless CPU performance that's fine, but nearly no one games @ 1280x1024. As it stands, the numbers reflect the differences end users would actually get in their rigs, and show accurate REAL WORLD testing, rather than the SYNTHETIC TESTING you suggested. This is TPU, not XS.

I mean, I could do my memory reviews and board reviews posting obscene LN2 clocks only. But very few users run such a config, so reviews like that also have little relevance.

That said, the only things I could suggest are more CPUs in testing and more tests; but more CPUs are not really needed in this review, since it's a direct comparison of those two chips themselves, so that's covered.

The one useful feature not really covered, to me, is power consumption via the 8-pin EPS connector, both at stock and when OC'd to the same clocks.
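Since the 8-pin EPS connector carries the CPU's 12 V feed, that measurement is just volts times amps. A trivial sketch with made-up readings (strictly, this is power into the VRM, so it includes conversion losses):

# CPU power draw from the 8-pin EPS 12 V rail (readings invented for illustration)
eps_volts = 12.1   # measured at the connector
eps_amps  = 9.8    # measured with a DC clamp meter around the 12 V wires
print("CPU power: %.1f W" % (eps_volts * eps_amps))  # 118.6 W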
 
This makes me want a 3770K even more now!
 