Monday, February 20th 2012

Core i5-3570K Graphics 67% Faster Than Core i5-2500K, 36% Slower Than GeForce GT 240

An Expreview community member ran benchmarks comparing the Intel HD 4000 graphics embedded into Intel's upcoming 22 nm "Ivy Bridge" Core i5-3570K against the integrated graphics of the Core i5-2500K and a discrete NVIDIA GeForce GT 240. These tests are endorsed by the site. The suite of benchmarks included games that aren't known to be particularly taxing on graphics hardware by today's standards, yet are extremely popular: StarCraft II, Left 4 Dead 2, DiRT 3, and Street Fighter IV. Some of the slightly more graphics-intensive benchmarks included Far Cry 2 and 3DMark Vantage. All benchmarks were run at 1280 x 720 resolution.

The Intel HD 4000 graphics core beats the HD 3000 hands down, with performance leads as high as 122% in one test. The chip produces more than playable frame rates in Left 4 Dead 2 and Street Fighter IV, both well above 50 FPS; even DiRT 3 and Far Cry 2 run acceptably, at over 30 FPS. StarCraft II is the one title where it produced under 30 FPS, so the chip might get bogged down in intense battles; there, a mainstream discrete GeForce or Radeon is a must. On average, the graphics core embedded into the Core i5-3570K was found to be 67.25% faster than the one on the Core i5-2500K.
When pitted against the 2+ year old GeForce GT 240, the Core i5-3570K struggles. In StarCraft II, it's 53.64% slower; on average, the GT 240 emerged 56.25% faster. Still, it's a decent effort by Intel to cash in on entry-level graphics. We are hearing nice things about the HD video playback and GPU acceleration capabilities of Intel's HD 4000 core, so there's still something to look out for. Granted, comparing the i5-3570K to the i5-2500K isn't a 100% scientific comparison, since CPU performance also factors in, but it was done purely to assess how far along Intel has come with its graphics.

Source: Expreview
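For readers keeping score, the "36% slower" in the headline and the "56.25% faster" average are the same gap measured from opposite baselines. A quick sanity check of the arithmetic (a throwaway sketch added for illustration, not part of Expreview's testing):

```python
# Converting "X% slower" into "Y% faster": both figures describe the same
# performance gap, just measured against different baselines.
def pct_faster(slower_pct: float) -> float:
    """If A is slower_pct% slower than B, return how much faster B is than A."""
    a_relative = 100.0 - slower_pct       # A's performance as a % of B's
    return (100.0 / a_relative - 1.0) * 100.0

# The HD 4000 is 36% slower than the GT 240:
print(pct_faster(36.0))  # → 56.25, i.e. the GT 240 is 56.25% faster
```

The asymmetry is why both numbers can be correct at once: 100 − 36 = 64, and 100 / 64 = 1.5625.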
Add your own comment

62 Comments on Core i5-3570K Graphics 67% Faster Than Core i5-2500K, 36% Slower Than GeForce GT 240

#1
claylomax
Do you think it can play Hard Reset maxed out at 1920x1200?
Posted on Reply
#2
btarunr
Editor & Senior Moderator
claylomax said:
Do you think it can play Hard Reset maxed out at 1920x1200?
I don't think so. Hard Reset at that resolution, maxed out, can be sufficiently taxing on even $200 graphics cards.
Posted on Reply
#3
OneCool
Why is it being compared to NVIDIA and not AMD's APU?

Doesn't make much sense to me.
Posted on Reply
#4
Casecutter
67.25% faster than a 2007-era IGP is good? Then compared to a 2009 GT 240, probably a DDR3 version, being 36% behind is adequately respectable now in 2012. So it's maybe like an HD 5550, which almost achieves modern entry level now, so not bad.
Posted on Reply
#5
ZoneDymo
OneCool said:
Why is it being compared to NVIDIA and not AMD's APU?

Doesn't make much sense to me.
Maybe those comparisons hurt Intel's ego, I don't know.
Posted on Reply
#7
KRONOSFX
btarunr, you made a mistake:
On average, the GT 240 emerged 36% faster.
The HD 4000 is 36% slower than the GT 240, but on the other hand 100 − 36 = 64, and 100 / 64 = 1.5625, so the GT 240 is 56.25% faster than the HD 4000.
Posted on Reply
#9
WarraWarra
The IGP is okay for most office use, internet-café surfing, or headless-server use. It helps if you use webcl.nokia.com and Intel's WebCL software.
Still have no clue why Intel isn't involved in some scam with an IGP company to force the use of their IGPs, versus Intel wasting time on IGPs of its own.

The best part is that when you run an i7 CPU with the IGP (no dedicated AMD/NVIDIA GPU), the CPU performs much worse.

Example:
A mobile i7-2670 runs 7-Zip (8/8 threads) at an average of 12,500, where an i7-2630 with an NVIDIA mobile GPU runs at 15,244~15,384 (8/8); same hardware except for the GPU used, with the CPU swapped out for testing. You would expect the opposite result.
Posted on Reply
#10
faramir
So how does the HD 4000 compare to A8-3850/3870 graphics then?

I understand Trinity figures aren't available yet. I can't wait for the rumored performance of Kaveri though: finally an integrated GPU decent enough to remove any need for an additional GPU, if only they can get the CPU IPC performance up from the 2006 first-generation Phenom level ...
Posted on Reply
#11
DarkOCean
And the big difference comes from optimization for 3DMark; in games alone it's only 56%, still OK for an Intel IGP.
Posted on Reply
#12
Borc
Casecutter said:
67.25% faster than that of a 2007 IGP is good?
2007? Are you joking? The HD 3000 launched just last year, in 2011.
Posted on Reply
#14
HTC
faramir said:
So how does the HD 4000 compare to A8-3850/3870 graphics then?

I understand Trinity figures aren't available yet. I can't wait for the rumored performance of Kaveri though: finally an integrated GPU decent enough to remove any need for an additional GPU, if only they can get the CPU IPC performance up from the 2006 first-generation Phenom level ...
Since there shouldn't be any review of this available, are there any pitting the A8-38X0 against a GeForce GT 240?
Posted on Reply
#15
Completely Bonkers
Quite impressive. Now if you could JOIN that performance with a budget gaming card, you'd be good to go. What a shame that the company developing that concept left the market. What was it called again?

How about a Xeon dual-socket version of this chip? If it could combine graphics performance, now that would be decent enough for most people, and every reason for everyone to buy a workstation board and for Intel to sell twice as many CPU chips ;)

In fact, they could go back in time to the 386 and 387 math-coprocessor concept. Only this time it would be a GPU coprocessor. They could build a sister chip that had half the CPU cores but double the GPU cores/shaders, and it would make a marvellous combination.
Posted on Reply
#16
Inceptor
Borc said:
2007? Are you joking? The HD 3000 launched just last year, in 2011.
I think he means 2007 era discrete graphics performance.
Posted on Reply
#17
HTC
HTC said:
Since there shouldn't be any review of this available, are there any pitting the A8-38X0 against a GeForce GT 240?
Found a review with both an A8-3850 and a GT 240 here

It has mixed results: the GT 240's performance relative to the A8-3850 (with the RAM @ 1866) ranges between 91% and 120%.

Would prefer a more comprehensive review for this comparison but was unable to locate one :(

EDIT

With this, it seems the Core i5-3570K's graphics still aren't up to those of an A8-3850. It's a whole new ball game when you factor in the CPU portion of the chip.
Posted on Reply
#18
v12dock
Who did they get the design from this time around?
Posted on Reply
#19
Casecutter
Borc said:
HD4000 is 36% slower than GT240
Ah, which HD 4000? If you mean a 4670, well yes, a GT 240 GDDR5 that came along 14 months later did beat it, but I see it only about 8% higher @ 1280x.
http://www.techpowerup.com/reviews/MSI/GeForce_GT_240/30.html

Borc said:
2007? Are you joking? HD3000 was launched in 2011 last year
While it didn't do anything better than the 780G.

Yo_Wattup said:
Still quite mediocre IMO.
I second that
Posted on Reply
#20
Dent1
What confuses me is, why does AMD put the discrete video cards on the mainstream CPUs. Why don't they put GPUs on the enthusiast range too. Like surely they could put a 6850 on a Bulldozer or Phenom II die if they wanted?
Posted on Reply
#21
nuno_p
Dent1 said:
What confuses me is, why does AMD put the discrete video cards on the mainstream CPUs. Why don't they put GPUs on the enthusiast range too. Like surely they could put a 6850 on a Bulldozer or Phenom II die if they wanted?
Its not that simple.
Posted on Reply
#22
Casecutter
Dent1 said:
What confuses me is, why does AMD put the discrete video cards on the mainstream CPUs. Why don't they put GPUs on the enthusiast range too. Like surely they could put a 6850 on a Bulldozer or Phenom II die if they wanted?
The Trinity A10 lineup will come with the Radeon HD 7660D or something that approaches or betters the current discrete 6570. Now consider that will perform similarly to a 9800 GTX 512 MB from 2008, which had a TDP of 168 W. Today they combine the CPU and the GPU and keep it under 100 W. In 4 years, that's pretty amazing, wouldn't you say?

As to why they don't... it comes down to power and heat. Most OEMs that build and market general-use computers in volume have to do it at a price point and within "green" efficiency targets, while at this time the cooling would still need to be developed, at what I would consider a prohibitive cost. But give it two years and you'll probably be getting 7770 performance from an APU.
Posted on Reply
#23
Thefumigator
Dent1 said:
What confuses me is, why does AMD put the discrete video cards on the mainstream CPUs. Why don't they put GPUs on the enthusiast range too. Like surely they could put a 6850 on a Bulldozer or Phenom II die if they wanted?
Because the Llano A8 is a solution for the mainstream, where you count every penny.
You don't count every penny in the enthusiast range, so a more powerful GPU integrated into a Phenom II or Bulldozer won't necessarily mean the enthusiast crowd will buy it: first, because you would be tied to that integrated GPU until you buy a discrete card; and second, no matter how powerful the integrated GPU is, it shares memory, and that makes performance drop. Also, DDR3 is not comparable to GDDR5 in any way.

Yo_Wattup said:
Still quite mediocre IMO.
Agreed.
And I'm not even counting 3D graphics quality, microstuttering, and compatibility.
Posted on Reply
#24
repman244
Dent1 said:
What confuses me is, why does AMD put the discrete video cards on the mainstream CPUs. Why don't they put GPUs on the enthusiast range too. Like surely they could put a 6850 on a Bulldozer or Phenom II die if they wanted?
The TDP would be too high I imagine.
Posted on Reply
#25
Halk
There's now a significant amount of graphical grunt in these CPUs... does anyone else feel a little touch of regret forking out for a fair portion of silicon that will go unused?
Posted on Reply
Add your own comment