
Sandy Bridge-E Benchmarks Leaked: Disappointing Gaming Performance?

When benchmarking CPU framerates, you should reduce the resolution as much as possible to remove the GPU as the bottleneck. Otherwise, every CPU under test tops out at the same GPU-limited framerate and they all appear to perform identically, when in reality they don't. If anything, 1680x1050 is too high, and that's why they look the same in those slides. I would test at 800x600.
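
To put the point in concrete terms, here's a minimal toy model (all per-frame costs are invented purely for illustration): a frame can't finish faster than the slower of the CPU and GPU stages, so at GPU-bound resolutions different CPUs report identical FPS.

```python
# Toy frame-time model: the slower of the CPU and GPU stages sets the frame rate.
# All millisecond costs are made up, purely for illustration.
cpu_ms = {"faster CPU": 2.0, "slower CPU": 4.0}   # CPU work per frame (ms)
gpu_ms = {"1680x1050": 12.0, "800x600": 1.5}      # GPU work per frame (ms)

for res, g in gpu_ms.items():
    for cpu, c in cpu_ms.items():
        fps = 1000.0 / max(c, g)   # frame time = max(CPU stage, GPU stage)
        print(f"{res}, {cpu}: {fps:.0f} fps")
```

At 1680x1050 both hypothetical CPUs land on the same GPU-bound ~83 fps; at 800x600 the 2x CPU difference finally shows up (250 vs 500 fps).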

Regardless, I'll say it again, wait for the official benchies tomorrow before passing judgment.

But I don't get the point. People who buy this series of CPUs will never play at such low resolutions (whoever can afford one should already own at least a top-tier GPU and display). So there's no sense in testing it at low res, even if the goal is to test its processing power while gaming. Is it fair to test today's CPU with a decade-old configuration and draw conclusions from those results? I'm not standing up for SB-E or Intel; I just think it's unfair to draw conclusions from these low-res benchies. I may be wrong. But anyway, we can find out the real story tomorrow.. ;)

And welcome to TPU. :toast:
Thank you.. :toast:
 
You're thinking of it wrong. One can't test the true potential of a CPU at high res (the GPU gets in the way). You have to lower the resolution to take the GPU out of the question; only then can you see how fast a CPU can get. Look at this:

http://www.hardwarecanucks.com/charts/index.php?pid=70,76&tid=3

Then this;

http://www.legitreviews.com/images/reviews/1751/hawx.jpg
 
It's funny how they promote CPUs. When they market Atoms, they say it can do everything and ask why anyone would need more, yet for Sandy they're trying to convince us that with anything less you couldn't even surf the web.
 
Besides, we know many games are GPU intensive

That rule was forgotten when Bulldozer was launched, and remembered now that Sandy Bridge-E is showing signs of flopping.

gaming performance does not represent a cpu's true potential.

I wish people remembered that saying when Bulldozer was received negatively on launch.
 
This sounds quite logical: how much faster can Intel make its already fast CPUs? If these benchmarks reflect its true performance, then my question is answered.
I would also have to agree with Wile E that gaming performance does not represent a CPU's true potential, and IMO the blame needs to go to lazy game developers.
 
You're thinking of it wrong. One can't test the true potential of a CPU at high res (the GPU gets in the way). You have to lower the resolution to take the GPU out of the question; only then can you see how fast a CPU can get. Look at this:

http://www.hardwarecanucks.com/charts/index.php?pid=70,76&tid=3

Then this;

http://www.legitreviews.com/images/reviews/1751/hawx.jpg

I understand, dude. Look at those charts again. A $120 CPU matches a $999 CPU and the $245 AMD flagship at higher resolution. So what the past and present both say is that, in games at high res, a $120 CPU matches a flagship $999 CPU's performance. Why, then, is it so weird when a $310 CPU matches a probable $999 CPU in gaming performance at low res?

What I said already is that the people who buy this CPU will already have a top GPU, if not a multi-card configuration. So they won't see any gaming performance boost, even comparing SB-E with an i3. But that's the story we already know.

The people who buy this CPU will only buy it for bragging rights (having the world's fastest CPU) or to work in a heavily threaded environment: Photoshop work, encoding, encryption, etc. (like they did with the i7 980X/990X). But one thing is for sure, this thing is not for me. :D
 
I'm liking it, as one of those few who game and get stuff done.

And from a strictly gaming perspective, I'm not disappointed either. Should be able to get four users running at decent performance on a quad-GPU, 32GB RAM, 6-core SB-E box. ;)
 
Disappointment? 6C/12T CPUs are intended for those who require performance in multi-threaded software (and those who can afford one just for the sake of it).

These are chips for those who want to set new benchmark records, do video editing/encoding, run distributed computing projects, or run virtual machines... which these benchmarks don't really show much of.

For gamers, the only realistic criterion is how well it runs multi-GPU setups. Which, again, these benchmarks aren't showing.

The one thing this does show is that per-core performance is as expected. So when the NDA is lifted, we should start seeing more meaningful results. And those who have money to spend for fun won't care either way. If anyone thinks this is disappointing, they really need to reconsider their point of view and vent their frustration at game devs ;)
 
One wouldn't game at it, but one would certainly bench at it when comparing CPU performance, to remove the GPU from the equation.

For some reason, there seems to be quite a bit of confusion by various posters here over the above statement, starting with erocker:

Which is a useless way to test a CPU as the data provided shows no useful and/or real world benefit. Benchmarks/applications that actually use the CPU show it as being quite a bit better than SB.

People, I don't see how I can put it any more clearly. The idea is to isolate each individual CPU's true performance, so the last thing you want to do is give the graphics card any significant work to do. Heck, if the card could be switched off altogether (possible in Unreal Tournament 2003/4) then you'd have an even more accurate result.

And it doesn't matter if one CPU achieves 200fps and the other 1200fps (6 times faster); you're measuring the performance difference between them. This difference will become plenty obvious as time goes by and games become more demanding, giving the faster one a longer useful life. So, for example, when the slower one achieves only a useless 10fps, the faster one will still be achieving 60fps and be highly usable.
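
A quick sketch of that arithmetic (the 20x load factor is an assumption, chosen only to match the example figures):

```python
# Hypothetical low-res results from the example above.
fps_slow, fps_fast = 200, 1200

# Assume future games demand 20x the CPU work per frame.
load_factor = 20
print(f"slower CPU: {fps_slow / load_factor:.0f} fps")  # 10 fps, useless
print(f"faster CPU: {fps_fast / load_factor:.0f} fps")  # 60 fps, still highly usable
```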

Of course, it's also a good idea to supplement these tests with real-world resolutions too, as there can be unexpected results sometimes.

Thanks to John Doe for replying with some excellent answers to this misunderstanding. :toast:



Just go check its page on Wikipedia, "Touhou Project."
Will do. :)

Depends, though. Comparing one 200+ fps result to another 200+ fps result is pretty stupid too. :laugh:

It isn't, as I've explained above in this post.


It should already be obvious what I was trying to point out. There are "benchmarks" where CPU power doesn't really matter at all. ;)

Touhou is just a very extreme and ridiculous example. :laugh:

Yes, of course, lol. Benchmark a bunch of older games with vsync on and they'll all peg at a solid 60fps.
 
mustangs and bugattis

Um guys... en·thu·si·ast. CHIP. ENTHUSIAST, as in an SR-2 with quad-SLI 580s or tri-fire 6990s, water cooled, in a Mountain Mods case... you get the idea. We have more money/credit than common sense and will spend an extra $300 for a 3% increase... we want the best, period. Not price/performance... just the highest high end. Yeah, we know the speed limit is 65 and the Mustang is an adequate sports car, but no... give me that Bugatti. That's the market for this thing. People who run six-screen Eyefinity. Our three-screen 6870 CrossFire logic doesn't apply.
 
What are people expecting? It's a tweaked Sandy Bridge with two extra cores. It's not going to be miles better, just a bit better in some tasks thanks to the six cores.
 

I wish they'd hurry up and release an 8-core version. I'd totally nerd out having one of those in my rig!
 

People would probably expect double the performance, lol. Honestly, I'd get a 2500K now with a decent motherboard and wait for Ivy Bridge (which is what I'm doing). I personally wouldn't even bother with Sandy Bridge-E.

An 8-core version would cost me a house mortgage.
 

Ah, you're right about the price. :ohwell:

I'm waiting for the SB-E reviews tomorrow to decide. If it really doesn't provide a significant gaming boost and/or prices are sky high (they likely will be), then I'll just get a 2700K and be done with it.

Note that while Ivy Bridge is an improvement over SB and has improved integrated graphics, Intel doesn't consider it an enthusiast platform.
 
I highly doubt people who can afford an LGA 2011 setup are even reading this with any inkling of giving a crap about what some random Chinese benchmark says.

Gaming is almost always GPU limited anyway, especially when you compare Sandy Bridge to Sandy Bridge-E.

Hey Chinese people, leak something that actually matters.
 
Can't wait to see some encoding and rendering results other than that Cinebench 11.5 (C4D) run.
Sandy Bridge was already an awesome architecture performance-wise; 6-core Sandy Bridge-E will be more than awesome :)

I'm not talking gaming-wise though; can't really expect much improvement there unless games start using 6+ threads :laugh:
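
For a rough idea of why the thread count matters so much, here's an Amdahl's-law sketch; the parallel fractions are assumptions, since real game engines vary wildly:

```python
# Amdahl's law: speedup on n cores = 1 / ((1 - p) + p / n),
# where p is the fraction of the work that can run in parallel.
def speedup(p: float, cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / cores)

for p in (0.3, 0.6, 0.9):   # assumed parallel fractions
    print(f"p={p}: 4 cores = {speedup(p, 4):.2f}x, 6 cores = {speedup(p, 6):.2f}x")
```

Going from four to six cores barely registers unless the engine keeps most of its work parallel, which few games of this era do.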
 
You're thinking of it wrong. One can't test the true potential of a CPU at high res (the GPU gets in the way). You have to lower the resolution to take the GPU out of the question; only then can you see how fast a CPU can get. Look at this:

http://www.hardwarecanucks.com/charts/index.php?pid=70,76&tid=3

Then this;

http://www.legitreviews.com/images/reviews/1751/hawx.jpg

You're thinking of it wrong. One can't test the true potential of a CPU by using games, period.

Again, I don't care how many people here buy this chip specifically for gaming. That has absolutely no bearing on the fact that gaming is still a TERRIBLE CPU benchmark. SB-E isn't the only thing I say this about.

All current chips will give you essentially the same gaming experience, because they are not the main bottleneck. What I want to know is: who actually expected the two extra cores of SB-E to make a difference in gaming?

These results are not disappointing at all; they are expected. Games do not take advantage of this much CPU power. Anybody who expected a significant difference obviously hasn't been paying attention to the gaming industry.

That rule was forgotten when Bulldozer was launched, and remembered now that Sandy Bridge-E is showing signs of flopping.



I wish people remembered that saying when Bulldozer was received negatively on launch.

My negativity towards BD had nothing to do with gaming. I am disappointed in its per-core IPC.
 
This is nothing new and the same thing was evident last gen.

Westmere vs. Bloomfield/Lynnfield.

Unless you are doing something that requires the extra cores, there usually isn't much of a benefit.

Why this is a surprise to anyone is beyond me.

Nothing to see here move along folks!
 
One wouldn't game at it, but one would certainly bench at it when comparing CPU performance, to remove the GPU from the equation.

Except most don't realize that, depending on the VGA, this doesn't move the bottleneck from the GPU to the CPU... it moves it from the GPU to the memory controller. It does NOT move the bottleneck to JUST the CPU proper.

Triangle and setup data is sent from the CPU to the GPU for every single frame, so really, by lowering the resolution and increasing the framerate, you are not getting quite the effect reviewers portray. You need to lower the resolution AND avoid using a high-powered VGA, unless you are just testing CPU-GPU communication. It is NOT 100% a test of CPU performance, and this way of testing is SERIOUSLY flawed.
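
A toy model of that effect (draw-call counts and per-call costs are invented): the per-frame submission work doesn't shrink with resolution, so the higher frame rate of a low-res run multiplies the CPU-to-GPU traffic being measured.

```python
# Toy model: CPU-side submission load grows with frame rate, not resolution.
# Draw-call count and per-call cost are invented numbers.
draw_calls_per_frame = 1000
us_per_draw_call = 2   # CPU cost to build/submit one draw call (microseconds)

for label, fps in (("high res, GPU-bound", 60), ("low res, CPU-bound", 300)):
    submit_ms = draw_calls_per_frame * us_per_draw_call * fps / 1000.0
    print(f"{label}: {submit_ms:.0f} ms of submission work per second")
```

Five times the frame rate means five times the per-second submission traffic across the CPU, memory controller, and PCI-E bus.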

Artificially increasing the workload turns a real-world benchmark into a synthetic benchmark. Except this synthetic benchmarking practice has no correlation to real-world workloads at all.


This is why I don't do CPU reviews. I WILL NOT review performance for a CPU in the manner it is currently done by nearly every reviewer out there.
 
For some reason, there seems to be quite a bit of confusion by various posters here over the above statement, starting with erocker:



People, I don't see how I can put it any more clearly.


There's no confusion. The other, actual CPU benchmarks give a good indication of what this chip is capable of. You seem to be the one who is confused; after all, the title of your news post is a question. cadaveca and Wile E have the rest of my thoughts covered.


What are people expecting? It's a tweaked Sandy Bridge with two extra cores. It's not going to be miles better, just a bit better in some tasks thanks to the six cores.

Thank you, well said.
 
Just a few observations...

The 3960X, for all intents and purposes, is a slightly slower version of the 2600K if the app uses up to 4C/8T (3.3GHz vs. 3.4GHz).

Gaming would depend heavily on how much of the workload is handled by the CPU. Typically compute-heavy games (RTS titles especially, once the maps start filling up) would likely show a small benefit for the hex-core.

Of the three commonly used methods to test CPU gaming performance, all could be said to be in some way flawed:
Testing at low res and/or 0xAA/0xAF takes the GPU out of the equation but isn't indicative of "real world" use, and isn't a scenario anyone would likely encounter in actual gaming.

Testing at a common resolution and default game image quality more often than not presents a GPU-bound result... in which case a Core i3 makes as much sense as BD, an i7, or a Xeon.

Testing at a common res and standard game image quality with CFX/SLI removes some of the GPU limiting factor... but it can also introduce new factors: drivers and PCI-E bandwidth constraints. The latter is probably minor, but it would be an influence if you were testing quad GPU (say dual HD 6990/GTX 590) in a heads-up comparison (X79 PCI-E at 2 x16 versus Z68 PCI-E at 2 x8; see the rough numbers below). Adding a lane multiplier such as the NF200 would possibly also open the system to increased latency, from what I understand.
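
Rough theoretical numbers for that quad-GPU scenario, assuming PCI-E 2.0 at roughly 500 MB/s usable per lane per direction (protocol overhead ignored):

```python
# PCI-E 2.0: roughly 500 MB/s usable per lane, per direction.
MB_PER_LANE = 500

for platform, lanes in (("X79, x16 per card", 16), ("Z68, x8 per card", 8)):
    print(f"{platform}: {lanes * MB_PER_LANE / 1000:.0f} GB/s per card")
```

Whether a pair of top-end GPUs can actually saturate the difference between 4 and 8 GB/s per card is exactly what a quad-GPU heads-up test would have to show.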

From a personal PoV, I would use my system as much for content creation (Handbrake, Photoshop, etc.) and productivity (Excel, PowerPoint, etc.) as for gaming. If the performance metric favours all the apps I use, then I would definitely consider the platform. I would like a better understanding of where the launch stepping/revision stands before I'd commit; my C0-stepping Bloomfield was noticeably inferior to the D0 I upgraded to.
 
This processor is meant for SICK gamers.

You all forget that there is no Z68 motherboard that will support 4-way SLI!

So for those of you who only want to do 3-way, this processor is a waste.

Remember that this processor can handle 40 PCI-E lanes.
 
lol, AMD vs. Intel

It's funny how everyone heaps positive praise on Intel when they release early benchies, but when AMD does it, EVERYONE is critical. Typical human population, though: unintelligent.
 