Thursday, February 25th 2010

GeForce GTX 400 Series Performance Expectations Hit the Web

A little earlier this month, NVIDIA tweeted that it would formally unveil the GeForce GTX 400 series graphics cards, NVIDIA's DirectX 11 generation GPUs, at the PAX East gaming event in Boston (MA), United States, on the 26th of March, a little under a month from now. In the run-up, sources with access to samples of the graphics cards appear to be sharing their "performance expectations", among other details trickling in.

Both the GeForce GTX 480 and GTX 470 graphics cards are based on NVIDIA's GF100 silicon, which physically packs 512 CUDA cores, 16 geometry units, 64 TMUs, 48 ROPs, and a 384-bit GDDR5 memory interface. While the GTX 480 is a full-featured part, the GTX 470 is slightly watered-down, with probably 448 or 480 CUDA cores enabled, and a slightly narrower memory interface, probably 320-bit GDDR5. Sources tell DonanimHaber that the GeForce GTX 470 performs somewhere between the ATI Radeon HD 5850 and Radeon HD 5870. This part is said to have a power draw of 300W. The GeForce GTX 480, on the other hand, is expected to perform on par with the GeForce GTX 295 - at least in existing (present-generation) applications. A recent pre-order listing by an online store put the GTX 480 at US $699.
Source: DonanimHaber
Add your own comment

114 Comments on GeForce GTX 400 Series Performance Expectations Hit the Web

#1
Wile E
Power User
by: buggalugs
Yes it does. I remember reading about it. Something about diminishing returns and high power draw required to keep it fed.

That's why, after the R600 debacle (with its 512-bit bus), ATI moved back to a 256-bit bus with the 3870/4870, and now the very powerful 5870 still has a 256-bit memory bus.

If it were that easy or worthwhile, ATI would have made the 4870 or the upgraded 4890 with a wider memory bus. They already tried it and it wasn't worth it. NVIDIA has been using GDDR3, which benefits from a wider memory bus. On GDDR5 there's already plenty of bandwidth on a 256-bit bus.

As an analogy, it's like having 4 x 5970's in a computer. After 2 of them there's not much performance increase, if any. So you have a hot and power-hungry setup that is inefficient. Just like R600 was.

I'm not surprised the power draw is as high as they say.
No, ATI moved back to 256 because 512 was too pricey to build.

And tests have shown that current 58xx cards benefit from more memory bandwidth when OCing, suggesting that the 256-bit bus is indeed a bottleneck. More bandwidth can be achieved either through higher memory clocks, or a wider bus if they wanted to stamp out a new core. Both add heat and power, so the point is moot.

And btw, the 2900 outperformed the 3870 when OCing, and part of the reason was the wider bus. That's why all of the top ATI scores were still done with the 2900 at that time, and not the 3870.

The 2900 was power inefficient mostly because of its package size and high current leakage, not because of bus width.
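The bandwidth argument above comes down to simple arithmetic: peak memory bandwidth is bus width times effective memory data rate. A minimal sketch, using the HD 5870's real 256-bit / 4.8 Gbps spec, and an assumed effective data rate for the (then-unreleased) GTX 480, since its final clocks were not public:

```python
def bandwidth_gbs(bus_bits, effective_mts):
    """Peak memory bandwidth in GB/s.

    bus_bits      -- memory bus width in bits (e.g. 256, 320, 384)
    effective_mts -- effective data rate in MT/s (GDDR5 is quad-pumped)
    """
    # bytes per transfer * transfers per second, converted to GB/s
    return bus_bits / 8 * effective_mts * 1e6 / 1e9

# HD 5870: 256-bit GDDR5 at 4800 MT/s effective (actual spec)
print(f"HD 5870: {bandwidth_gbs(256, 4800):.1f} GB/s")  # 153.6 GB/s

# GTX 480: 384-bit GDDR5; 3700 MT/s is an assumption for illustration
print(f"GTX 480: {bandwidth_gbs(384, 3700):.1f} GB/s")  # 177.6 GB/s
```

The numbers show why a wider bus and higher memory clocks are interchangeable levers: a 384-bit bus at a lower data rate can still out-deliver a 256-bit bus at a higher one.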
Posted on Reply
#2
shevanel
what is a good test to know if you actually NEED to overclock memory in real world gaming use?
Posted on Reply
#3
Wile E
Power User
by: shevanel
what is a good test to know if you actually NEED to overclock memory in real world gaming use?
If overclocking memory gives a performance boost.
Posted on Reply
#4
shevanel
so anything that can be given a performance boost means it was a bottleneck? :D
Posted on Reply
#5
Wile E
Power User
by: shevanel
so anything that can be given a performance boost means it was a bottleneck? :D
Technically speaking? Yes. If it isn't a bottleneck, ocing doesn't help, especially when talking about video card memory.
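The test described above reduces to a before/after comparison: raise only the memory clock and see whether the benchmark score moves. A minimal sketch of that decision rule, where `measure_avg_fps` stands in for a hypothetical benchmark run (the function and numbers are illustrative, not from any real tool):

```python
def is_memory_bottlenecked(measure_avg_fps, stock_mhz, oc_mhz, threshold=0.02):
    """Flag a memory bottleneck if raising only the memory clock
    improves the benchmark by more than `threshold` (fractional gain)."""
    base = measure_avg_fps(stock_mhz)
    oced = measure_avg_fps(oc_mhz)
    return (oced - base) / base > threshold

# Toy stand-in for two real benchmark runs (hypothetical numbers):
fake_results = {1200: 60.0, 1400: 64.0}
print(is_memory_bottlenecked(fake_results.get, 1200, 1400))  # True
```

A small threshold filters out run-to-run noise, so a fraction-of-a-percent "gain" isn't mistaken for a real bottleneck.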
Posted on Reply
#6
shevanel
I see. Good point.

my cpu is my bottleneck imo... i miss the i7 920 @ 4ghz
Posted on Reply
#7
Imsochobo
by: shevanel
I see. Good point.

my cpu is my bottleneck imo... i miss the i7 920 @ 4ghz
for high fps, yes.
For over 50 fps, nada...

I feel no lag or anything like that with a PH II 940 @ stock hoho...
Posted on Reply
#8

by: Wile E

And test have shown that current 58xx cards benefit from more memory bandwidth when OCing, suggesting that the 256bit bus is indeed a bottleneck. That can be achieved either through higher memory clocks, or a wider bus if they wanted to stamp out a new core. Both add heat and power, so the point is moot.

And btw, the 2900 outperformed the 3870 when OCing, and part of that reason was the wider bus. That's why all of the top ATI scores were still done with 2900 at that time, and not the 3870.
Do you have any proof of what you've written there??

All the OC forums are saying that OCing the RAM of current 57xx and 58xx cards is useless, since it provides very little performance gain, even when using liquid cooling or sub-zero custom H2/He/N2/etc. cooling. The main performer is the GPU alone, so where did you get your information, mind if I ask?? :shadedshu
#9
shevanel
I guess you haven't seen the Vantage thread with all the 5850 OCs and scores? Unless Vantage is misleading and not reflective of real-world gaming?
Posted on Reply
#10
Bjorn_Of_Iceland
there goes the news thread :D

in any case, waiting is always the hardest part :D
Posted on Reply
#11
$ReaPeR$
I totally agree on the waiting part; I would very much like to see the performance of this GPU. Debating over speculation is a bit pointless imo, funny, but pointless. :)
Posted on Reply
#12
HalfAHertz
NVIDIA are arrogant, not stupid - they will not release a 300W single-GPU card, you can quote me on that! I call this article total BS, I don't even know why it deserves to be in the news section...
Posted on Reply
#13
X-TeNDeR
Very interesting.. of course this could be a ploy by NVIDIA to dazzle us until the real deal is here, shocking the gaming community (as AMD did with the "400 shaders" HD4xxx).

If this info is true - they should ship these cards with Doom 5 ;)
Posted on Reply
#14
Aleksander
I don't really think the new Fermi will end up with such low performance. It must be better, since the chip is even bigger than the previous one, even though it was supposed to be smaller because it is 40 nm. They will use GDDR5 in the GTX 400 series, and I hope this will help NVIDIA a bit. But if they want to price it that high, I say to NVIDIA: go home you little boy, your mummy is waiting for you.
Posted on Reply
#15
Wshlist
No graphics card draws 300 W for itself, and it can't be that they 'advise a 300W PSU' either, since that is too low for a system this day and age, so in short I think it's all nonsense.
Posted on Reply
#16
ucanmandaa
Yeah, I was talking about stock cards... watercooling + overclocking and modding were not included :)
Posted on Reply
#17
kaneda
The one thing, in my opinion, keeping NVIDIA afloat in the eyes of enthusiasts (besides fanboyism) is CUDA. ATI Stream is far from a rival. AMD/ATI need to sort that out before it takes the last of what NVIDIA has to offer. In pure gaming price/performance they're dominating; it's just in GPGPU that they're flopping, and not in a good way. What use is power if there's no way of accessing it?

From a 3D modeller's perspective, I want some nice open ATI-powered renderer :D, to make use of that raw power.

Though I'm still stuck, for monetary reasons, on my old X1950 Pro. :(
Posted on Reply
#18
Aleksander
Is your X1950 pro better than my 7600GS?
Posted on Reply
#20
Frick
Fishfaced Nincompoop
by: Aleksander Dishnica
Is your X1950 pro better than my 7600GS?
The X1950 Pro is on par with the 7900GS and 7900GTO.
Posted on Reply
#21
kaneda
I love how my gpu got to be the topic of conversation not the point I made XD.

The card I have isn't horrid; it can still play quite a few games relatively well, older games obviously, but still.
Posted on Reply
#22
Bjorn_Of_Iceland
by: Aleksander Dishnica
Is your X1950 pro better than my 7600GS?
by: Frick
x1950 pro is on par with 7900gs and 7900gto.
you didn't get what he was trying to say :P
Posted on Reply
#23
TheLaughingMan
by: Wshlist
No graphics card draws 300Watt for itself, and it can't be they 'advise a 300W PSU' either since that is too low for a system this day an age, so in short I think it's all nonsense.
Yes, yes they do, at peak power usage. That is not even close to unrealistic. Granted it will most likely never reach that high, but the point is that it is capable of it.
Posted on Reply
#24
eidairaman1
by: kaneda
The one thing, in my opinion which is keeping nVidia afloat in the eyes of enthusiasts (besides fanboyism) is CUDA. ATi stream is far from a rival. AMD/ATi need to sort that out before it takes the last of what nVidia has to offer. in pure gaming price/performance they're dominating, its just in GPGPU they're flopping, and not in a good way. What use is power if there's no way of accessing it?

From a 3d modellers perspective, i want some nice open ATi powered renderer :D, make use of that raw power.

Though, im still stuck, due to monetary reasons, on my old X1950Pro. :(
At least you are feeding that card properly. I'd have to upgrade to do that.
Posted on Reply
#25
a_ump
Hey, I'm still kicking a 5-year-old 7800GTX. 1280x768 isn't too pretty on my 21.5in screen though :/.

Nvidia will poop fermi and it'll either be the shit or be diarrhea that just runs down the drain.
Posted on Reply
Add your own comment