Monday, May 26th 2008

Next-gen NVIDIA GeForce Specs Unveiled, Part 2

Although it's a bit risky to post information taken from sources unfamiliar to me, not to mention that the site is in German, it's worth a try. The guys over at Gamezoom (Google translated) reported yesterday that during NVIDIA's Editors Day, the same event where the F@H project for NVIDIA cards and the buyout of RayScale were announced, NVIDIA also unveiled the final specs for its soon-to-be-released GT200 cards. This information complements our previous Next-gen NVIDIA GeForce Specs Unveiled story:
  • GeForce GTX 280 will feature 602MHz/1296MHz/1107MHz core/shader/memory clocks, 240 stream processors, 512-bit memory interface, and GDDR3 memory as already mentioned.
  • GeForce GTX 260 will come with 576MHz/999MHz/896MHz reference clock speeds, 192 stream processors, a 448-bit memory interface, and GDDR3 memory.
Prices are $449 for the GTX 260 and more than $600 for the GeForce GTX 280. That's all for now.
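For context, theoretical memory bandwidth follows directly from those figures. Here is a minimal sketch; GDDR3 transfers data twice per clock, so the effective rate is double the listed memory clock:

```python
# Theoretical peak memory bandwidth from the rumored specs.
# GDDR3 is double data rate: effective transfer rate = 2 x memory clock.

def bandwidth_gbps(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = effective MT/s * bus width in bytes / 1000."""
    return mem_clock_mhz * 2 * (bus_width_bits / 8) / 1000

print(f"GTX 280: {bandwidth_gbps(1107, 512):.1f} GB/s")  # ~141.7 GB/s
print(f"GTX 260: {bandwidth_gbps(896, 448):.1f} GB/s")   # ~100.4 GB/s
```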
Source: Gamezoom

108 Comments on Next-gen NVIDIA GeForce Specs Unveiled, Part 2

#26
magibeg
I'm almost afraid to say it, but it's strange how the ATI cards do so much better in benchmarks (once again, I may regret saying this). It does seem as though a lot of games are specifically designed with NVIDIA cards in mind.
Posted on Reply
#27
laszlo
EastCoasthandle: The problem with this opinion is that back then the performance increase WAS NEEDED. As it stands now, the added performance is NOT NEEDED at such a high premium. A lot of high-end card owners aren't overclocking like they did in days past. That's a clear indicator that we have plateaued above average frame rates in most games. Sure, e-peen dictates higher FPS is better than what you get now. However, someone whose card can already play games at above-average frame rates without even overclocking (in most cases) would be hard-pressed to buy a card that costs $600+.
Exactly; no matter the GPU brand, over the last year the midrange cards ($150-250) have proved to be enough to play all titles at decent FPS. When VIA, S3, or Intel has a GPU capable of achieving decent FPS, people will buy it because of the price/performance ratio, so the best-buy card will have the best $/FPS ratio, a tendency which already rules the market (a rough sketch of that ratio follows below).

I don't really understand why people must be fanboys of NVIDIA or ATI. The point is to buy a card that suits you best and use it for a few years without upgrading. I don't like it when mud is thrown from both sides just to prove that "I have a card from the first and best GPU manufacturer in the world" (whether that's NV or ATI). Who cares?
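To put the $/FPS idea in concrete terms (the prices and frame rates below are made-up placeholders, not measurements):

```python
# Rank cards by the "$/FPS" best-buy metric described above.
# All prices and frame rates here are hypothetical placeholders.

cards = {
    "Card A": {"price_usd": 199, "avg_fps": 55},
    "Card B": {"price_usd": 449, "avg_fps": 78},
    "Card C": {"price_usd": 600, "avg_fps": 90},
}

# Lower dollars-per-frame means better value.
ranked = sorted(cards.items(), key=lambda kv: kv[1]["price_usd"] / kv[1]["avg_fps"])
for name, c in ranked:
    print(f"{name}: {c['price_usd'] / c['avg_fps']:.2f} $/FPS")
```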
Posted on Reply
#28
newtekie1
Semi-Retired Folder
I don't know how accurate this information is, especially since we haven't seen any other reports of it from reputable sources. I think I'll wait to believe specs until the cards are actually out, but the shader speeds on these cards seem a little low to me. I know there are more of them, but dropping the speeds that much seems insane to me.
Posted on Reply
#29
DarkMatter
magibeg: I'm almost afraid to say it, but it's strange how the ATI cards do so much better in benchmarks (once again, I may regret saying this). It does seem as though a lot of games are specifically designed with NVIDIA cards in mind.
Of course, because it's much easier to get ALMOST ALL developers to make your card run games faster than to make 2-3 benchmarks run faster. Funny how corrupt all game developers are, while benchmark developers are so incorruptible...
Posted on Reply
#30
Widjaja
No doubt nVidia will end up releasing a GT 280 that ends up being the next-gen 8800 GT.

So it's most probably just a wait for the price-vs-performance-minded people once the GTX 280 is released.
Posted on Reply
#31
largon
magibeg: I'm almost afraid to say it, but it's strange how the ATI cards do so much better in benchmarks (once again, I may regret saying this). It does seem as though a lot of games are specifically designed with NVIDIA cards in mind.
Or that benchmarks are optimized for ATi.
*shrug*

The reason is in the architecture. For a game to run well on ATi R6-generation GPUs, some heavy optimization is required, as the architecture of the R6 GPUs is so different from earlier generations; and then of course there are the obvious flaws in R600/RV670, like the absolutely horrible texture filtering capabilities and the innate inefficiency of the superscalar shaders. RV770 will partially fix the texture filtering shortcomings, but unfortunately the TMUs are only doubled, so RV770's texturing muscle will still clearly trail even G92's.
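To put rough numbers behind that: peak bilinear texel fillrate is simply TMU count times core clock. The figures below are the period specs/rumors (16 TMUs at 775 MHz for RV670, a doubled 32 at an assumed ~750 MHz for RV770, 64 at 650 MHz for a full G92), so treat them as assumptions:

```python
# Peak bilinear texel fillrate = TMU count x core clock.
# TMU counts and clocks are period specs/rumors, used here as assumptions.

def texel_fill_gtexels(tmus: int, core_mhz: float) -> float:
    return tmus * core_mhz / 1000  # GTexels/s

print(f"RV670 (16 TMU @ 775 MHz):  {texel_fill_gtexels(16, 775):.1f} GTexel/s")
print(f"RV770 (32 TMU @ ~750 MHz): {texel_fill_gtexels(32, 750):.1f} GTexel/s")
print(f"G92   (64 TMU @ 650 MHz):  {texel_fill_gtexels(64, 650):.1f} GTexel/s")
```

Even with doubled TMUs, that sketch leaves RV770 still well below G92's peak texturing rate.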
Posted on Reply
#32
Unregistered
Well, it's a business after all. They want to make a profit.
Posted on Reply
#33
trt740
X-TeNDeR: Six hundred dollars!!! :eek: awww..

Specs look good, but man.. NVidia is hunting for fat wallets again :twitch:
They are on crack at that price. Get real.
Posted on Reply
#34
Black Hades
DarkMatter: Of course, because it's much easier to get ALMOST ALL developers to make your card run games faster than to make 2-3 benchmarks run faster. Funny how corrupt all game developers are, while benchmark developers are so incorruptible...
This is not about corrupt people; it's about the fact that NVIDIA and ATi cards are very different.
When you make a new game, for instance, you have to optimize the code so that it takes advantage of the GPU's hardware. This process takes time; you generally don't have time to cover all aspects and explore all the resources of two cards that work in different ways, so developers have to choose. NVIDIA gives them all the help they need in understanding and using the nooks and crannies of its cards, so games run better on NVIDIA cards.

I'm trying to stay neutral here; the only reason I like ATi cards right now is that the competition is overcharging.
Posted on Reply
#35
DrPepper
The Doctor is in the house
When the Ultra came out and the price was a lifetime of slavery, some people would still try to buy three of them.
Posted on Reply
#36
DarkMatter
Black Hades: This is not about corrupt people; it's about the fact that NVIDIA and ATi cards are very different.
When you make a new game, for instance, you have to optimize the code so that it takes advantage of the GPU's hardware. This process takes time; you generally don't have time to cover all aspects and explore all the resources of two cards that work in different ways, so developers have to choose. NVIDIA gives them all the help they need in understanding and using the nooks and crannies of its cards, so games run better on NVIDIA cards.

I'm trying to stay neutral here; the only reason I like ATi cards right now is that the competition is overcharging.
Have you ever read any game developer's blog? They write specific code not only for each brand, but for almost every card architecture. They may then optimize the NVIDIA-specific code better under TWIMTBP, because NVIDIA gives them extensive support. In that respect the code for NVIDIA hardware may be better optimized, but each card has its own code.

Long before TWIMTBP, many first-tier game developers said the way 3DMark did things WAS NOT how they were going to do things in the future. AFAIK this never changed, so it's the same with 3DMark 06 too. Saying that the Ati architecture is any better based on these kinds of benchmarks implies that those benchmarks are doing things right, which they aren't.

This is something that has always bugged me. People say that game developers are writing code "Nvidia's way", but they never stop and think that MAYBE Nvidia is designing its hardware the "game developers' way", while Ati may not be. TWIMTBP is a two-way relationship. Over time, Ati has become better (comparatively) in benchmarks and worse in games. Everybody blames TWIMTBP for this, without taking into account each company's design decisions. For a simple example, Ati's superscalar, VLIW, SIMD shader processors are A LOT better suited to the HOMOGENEOUS CODE involved in a static benchmark than to the ever-changing code involved in a game. Also, in the case of R600 and one of its biggest flaws, its TMUs, benchmarks are a lot more favorable than games, since you can "guess" which texture comes next and you don't have to care about the textures that have already been used. In a game you don't know where the camera will head next, so you don't know which textures you can discard. In reality neither architecture can "guess" the next texture, but G80/92, with its greater texturing power, can react better to texture changes. In benchmarks, R600/670 can mitigate this effect with streaming.
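To illustrate the VLIW point with a toy model: each R600-style shader processor issues one 5-wide instruction bundle per cycle, so throughput depends on how many independent scalar ops can be packed per bundle. The packing figures below are illustrative assumptions, not measurements:

```python
# Toy model of VLIW5 slot utilization (R600-style shader processor).
# One 5-wide bundle issues per cycle; dependent scalar ops cannot share
# a bundle, so branchy, ragged game code wastes ALU slots.

def vliw5_utilization(independent_ops_per_bundle: float) -> float:
    """Fraction of the 5 ALU slots doing useful work per cycle."""
    return min(independent_ops_per_bundle, 5) / 5

# Homogeneous benchmark-style shader: long independent vector math packs well.
print(f"benchmark-like code: {vliw5_utilization(4.5):.0%} of peak")
# Branchy, dependent game shader: often only 2-3 ops pack per bundle.
print(f"game-like code:      {vliw5_utilization(2.5):.0%} of peak")
```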
Posted on Reply
#37
TheGuruStud
Well, with 1 billion transistors does it even matter how many ROPs, shaders, etc it has? LOL *sarcasm*

It has way more switching power, which means ownage (as long as they don't screw it up like the FX series). Clocks are irrelevant (overall).
Posted on Reply
#38
BigBruser13
Latest upcoming nVidia offerings

All I want to know is: can it play Crysis on Very High at 1920x1200?
Posted on Reply
#39
HTC
oli_ramsay: Wow, $600 is a bit steep; wonder how that'll translate into pounds over here. I reckon £400 :(

Here's an interesting comparison chart:


If the 4870 is nearly as fast as the GTX 280, it'll be a much better buy, I think.
Even if the nVidia options turn out to be better (by, say... 10% to 15%), if this power consumption is real, I would buy an ATI every time.

Only if the difference turns out to be more substantial would I consider buying an nVidia, but NOT at that price!
Posted on Reply
#40
TheGuruStud
HTC: Even if the nVidia options turn out to be better (by, say... 10% to 15%), if this power consumption is real, I would buy an ATI every time.

Only if the difference turns out to be more substantial would I consider buying an nVidia, but NOT at that price!
Well, if it's real, then I can say that ATI is missing transistors b/c they're the ones always sucking down the juice :p
Posted on Reply
#41
DarkMatter
HTC: Even if the nVidia options turn out to be better (by, say... 10% to 15%), if this power consumption is real, I would buy an ATI every time.

Only if the difference turns out to be more substantial would I consider buying an nVidia, but NOT at that price!
The thing is, from the looks of the specs, RV770 will have 2x the power of RV670 and GT200 will have 2x that of G92. This means a GTX 280 40-50% faster than the HD 4870, and the GTX 260 could end up being 25% faster.
It's early to say anything, but most probably each card will have its own market segment and ALL of them will have similar price/performance, as has almost always been the case. Also, the GTX cards won't lag far behind in performance-per-watt, always based on these specs, which I don't know whether to trust. But then again, we are always talking about these specs, so, simple math:

157 W + 50% = 157 + 79 = 236 W. Isn't that funny?
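Spelling that out: assuming performance-per-watt stays roughly constant, power scales linearly with the rumored performance lead. A minimal sketch with this thread's numbers (both the 157 W figure and the 50% lead are unconfirmed rumors):

```python
# Scale power linearly with a rumored performance lead, assuming
# performance-per-watt stays roughly constant.

base_power_w = 157   # rumored TDP used in this thread (unconfirmed)
perf_lead = 0.50     # rumored 50% performance advantage

scaled_power_w = base_power_w * (1 + perf_lead)
print(f"{base_power_w} W + {perf_lead:.0%} = {scaled_power_w:.0f} W")  # ~236 W
```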
Posted on Reply
#42
HTC
DarkMatter: The thing is, from the looks of the specs, RV770 will have 2x the power of RV670 and GT200 will have 2x that of G92. This means a GTX 280 40-50% faster than the HD 4870, and the GTX 260 could end up being 25% faster.
It's early to say anything, but most probably each card will have its own market segment and ALL of them will have similar price/performance, as has almost always been the case. Also, the GTX cards won't lag far behind in performance-per-watt, always based on these specs, which I don't know whether to trust. But then again, we are always talking about these specs, so, simple math:

157 W + 50% = 157 + 79 = 236 W
If you're correct, I would buy an nVidia card, but only when the price drops to a more realistic value.

nVidia's GTX 280 could be 10 times faster than ATI's 4870 X2, but I wouldn't buy it for this amount of money: NO WAY!!!!
Posted on Reply
#43
DarkMatter
HTC: If you're correct, I would buy an nVidia card, but only when the price drops to a more realistic value.

nVidia's GTX 280 could be 10 times faster than ATI's 4870 X2, but I wouldn't buy it for this amount of money: NO WAY!!!!
I will never buy any card above $300 either, but the VALUE of these cards is undeniable.

Also, I've been looking around, and other sources say $400 and $500 for the GTX 260 and 280 respectively. Nvidia may still have an ace up its sleeve called the GTX 260 448 MB, which would pwn as the GTS 320 MB did in the past, so who knows...
Posted on Reply
#44
Megasty
HTC: If you're correct, I would buy an nVidia card, but only when the price drops to a more realistic value.

nVidia's GTX 280 could be 10 times faster than ATI's 4870 X2, but I wouldn't buy it for this amount of money: NO WAY!!!!
That's the full gray zone. On paper the 280 & 260 will be 25-50% faster than the 4870. That alone warrants the higher prices to a point. Then you drop the 4870x2 on them & it creates a simple paradox. On paper the 4870x2 is twice as fast as the 4870. Plus at $500 the winner is clearer than glass.
Posted on Reply
#45
HTC
Megasty: That's the full gray zone. On paper the 280 & 260 will be 25-50% faster than the 4870. That alone warrants the higher prices to a point. Then you drop the 4870x2 on them & it creates a simple paradox. On paper the 4870x2 is twice as fast as the 4870. Plus at $500 the winner is clearer than glass.
Dunno how you figure that, but it doesn't matter!

To me, there are only 2 important aspects for any card: power usage and price (in that order).
Posted on Reply
#46
DarkMatter
Megasty: That's the full gray zone. On paper the 280 & 260 will be 25-50% faster than the 4870. That alone warrants the higher prices to a point. Then you drop the 4870x2 on them & it creates a simple paradox. On paper the 4870x2 is twice as fast as the 4870. Plus at $500 the winner is clearer than glass.
That logic has its flaws:

1- The HD3870 X2 is not twice as fast as the HD3870; we can guess the HD4870 X2 won't be either.

2- The HD3870 X2's price is more than twice the HD3870's; will this be different? That puts the HD4870 X2 well above $600. It will probably be cheaper, but it won't launch until August, and we don't know what prices will look like then...
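To put toy numbers behind those two flaws (the scaling factor and the price multiplier below are illustrative assumptions, not measured values):

```python
# Toy numbers for the two objections above: an X2 card tends to deliver
# less than 2x a single card's performance while costing more than 2x.

single_perf = 1.0          # normalize the single card's performance
single_price = 300.0       # hypothetical single-card price in USD

x2_perf = single_perf * 1.7    # assume ~70% multi-GPU scaling gain
x2_price = single_price * 2.2  # assume a price premium over two singles

print(f"X2 performance: {x2_perf:.1f}x the single card")
print(f"X2 price:       ${x2_price:.0f}")
penalty = (x2_price / x2_perf) / (single_price / single_perf)
print(f"$/performance penalty vs. the single card: {penalty:.2f}x")
```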
Posted on Reply
#47
imperialreign
DarkMatter: Have you ever read any game developer's blog? They write specific code not only for each brand, but for almost every card architecture. They may then optimize the NVIDIA-specific code better under TWIMTBP, because NVIDIA gives them extensive support. In that respect the code for NVIDIA hardware may be better optimized, but each card has its own code.

Long before TWIMTBP, many first-tier game developers said the way 3DMark did things WAS NOT how they were going to do things in the future. AFAIK this never changed, so it's the same with 3DMark 06 too. Saying that the Ati architecture is any better based on these kinds of benchmarks implies that those benchmarks are doing things right, which they aren't.

This is something that has always bugged me. People say that game developers are writing code "Nvidia's way", but they never stop and think that MAYBE Nvidia is designing its hardware the "game developers' way", while Ati may not be. TWIMTBP is a two-way relationship. Over time, Ati has become better (comparatively) in benchmarks and worse in games. Everybody blames TWIMTBP for this, without taking into account each company's design decisions. For a simple example, Ati's superscalar, VLIW, SIMD shader processors are A LOT better suited to the HOMOGENEOUS CODE involved in a static benchmark than to the ever-changing code involved in a game. Also, in the case of R600 and one of its biggest flaws, its TMUs, benchmarks are a lot more favorable than games, since you can "guess" which texture comes next and you don't have to care about the textures that have already been used. In a game you don't know where the camera will head next, so you don't know which textures you can discard. In reality neither architecture can "guess" the next texture, but G80/92, with its greater texturing power, can react better to texture changes. In benchmarks, R600/670 can mitigate this effect with streaming.
I agree to an extent.


Although I think there is a major difference in GPU architecture that has been holding ATI back across the board, the few games where ATI worked closely with developers show a fairly level playing field, or better ATI performance, upon release.

Sadly, the only two games I can think of where I know for certain that ATI worked closely with the developers are Call of Juarez, where we see ATI cards tending to outperform nVidia's, and FEAR, where we see ATI cards continuing to keep pace with nVidia's.


I'm sure a certain amount of collaboration does tend to help nVidia overall, but yes, GPU architecture does come into play as well; and ATI's GPUs just haven't been suited to the more complex games we've seen over the last two years or so.

But, all in all, the new ATI R700 series is a brand-new design, not a rehash of an older GPU like the R600 was of the R500. It might be wishful thinking on my part, but . . . I think we might be in for a surprise with the new ATI GPUs :ohwell:

Anyhow, someone correct me if I'm wrong, but I thought I remembered hearing that nVidia's new GT200 is another rehash of G92/G80? :confused:
Posted on Reply
#48
mandelore
lol, you know what, for that price you could buy 2 of ATI's single offerings and beat out the NV counterpart ^^

Or just go X2 and win that way; the X2 will no doubt be far cheaper than NV's GX2 alternative, so again, price- and performance-wise it's win-win for ATI atm.

Think about it: NV may have one killer card, but if you can buy almost 2 of ATI's own killer cards (doesn't matter if they're less powerful than NV's offering) for around or just over the price of one of NV's cards, do the math; ATI and the customer would win every time.
Posted on Reply
#49
warhammer
Save your money for the DX11 cards :)
Posted on Reply
#50
Megasty
DarkMatter: That logic has its flaws:

1- The HD3870 X2 is not twice as fast as the HD3870; we can guess the HD4870 X2 won't be either.

2- The HD3870 X2's price is more than twice the HD3870's; will this be different? That puts the HD4870 X2 well above $600. It will probably be cheaper, but it won't launch until August, and we don't know what prices will look like then...
I'm not even comparing the R600 to the R700. They're 2 completely different GPUs. The 4870x2 is also using a different method of combining the cores, so who knows what new mess they will present to us. Anyway, they said a long time ago that the thing wouldn't be over $499. How accurate that is now, with the price of GDDR5 going up, is up in the air as well. But for now I'll hold them to it until I hear differently.

The only way the 2 monsters won't compete is if the 4870x2 is priced that low. I guess it'll make sense to ATi, but there's still no way that 280 or 260 is getting my money. I'd stick with the GT model if I had to get one, but I'm sure people will still flock to snatch them up. Hopefully it will be worth it to them.
Posted on Reply