
Next-gen NVIDIA GeForce Specifications Unveiled

Memory manufacturers prefer money. That's all they want. They don't care who is buying their products as long as they pay and as long as they can sell ALL their STOCK. Since stock is low and production isn't high right now, any single company could buy the whole amount, so they will sell to whoever pays more. Could Nvidia pay more than AMD? Maybe (well, sure), but why would they want to? It would make their cards more expensive, but what's worse for them is the REALLY LOW AVAILABILITY. Let's face it, Nvidia has about 66% market share. That's twice what ATI has. If availability is low for ATI, it's much worse for Nvidia. Contrary to what people think, I don't believe Nvidia cares that much about ATI; they care a lot more about their market audience. GDDR5 would make their product a lot more expensive and scarce. They don't want that. Plain and simple.

And the HD4850 WON'T have a GDDR5 version from AMD. They gave partners the choice to use it, so partners can decide whether they want to pay the price premium or not. The GDDR5 price is so high that AMD has decided it's not cost effective for the HD4850. Now, knowing that the HD4850 is only an underclocked HD4870, think about GDDR5 and tell me in all honesty that it's not just a marketing strategy.

Maybe you're right. But Samsung and Hynix are going to produce GDDR5 too, not only Qimonda. I wouldn't bet against seeing an NV card with GDDR5 later :) I don't think GDDR5 is just a marketing strategy (GDDR4 was), but we will see when there are benchmarks on the net :)

Meanwhile I edited my previous post :)
 
But availability NOW is low, and they have to release NOW. In the near future I don't know. They have already said there's going to be a 55nm refresh soon and they could add GDDR5 support then, once availability is better.
I know that's something that's going to piss off some people, as if a 55 nm version coming out would make their cards worse, but it's going to happen. People will buy >> people will enjoy >> the new 55 nm card will launch >> people will complain "why didn't they release 55 nm in the first place? ****ng nvidiots". Even though they already know it will happen before launch...
 
Nvidia sounds like they are getting comfortable where they are as far as designs go. I don't know about the availability of GDDR5, but I do remember the performance of GDDR4 wasn't that much better than GDDR3's, so Nvidia may not even see it as worth it until GDDR5 is standard/common.

Anyone ever think that Nvidia may never go to DX10.1? There are a lot of companies these days that don't like/want to work with MS. My 2¢, but I think some of the industry is trying to get away from MS-controlled graphics.
 
IMO, by the time we see games that require DirectX 10.1, the D10 GPU will be ancient history.
 
So, I don't get why people are saying the RV770 is just a beefed-up RV670...

The R7xx was in development before the R600 was even released; AMD said they were putting all their focus on the R7xx. The RV770 is all new....AMD has confirmed the above.

And the GT200 is also all-new; the cards both look amazing on paper. Just like the G80 and R600 did. Remember how many people thought the R600 was gonna lay down the law:p when they saw the specs? This is no different: the specs look much better, just as the R600's looked better on paper....that doesn't always transfer to real-world performance. All we can do is wait and see.

PS: "R770/GT200 rulezzz!!"....is just 97% fanyboy crap...
 
Latency vs. bandwidth? It's a wait-and-see story.
I have to purchase 2 x 4870X2's because I decided that a single 3870X2 would do the job in the ATi system I have. That won't stop me from upgrading my NV system. I wouldn't mind playing with CUDA on the EN8800GTX before I throw the card away.

I look forward to the GDDR5 bandwidth being utilized efficiently by AMD/ATi because it's the way of the future! And I suspect the reasons Nvidia haven't moved onto GDDR5 are:
* Cheaper RAM modules for a well-aged technology with better latency, hoping to keep the price competitive with AMD/ATi's cards (rough numbers on that trade-off are sketched at the end of this post).
* To allow themselves to make as much money as possible off GDDR3 technology now that they have CUDA working, before the public designs crazy C-based software for the rest of us, which might give them a greater sales advantage in the next round of releases.

Either way, I'm looking at big bucks, we all are. . .
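
To put rough numbers on that latency vs. bandwidth trade-off, here's a back-of-the-envelope sketch. Every clock, latency and bus-width figure below is a made-up round number for illustration, not a confirmed spec for any card:

```python
# Rough fetch-time comparison: fixed access latency plus transfer time.
# All figures are hypothetical round numbers, not real card specs.

def fetch_time_ns(block_bytes, latency_ns, bandwidth_gb_s):
    """Time to pull one block from memory: latency + size / bandwidth."""
    transfer_ns = block_bytes / (bandwidth_gb_s * 1e9) * 1e9  # seconds -> nanoseconds
    return latency_ns + transfer_ns

# Hypothetical GDDR3 on a 512-bit bus: modest clock, wide bus, slightly lower latency.
gddr3 = fetch_time_ns(block_bytes=4096, latency_ns=40, bandwidth_gb_s=140)
# Hypothetical GDDR5 on a 256-bit bus: faster per pin, narrower bus, a bit more latency.
gddr5 = fetch_time_ns(block_bytes=4096, latency_ns=50, bandwidth_gb_s=115)

print(f"GDDR3/512-bit: {gddr3:.1f} ns per 4 KB block")  # ~69 ns
print(f"GDDR5/256-bit: {gddr5:.1f} ns per 4 KB block")  # ~86 ns
```

The exact numbers don't matter; the point is just that a wide, cheaper GDDR3 bus can come out ahead on small fetches as long as its latency edge holds.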
 
Are the GT200 and R700 new GPUs or not?
Well, the basic designs aren't. The actual GPUs are new, of course.

History:
R300 => R420 => R520
ATI used the basic R300 design from August 2002 until the R600 was released (May 2007, though it should have been on the market 6 months earlier without the delays).

NV40 => G70
nVidia used NV40 technology from April 2004 until November 2006.

So it's quite common to use certain technology for a couple of generations. This will be even more pronounced with the current generation of GPUs because of the increased complexity of unified shaders.
It takes 2 to 3 years to design a GPU like the R300, NV40, R600 or G80. After that you get the usual updates. Even a process shrink, let's say 65 nm to 45 nm, takes almost a year without even touching the design. These companies manage to hide this time because they have multiple design teams working in parallel.
The same thing happens with CPUs. Look at K8 and K10. Look at Conroe and Penryn.
Expect really new architectures from ATI and nVidia somewhere in 2009, maybe even later, and they will be DX11.
 
Whichever new card, ATI or Nvidia, can crack 100+ FPS in CRYSIS maxed out will be the winner.
The price is going to be the KILLER.:cry:
 
The same thing happens with CPUs. Look at K8 and K10. Look at Conroe and Penryn.

Actually the Conroe was based on Core, which was based on the Pentium M, which was based on the P3. If you check some CPU ID apps, the Core 2 chips come up as Pentium 3 multi-core chips. Imagine if they had stuck with the P3 instead of moving to the Pentium 4..........
 
The combination of GDDR5 and 512-bit would have been too much for the consumer to bear, cost-wise. There's plenty of GDDR3, and with twice the bus width there's no need to clock the memory particularly high. Think about it, who's going to be supply constrained?

Once (if?) GDDR5 is plentiful, Nvidia will come out with a lower-cost redesign that's GDDR5 and 256-bit, or some odd bus width like 320 or 384. Just like the G92 was able to take the much more expensive G80 design and get equivalent performance at 256-bit, we will see something similar for the GT200 (rough arithmetic at the end of this post). In the meanwhile make no mistake, this is the true successor to the G80, going for the high-end crown.

I'm sure we'll also see the GX2 version of this before year's end.
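
For what it's worth, a rough sketch of that arithmetic: peak bandwidth is roughly bus width times per-pin data rate, divided by 8. The data rates below are illustrative guesses, not confirmed specs for either card:

```python
# Peak memory bandwidth, roughly: bus width (bits) x per-pin data rate (Gbps) / 8.
# The per-pin data rates below are illustrative guesses, not confirmed card specs.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Cheaper GDDR3 at a modest ~2.2 Gbps per pin, but on a doubled 512-bit bus...
print(bandwidth_gb_s(512, 2.2))  # 140.8 GB/s
# ...versus faster GDDR5 at ~3.6 Gbps per pin on a conventional 256-bit bus.
print(bandwidth_gb_s(256, 3.6))  # 115.2 GB/s
```

Same ballpark either way, which is the whole argument: the wide bus costs die area and board complexity instead of scarce new memory chips.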
 
How are you going to get 500+ watts to a 280 GX2?
 
Its own built-in PSU, maybe?
 
NVidia never made a dual-GPU card using the G80, and ATI never made one using the R600 either. I think there will be a toned-down GPU derived from the GT200 that will make it into the next dual-GPU card from NV. By 'toned-down' I'm referring to what the G92 and RV670 were to the G80 and R600.
 
Maybe, but I think until they move to 55 nm, and get power savings like ATI did from the 2900 to the 3870, you won't see a new GX2.
 
Damn you lemonadesoda, you stole my bit. I've been using that 2nd link for weeks/months now, well, versions of it.........:)
 
and damn you Rebo, I felt the urge to click on that second link after your post, even though I didn't want to do so. :D

It's been removed BTW.

Think about it, who's going to be supply constrained?

Exactly what I was saying. For Nvidia, supply is a very important thing. The 8800 GT was an exception in a long history of delivering plenty of cards at launch. Paper launches are ATI's business, not Nvidia's, don't forget that, guys.
 
It's normal that AMD/ATI will use GDDR5 and Nvidia won't, mainly because ATI has promoted and invested in the research and production, so all RAM manufacturers will serve ATI first with the new tech. And this two-step distance will remain between them, I think.
 
It's interesting to see that the 280 will be using a 512-bit memory bus; that alone should help performance. ATi should have implemented it in the 4870 (X2).
 
Actually the Conroe was based on Core, which was based on the Pentium M, which was based on the P3...

Well, I didn't want to go into the details of CPUs. I'm just making a reference.
 
Imagine if they had stuck with the P3 instead of moving to the Pentium 4..........

That's something that has been debated a lot. IMO the P4 was a good decision at the time, but it stayed too long. The P3 had hit a wall and the P4 was the only way they saw to overcome it. It's like a jam on the highway: sometimes your lane doesn't move forward and you see that the next one does, so you change lanes. A bit later your lane stops and you see that your previous lane is moving faster, but you can't change right back again. In the end you always come back, but the question remains whether you would have advanced more if you had stayed in your lane in the first place. Usually, if you're smart and lucky enough, you advance more by changing lanes. It didn't work for Intel, or maybe it did; actually there is no way of knowing.
 
Right, it's back to discussing two companies that are pregnant and whose baby is better. Like babies, things look better after birth; scans and graphs are always blurry.
 