Sunday, May 17th 2009

NVIDIA GT300 Already Taped Out

NVIDIA's upcoming next-generation graphics processor, codenamed GT300, is on course for launch later this year. Its development has crossed an important milestone, with news emerging that the company has already taped out the chip, with the first engineering samples coming from the A1 batch. The GPU is significant as the first high-end part designed on the 40 nm silicon process. Both NVIDIA and AMD, however, are facing issues with the 40 nm manufacturing node of TSMC, the principal foundry partner for the two. For this reason, the chip might end up being built by another, yet-unnamed foundry partner the two are reaching out to. UMC is a possibility, as it recently announced that its 40 nm node is ready for "real, high-performance" designs.

The GT300 comes in three basic forms, perhaps differentiated by binning: G300 (chips that make it into consumer graphics, the GeForce series), GT300 (chips that go into high-performance computing products, the Tesla series), and G200GL (chips destined for professional/enterprise graphics, the Quadro series). From what we know so far, the core features 512 shader processors, a revamped data-processing model in the form of MIMD, and a 512-bit wide GDDR5 memory interface to churn out around 256 GB/s of memory bandwidth. The GPU is compliant with DirectX 11, which makes its entry with Microsoft Windows 7 later this year and can already be found in release candidate versions of the OS.

Source: Bright Side of News
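The ~256 GB/s figure falls straight out of the bus width and data rate. A minimal sketch of the arithmetic, assuming a hypothetical effective GDDR5 data rate of 4 Gbps per pin (the actual GT300 memory clock was not disclosed at the time):

```python
# Peak memory bandwidth: (bus width in bits / 8 bits per byte) * per-pin data rate.
# The 4.0 Gbps effective rate is an assumption, not a confirmed GT300 spec.
def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(memory_bandwidth_gb_s(512, 4.0))    # 256.0 GB/s, matching the quoted figure
print(memory_bandwidth_gb_s(512, 2.484))  # ~159 GB/s, the GTX 285's 512-bit GDDR3, for comparison
```

This also shows why a 512-bit GDDR5 bus roughly doubles the GTX 285's bandwidth despite the same bus width: GDDR5's per-pin data rate is about twice that of GDDR3.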

96 Comments on NVIDIA GT300 Already Taped Out

#1
DrPepper
The Doctor is in the house
KainXS said:
32 rops
around 2100 shaders
GDDR5

that's what the rumors say, but based on ati's previous releases it's more than likely real

but if I had to guess I would say at least 2400 shaders
I've only seen estimates of about 1200 for the RV870, so assuming an X2 for R800, 2400 would be about right.
#2
a_ump
i hope the 1200 estimated shaders on the RV870 are fine tuned and better than the ones on the RV770, cause even though nvidia's 512 shaders are different from the 240 on GT200, i can only assume nvidia will improve their shaders, not use more but less efficient ones, and thus it's going to dominate. So i hope ATI has done one hell of a job on the RV870. i wonder if the sideport that was never activated on the HD 4870x2 will be on the HD 5870x2 and provide a decent performance jump; it would have to, i'd think, as then the latency from using the onboard crossfire chip would be negated. Dam i wanna see some benchies ugh
#3
djisas
a_ump said:
Dude, where have u been? ATI for the past year has been all over nvidia's ass. they had the performance crown for a while with the HD 4870x2, and yea nvidia does have the most powerful single-core graphics card, but either way it still took them 2 gpu's on a card to get it back. And yea nvidia does tend to have better drivers at launch than ATI, but they also have a lot more access to games (TWIMTBP).

Drivers are important, but with this release i think we'll see the past repeat itself. Not like 2900XT vs 8800GTX, but more like 9800GTX vs HD 3870. The GTX 285 is one hell of a card with 240 SPU's and a 512-bit GDDR3 bus. GT300 is going to have more than twice as many SPU's, and twice the bandwidth by moving to GDDR5. I just can't see the performance from that chip not destroying ATI's RV870, which i estimate, by the specs, will only perform 25-35% faster than the HD 4870.
The 1GHz HD4890 already proves capable of beating the 285 in several games; i think those are already 25-35% faster than the 4870...
#4
cooler
gt300 specs look nice

how big a percentage jump will this gpu bring compared to current gpus?
#5
a_ump
grid, stalker, fallout 3, red alert 3 are the only games i saw in [url=http://www.xbitlabs.com/articles/video/display/radeon-hd4890_7.html#sect0]xbitlab's review[/url] where the HD 4890 outperformed the GTX 285, and its performance over the HD 4870 is more like 5-15%. but i see your point: the HD 5870 (i think it'll be called that) will have 400 more shaders, twice the rop's, and 8 more TMU's than the HD 4870, so performance should be 40% more at least, plus the memory clock bump to 1100 from 900.
#6
DonInKansas
40nm? That's hot.

Or not hot? Hope temps are nice........:p
#7
iStink
djisas said:
The 1GHz HD4890 already proves capable of beating the 285 in several games; i think those are already 25-35% faster than the 4870...
here let me fix that statement for you:

djisas said:
The OVERCLOCKED 1GHz HD4890 already proves capable of beating the STOCK 285 in several games; i think those are already 25-35% faster than the 4870...
#8
enaher
I think it sounds impressive, but i want to see performance. remember the people that got the GTX 260 192 for $350+, and then the 4870 came and whooped its ass for less $. something i've learned from the last year: don't buy PR and SPECS. remember Phenom, the 2900XT, and to some extent the GT200, for the price they had, getting their asses handed to them by the 9800GX2. don't get me wrong, i want an awesome product from Nvidia, but i'd rather have an awesome product from both Ati & Nvidia to drive prices down and quality up :toast:
#9
happita
a_ump said:
grid, stalker, fallout 3, red alert 3 are the only games i saw in [url=http://www.xbitlabs.com/articles/video/display/radeon-hd4890_7.html#sect0]xbitlab's review[/url] where the HD 4890 outperformed the GTX 285, and its performance over the HD 4870 is more like 5-15%. but i see your point: the HD 5870 (i think it'll be called that) will have 400 more shaders, twice the rop's, and 8 more TMU's than the HD 4870, so performance should be 40% more at least, plus the memory clock bump to 1100 from 900.
Is that clock confirmed? Because if not, it could be a new architecture that stays around 900 but with better performance... ie. 4870 to 4890, with better quality materials used and whatnot, but hopefully to a greater extent, where we will really see AA not even affect games at any resolution. that's what I hope :D
Yea, and let's not forget the OCability from the newer 40nm process, and seeing that ATI has a bit more experience with the 4770, I predict that it will be more efficient, with minimal leakage.
#10
a_ump
i agree, and 40nm does seem to allow great oc's. the HD 4770 is 750 stock, and most clocks reach 850+ from what i've seen, and taking the memory from 600 to 900 is very common as well. I was under the impression the RV870 was a very finely tuned RV770. i hope more info is released soon on the RV870; i couldn't care less what nv brings, cause just hearing these spec's makes me wary for ATI.
#11
Steevo
All these cards are bandwidth starved. all ATI really needs to do to open up the cores they currently have is add a larger cache on die and shrink the die. A $99 4770 that performs better than an equally priced "250, AKA 9800, AKA 8800...", overclocks better, uses less energy, and runs cooler.



Hrm.....seems like Nvidiots are still floating the Nvidia boat.


As for drivers, yes, Nvidia makes you feel good by releasing one driver a week in multiple forms, so you can spend all your time uninstalling, reinstalling, and trying to see if it fixes things, and then end up using whatever prevents the issues you experience. with ATI, you don't get all the fun of a bunch of drivers to try, so you just use the one that works. Damn, using only one driver that works and not trying three or four. Seems stupid to me too. oh well, you keep trying while I go play games.
#12
Steevo
iStink said:
here let me fix that statement for you:

[performance-per-dollar charts]
I know words are hard, so here are pictures. The ones above are performance per dollar.

Do you understand this?


Below are the performance at STOCK clocks for the models.

[stock-performance charts]
So let me see, spend more money for less performance............ GTX285 FTW!!!!!! ;)
#13
Blacksniper87
Fanboi much? geez mate, i don't think there is such a thing as a bigger fanboi loser comment. Here's an idea: if you think something about nvidia, put it in logical sentences and don't call people who use their cards Nvidiots.

Hey mate, this takes the cake: the majority of your graphs are performance per dollar, and then you have graphs that are relative performance, all of which nvidia wins, so do you have a point at all? Lastly, i may be relatively new here, but from what i know you're not allowed to double post.
#14
roofsniper
Blacksniper87 said:
Fanboi much? geez mate, i don't think there is such a thing as a bigger fanboi loser comment. Here's an idea: if you think something about nvidia, put it in logical sentences and don't call people who use their cards Nvidiots.

Hey mate, this takes the cake: the majority of your graphs are performance per dollar, and then you have graphs that are relative performance, all of which nvidia wins, so do you have a point at all? Lastly, i may be relatively new here, but from what i know you're not allowed to double post.
you actually have to read the graphs to understand them. the 4890 gives 21% more performance per dollar than the gtx285. that's a big difference. the gtx285 is only 5% more powerful. you pay a lot more for that slight performance bonus.
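Those two percentages can be combined to back out the implied price gap. A quick illustrative calculation (the prices here are made-up placeholders, not the actual 2009 street prices):

```python
# Illustrative: if the GTX 285 is ~5% faster but the HD 4890 delivers 21% more
# performance per dollar, how much more must the GTX 285 cost?
perf_4890 = 100.0            # arbitrary performance index
perf_285 = perf_4890 * 1.05  # GTX 285 is ~5% faster

price_4890 = 200.0           # placeholder price, not a real 2009 figure
# perf_4890/price_4890 = 1.21 * (perf_285/price_285)  =>  solve for price_285:
price_285 = 1.21 * price_4890 * perf_285 / perf_4890

print(round(price_285 / price_4890 - 1, 4))  # 0.2705 -> ~27% higher price for ~5% more speed
```

Note the premium is independent of the placeholder price: it is simply 1.21 × 1.05 - 1.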
#15
KainXS
I highly doubt ATI would release the 58XX series with only 1200 sp's. I think they will take advantage of the die shrink and double up on sp's like they did with the 48XX series before.

I think ATI already predicted that the GT300 would have these specs, and a chip with only 1200 sp's would probably give nowhere near the performance of a GT300.
#16
Blacksniper87
yes, your point? How is this different from any other time in the history of the computer industry? You always pay significantly more for the highest-end product regardless of the slight performance difference. Also, i did read the graphs, and what i did notice was there were no overclocks of anything apart from the 4890, so ..............
#17
Darren
Blacksniper87 said:
yes, your point? How is this different from any other time in the history of the computer industry? You always pay significantly more for the highest-end product regardless of the slight performance difference. Also, i did read the graphs, and what i did notice was there were no overclocks of anything apart from the 4890, so ..............
Would you pay 21% more for a car that runs 5% faster? NO

So why would anyone with a brain do it for a GPU or a CPU?
#18
DrPepper
The Doctor is in the house
KainXS said:
I highly doubt ATI would release the 58XX series with only 1200 sp's. I think they will take advantage of the die shrink and double up on sp's like they did with the 48XX series before.

I think ATI already predicted that the GT300 would have these specs, and a chip with only 1200 sp's would probably give nowhere near the performance of a GT300.
The number of SP's isn't half as important as the way the shaders handle instructions. For example, nvidia uses a different method to ati, and that's why nvidia can use fewer, more powerful shaders than ATI. Who's to say ATI will continue with their current shader model and won't change it so they can use 1200 sp's and get the same performance as 2400? Anyway, back to the point: ATI engineers prefer to find cheap methods of adding performance, compared to nvidia's tactic of simply adding more.
#19
theorw
a_ump said:
Dude, where have u been? ATI for the past year has been all over nvidia's ass. they had the performance crown for a while with the HD 4870x2, and yea nvidia does have the most powerful single-core graphics card, but either way it still took them 2 gpu's on a card to get it back. And yea nvidia does tend to have better drivers at launch than ATI, but they also have a lot more access to games (TWIMTBP).

Drivers are important, but with this release i think we'll see the past repeat itself. Not like 2900XT vs 8800GTX, but more like 9800GTX vs HD 3870. The GTX 285 is one hell of a card with 240 SPU's and a 512-bit GDDR3 bus. GT300 is going to have more than twice as many SPU's, and twice the bandwidth by moving to GDDR5. I just can't see the performance from that chip not destroying ATI's RV870, which i estimate, by the specs, will only perform 25-35% faster than the HD 4870.
+1
Animalpak WAKE UP!!!
#21
djisas
Do u guys realize that a chip with HALF (50%) the GT200 die size and a few million fewer transistors is only 5% slower than such a beast??
The HD5870 doesn't need to beat the gt300 at its own game, just do what the 4870 did: for half the price you get 70-80% of the performance, and if that's not enough, slap 2 of those together for the same price and get 140-160% of the performance...
But what if they surprise us and create something that really competes with it??
#22
DrPepper
The Doctor is in the house
I'm going to sit and watch this I haven't seen a fanboy fight in a while :D
#23
djisas
Even though i'm an ati fanboy, i've already had 2 nvidia cards and 3 ati cards...
When talking about drivers, having experienced both, i'd say i like ati drivers more...
And i say i'm not gonna waste more than 200 on a card like i did with the 2900XT. it's not like there were any cheaper decent cards back then, but spending over 300 i was out of my mind. then i wasted another 200 on an 8800GTS G92, but sold it for 150 and bought the hd4850 i have now for 180; i had finally done a good deal...
If the next hd5850 is under 200, count me in for a custom one...
I don't think i'll side with the green side any more...
#24
hat
Enthusiast
Sweet, another hungry-hungry hippo.
#25
a_ump
DrPepper said:
The number of SP's isn't half as important as the way the shaders handle instructions. For example, nvidia uses a different method to ati, and that's why nvidia can use fewer, more powerful shaders than ATI. Who's to say ATI will continue with their current shader model and won't change it so they can use 1200 sp's and get the same performance as 2400? Anyway, back to the point: ATI engineers prefer to find cheap methods of adding performance, compared to nvidia's tactic of simply adding more.
yep, and with the GT300 nvidia are doing an overhaul on how their shaders compute, so i wonder if doubling their shaders will even equate to +50% performance. dam i just wanna see some benchies, nvidia's chip has me all hyped. :laugh: