Sunday, January 18th 2009

GT300: A Leap Forward for NVIDIA GPU Architecture

Every once in a while, NVIDIA releases a GPU that marks a distinct evolution of its architecture. A good example is the G80. Sources tell Hardware-Infos that the GT300 is on course to be one such GPU, bringing distinct architectural changes with it. To begin with, the GT300 will start the company's DirectX 11 conquest the way its predecessor, the G80, did for DirectX 10, a conquest that turned out to be largely successful.

The GT300's architecture will be based on a new form of number-crunching machinery. While today's NVIDIA GPUs use a SIMD (single instruction, multiple data) computation mechanism, the GT300 will move the GPU to a MIMD (multiple instruction, multiple data) mechanism. This is expected to boost the computational efficiency of the GPU many-fold. The ALU cluster organization will be dynamic, pooled, and driven by a crossbar switch. Once again, NVIDIA gets to lower clock speeds and power consumption while achieving greater performance than current-generation GPUs. With the GT300, NVIDIA will also introduce the next major update to CUDA. With the new GPUs built on the 40nm silicon fabrication process, transistor counts are expected to climb sharply. NVIDIA's GT300 is expected to arrive in Q4 2009, with its launch schedule more or less tied to that of Microsoft's Windows 7 operating system, which brings DirectX 11 support.
Source: Hardware-Infos
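
For readers unfamiliar with the terminology above: in the SIMD model, one instruction stream drives a whole group of ALUs, each working on its own data element, while MIMD lets different units follow different instruction streams at the same time. The short CUDA sketch below is a hedged illustration only (the kernels, names, and launch parameters are invented for this write-up, not taken from the source or from NVIDIA documentation); the first kernel is the uniform case SIMD hardware handles at full speed, the second is the divergent case it has to serialize.

// simd_divergence.cu -- illustrative sketch only; assumes a CUDA-capable GPU and nvcc.
#include <cuda_runtime.h>

// Uniform work: every thread executes the same instruction on its own element.
// This is the case today's SIMD-style ALU clusters handle at full speed.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// Divergent work: neighbouring threads want different instructions.
// On a SIMD machine the two branches run one after the other within each
// thread group; a MIMD arrangement could, in principle, issue them concurrently.
__global__ void divergent(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (i % 2 == 0)
        data[i] = sinf(data[i]);   // half the threads take this path
    else
        data[i] = sqrtf(data[i]);  // the other half take this one
}

int main()
{
    const int n = 1 << 20;
    float *d = 0;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    divergent<<<(n + 255) / 256, 256>>>(d, n);
    cudaDeviceSynchronize();

    cudaFree(d);
    return 0;
}

On current NVIDIA hardware the serialization happens within each 32-thread warp; a dynamic, pooled, MIMD-style arrangement of the ALU clusters, as the rumor describes, could in principle run the two paths concurrently instead of back to back.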

46 Comments on GT300: A Leap Forward for NVIDIA GPU Architecture

#26
Selene
Sexy, this looks like the card to get.
GT212 will be a placeholder and only be used to get to 40nm, but there are already rumors that it's being scrapped.
#27
oli_ramsay
G-wiz: I hate what Nvidia and ATi are doing. They can't let the user enjoy their video cards for at least half a year or more without coming out with new cards, whether it's a die shrink or an entirely new card.

This just left me pondering whether I should buy a GTX 260 216 or wait a whole year for the GT300. If this could be described in words, fuckery would be the term used to describe both companies.
I kind of agree with you; it makes one hesitant to buy new hardware when there's always something new and better around the corner. But this is the computer industry, and change is always a good thing. If they didn't bother bringing out cards all the time, we'd still be on 90nm X850 XTs, lol.

I hope this new architecture brings some good performance gains and I also hope AMD have something good to counter them.
#28
Woody112
Let's see, the next gen of ATI should have:
-shader clock
-512 bit bus
-MIMD
-Physics
Here's to my wishful thinking, because I seriously doubt it's all going to happen.
I never liked the narrow memory bus of ATI cards, and the fact that they have never implemented a shader clock sucks. If they kept pace with Nvidia, used the MIMD architecture, and implemented physics, that would make one hell of a 5870 X2. But yeah, wishful thinking. :rolleyes:
#29
leonard_222003
This bullshit Nvidia serves us is meant to lure away some ATI customers: yeah, we are going to give you lifelike graphics and a billion shader operations, pixels, etc., whatever sounds good enough for idiots to take the bait.
The end result is some jerky-looking games and very few that look good.
Except for the Crysis series we have some ugly games, and if we went by what ATI/Nvidia tell us about their graphics cards, we should expect an orgasm or some life-changing experience.
Probably the GT300 will run Crysis with all details even at full HD. Wow, what an achievement.
Don't expect more than this, people. You're more likely to see graphics that wow everyone on the next Xbox or PlayStation 4 than on the PC, because we lack games, and few good people bother making anything in a land where a game is pirated within hours of release.
These Nvidia people amaze me with how stupid they think we are. Well, some are: the ones that cheer for CUDA but don't use it in anything they do, or the people that cheer for PhysX but have never played a game with PhysX in their life. The brainwashed people.
#30
btarunr
Editor & Senior Moderator
leonard_222003: This bullshit Nvidia serves us is meant to lure away some ATI customers: yeah, we are going to give you lifelike graphics and a billion shader operations, pixels, etc., whatever sounds good enough for idiots to take the bait.

Probably the GT300 will run Crysis with all details even at full HD. Wow, what an achievement.
If you look carefully, "spicing up" the specs is what AMD has resorted to the most in the recent past: "320 stream processors", "GDDR4" (in reference to the Radeon HD 2900 XT, which, despite specs that before release made it look like it would eat the 8800 GTX, fell flat and eventually lost even to the $230 8800 GT), etc. And yes, that's what usually happens with releases like this: the upcoming GPU makes mincemeat of the game that made the previous-generation GPUs sweat. We saw that with Doom 3, where a single 8800 GTX was able to max it out at any resolution.
#31
lemonadesoda
nVidia HAVE to do this as a forward-planning thing. With Intel's Larrabee coming out in late 2009 (IIRC), Intel will have a superscalar math device, much more flexible than the nV or ATI offerings.

With this, nVidia will get a lot more flexibility in math, making CUDA much more powerful for GENERAL MATH rather than the very specific SIMD math that it does now.

I'm not so sure how MIMD will help GPU rendering, though. The "graphics pipeline" remains the same. However, it would allow CUDA AND graphics rendering to happen at the same time. (At the moment, IIRC, it can't. It can only do ONE thing at a time... so if you mix graphics and CUDA it needs to "swap" between math and graphics processing, which is incredibly inefficient.)

If someone can explain how MIMD helps GRAPHICS performance, pls post.
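
To make the GENERAL MATH point concrete, here is another hedged CUDA sketch (illustrative only; the kernel, tolerance, and sizes are invented for this example, not taken from any NVIDIA material). It uses a data-dependent loop, Newton's method for the square root, where each element needs a different number of iterations; under today's lockstep SIMD execution, every thread in a group effectively waits for the slowest member of that group, which is exactly the kind of idle time a MIMD-style scheduler could reclaim.

// newton_sqrt.cu -- illustrative sketch of a data-dependent loop per thread.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void newton_sqrt(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float a = in[i];
    float x = a > 1.0f ? a : 1.0f;   // crude initial guess
    // Data-dependent trip count: each element converges after a different
    // number of iterations, so under lockstep (SIMD) execution the fast
    // threads idle until the slowest thread in their group is done.
    while (a > 0.0f && fabsf(x * x - a) > 1e-4f * a)
        x = 0.5f * (x + a / x);
    out[i] = (a > 0.0f) ? x : 0.0f;
}

int main()
{
    const int n = 4096;
    float *h_in = new float[n], *h_out = new float[n];
    for (int i = 0; i < n; ++i) h_in[i] = float(i);

    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    newton_sqrt<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("sqrt(%g) is roughly %g\n", h_in[100], h_out[100]);

    cudaFree(d_in); cudaFree(d_out);
    delete[] h_in; delete[] h_out;
    return 0;
}

The same effect shows up with any data-dependent branch or loop, which is common in general-purpose math and much rarer in a fixed graphics pipeline.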
#32
AsRock
TPU addict
Dayum, gonna be hard to save money for 9+ months, lol...
#33
Bjorn_Of_Iceland
nVidia engineers are evil geniuses. They state a concept, ATi tries to emulate it, then ATi fails. After 2 gens, ATi gets it right, then nVidia warps space-time again. lol
#34
eidairaman1
The Exiled Airman
Cannon fodder to me. As long as it works well I don't care anymore, but if it comes with a 500 USD price tag, screw that.

Let's say this: if one or both companies fall out of existence, I will probably stop following computer tech, become a hermit on that front, and go back to playing console games. :roll: :banghead: :shadedshu
#35
btarunr
Editor & Senior Moderator
Bjorn_Of_Iceland: nVidia engineers are evil geniuses. They state a concept, ATi tries to emulate it, then ATi fails. After 2 gens, ATi gets it right, then nVidia warps space-time again. lol
If you look at the way the R600 was built, it's far from emulating NVIDIA's DX10 GPU architecture. The same design, with a simple step-up in transistor count, yielded the RV770, which still haunts NVIDIA.
#36
eidairaman1
The Exiled Airman
Isn't that the truth. AMD loaded theirs up with SPs, where NV decided to do a clock trick with their SPs; different techniques.
#37
Hayder_Master
Looks like a new generation of GPUs, and we'll have to wait until they release it, because it seems it's coming in 2010, not in 2009.
#38
soryuuha
leonard_222003: These Nvidia people amaze me with how stupid they think we are. Well, some are: the ones that cheer for CUDA but don't use it in anything they do, or the people that cheer for PhysX but have never played a game with PhysX in their life. The brainwashed people.
CUDA is needed for PhysX acceleration. :roll:

As for PhysX, 3DMark Vantage? :banghead: :banghead:
#39
Unregistered
I'm getting whiffs of the FX architecture... All the chickens have been counted.
#40
Unregistered
G-wiz: I hate what Nvidia and ATi are doing. They can't let the user enjoy their video cards for at least half a year or more without coming out with new cards, whether it's a die shrink or an entirely new card.
How dare they push the frontiers of consumer technology, the thoughtless bastards!

:roll:
#41
Binge
Overclocking Surrealism
soryuuha: CUDA is needed for PhysX acceleration. :roll:

As for PhysX, 3DMark Vantage? :banghead: :banghead:
I've also used CUDA to assist in 3D rendering. When there are extra features, you can't call it anything but a luxury. :p
#42
silkstone
Well... just another upgrade. We won't see any difference between this and current cards using DX9 and 10. We've got to wait till DX11 before games start to look better, and by the time that's out, Nvidia will be on its 2nd gen and ATI will have brought out something better.
#43
Selene
So much wishful thinking.
LOL, some of you are right: we won't even have a need or use for DX11 when this hits, and when the first WOW-factor DX11 games arrive, this card will be like the 8800 GTX on Crysis, with lots of bitching going on.
So as cool as it will be to have the first top-end DX11 GPU, it will be just like all the rest: plays DX9/10 games maxed out, but will be hard pressed to cope with the DX11 killers like "Crysis 2: The Other Island" and its expansion, "Crysis: The Cave We Forgot About on the Other Side of the Island".
#44
LifeOnMars
G-wiz: I hate what Nvidia and ATi are doing. They can't let the user enjoy their video cards for at least half a year or more without coming out with new cards, whether it's a die shrink or an entirely new card.

This just left me pondering whether I should buy a GTX 260 216 or wait a whole year for the GT300. If this could be described in words, fuckery would be the term used to describe both companies.
I'm hoping you are planning to step up your resolution, because, as it is, your two 4870s in CrossFire are drastic overkill for 1280x1024, especially with that processor.
#45
DarkMatter
While all that has been said about DX11 is true, remember that the news is not about that; it's about this: "the GT300 will move the GPU to a MIMD (multiple instruction, multiple data) mechanism. This is expected to boost the computational efficiency of the GPU many-fold. The ALU cluster organization will be dynamic, pooled, and driven by a crossbar switch."

That's not related to DX11 or any other API; that's how the GPU works internally, and it's a HUGE improvement over SIMD.

I agree with lemonadesoda that this might mostly affect GPGPU and very little of graphics processing, but that assumes the load balancing is fairly efficient nowadays, which we really don't know. I don't think we know enough about how exactly the SPs work on that front. IMO most people assume that Nvidia's SPs are very efficient, because they are certainly much more efficient than ATI's when load balancing ("scalar" versus VLIW and all), and IMO that makes us believe Nvidia's have to be above 90-95% efficiency. But the fact remains that Nvidia's might still be below 75%, and if MIMD can raise that to around 90-95%, that's already a 15-20% increase for free. Add into the mix what lemonade said about running graphics and CUDA at the same time, and also that the card will probably be able to perform a context change (from vertex to pixel, for example) in the same clock, and we might really be getting somewhere.
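
Spelling out the arithmetic behind that estimate (the 75% and 90-95% utilization figures above are guesses, not measurements):

\[ \frac{0.90}{0.75} = 1.20 \qquad \frac{0.95}{0.75} \approx 1.27 \]

In other words, roughly 1.20x to 1.27x the throughput at the same clock speed and ALU count; the "15-20%" above is the same gain expressed in percentage points of utilization.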

Maybe this helps answer your last question, lemonade? It's funny, because I thought of this and almost convinced myself of the possibility as I was writing... :laugh:
#46
vampire622003
There aren't even that many games out for DX10 yet, lol, nor have we perfected it. :laugh: