Sunday, September 21st 2008

GT200(b), GT300 SKUs Make for Early Sighting

Australian e-tailer Austin Computers has already begun listing two future NVIDIA stock keeping units (SKUs), which, obviously, are yet to arrive. The first listing is for a GeForce GTX 280+, whose preliminary specifications suggest a 55 nm variant of the GeForce GTX 280. Looking at how NVIDIA handled the 9800 GTX+, the new GTX 280+ could feature higher clock speeds to make it more competitive.

Second to be listed, rather surprisingly, is a GeForce GTX 350, based on the GT300 graphics processor. Again, what little specification is listed shows the card built on a 55 nm fabrication process and holding 2 GB of GDDR5 memory across a 512-bit wide memory bus. It is mentioned that the product could be available any time in Q4 2008. Is NVIDIA gearing up for X'mas?
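For context, a 512-bit GDDR5 bus would imply enormous bandwidth for the time. A quick back-of-the-envelope sketch (the memory clock here is purely an assumption, since none was listed):

```python
# Rough GDDR5 bandwidth estimate for the rumored GTX 350.
# The 900 MHz memory clock is an assumption -- no clock was listed.
bus_width_bits = 512
memory_clock_hz = 900e6          # assumed base memory clock
gddr5_transfers_per_clock = 4    # GDDR5 transfers 4 bits per pin per clock

bandwidth_bytes = (bus_width_bits / 8) * memory_clock_hz * gddr5_transfers_per_clock
print(f"~{bandwidth_bytes / 1e9:.0f} GB/s")  # ~230 GB/s under these assumptions
```

Even at a modest assumed clock, that would comfortably exceed the GTX 280's roughly 140 GB/s.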

33 Comments on GT200(b), GT300 SKUs Make for Early Sighting

#1
ShadowFold
Anyone wanna buy me one for Xmas :D I think it's FUD tho, I think they meant GTX 250 or something, the 200s JUST came out.... But knowing NVIDIA lately I highly doubt that...
Posted on Reply
#2
WhiteLotus
surely a typo - either that or Austin Computers is doing some very interesting long wait pre-orders
Posted on Reply
#3
MilkyWay
unless the 300 series is just a remake-and-tweak series like the 9000 series was
Posted on Reply
#4
Selene
I'm thinking the GTX 350 will be 2x GTX 260 55nm 1 GB GDDR5 cards, so it will be a GX2 card with 2 GB of GDDR5.
I can't see them making a new card with a 512-bit bus and 2 GB of GDDR5; it has to be a dual card like the 9800 GX2.
Posted on Reply
#5
PVTCaboose1337
Graphical Hacker
ShadowFold
Anyone wanna buy me one for Xmas :D I think it's FUD tho, I think they meant GTX 250 or something, the 200s JUST came out.... But knowing NVIDIA lately I highly doubt that...
I'll buy you one if you send me your 4850!

But seriously, I think it must be a misprint, as the GT300 is probably the GT2xx or something.
Posted on Reply
#6
Megasty
I'm always up for new pc parts. If this holds up or eats the 4870x2 then I'll definitely be snagging one of these up. Bah, I don't even feel like using my 4870x2 since it sounds & feels like there's a small jet in my rig. I have the feeling I won't be using this either until games come out that need all the power.

The only problem I have with this info is that it's FUD at best (GTX 350). Why would NV suddenly break the mold and use a ton of super-expensive memory on a single card? Whatever, I just hope it costs a reasonable amount. They don't need to go through any more GTX 200 series fiascos.
Posted on Reply
#7
Sent1nel
That's a hell of a misprint to make...
I doubt that it is.

Logic says it could be a dual-chip variant on the 55 nm scale, with a 256-bit (or close) bus as opposed to the 512-bit bus. This would make sense for the GDDR5 use and its amount.

If that is the case, then the GTX 280+ could have a die shrink with a changed memory bus to allow the core to be scaled down (and save more money, just like the 9800 GTX vs. 8800 GTX).

But then again I could easily be wrong.
Posted on Reply
#8
DarkMatter
I don't see why the card should be a dual card of the existing GPUs. If GT300 comes later this year, it would launch right on schedule; GT200 was way behind schedule. Both graphics companies have different teams for different generations of chips that work on them long before the previous one launches. GT200's launch date has nothing to do with this one, so GT300 had to be targeted for late 2008 when the project started. If they succeeded, there's no technical reason to postpone the launch.
Posted on Reply
#9
WarEagleAU
Bird of Prey
Wow, that will topple the HD 4870 no problem, hell even the 4870X2 if that is true.
Posted on Reply
#10
nflesher87
Staff
I wonder how much traffic posting these brought to their site
Posted on Reply
#11
hat
Enthusiast
Jesus Christ, 2GB video memory? Is that a monolithic GPU design?
Posted on Reply
#12
AMDCam
Well the schedule's been pretty screwed up with Nvidia anyway, with everyone anticipating the 9000 series, it sucking badly, and out of nowhere the sweetass GTX series coming out about a month later. I've never seen Nvidia do that before. But yeah I doubt a GTX3xx series is gonna be out soon.
Posted on Reply
#13
Binge
Overclocking Surrealism
That would be one hell of a comeback. GDDR5 and that mem bus... with the shader clocks -twitch- I want to see benches :banghead:... NOW.
Posted on Reply
#14
Unregistered
Is it me, or are we missing what's really going on here? To me this is about the Tesla CPU/GPU computing board... NVIDIA says this will replace Cray computers. Makes me wonder how the i7 is going to compare.

NVIDIA Tesla GPU Computing Processor, a dedicated computing board that scales to multiple Tesla GPUs inside a single PC or workstation. The Tesla GPU features 128 parallel processors, and delivers up to 518 gigaflops of parallel computation. The GPU Computing processor can be used in existing systems partnered with high-performance CPUs.

I mean really what more would you ever need?
#15
Hayder_Master
I'm sure it will be too expensive. Good on NVIDIA for using GDDR5; the bad thing is it's still 55 nm.
Posted on Reply
#16
R_1
thraxed
Nvidia says this will replace cray computers...
Man, I remember headlines "Nvidia claims G92 will be a 1 Teraflop beast - the answer to Crysis at 2560x1600"
For this Tesla BS - you mean theoretical peak performance : "to sustain half a teraflop and just sending out one 32-bit single precision FP number per flop to the memory - not even counting the reads for the operands - would take two terabytes/s sustained memory speed. GPUs don't have nearly as much cache or other memory on chip as typical CPUs. Tesla offers DP throughput at 1/8 of the SP peak".
In real-world applications Tesla will be comparable to two Intel quad-core CPUs. And how many real-world CUDA-coded applications are available at this moment?
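The bandwidth argument quoted above does check out arithmetically. A minimal sketch of the same calculation:

```python
# Reproducing the quoted argument: sustaining half a teraflop while
# writing one 32-bit (4-byte) single-precision result per flop to memory,
# not even counting reads for the operands.
flops_sustained = 0.5e12      # half a teraflop
bytes_per_result = 4          # one 32-bit float per flop
required_bandwidth = flops_sustained * bytes_per_result
print(f"{required_bandwidth / 1e12:.0f} TB/s")  # 2 TB/s, as quoted
```

Which is why, in practice, GPU computing only reaches peak throughput when most operands stay in on-chip registers and shared memory rather than round-tripping through DRAM.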
Posted on Reply
#17
Tatty_One
Senior Moder@tor
Rumour = speculation divided by... make-believe multiplied by... guesswork, which results in bull*shit! Just a method of increasing hits on a site and potentially increasing revenue/sales.

Sorry, not in a very speculative mood today :o
Posted on Reply
#18
mdm-adph
Selene
I'm thinking the GTX 350 will be 2x GTX 260 55nm 1 GB GDDR5 cards, so it will be a GX2 card with 2 GB of GDDR5.
I can't see them making a new card with a 512-bit bus and 2 GB of GDDR5; it has to be a dual card like the 9800 GX2.
I think this makes the most sense. Nvidia is probably going to do what ATI did with the HD 3000 series and just re-brand modifications of existing chips. It's the quickest and easiest thing to do.
Posted on Reply
#19
DarkMatter
R_1
Man, I remember headlines "Nvidia claims G92 will be a 1 Teraflop beast - the answer to Crysis at 2560x1600"
For this Tesla BS - you mean theoretical peak performance : "to sustain half a teraflop and just sending out one 32-bit single precision FP number per flop to the memory - not even counting the reads for the operands - would take two terabytes/s sustained memory speed. GPUs don't have nearly as much cache or other memory on chip as typical CPUs. Tesla offers DP throughput at 1/8 of the SP peak".
In real-world applications Tesla will be comparable to two Intel quad-core CPUs. And how many real-world CUDA-coded applications are available at this moment?
Hmm, but GT200 already supports DP on some of its shaders, and there are rumors the next chip could be fully DP capable. NVIDIA is betting strongly on CUDA, so they won't let such minutiae stop them.

The inherent inefficiency is another thing, but seeing how GPUs increase their raw power much faster than CPUs, while staying far superior in peak performance per watt, I bet it won't be a problem. FASTRA (the $4000 supercomputer made with 4x 9800 GX2) was a very promising solution that has had good acceptance in some circles. I mean, there is a place for CUDA, Tesla and similar solutions; even though they might not be as flexible as CPU-based supercomputers, cheap specialised solutions are the future for most areas. Why would you want a supercomputer that can perform 10 different complex tasks at the same time and costs millions of dollars plus extreme cooling plus maintenance, if you can have 10 cheap machines that perform better for a fraction of the money and are far more accessible to researchers?

BTW, Tesla on many real-world applications (like tomography, F@H) can be MUCH (10x, 20x, 100x?) faster than two quads. The aforementioned FASTRA "supercomputer" is indeed faster than a 256-node, 3.5-million-euro blade supercomputer on certain tasks. And that was based on a pure GPU architecture with no "compromises" in favor of GPU computing.

www.dvhardware.net/article27538.html
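The cost gap claimed in the FASTRA comparison above is easy to put in perspective (treating dollars and euros as roughly comparable for a ballpark ratio):

```python
# Rough cost ratio from the figures quoted above: a ~$4000 FASTRA box
# (4x 9800 GX2) vs. a 3.5-million-euro 256-node blade supercomputer.
# Currency difference is ignored; this is only an order-of-magnitude sketch.
fastra_cost = 4_000
blade_cost = 3_500_000
ratio = blade_cost / fastra_cost
print(f"~{ratio:.0f}x cheaper")  # ~875x
```

Even if the GPU box only matched the blade cluster on a narrow class of workloads, a three-orders-of-magnitude price gap explains the interest.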
Posted on Reply
#20
dalekdukesboy
Megasty
I'm always up for new pc parts. If this holds up or eats the 4870x2 then I'll definitely be snagging one of these up. Bah, I don't even feel like using my 4870x2 since it sounds & feels like there's a small jet in my rig. I have the feeling I won't be using this either until games come out that need all the power.

They don't need to go through any more GTX 200 series fiascos.
Well, for one I agree with your first paragraph wholeheartedly... all I have to do is read the reviews on the 4870X2, and yes it's great to game with, but then I get to power consumption and also the heat it generates... holy crap. It's effectively a space heater in the winter and either a good way for a poor computer enthusiast to go broke on electric bills or a rich enthusiast to spend all the extra money they weren't sure what to do with.

Anyhow, I think the 4870X2 is a nice card for ATI to say yes, we hold the crown, but truthfully if you examine its performance per dollar and performance per watt... it's anything but efficient. NOT to say it's a bad card, obviously; I just never understood how it's been raved about so much when to me the real jewel is the 4850... when you get it with a good cooler and good 0.8 ns GDDR3 it can overclock really well and is right at the heels of the 4870 and even the GTX 260 in some benches, yet it only uses about the same amount of power as an 8800 GTS 512 but beats it by a good margin in every bench you look at. To me, the real excitement should be over the 4850X2... if they actually give it some good cooling and good memory, you then have an extremely fast card, and hopefully a pretty efficient one as well.
Posted on Reply
#21
Darkrealms
Interesting. Other than the "new" GTX 260 with the added shaders, NVIDIA hasn't really released anything lately to muddy the waters.
I really hope NVIDIA does the die shrink and the GDDR5 memory. That would make for some very nice performance increases, I would think. They already OC well on the current die and often have good memory OCing as well.
I will be watching this for an upgrade path through EVGA ; )

I hope ATI will have something to compete. AMD needs everything it can muster to stay afloat right now : (

[AMD and Nvidia fan, no ati for me.]
Posted on Reply
#22
Morgoth
I hope this will force ATI to drop the HD 4870 X2 price.
Posted on Reply
#23
newconroer
I wouldn't be surprised if a 2 GB GDDR5 55 nm variant was offered up around late '08 or early '09. I said previously that by not showing the 280+ or 200b at NVISION, NVIDIA must have been planning either to scrap the 200b and release the GT300 instead, or to keep the 200b but hold off and release it around the time of GT300, phasing out the old 280s.

This would effectively mean

GT300>X2
280+>4870 1gb

etc.
Posted on Reply
#24
soldier242
Mmhm, I don't quite like this "news". I bought a GTX 260 two months ago and now I can't really get a good price for an almost-new card if I wanted to resell it. Daaaaaaaamn, I hope the card lasts a bit longer :slap:
Posted on Reply
#25
swaaye
Well, I've had an 8800 GTX for almost 2 years now, through a few launches. No reason the GTX 260 can't do the same for you, as long as you don't just jump at their whim.
Posted on Reply