Tuesday, December 29th 2009

NVIDIA Fermi-based GeForce GPU Further Delayed?

NVIDIA's next-generation GeForce GPU based on the Fermi architecture has reportedly been delayed again, from its originally expected time-frame of January to March 2010. NVIDIA, for its part, has maintained that Fermi-based GeForce GPUs will be released sometime in Q1 2010, and with a March launch that claim would still hold true.

Fermi's development history is marked by late arrivals. The DirectX 11 compliant architecture was announced in October 2009 to counter the already-available DirectX 11 compliant ATI Radeon HD 5800 series GPUs. In mid-November, the company released the first products based on the architecture - GPGPU accelerators under the NVIDIA Tesla HPC banner. An alleged working prototype GeForce accelerator was spotted around the same time, with word doing the rounds that NVIDIA would be ready with the new GeForce GPU in early Q1, probably coinciding with CES. Faced with further delays, NVIDIA has reportedly notified its partners that the new GPUs will reach the market only in March.

NVIDIA plans to launch the 40 nm Fermi-based GF100 GPU - DirectX 11 compliant, with GDDR5 memory support - in March, with a GF104 version to follow. Until then, the mainstream-through-performance segments will be defended by the GeForce GTS 250, GT 240, GT 220, 210, and 9800 GT, against a fortified mainstream lineup from AMD consisting of the ATI Radeon HD 5670/5650 (codenamed "Redwood") and the ATI Radeon HD 5450 (codenamed "Cedar"). These DirectX 11 compliant GPUs from AMD will be released in January.
Source: DigiTimes

136 Comments on NVIDIA Fermi-based GeForce GPU Further Delayed?

#126
Unregistered
Eva01Master: Maybe I'm being optimistic, but what's the point of being pessimistic and dismissing the new releases, believing the new NVidia architecture will be a fail? I can't even recall which was the last architectural fail from the Green Team (actually I can, to me the last failure was the 7900GX2, which was corrected in part by the 7950GX2), and BTW the 4XXX X2 or whatever had nothing against the GTX295...
Well, actually that's the reason I don't buy (waste money on) another 5770 to CrossFire, and I'm waiting till March to see the NVIDIA offerings. I hope the price will be right tho...
#127
ToTTenTranz
hayder.master: bla bla bla, i win my word still no DX11 nvidia cards until Q2 2010
There, I fixed it for you.
2010 doesn't sound that far away anymore, since it'll be here in a few hours ;)
#128
eidairaman1
The Exiled Airman
Eva01Master: Maybe I'm being optimistic, but what's the point of being pessimistic and dismissing the new releases, believing the new NVidia architecture will be a fail? I can't even recall which was the last architectural fail from the Green Team (actually I can, to me the last failure was the 7900GX2, which was corrected in part by the 7950GX2), and BTW the 4XXX X2 or whatever had nothing against the GTX295...
9800GX2, Mobility Parts. 8800GTS G92
#129
Eva01Master
I had a 9800GX2 and it rocked my gaming rig until I sold it. Can't say anything about the 8800GTS, never had one, and the G92 architecture has been reused heavily, but it's still a great architecture. I wouldn't dismiss an overhauled '69 Mustang just because it's from 1969...
#130
FordGT90Concept
"I go fast!1!11!1!"
If NVIDIA doesn't get rid of those ECC memory controllers on Fermi, it is going to be a crappy video card. ECC is great for Tesla and horrible for GeForce...

I'll just say it: NVIDIA made a bad choice focusing on Tesla. They are expecting to find a huge market but they are disowning an existing huge market in order to infiltrate another. Because only corporations/governments looking to buy super computers would even look at Tesla, I think they overestimate how much money there is to be had in that segment.
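
For reference, whether ECC is actually active on a given board is something the CUDA runtime reports; below is a minimal host-side sketch (an illustration only, assuming a machine with the CUDA toolkit installed - the calls are the standard runtime API, everything else is made up for the example):

```cpp
// Minimal sketch: report whether the driver says ECC is enabled on device 0.
// Tesla-class Fermi boards can run with ECC on; GeForce boards report it off.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);  // query device 0
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("%s: ECC %s\n", prop.name, prop.ECCEnabled ? "enabled" : "disabled");
    return 0;
}
```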
#131
ToTTenTranz
FordGT90Concept: If NVIDIA doesn't get rid of those ECC memory controllers on Fermi, it is going to be a crappy video card. ECC is great for Tesla and horrible for GeForce...

I'll just say it: NVIDIA made a bad choice focusing on Tesla. They are expecting to find a huge market but they are disowning an existing huge market in order to infiltrate another. Because only corporations/governments looking to buy super computers would even look at Tesla, I think they overestimate how much money there is to be had in that segment.
The graphics card market will be thrown into oblivion as soon as cloud computing is standardized..

That said, they know that pushing further into the server market will be the only way to "stay alive".
#132
Fourstaff
ToTTenTranz: The graphics card market will be thrown into oblivion as soon as cloud computing is standardized..

That said, they know that pushing further into the server market will be the only way to "stay alive".
Cloud computing is still quite far away, and you would need a lot of bandwidth to pump all those pixels. I prefer to think that the line between the CPU and GPU is getting blurred: Intel tried to get into the GPU sector through Larrabee (but failed), AMD got ATI and so is in the best position, and Nvidia realises it needs to migrate towards the CPU side through Tesla.
#133
Steevo
www.newegg.com/Product/Product.aspx?Item=N82E16814121346

VS

(/-\/-\/-\/-\/-\) Vapor

Maybe I should.

Cloud computing is more focused on small multithreaded jobs that can be reassembled out of order, with little impact on the rest of the job or on the continuation of other parts of the job - like Folding@home, SETI@home, climate computation, etc.
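
A minimal sketch of that "hand out independent chunks, reassemble in any order" pattern (plain C++ with std::async; process_chunk and the numbers are hypothetical stand-ins for real work):

```cpp
// Sketch of the Folding@home-style pattern: independent work units that can
// complete in any order and still combine into one result.
#include <cstdio>
#include <future>
#include <vector>

// Hypothetical work unit: each chunk depends on nothing but its own id.
static double process_chunk(int chunk_id) {
    double acc = 0.0;
    for (int i = 0; i < 1000000; ++i)
        acc += (chunk_id + 1) * 1e-6;  // stand-in for real number crunching
    return acc;
}

int main() {
    const int num_chunks = 8;
    std::vector<std::future<double>> pending;

    // Hand out chunks; they may finish in any order, on any thread (or machine).
    for (int id = 0; id < num_chunks; ++id)
        pending.push_back(std::async(std::launch::async, process_chunk, id));

    // Reassemble: the combined result doesn't care about completion order.
    double total = 0.0;
    for (auto &f : pending)
        total += f.get();

    std::printf("combined result: %f\n", total);  // build with -pthread on Linux
    return 0;
}
```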

Cloud computing will never have the bandwidth, thread handling, or tasking available in the next few years to be a viable force to drive a real-time 3D game. Look at DX11: it is an exercise in seeing how far we can take a single chip/AIB to compute multithreaded applications natively, and even with its huge bandwidth and power we still fail to fully use the core.
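
To put rough numbers on the bandwidth point, here is a back-of-envelope calculation (all figures are assumptions for illustration: uncompressed 1080p output at 24-bit colour and 60 FPS, compared against a ~10 Mbit/s broadband line):

```cpp
// Back-of-envelope: bandwidth needed to stream uncompressed rendered frames.
#include <cstdio>

int main() {
    const double width = 1920, height = 1080;  // assumed 1080p output
    const double bits_per_pixel = 24;          // assumed 24-bit colour
    const double fps = 60;                     // assumed 60 frames per second
    const double broadband_bps = 10e6;         // assumed 10 Mbit/s home line

    const double stream_bps = width * height * bits_per_pixel * fps;
    std::printf("Uncompressed 1080p60: %.2f Gbit/s\n", stream_bps / 1e9);
    std::printf("Roughly %.0fx a 10 Mbit/s broadband connection\n",
                stream_bps / broadband_bps);
    return 0;
}
```

That works out to roughly 3 Gbit/s uncompressed, about 300 times a 10 Mbit/s line, so any remote-rendering service has to lean entirely on aggressive compression, which trades bandwidth for latency and image quality.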
#134
FordGT90Concept
"I go fast!1!11!1!"
Seeing as Google Chrome OS got mostly frowns, and it was to be the first cloud operating system, I'd say cloud computing won't see much use for at least another decade. It really comes down to a simple fact: even the smallest of computers have more processing power than they do bandwidth. Cloud computing, right now, would only work well on a gigabit+ network with servers hosting virtual machines and client computers running off the LAN. Even then, they are restricted to running simple applications like word processing, spreadsheets, and internet browsing. We also can't forget that if you are looking to run multiple cloud computers, you are better off with multi-core CPUs and a cheap video card. GPUs are pitiful at virtualization, namely because they have limited access to the hard drives.

Fermi will be dead in 10 years (long replaced by something else). Cloud computing won't see much use for 10 years. Looks like a bad decision to me.
#135
Mussels
Freshwater Moderator
FordGT90Concept: It really comes down to a simple fact: even the smallest of computers have more processing power than they do bandwidth.
exactly my view. the other aspect to it all is the slow write speeds of HDDs, so even in a world with infinite (or 1-10 Gb) bandwidth, you need enough RAM in the machine to fit whatever you're streaming - and that leaves you with some severe limitations in the low-power PC segment that this stuff is designed for
#136
FordGT90Concept
"I go fast!1!11!1!"
RAM is more expensive per GiB than even SSDs are. The whole idea, at this time, is ridiculous.

It is mostly Google and Intel pushing the idea of cloud computing. Google wants your information and Intel wants corporations to buy the hardware because they have deeper pockets (8-way, 8-core Xeon platforms, anyone?). There is little benefit here for the consumer.