NVIDIA GeForce GTX 480 Fermi


Value and Conclusion

  • According to NVIDIA, the suggested retail price of the GeForce GTX 480 is USD 499. Whether that price holds up in the market remains to be seen and will depend on supply and demand.
Pros:
  • Fastest single GPU card to-date
  • DirectX 11 support
  • Substantial performance improvements in DirectX 11
  • GDDR5 memory
  • Software voltage control seems possible
  • Native HDMI output
  • Support for NVIDIA 3D Vision Surround
  • Support for CUDA, PhysX and 3D Vision
  • Improvements to integrated HDMI audio device
Cons:
  • High power draw
  • Noisy cooler
  • High temperatures
  • Fairly high price
  • Paper launch
  • High temperatures and power draw makes SLI and triple SLI difficult
  • Limited availability
  • Only 480 shaders
  • DirectX 11 won't be relevant for quite a while
8.2

The first thing comparable to the NVIDIA GeForce GTX 480 that comes to mind is a powerful yet gas-guzzling SUV: built for cross-country drives with heavy loads, in comfort. Often stereotyped as fuel-inefficient, those vehicles don't compromise one bit on performance, space, passenger comfort, or their load of accessories, utilities and features. The same is true of the GeForce GTX 480. The card sports solid overall build quality. It offers all the features you'd want from a graphics card of this generation, plus a number of extras that, perhaps surprisingly, do have takers. This is the most powerful graphics processor to date, and one of the most complex pieces of silicon ever made, with NVIDIA packing 3.2 billion transistors into a single chip.

The GeForce GTX 480 shines in nearly all games in our test bench, posting the highest performance figures for a single-GPU graphics card. It performs especially well in some of the newest DirectX 11 generation tests, something worth noting. The fact that it's a single-GPU accelerator works to its advantage: multi-GPU graphics cards require application profiles specific to the game you're playing and can run into limitations in windowed modes, among other, more trivial issues. A single-GPU graphics card eliminates most of those by design.
For the larger part of our review, the GTX 480 doles out performance higher than the GeForce GTX 295 and stays in second place overall. In DirectX 11 applications, particularly in tests that put a heavy geometry-processing load on hardware tessellation, the GTX 480 shines thanks to NVIDIA's distributed, parallelized implementation of tessellation units across the GPU.
At the highest resolutions, however, we expected more from the GTX 480. With 1.5 GB of memory and over 170 GB/s of memory bandwidth at its disposal, it should have been able to power through high-resolution tests. At 2560 x 1600, the GeForce GTX 295 takes a slight performance lead, which isn't acceptable for a card of this stature. The performance advantage the GTX 480 holds over competing graphics cards shrinks as resolution increases: it is strongest around the 1080p mark and loses ground at higher resolutions.

The elaborate cooling mechanism does allow for some overclocking, although don't expect to set records with it. Anyone looking to top leaderboards, even at the tech-forum level, with the GTX 480 should spare some dough for at least water cooling. This is where a slightly scary flip side of the GTX 480 starts: the GPU runs extremely hot. NVIDIA set its thermal limit at a vaporizing 105 degrees Celsius. It's a pleasant 22 degrees outside this time of year, being spring, yet the GTX 480 still reaches a scorching 96 degrees Celsius under typical gaming load. Crossing the 100-degree mark won't be tough for this GPU in even slightly hotter places, especially with summer coming up.
To keep the card from pretty much pulverizing itself, NVIDIA's elaborate two-slot GPU cooler has to keep up while respecting NVIDIA's thermal limits. Besides thick heatpipes that make the card easily an inch taller than it typically should be, and the hot grill on the front side that doubles as a heatsink, the fan chips in with heavy airflow. The fan spins up under even the slightest 3D load; with this in mind, NVIDIA gave it a more powerful motor than it used in the past on similar cooling assemblies. The result is a cooler that ranks among the loudest in our test bed when under load.

NVIDIA first publicized its maximum board power as <300 W (see below), then retracted it and posted it as 250 W, probably fearing bad PR. We disagree with their 250 W figure: investigating maximum board power, we landed at the 320 W mark, which is way off NVIDIA's claims. Get ready to have at least a 600 W PSU if you plan to run one of these, or a 900-1000 W unit for 2-way SLI. While at 10.5 inches it's not as long as some of the latest cards from ATI, it still needs a very airy case to work in. For SLI, NVIDIA recommends you use only NVIDIA-certified cases, as only those will be able to keep two or three cards from fuming up.
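The PSU recommendations above follow simple headroom arithmetic. A rough sketch of how such numbers come together, assuming our measured ~320 W per card plus a hypothetical ~150 W for the rest of the system and ~20% headroom (those last two figures are our own illustrative assumptions, not NVIDIA's):

```python
import math

def recommend_psu_watts(num_cards, card_power=320, system_power=150,
                        headroom=0.20, step=50):
    """Estimate a PSU rating: total draw divided by (1 - headroom),
    rounded up to the next `step` watts.

    card_power=320 is our measured maximum board power for the GTX 480;
    system_power and headroom are illustrative assumptions.
    """
    total_draw = num_cards * card_power + system_power
    return math.ceil(total_draw / (1 - headroom) / step) * step

print(recommend_psu_watts(1))  # 600
print(recommend_psu_watts(2))  # 1000
```

With these assumptions the arithmetic lands on 600 W for a single card and 1000 W for 2-way SLI, in line with the recommendations above.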

Finally, let's look at the value proposition. NVIDIA packs a set of features along with this card, such as CUDA, arguably the most popular GPU-computing API among consumers. CUDA helps speed up video encoding and image processing, among other media-acceleration tasks, which is a nice addition. NVIDIA doesn't leave out other APIs either, with support for OpenCL and DirectCompute 5.0. Some consumers seeking higher immersion may benefit from stereo 3D, although that's a $130 expenditure for the 3D Vision kit. Another feature is 3D Vision Surround. Comparable to ATI's Eyefinity technology, it lets a single desktop span three physical displays. The catch, however, is that you need two of these cards in SLI to connect the three displays, unlike ATI Radeon HD 5700 and higher accelerators, where a single card can drive all three. Stereoscopic 3D also needs monitors with a 120 Hz refresh rate, so each eye can view images at 60 Hz (fluid and flicker-free). That more than doubles the cost of the hardware, making the technology accessible to a far smaller segment of the market than the one ATI is hoping to cash in on.

At US $499, we don't have many complaints about the price tag. Thanks to competition, NVIDIA couldn't price this card any higher, and factoring in some of its demerits, the price is just about acceptable for its target buyers. Another problem buyers could face is availability, not because this is some new superhero action figure that flies off the shelves, but because NVIDIA is making only a limited number of these initially. There are "tens of thousands" (read: just over 10,000) of cards in all making it to stores the world over. NVIDIA expects online stores to have them stocked by mid-April. If you plan to buy one, make sure you have an airy case, a good PSU, optionally a headset if you don't want the fan noise to distract you, and some pagan magic worked on your friendly neighborhood hardware vendor so he actually has one to sell. Go forth bravely.

Mar 29: NVIDIA just sent in a statement regarding the power consumption:
"NVIDIA lists the TDP as 250 Watt. This value is the maximum value measured during PC gaming."

NVIDIA also claims to have never published a power number of "295 W" and insists that this be corrected. The only thing we can find is "<300 W" in one of their leaked slides, so it looks like everyone was wrong. NVIDIA's explanation for the "<300 W" number is that the final clocks [and shader count] had not been decided yet at that time. Either way, our measurement of 320 W stands as what we would call "maximum board power".