GeForce GTX 480 PCI-Express Scaling Review


Introduction





Thanks to the modular, scalable design of PCI-Express, which consists of independent data-paths called 'lanes', motherboard designers can divide a limited PCI-Express lane budget across multiple PCI-Express x16 slots. On mid-range motherboards, and particularly on most motherboards based on Intel's newest P55 chipset, 16 PCI-Express lanes drive two PCI-Express slots with 8 lanes of bandwidth each. These two slots can then accommodate two graphics cards for multi-GPU performance scaling using ATI CrossFireX or NVIDIA SLI technology. It should also be noted that PCI-Express 2.0 doubles the bandwidth available per lane, so when you read about PCI-E x8 2.0 in this review, the results are also representative of a PCI-E x16 link running in 1.1 mode - as found on many older motherboards. Another important point is that PCI-Express is forward and backward compatible: you can run any PCI-E 1.x card in a 2.0 slot, or any 2.0 card in any 1.x slot, with reduced bandwidth of course.
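The arithmetic behind the "x8 2.0 equals x16 1.1" equivalence can be sketched as follows (a quick sanity check of our own, not from the review; PCIe 1.x signals at 2.5 GT/s per lane, PCIe 2.0 at 5.0 GT/s, both with 8b/10b encoding overhead):

```python
def pcie_bandwidth_gbs(lanes, gen):
    """Approximate one-direction bandwidth in GB/s for a PCIe 1.x or 2.0 link."""
    raw_gt_per_s = {1: 2.5, 2: 5.0}[gen]      # giga-transfers per second, per lane
    # 8b/10b encoding carries 8 data bits per 10 transferred bits;
    # divide by 8 to convert bits to bytes.
    return lanes * raw_gt_per_s * 8 / 10 / 8

print(pcie_bandwidth_gbs(16, 2))  # x16 2.0 -> 8.0 GB/s
print(pcie_bandwidth_gbs(8, 2))   # x8  2.0 -> 4.0 GB/s
print(pcie_bandwidth_gbs(16, 1))  # x16 1.1 -> 4.0 GB/s, same as x8 2.0
```

As the numbers show, halving the lane count on a 2.0 link lands you at the same raw bandwidth as a full x16 link on a 1.1 board, which is why the x8 2.0 results below double as 1.1-era reference points.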



In this review, we test the impact of running the NVIDIA GeForce GTX 480 graphics accelerator in PCI-Express slots that are electrically PCI-Express 2.0 x16, x8 and x4. We wanted to test PCI-Express x1 too, but the card did not boot up in this configuration. The lane change is done by covering the data-path contacts with thin, insulating adhesive tape, which disables those lanes while keeping the device functional. Tests at PCI-Express 2.0 x8 will show you how the GTX 480, with its seemingly heavy system bandwidth requirements, fares on systems whose physical PCI-Express x16 slots are electrically x8. Tests at PCI-Express 2.0 x4 will give you a rough idea of how it fares on systems with physical x16 slots that are electrically x4 (there are plenty of such motherboards), and also on PCI-Express 1.1 systems with electrically x8 slots.

For your reference, we also wrote a similar article about the HD 5870, back in 2009.

Test System

CPU: Intel Core i7 920 @ 3.8 GHz (Bloomfield, 8192 KB Cache)
Motherboard: Gigabyte X58 Extreme (Intel X58 & ICH10R)
Memory: 3x 2048 MB Mushkin Redline XP3-12800 DDR3 @ 1520 MHz 8-7-7-16
Hard Disk: WD Raptor 740ADFD 74 GB
Power Supply: BFG ES-800 800W
Software: Windows 7 64-bit
Drivers: NVIDIA 195.62; ATI Catalyst 9.12; GTX 480: 197.17
Display: LG Flatron W3000H 30" 2560x1600
  • All video card results were obtained on this exact system with the exact same configuration.
  • All games were set to their highest quality setting.
Each benchmark was tested at the following settings and resolutions:
  • 1024 x 768, No Anti-aliasing. This is a standard resolution without demanding display settings.
  • 1280 x 1024, 2x Anti-aliasing. Common resolution for most smaller flatscreens today (17" - 19"). A bit of eye candy turned on in the drivers.
  • 1680 x 1050, 4x Anti-aliasing. Most common widescreen resolution on larger displays (19" - 22"). Very good looking driver graphics settings.
  • 1920 x 1200, 4x Anti-aliasing. Typical widescreen resolution for large displays (22" - 26"). Very good looking driver graphics settings.
  • 2560 x 1600, 4x Anti-aliasing. Highest possible resolution for commonly available displays (30"). Very good looking driver graphics settings.

BattleForge


BattleForge, a card-based RTS, is developed by the German EA Phenomic studio. A few months after launch, the game was transformed into a Play 4 Free branded title. That move, and the fact that it was bundled with a large number of ATI cards, made it one of the better-known RTS games of 2009. As a player, you assemble your deck before a match to select the units that will be available; these can be drawn from the forces of Fire, Frost, Nature and Shadow, which complement each other.
The BattleForge engine has full support for DX 9, DX 10 and DX 10.1. We used the game's internal benchmark tool to acquire our results.
