
GPU-Z PCI bus speed crossfire

Hi,

I'll grab more screens and in-depth details soon, awaiting new PSU.

Running Asus Sabertooth Z97 with Sapphire Vega64 Nitro+ in crossfire.

Both PCI-E slots should be showing x8 bus speed since the x16 link is shared, but each card is showing as running at x16.
 
Download AIDA64 and run the GPGPU benchmark; the read/write-to-GPU tests should give you the bandwidth available to each card.
 
Download AIDA64 and run the GPGPU benchmark; the read/write-to-GPU tests should give you the bandwidth available to each card.

Of course, once my new PSU is installed I will run it.

I know the board is only rated for x8/x8 when both slots are in use, which is why I figured it may be a bug that GPU-Z is reading x16/x16.
 
Of course, once my new PSU is installed I will run it.

I know the board is only rated for x8/x8 when both slots are in use, which is why I figured it may be a bug that GPU-Z is reading x16/x16.
Could it be a PLX bridge chip showing both x16 links? How good is the motherboard?
 
Could it be a PLX bridge chip showing both x16 links? How good is the motherboard?

One of the best for its time. It's an old board (4 years), the Asus Z97 Sabertooth Mk1; I'd like to think it was a good high-end board.
 
The system may not read it correctly in some configurations. If you've got two of them in the system, then they'll definitely be at x8. Some of my mobos will do this if I'm running my two 5970s, for example.
 
The system may not read it correctly in some configurations. If you've got two of them in the system, then they'll definitely be at x8. Some of my mobos will do this if I'm running my two 5970s, for example.
I've never heard of any confirmed reports of GPU-Z showing the wrong PCIe status; it has always turned out to be user error. Could you get me screenshots, please?
 
I've never heard of any confirmed reports of GPU-Z showing the wrong PCIe status; it has always turned out to be user error. Could you get me screenshots, please?
GPU-z PCI-e.png

This board (4CoreDual-SATA2) doesn't have an x16 electrical PCI-E slot, yet my 7900 GX2 (7950 GX2 to the driver ;)) can apparently utilise x16 lanes.
I think GPU-Z simply gets confused by the PLX/bridge chips in dual-GPU cards (GX2s and X2s).
It should display the motherboard's PCI-E limit for both GPUs, not only for the primary one.
 
This board (4CoreDual-SATA2),
Pretty sure this board is physically PCI-E x16 but only x4 electrically. It's also AGP 8x.
Mine is, as far as I can remember (it's boxed and in storage).
 
Hey guys, got a few screenshots,
Full Render.png
GPU-Z.JPG
 
PCI-e 3.0 x16 = ~16 GB/s max in theory; you got 12 GB/s in AIDA64, so it is working as x16.
 
PCI-e 3.0 x16 = ~16 GB/s max in theory; you got 12 GB/s in AIDA64, so it is working as x16.

Hey, thanks, I agree. I never thought it was x16; this thread was more about reporting the bug itself.
 
PCI-e 3.0 x16 = ~16 GB/s max in theory; you got 12 GB/s in AIDA64, so it is working as x16.
You mean 12 GB/s for both? That's also not what you think it is, because if you consider the write speed of 24 GB/s, where did that extra 8 GB/s come from? The board doesn't have a PLX chip, and the CPU still only has 16 PCIe lanes. The board's documentation even says:
2 x PCIe 3.0/2.0 x16 (single at x16, dual at x8/x8)

If anything, this is a bug, as there is no hardware actually running both at x16.
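For anyone wanting to sanity-check those numbers: theoretical one-direction PCIe bandwidth is just the per-lane transfer rate times encoding efficiency (8b/10b for Gen 1/2, 128b/130b for Gen 3+) times lane count. A quick sketch:

```python
# Theoretical one-direction PCIe bandwidth, to sanity-check AIDA64 readings.
# Raw rates are GT/s per lane; Gen 1/2 use 8b/10b encoding, Gen 3+ use 128b/130b.

GEN_RATE_GT = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0}

def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Usable one-direction bandwidth in GB/s for a given generation and width."""
    encoding = 8 / 10 if gen <= 2 else 128 / 130
    # GT/s * encoding efficiency = Gb/s per lane; divide by 8 to get GB/s
    return GEN_RATE_GT[gen] * encoding * lanes / 8

for lanes in (4, 8, 16):
    print(f"PCIe 3.0 x{lanes}: {pcie_bandwidth_gbs(3, lanes):.2f} GB/s")
```

That works out to roughly 7.88 GB/s for Gen 3 x8 and 15.75 GB/s for x16, which is why a measured 12 GB/s looks like more than a single x8 link can carry.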
 
I meant it as I saw it. The measured 12 GB/s is more than PCI-e x8 can provide (which should be between 6 and 8 GB/s).
However, this was tested with both GPUs selected (and CFX enabled), so the GPGPU benchmark might have added the two values together to get that apparent x16 speed (?).
To be sure, though, it would be better to test only one GPU, or to test with crossfire disabled.
 
I meant it as I saw it. The measured 12 GB/s is more than PCI-e x8 can provide (which should be between 6 and 8 GB/s).
However, this was tested with both GPUs selected (and CFX enabled), so the GPGPU benchmark might have added the two values together to get that apparent x16 speed (?).
To be sure, though, it would be better to test only one GPU, or to test with crossfire disabled.
What you need is a PCIe bandwidth test, not a GPGPU memory test. Doesn't AIDA have that?
 
The 7900 GX2 has a bridge chip on the card, which provides 2x x16 to the GPUs (and inter-GPU bandwidth).
Yes, but the MB can only provide x4 bandwidth to it.
Wouldn't that be a problem, since both GPUs must be fed data through that x4 connection?
What you need is a PCIe bandwidth test, not a GPGPU memory test. Doesn't AIDA have that?
The GPGPU Memory Read and Write tests are PCI-e speed tests; pure VRAM bandwidth is tested under "Memory Copy".
There is no way a Vega GPU has only 12 GB/s of VRAM read/write speed :D
 
Yes, but the MB can only provide x4 bandwidth to it.
Wouldn't that be a problem, since both GPUs must be fed data through that x4 connection?
It's probably reading the last link to the GPU and not the link going to the root complex. GPU-Z probably isn't intelligent enough to traverse PCIe topology to figure out what the smallest link between the GPU and CPU is when there is more than one.
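To illustrate what traversing the topology would look like: on Linux, each PCI device directory in sysfs exposes `current_link_width`, and walking up the directory tree visits each upstream bridge. A minimal sketch (assuming the standard sysfs layout; the GPU address below is just an example):

```python
# Sketch: walk up from a GPU's PCI address toward the root complex, reporting
# each hop's negotiated link width, so the narrowest link can be spotted.
# Linux-only; relies on /sys/bus/pci/devices/<addr>/current_link_width.
import os

def link_widths(dev_addr: str):
    """Yield (pci_address, link_width) for the device and each upstream bridge."""
    path = os.path.realpath(f"/sys/bus/pci/devices/{dev_addr}")
    # PCI function directories are named like 0000:01:00.0 (two colons)
    while os.path.basename(path).count(":") == 2:
        width_file = os.path.join(path, "current_link_width")
        if os.path.exists(width_file):
            with open(width_file) as f:
                yield os.path.basename(path), int(f.read())
        path = os.path.dirname(path)  # move to the upstream bridge

if __name__ == "__main__":
    hops = list(link_widths("0000:01:00.0"))  # hypothetical GPU address
    for addr, width in hops:
        print(f"{addr}: x{width}")
    if hops:
        print("narrowest link: x%d" % min(w for _, w in hops))
```

A monitoring tool reading only the GPU's own endpoint link (the last hop) would report x16 behind a bridge even when an upstream hop is narrower, which matches what's being described here.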
 
It's physically impossible for the board to do any more than x8/x8; I have the same board.
 
Yes, but the MB can only provide x4 bandwidth to it.
Wouldn't that be a problem, since both GPUs must be fed data through that x4 connection?
Yeah, you're right, of course. Technically, the GPUs are still connected and running at x16.
 
Yeah, you're right, of course. Technically, the GPUs are still connected and running at x16.
I guess then nothing can be done to make GPU-Z show the actual PCI-e speed... (because, like @Aquinus said, it's in the topology?)
Q: Could a small/optional real-time PCI-e speed test be built into it, though (like the "load test" for the PCI-e link version)?
It would be nice to have a program that can max out PCI-e bandwidth on the GPU while an NVMe drive is being tested.
 