
RTX 2080 memory bandwidth issue

Hi w1z, seems like this RTX 2080 turned into an 8800GT :P

[GPU-Z screenshot showing the RTX 2080's memory bandwidth reading]
 
Maybe it deactivates bandwidth when it doesn't need it? :D
You know, like engines shut down cylinders?
 
Yeah, I noticed it too, very curious... maybe GPU-Z needs an update, or maybe it's like bug said...
 
Yeah, I noticed it too, very curious... maybe GPU-Z needs an update, or maybe it's like bug said...
No way it's like I said, I was just poking fun.
 
Maybe you need to multiply that by 6 because it uses GDDR6 XD
 
It just shows the "true" value; it doesn't recognize it as DDR. So multiply that by 4.
 
The formula for calculating memory bandwidth is (bus width in bits / 8) * (memory clock speed) * (memory type bits per cycle). So for a stock RTX 2080 that's (256 / 8) * 1750 * 8 = 448,000 MB/s = 448 GB/s. My guess is that @W1zzard's calc for memory doesn't know about GDDR6 (or isn't detecting it correctly) and is thus defaulting to standard DDR, which has only 2 bits/cycle, versus GDDR6's 8 bits/cycle.
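
As a quick sanity check of that formula, here's a throwaway sketch using the same numbers as above:

```python
def mem_bandwidth_gbs(bus_width_bits, mem_clock_mhz, bits_per_cycle):
    # (bus width / 8) bytes per transfer * clock in MHz * transfers per clock -> MB/s -> GB/s
    return (bus_width_bits / 8) * mem_clock_mhz * bits_per_cycle / 1000

# RTX 2080: 256-bit bus, 1750 MHz GDDR6 at 8 bits/cycle
print(mem_bandwidth_gbs(256, 1750, 8))   # 448.0 GB/s -- the correct figure
# Same card if the tool falls back to plain DDR at 2 bits/cycle
print(mem_bandwidth_gbs(256, 1750, 2))   # 112.0 GB/s -- a quarter of the real value
```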

All the GPU-Z screenshots in the RTX 2080/2080 Ti reviews have the same issue... Wiz is gonna have to redo all of those once he fixes this bug!
 
The formula for calculating memory bandwidth is (bus width in bits / 8) * (memory clock speed) * (memory type bits per cycle). So for a stock RTX 2080 that's (256 / 8) * 1750 * 8 = 448,000 MB/s = 448 GB/s. My guess is that @W1zzard's calc for memory doesn't know about GDDR6 (or isn't detecting it correctly) and is thus defaulting to standard DDR, which has only 2 bits/cycle, versus GDDR6's 8 bits/cycle.

All the GPU-Z screenshots in the RTX 2080/2080 Ti reviews have the same issue... Wiz is gonna have to redo all of those once he fixes this bug!
So in a nutshell, this means that it's quadruple like GDDR5?
 
Memory bandwidth throughput is memory bandwidth throughput. It doesn't need calculations, multipliers or anything else. Never did.
I just put it here so w1zz could fix it. That's all. Basically what @Assimilator said.
 
So in a nutshell, this means that it's quadruple like GDDR5?

Nope, octuple. GDDR5 is 4 bits/cycle, which is why the GTX 1070 needs to run its 256-bit GDDR5 at 2 GHz to reach "only" 256 GB/s of bandwidth, while the RTX 2080's 256-bit GDDR6 purrs along at a much lower 1.75 GHz yet reaches almost double the 1070's bandwidth at 448 GB/s.

On the other hand, GDDR5X (which GDDR6 is essentially the successor of) is also 8 bits/cycle, hence why the GTX 1080 can reach 320 GB/s of bandwidth (far higher than the GTX 1070) at only a 1250 MHz memory clock (far lower).
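
To make the comparison concrete, here's the same back-of-the-envelope formula run over the three cards (a sketch using the specs quoted above):

```python
# Bandwidth = (bus width / 8) * memory clock * bits per cycle, applied to the cards above
cards = [
    ("GTX 1070, 256-bit GDDR5  @ 2000 MHz", 256, 2000, 4),  # GDDR5:  4 bits/cycle
    ("GTX 1080, 256-bit GDDR5X @ 1250 MHz", 256, 1250, 8),  # GDDR5X: 8 bits/cycle
    ("RTX 2080, 256-bit GDDR6  @ 1750 MHz", 256, 1750, 8),  # GDDR6:  8 bits/cycle
]
for name, bus_bits, clock_mhz, bits_per_cycle in cards:
    gbs = (bus_bits / 8) * clock_mhz * bits_per_cycle / 1000
    print(f"{name}: {gbs:.0f} GB/s")
# -> 256 GB/s, 320 GB/s and 448 GB/s respectively
```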
 
Oh man, I feel sorry for you, wasted money eh?
 
Nope, octuple. GDDR5 is 4 bits/cycle, which is why the GTX 1070 needs to run its 256-bit GDDR5 at 2 GHz to reach "only" 256 GB/s of bandwidth, while the RTX 2080's 256-bit GDDR6 purrs along at a much lower 1.75 GHz yet reaches almost double the 1070's bandwidth at 448 GB/s.

On the other hand, GDDR5X (which GDDR6 is essentially the successor of) is also 8 bits/cycle, hence why the GTX 1080 can reach 320 GB/s of bandwidth (far higher than the GTX 1070) at only a 1250 MHz memory clock (far lower).
Ah, ok. It's always nice to be a little wiser. :D
 
Will the texture fillrate (GT/s) issue on the 2080s also be fixed with the memory bandwidth correction? The 278.8 for the 2080 does not match the NV whitepapers.
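
For reference, texture fillrate is just TMUs × core clock. A rough sketch (assuming the RTX 2080's 184 TMUs and its reference 1515 MHz base / 1710 MHz boost clocks) shows where the two different numbers could be coming from:

```python
# Texture fillrate = TMUs * core clock, assuming the RTX 2080's 184 TMUs
# and its reference 1515 MHz base / 1710 MHz boost clocks.
tmus = 184
for label, clock_mhz in (("base 1515 MHz", 1515), ("boost 1710 MHz", 1710)):
    print(f"{label}: {tmus * clock_mhz / 1000:.1f} GT/s")
# base  -> 278.8 GT/s (the figure GPU-Z shows)
# boost -> 314.6 GT/s
```

So the discrepancy may simply come down to which clock gets plugged into the formula, but that's only a guess.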
 
BUMP. @W1zzard

(can't seem to edit this post??????)

Also had this question in another thread since Wednesday. Confirmation, or whether it's just me, would be great!
 
Nvidia has done it again. They gimped Turing like they gimped Maxwell (the GTX 970), if anyone remembers. This time it's just more official :roll:
 
W1zz, GPU-Z has to be fixed to suit RTX cards.

The base clock slot became pretty much irrelevant this time around.
Having default and current is fine when it is accurate - it isn't as of 2.11.0.
No mention of RT or Tensor cores in any way.

Boost is just the official figure, and while it's in the same realm as Intel's CPU boost clocks, those numbers haven't been relevant since the GTX 900 series four years ago.
Maybe it's time to move those slots to a real-time reading? Or at least show real-time vs. official in the box below.

No RTX 2080 Ti is going to operate at 1350 MHz core under 3D load.

Stuff has to change.
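
If anyone wants the real-time reading today, NVML already exposes it. A minimal sketch with the pynvml bindings (assuming they're installed, e.g. from the nvidia-ml-py package):

```python
import pynvml  # NVML Python bindings

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Clock the card is actually running right now vs. its rated maximum
current_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
max_mhz = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
print(f"Graphics clock: {current_mhz} MHz (rated max {max_mhz} MHz)")

pynvml.nvmlShutdown()
```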
 
W1zz, GPU-Z has to be fixed to suit RTX cards.

The base clock slot became pretty much irrelevant this time around.
Having default and current is fine when it is accurate - it isn't as of 2.11.0.
No mention of RT or Tensor cores in any way.

Boost is just the official figure, and while it's in the same realm as Intel's CPU boost clocks, those numbers haven't been relevant since the GTX 900 series four years ago.
Maybe it's time to move those slots to a real-time reading? Or at least show real-time vs. official in the box below.

No RTX 2080 Ti is going to operate at 1350 MHz core under 3D load.

Stuff has to change.
Oh, good. For a moment there I was worried we'd run out of people who can tell others what they need to do :P
 