
Correct readings?

rodent — New Member, joined Feb 8, 2017

Attachments

  • gpu-z_sy_testrender.jpg
  • gpu-z_sy_testrender2.jpg
It's not that far off. Depending on who you ask, just +1MHz here or there. MSI says 1556-1746/2002. TPU says 1557-1746/2002.

GPU-Z says 1557-1747/2002.


That's not much higher GPU clock, either the same or +1MHz(depending who you ask). And exactly the same memory clock(regardless of who you ask), 8008/4 = 2002. GDDR5 is quad pumped(MHz x 4 = Effective MHz).
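The quad-pump arithmetic above can be sketched as a quick sanity check in Python (the helper name is just illustrative, not from any real tool):

```python
# GDDR5 transfers 4 bits per pin per command-clock cycle ("quad pumped"),
# so the base clock is the advertised effective data rate divided by 4.
def gddr5_base_clock(effective_mhz: float) -> float:
    return effective_mhz / 4

print(gddr5_base_clock(8008))  # 2002.0 -- matches what GPU-Z reports
```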

Anyway, for another reference point, check what MSI Afterburner says.
 
Well, I mean the render test (the other screenshot). It says 1911 MHz for the GPU and 1901 MHz for the memory.
 
Well, I mean the render test (the other screenshot). It says 1911 MHz for the GPU and 1901 MHz for the memory.
I got the same problem with my MSI 1070-X, and here all tools tell me that the memory clock is 100 MHz lower than spec, like yours. I just overclocked it and got the promised 2002. Too lazy to return the card. The 1911 MHz for the GPU is the Boost 3.0 value. All fine there.
 
You're probably doing it wrong. The GPU-Z render test probably just isn't providing enough load to cause the card to ramp up to its full memory clock speeds. It's only meant to check PCI-E speeds. Try running a graphics benchmark like Heaven or Valley.
 
You're probably doing it wrong. The GPU-Z render test probably just isn't providing enough load to cause the card to ramp up to its full memory clock speeds. It's only meant to check PCI-E speeds. Try running a graphics benchmark like Heaven or Valley.
I trust he did it right, since I have the same problem and I "test" my GPU by folding. I'm however interested to know if some MSI cards are shipped with lower memory clocks. Mine was, and it would not ramp up to Boost 3.0 speed on the GPU unless I had hardware acceleration ON in Firefox. Still the case.
 
Well, the specs for my card say only 1746 MHz boost (it's factory overclocked, so I suppose that's the number when overclocked). All these numbers are confusing.

Does your card have Micron memory? Mine has - I read that NVidia 1070 with Micron (instead of Samsung) memory originally had overclocking issues, MSI were supposed to fix that with a BIOS update. Maybe they've just reduced memory clock to fix it in that update?
 
First... the 1911 clock you are seeing is the actual boost clock. The clock you see in gpuz is the minimum boost. There are many bins above it.

The memory... confirm its speeds with msi afterburner and see if gpuz is seeing it right on the first screen, but wrong on the sensor tab.

Edit: you are also running a pretty old driver for the card... consider updating it. Edit 2: driver is good... sorry.
 
You're probably doing it wrong. The GPU-Z render test probably just isn't providing enough load to cause the card to ramp up to its full memory clock speeds. It's only meant to check PCI-E speeds. Try running a graphics benchmark like Heaven or Valley.

OK. I'm currently running the card on an old board with PCI-E 1.1 so that may explain it.
 
PCI-E 1.1 shouldn't limit the max speed the memory can achieve. I'd still try giving it a substantial graphics load while it's being monitored. Maybe try a few different graphics benchmarks to be sure. Possibly try 3DMark too (Fire Strike and/or Time Spy), besides the two I mentioned previously. I don't know why, or if, folding would or wouldn't be enough to fully load the memory. I did say probably... since I don't know 100% for sure what's going on with your cards.

What I've read, IIRC, about the BIOS update says it should allow for stable memory OCs of more than 100MHz or so above 2002MHz with the Micron GDDR5. Nobody mentioned underclocks happening because of it... that I can recall.
 
First... the 1911 clock you are seeing is the actual boost clock. The clock you see in gpuz is the minimum boost. There are many bins above it.

The memory... confirm its speeds with msi afterburner and see if gpuz is seeing it right on the first screen, but wrong on the sensor tab.

Edit: you are also running a pretty old driver for the card... consider updating it.

Well, both Afterburner and GPU-Z say the same in my latest test, except that Afterburner doubles the memory clock; I suppose because it counts currently used memory (2 x 2GB in this case: 1901 x 2 = 3802).

As for the driver, it's the latest from NVidia; I've been advised to always get them from there.


gtx1070_gpu_mem_test.jpg
gxt1070_driver.jpg
 
PCI-E 1.1 shouldn't limit the max speed the memory can achieve. I'd still try giving it a substantial graphics load while it's being monitored. Maybe try a few different graphics benchmarks to be sure. Possibly try 3DMark too (Fire Strike and/or Time Spy), besides the two I mentioned previously. I don't know why, or if, folding would or wouldn't be enough to fully load the memory. I did say probably... since I don't know 100% for sure what's going on with your cards.

What I've read, IIRC, about the BIOS update says it should allow for stable memory OCs of more than 100MHz or so above 2002MHz with the Micron GDDR5. Nobody mentioned underclocks happening because of it... that I can recall.

OK, I thought an Iray 3D test render would max out everything. Will try one of those benchmark tests then to see what happens.
 
Afterburner reports the data-rate frequency (MHz) as doubled, since it's Graphics Double Data Rate 5 (GDDR5) memory. So just divide what AB says by 2.

1901MHz x 2 = 3802MHz, or 1901MHz x 4 = 7604MHz (effective). What you're looking for, then, is 2002MHz in GPU-Z and 4004MHz in AB, which would be 8008MHz effective. It's confusing... I know.

GDDR5 operates with two different clock types. A differential command clock (CK) as a reference for address and command inputs, and a forwarded differential write clock (WCK) as a reference for data reads and writes, that runs at twice the CK frequency. Being more precise, the GDDR5 SGRAM uses a total of three clocks: two write clocks associated with two bytes (WCK01 and WCK23) and a single command clock (CK). Taking a GDDR5 with 5 Gbit/s data rate per pin as an example, the CK runs with 1.25 GHz and both WCK clocks at 2.5 GHz. The CK and WCKs are phase aligned during the initialization and training sequence. This alignment allows read and write access with minimum latency.
https://en.wikipedia.org/wiki/GDDR5_SDRAM

2002MHz(CK) x2 = 4004MHz(WCK) x2 (WCK01 + WCK23) = 8008MHz effective
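The CK/WCK relationship above can be expressed as a small sketch (function and key names are just illustrative):

```python
# GDDR5 clock relationships: the write clocks (WCK) run at twice the
# command clock (CK), and data transfers on both edges of WCK (DDR),
# giving an effective data rate of 4x the command clock.
def gddr5_clocks(ck_mhz: int) -> dict:
    wck_mhz = ck_mhz * 2
    effective_mhz = wck_mhz * 2
    return {"CK": ck_mhz, "WCK": wck_mhz, "effective": effective_mhz}

print(gddr5_clocks(2002))  # {'CK': 2002, 'WCK': 4004, 'effective': 8008}
```

This is also why GPU-Z (showing CK) and Afterburner (showing the doubled WCK rate) disagree by exactly a factor of two.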
 
OK, currently running Heaven with Ultra quality and 8x antialias and it says:

Graphics: 1961 MHz
Memory: 4004 MHz (2 x 2002, I assume)

GPU-Z and Afterburner say almost the same, so I guess things are OK then. I just find it confusing that the actual GPU boost clock speed is much higher than the specs state, but I'm not complaining as long as it isn't lower...

Thanks everyone for your input!
 
Afterburner reports the data-rate frequency (MHz) as doubled, since it's Graphics Double Data Rate 5 (GDDR5) memory. So just divide what AB says by 2.

1901MHz x 2 = 3802MHz, or 1901MHz x 4 = 7604MHz (effective). What you're looking for, then, is 2002MHz in GPU-Z and 4004MHz in AB, which would be 8008MHz effective. It's confusing... I know.

OK. Yes, if only they could agree on representing the data the same way, much confusion could be avoided, I think...
 
Yeah...I have no idea why they do that with the boost numbers. But they do. It's typical that the card can/will boost significantly higher.
 
Yeah...I have no idea why they do that with the boost numbers. But they do. It's typical that the card can/will boost significantly higher.

OK. Well at least they don't exaggerate, as they often do... :)
 
perfcap Vrel?
 