Thursday, October 25th 2018

TechPowerUp GPU-Z 2.14.0 Released

TechPowerUp today released the latest version of GPU-Z, the popular graphics subsystem information and diagnostic utility. Version 2.14.0 adds support for the Intel UHD Graphics iGPUs embedded in 9th generation Core "Coffee Lake Refresh" processors. GPU-Z now calculates pixel and texture fill-rates more accurately, by using the boost clock instead of the base clock. This is particularly useful for iGPUs, which have a vast difference between their base and boost clocks, and is also relevant to some newer generations of GPUs, such as the NVIDIA RTX 20-series.
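For illustration, the fill-rate math is simply the ROP or TMU count multiplied by the clock speed. The sketch below uses the reference GeForce RTX 2080 (64 ROPs, 184 TMUs, 1515 MHz base, 1710 MHz boost) as example figures; it is a rough outline of the calculation, not GPU-Z's actual code.

# Rough sketch of the fill-rate calculation, not GPU-Z's actual code.
# Example specs assumed: reference GeForce RTX 2080.
ROPS = 64          # render output units
TMUS = 184         # texture mapping units
BASE_MHZ = 1515
BOOST_MHZ = 1710

def gpixels_per_s(rops, clock_mhz):
    return rops * clock_mhz / 1000.0    # GPixel/s

def gtexels_per_s(tmus, clock_mhz):
    return tmus * clock_mhz / 1000.0    # GTexel/s

print(f"Pixel fill-rate @ base clock:    {gpixels_per_s(ROPS, BASE_MHZ):.1f} GPixel/s")    # 97.0
print(f"Pixel fill-rate @ boost clock:   {gpixels_per_s(ROPS, BOOST_MHZ):.1f} GPixel/s")   # 109.4
print(f"Texture fill-rate @ boost clock: {gtexels_per_s(TMUS, BOOST_MHZ):.1f} GTexel/s")   # 314.6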

A number of minor bugs were also fixed in GPU-Z 2.14.0, including a missing Intel iGPU temperature sensor and malfunctioning clock-speed measurement on some Intel iGPUs. For NVIDIA GPUs, power sensors show power draw both as an absolute value and as a percentage of the GPU's rated TDP, in separate read-outs. This feature was introduced in the previous version; this version clarifies the labels by including "W" and "%" in the sensor names. Grab GPU-Z from the link below.

DOWNLOAD: TechPowerUp GPU-Z 2.14.0
The change-log follows.

  • When available, boost clock is used to calculate fillrate and texture rate
  • Fixed missing Intel GPU temperature sensor
  • Fixed wrong clocks on some Intel IGP systems ("12750 MHz")
  • NVIDIA power sensors now labeled with "W" and "%"
  • Added support for Intel Coffee Lake Refresh

13 Comments on TechPowerUp GPU-Z 2.14.0 Released

#1
Robcostyle
About the boost clocks shown - how about a dynamic value? I mean, instead of showing some base boost, why won't it show the maximum?
My Strix 1080 Ti, even with factory cooling, always runs at 1949-1974 MHz instead of the 1704 MHz shown in GPU-Z.
Besides, it would ease up the comparison between various brands.
Posted on Reply
#2
W1zzard
Robcostyle: why won't it show the maximum?
Because it's not possible to read the maximum, as far as I know.
Posted on Reply
#3
Bjørgersson
Robcostyle: About the boost clocks shown - how about a dynamic value? I mean, instead of showing some base boost, why won't it show the maximum?
My Strix 1080 Ti, even with factory cooling, always runs at 1949-1974 MHz instead of the 1704 MHz shown in GPU-Z.
Besides, it would ease up the comparison between various brands.
Because 1704 MHz is the Boost 1.0 clock; anything above that is Boost 3.0, which changes dynamically depending on load, temperature, power limit / consumption, etc., so GPU-Z can only read those values under load. At least I believe so.
Posted on Reply
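On the question of reading the maximum boost: as the posts above suggest, the driver only reports rated clocks plus whatever the GPU happens to be running at right now, so the real ceiling has to be observed under load. Below is a minimal sketch of polling the live graphics clock through NVIDIA's NVML library (via the pynvml bindings); it only illustrates what the driver exposes and is not how GPU-Z itself reads its sensors.

# Minimal sketch: sample the current graphics clock via NVML (pynvml bindings).
# Only the driver-reported maximum and the momentary clock are exposed; the
# highest clock the card will actually boost to must be observed under load.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system

rated = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
print(f"Driver-reported maximum graphics clock: {rated} MHz")

for _ in range(5):                              # sample for a few seconds
    now = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
    print(f"Current graphics clock: {now} MHz")
    time.sleep(1)

pynvml.nvmlShutdown()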
#4
Tsukiyomi91
At last I can see some numbers on how my Intel iGPU is behaving xD
Posted on Reply
#5
trog100
This one matches what my Palit software shows me.. the one I was using showed me 1724 boost instead of 1800.. it also explains why I could not clock as high as I thought I should be able to..

The actual boost with the Valley benchmark running varies between 1900 and 2100.. FurMark has it down to 1500..

The main control (governor) seems to be power usage.. assuming the temps are okay, as they should be..

trog

PS.. the memory reading is wrong though.. on my card it should be 7747.. the default is 7000.. I ain't sure where the 1937 comes from
Posted on Reply
#6
newtekie1
Semi-Retired Folder
trog100: PS.. the memory reading is wrong though.. on my card it should be 7747.. the default is 7000.. I ain't sure where the 1937 comes from
1937 MHz is the actual memory frequency. GDDR6 (and GDDR5X) is quad data rate, so you multiply the actual memory frequency by 4 to get your effective frequency of 7747. But the memory is actually running at 1937 MHz, so that is what GPU-Z shows.
Posted on Reply
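For reference, a quick sanity check of the x4 arithmetic described above, using trog100's numbers as the example:

# Sanity check of the quad-data-rate arithmetic described above.
actual_clock_mhz = 1937                 # memory clock as shown by GPU-Z
effective_rate = actual_clock_mhz * 4   # four transfers per clock
print(f"{actual_clock_mhz} MHz x 4 = {effective_rate} MT/s effective")  # 7748, i.e. the advertised ~7747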
#7
trog100
newtekie1: 1937 MHz is the actual memory frequency. GDDR6 (and GDDR5X) is quad data rate, so you multiply the actual memory frequency by 4 to get your effective frequency of 7747. But the memory is actually running at 1937 MHz, so that is what GPU-Z shows.
Yes, I did think it might be that.. the earlier version showed it differently, which is what confused me.. I prefer the earlier way, I think.. he he

trog
Posted on Reply
#8
T4C Fantasy
CPU & GPU DB Maintainer
trog100: Yes, I did think it might be that.. the earlier version showed it differently, which is what confused me.. I prefer the earlier way, I think.. he he

trog
We display the correct clocks, not MT/s.
Posted on Reply
#9
WikiFM
W1zzard: Because it's not possible to read the maximum, as far as I know.
It could take the max boost clock after running the PCI-E render test, which should be more accurate than just reading the specified boost clock: first show the fill-rates using the factory clocks, then update them after running the render test.

That is because NVIDIA cards boost way higher than their boost clocks, but AMD cards can't even reach their boost clocks without increasing the TDP or undervolting (no Vega card can reach its boost clock out of the box).

Btw, the next version could also add FP32 performance in GFLOPS, using both the factory clocks and the clocks measured after a PCI-E render test.
Posted on Reply
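The FP32 figure proposed above follows from a simple formula: shader count x 2 (an FMA counts as two floating-point operations) x clock speed. Below is a sketch using a GTX 1080 Ti (3584 shaders, 1582 MHz rated boost) as example numbers; the 1900 MHz "observed" clock is a hypothetical stand-in for a reading taken after a render test, and none of this is an existing GPU-Z feature.

# Sketch of the FP32 throughput formula: shaders * 2 (FMA = 2 ops) * clock.
# Example card assumed: GTX 1080 Ti, 3584 shaders, 1582 MHz rated boost.
SHADERS = 3584
RATED_BOOST_MHZ = 1582
OBSERVED_BOOST_MHZ = 1900    # hypothetical clock measured after a render test

def fp32_gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000.0

print(f"At rated boost:    {fp32_gflops(SHADERS, RATED_BOOST_MHZ):.0f} GFLOPS")     # ~11340
print(f"At observed boost: {fp32_gflops(SHADERS, OBSERVED_BOOST_MHZ):.0f} GFLOPS")  # ~13619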
#10
Bjørgersson
WikiFM: It could take the max boost clock after running the PCI-E render test, which should be more accurate than just reading the specified boost clock: first show the fill-rates using the factory clocks, then update them after running the render test.

That is because NVIDIA cards boost way higher than their boost clocks, but AMD cards can't even reach their boost clocks without increasing the TDP or undervolting (no Vega card can reach its boost clock out of the box).

Btw, the next version could also add FP32 performance in GFLOPS, using both the factory clocks and the clocks measured after a PCI-E render test.
And what if I don't want to run the render test? :eek:
Posted on Reply
#11
WikiFM
chfrcoghlan: And what if I don't want to run the render test? :eek:
Then you don't see the updated fill-rates, that's all. The user should be the one to click the render test button, or if it is automatic, then ask the user if they want to run it.
Posted on Reply
#12
Bjørgersson
WikiFM: Then you don't see the updated fill-rates, that's all. The user should be the one to click the render test button, or if it is automatic, then ask the user if they want to run it.
My only problem with this "run the render test" method is that, sure, GPU-Z would be able to read the maximum boost clock, but during gaming it's 99% guaranteed the GPU is not going to be running at those clocks, because of temperatures and the aggressive clock reduction of Boost 3.0.
For example, my GTX 1070 boosts to 2100 MHz, but only below 55 or 60 °C. After reaching this temperature, the GPU starts decreasing its clocks by 12-13 MHz every 1-2 °C, which means that I play at 2050-2062 MHz 99% of the time. So yes, I agree that your method would allow GPU-Z to read the maximum boost clocks, but it would also make no sense, as I'm sure at least 80% of Pascal GPUs run above 60 °C, or even 70 °C, while gaming.

Also, I'm sorry if I sounded cocky in my previous comment, I didn't mean to. :)
Posted on Reply
#13
WikiFM
chfrcoghlan: My only problem with this "run the render test" method is that, sure, GPU-Z would be able to read the maximum boost clock, but during gaming it's 99% guaranteed the GPU is not going to be running at those clocks, because of temperatures and the aggressive clock reduction of Boost 3.0.
For example, my GTX 1070 boosts to 2100 MHz, but only below 55 or 60 °C. After reaching this temperature, the GPU starts decreasing its clocks by 12-13 MHz every 1-2 °C, which means that I play at 2050-2062 MHz 99% of the time. So yes, I agree that your method would allow GPU-Z to read the maximum boost clocks, but it would also make no sense, as I'm sure at least 80% of Pascal GPUs run above 60 °C, or even 70 °C, while gaming.

Also, I'm sorry if I sounded cocky in my previous comment, I didn't mean to. :)
2100 MHz is closer to 2062 MHz than to maybe 1900 MHz (which I estimate is your current boost clock according to GPU-Z), so it is still more accurate (and looks better) hehehe
Posted on Reply