
TechPowerUp GPU-Z 2.14.0 Released

btarunr

Editor & Senior Moderator
Staff member
TechPowerUp today released the latest version of GPU-Z, the popular graphics subsystem information and diagnostic utility. Version 2.14.0 adds support for the Intel UHD Graphics iGPUs embedded in 9th generation Core "Coffee Lake Refresh" processors. GPU-Z now calculates pixel and texture fill-rates more accurately by using the boost clock instead of the base clock. This is particularly useful for iGPUs, which have a vast difference between base and boost clocks, and is also relevant to some of the newer generations of GPUs, such as the NVIDIA RTX 20-series.
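
As a rough illustration of the fill-rate arithmetic involved (pixel fill-rate = ROPs x clock, texture fill-rate = TMUs x clock), here is a minimal Python sketch; the ROP/TMU counts and clocks below are hypothetical and not tied to any particular card:

# Minimal sketch of the fill-rate calculation described above; the unit counts
# and clocks are hypothetical, chosen only to show why reporting fill-rates at
# the boost clock rather than the base clock changes the numbers.
rops, tmus = 64, 184                          # render output units, texture mapping units
base_clock_mhz, boost_clock_mhz = 1350, 1770  # hypothetical clocks

def fill_rates(clock_mhz):
    pixel_gpix_s = rops * clock_mhz / 1000.0    # GPixels/s
    texture_gtex_s = tmus * clock_mhz / 1000.0  # GTexels/s
    return pixel_gpix_s, texture_gtex_s

print("at base clock :", fill_rates(base_clock_mhz))
print("at boost clock:", fill_rates(boost_clock_mhz))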

A number of minor bugs were also fixed in GPU-Z 2.14.0, including a missing Intel iGPU temperature sensor and malfunctioning clock-speed measurement on Intel iGPUs. For NVIDIA GPUs, power sensors show power draw both as an absolute value and as a percentage of the GPU's rated TDP, in separate read-outs. This feature was introduced in the previous version; this version clarifies the labels by including "W" and "%" in their names. Grab GPU-Z from the link below.

DOWNLOAD: TechPowerUp GPU-Z 2.14.0



The change-log follows.
  • When available, boost clock is used to calculate fillrate and texture rate
  • Fixed missing Intel GPU temperature sensor
  • Fixed wrong clocks on some Intel IGP systems ("12750 MHz")
  • NVIDIA power sensors now labeled with "W" and "%"
  • Added support for Intel Coffee Lake Refresh

View at TechPowerUp Main Site
 
About the boost clocks shown - how about a dynamic value? I mean, instead of showing some base boost, why won't it show the maximum?
My Strix 1080 Ti, even with factory cooling, always runs at 1949-1974 MHz instead of the 1704 MHz shown in GPU-Z.
Besides, it could ease the comparison between various brands.
 
About the boost clocks shown - how about a dynamic value? I mean, instead of showing some base boost, why won't it show the maximum?
My Strix 1080 Ti, even with factory cooling, always runs at 1949-1974 MHz instead of the 1704 MHz shown in GPU-Z.
Besides, it could ease the comparison between various brands.
Because 1704 MHz is the Boost 1.0 clock; anything above that is Boost 3.0, which changes dynamically depending on load, temperature, power limit/consumption, etc., so GPU-Z can only read those values under load. At least I believe so.
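
For what it's worth, the live clock can be read while the card is under load; the minimal sketch below uses NVIDIA's NVML through the pynvml package, purely to illustrate the difference between a rated clock and the clock the GPU is actually running right now (GPU-Z uses its own driver interfaces, not this exact API):

# Minimal sketch, assuming an NVIDIA GPU and the pynvml package are installed.
# This only contrasts the rated maximum graphics clock with the current clock;
# the current value is only meaningful while something is loading the GPU.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

rated_max_mhz = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
current_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)

print("rated max graphics clock:", rated_max_mhz, "MHz")
print("current graphics clock:  ", current_mhz, "MHz")

pynvml.nvmlShutdown()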
 
At last I can see some numbers on how my Intel iGPU is behaving xD
 
this one matches what my Palit software shows me.. the one I was using showed me 1724 boost instead of 1800.. it also explains why I could not clock as high as I thought I should be able to..

the actual boost with the Valley benchmark running varies between 1900 and 2100 MHz... FurMark has it down to 1500..

the main control (governor) seems to be power usage.. assuming the temps are okay, as they should be..


[screenshot attached: 20xx-z.jpg]


trog


ps.. the memory reading is wrong though.. on my card it should be 7747.. the default is 7000.. I ain't sure where the 1937 comes from
 
ps.. the memory reading is wrong though.. on my card it should be 7747.. the default is 7000.. I ain't sure where the 1937 comes from

1937 MHz is the actual memory frequency. GDDR6 (and GDDR5X) is quad data rate, so you multiply the actual memory frequency by 4 to get your effective frequency of 7747. But the memory is actually running at 1937 MHz, so that is what GPU-Z shows.
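
A quick sanity check of those numbers (treat the exact figures as reported by the poster's card; the "default 7000" corresponds to a 1750 MHz stock memory clock):

# Effective data rate = memory clock x transfers per clock.
memory_clock_mhz = 1937      # what GPU-Z shows for this card
transfers_per_clock = 4      # GDDR5X / GDDR6 move four transfers per clock
effective_mtps = memory_clock_mhz * transfers_per_clock
print(effective_mtps, "MT/s effective")   # ~7748, vs. 7000 at the stock 1750 MHz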
 
1937 MHz is the actual memory frequency. GDDR6 (and GDDR5X) is quad data rate, so you multiply the actual memory frequency by 4 to get your effective frequency of 7747. But the memory is actually running at 1937 MHz, so that is what GPU-Z shows.

yes I did think it might be that.. the earlier version showed it differently, which is what confused me.. I prefer the earlier way I think.. he he

trog
 
yes I did think it might be that.. the earlier version showed it differently, which is what confused me.. I prefer the earlier way I think.. he he

trog
We display the correct clocks, not MT/s.
 
because it's not possible to read the maximum, as far as I know

It could take the max boost clocks after running the PCI-E render test, which should be more accurate than just reading the specified boost clock: first show the fill rates using factory clocks, then update them after running the render test.

That is because NVIDIA cards boost way higher than their boost clocks, but AMD cards can't even reach their boost clocks without increasing the TDP or undervolting (no Vega card can reach its boost clock out of the box).

Btw, the next version could also add FP32 performance in GFLOPS, using both the factory clocks and the clocks measured after a PCI-E render test.
 
It could take the max boost clocks after running the PCI-E render test, which should be more accurate than just reading the specified boost clock: first show the fill rates using factory clocks, then update them after running the render test.

That is because NVIDIA cards boost way higher than their boost clocks, but AMD cards can't even reach their boost clocks without increasing the TDP or undervolting (no Vega card can reach its boost clock out of the box).

Btw, the next version could also add FP32 performance in GFLOPS, using both the factory clocks and the clocks measured after a PCI-E render test.
And what if I don't want to run the render test? :eek:
 
And what if I don't want to run the render test? :eek:
Then you don't see the updated fill rates, that's all. The user should be the one to click the render test button, or, if it is automatic, it should ask the user whether they want to run it.
 
Then you don't see the updated fill rates, that's all. The user should be the one to click the render test button, or, if it is automatic, it should ask the user whether they want to run it.
My only problem with this run-the-render-test method is that, sure, GPU-Z would be able to read the maximum boost clocks, but during gaming it's 99% guaranteed the GPU is not going to be running at those clocks, because of the temperatures and the aggressive clock reduction of Boost 3.0.
For example, my GTX 1070 boosts to 2100 MHz, but only below 55 or 60 °C. After reaching that temperature, the GPU starts decreasing its clocks by 12-13 MHz every 1-2 °C, which means that I play at 2050-2062 MHz 99% of the time. So yes, I agree that your method would allow GPU-Z to read the maximum boost clocks, but it would also make little sense, as I'm sure at least 80% of Pascal GPUs run above 60 °C, or even 70 °C, while gaming.
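
Purely as an illustration, here is a tiny model of that behaviour using the numbers from this post; the real Boost 3.0 curve is driver- and card-specific, so the threshold and step values below are assumptions, not an NVIDIA spec:

# Back-of-the-envelope model of the clock step-down described above; the
# threshold and step sizes are loose assumptions taken from this post.
max_boost_mhz = 2100    # clock held while below roughly 55-60 C
threshold_c = 57        # assumed temperature where the step-down begins
step_mhz = 12.5         # ~12-13 MHz lost...
step_per_c = 1.5        # ...every 1-2 C

def estimated_clock(temp_c):
    if temp_c <= threshold_c:
        return max_boost_mhz
    bins = (temp_c - threshold_c) / step_per_c
    return max_boost_mhz - bins * step_mhz

for temp in (50, 62, 70):
    print(temp, "C ->", round(estimated_clock(temp)), "MHz")  # ~2058 MHz at 62 C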

Also, I'm sorry if I sounded cocky in my previous comment, I didn't mean to. :)
 
My only problem with this run-the-render-test method is that, sure, GPU-Z would be able to read the maximum boost clocks, but during gaming it's 99% guaranteed the GPU is not going to be running at those clocks, because of the temperatures and the aggressive clock reduction of Boost 3.0.
For example, my GTX 1070 boosts to 2100 MHz, but only below 55 or 60 °C. After reaching that temperature, the GPU starts decreasing its clocks by 12-13 MHz every 1-2 °C, which means that I play at 2050-2062 MHz 99% of the time. So yes, I agree that your method would allow GPU-Z to read the maximum boost clocks, but it would also make little sense, as I'm sure at least 80% of Pascal GPUs run above 60 °C, or even 70 °C, while gaming.

Also, I'm sorry if I sounded cocky in my previous comment, I didn't mean to. :)

2100 MHz is closer to 2062 MHz than to maybe 1900 MHz (which I estimate is your current boost clock according to GPU-Z), so it is still more accurate (and looks better) hehehe
 