
GPUz vRAM use. What is it really showing us?

Joined
Dec 31, 2009
Messages
15,698 (4.36/day)
W1z,

We've seen a couple of people (@John Naylor in particular) claim that applications such as GPUz and MSI Afterburner do not read the actual memory used by the GPU, but rather the allocation (it was compared to a credit limit versus credit used). Despite multiple requests, he has never clarified the point. Can you clarify it? At least for GPUz?

It seems odd, if these applications are reading allocation rather than actual use, that the reported number updates so frequently. If it were an allocation, like a page file for example, you would expect the size not to vary much, and in particular not to DROP between refreshes.

If it is an allocation, how far off do you imagine actual use to be?

Anyway, just looking for clarity on what exactly the memory used in GPUz is reading and some details.

Much appreciated.
 
Joined
Mar 10, 2015
Messages
2,239 (1.31/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
(quoting @EarthDog's opening post above)
Excellent question! Subbed because this is the part of tech that is interesting.

Honestly, to me, allocation == used. GPUz shouldn't be able to tell whether an allocated memory cell is storing a meaningful value or just a null (and I doubt the GPU reports the difference). This would prove interesting on something like the 970, where 3.8GB gets allocated. Does merely allocating into that slower .5GB segment cause some slowdowns?

Enough of my blabbering as I don't want to clutter your thread.
 
Joined
Jul 9, 2016
Messages
372 (0.30/day)
System Name Gaming System
Processor i7-4960X
Motherboard ASUS Rampage IV Black Edition
Cooling Noctua NH-D15
Memory Corsair Vengeance LP 32GB DDR3
Video Card(s) EVGA GTX 1080 ti FTW3
Storage 1TB Samsung 850 EVO; 500GB Samsung 850 EVO
Display(s) HP zr2740w Surround
Case Corsair A540
Audio Device(s) onboard
Power Supply EVGA G2 1300
Mouse Logitech MK550
Keyboard Logitech MK550
Software Windows 10 Pro
My guess is he probably doesn't know, as he is likely reading from the video card BIOS what is available and what is not (or allocated). It is entirely up to the app/game to decide how much to ask for and how much to actually use. As a software developer, I can go to the OS and ask for 1GB of memory; if it is available, the OS will give it to me. Asking for more than you need is probably bad practice, as your app may crash or fail to run. However, if it is a game, the developer may assume anyone will only run one game at a time and ask for as much as possible. Just my thoughts.
 
Joined
Dec 31, 2009
Messages
15,698 (4.36/day)
If the author doesn't know... I wonder how anyone knows (and keeps posting that info repeatedly)...LOL!

I just want to get to the bottom of it. When it was said, it didn't make sense, and like I said, the user who said it has been asked multiple times (once in this thread, and he has posted since!) yet hasn't taken the time to respond.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
19,756 (3.49/day)
Processor Core i7-4790K
Memory 16 GB
Video Card(s) GTX 1080
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 7
Did some quick testing.

Code:
#define size 1024*1024*1024/8 // 1 GB buffer size
1)		cl_mem buffer_A = clCreateBuffer(m_context, CL_MEM_READ_ONLY, sizeof(double) * size, 0, &res);
2)		res = clEnqueueWriteBuffer(commandQueue, buffer_A, CL_TRUE, 0, sizeof(double) * size, hA, 0, NULL, NULL);
Reported VRAM usage before starting my program: 1296 MB
Executed line 1 - no change in memory usage
Executed line 2 - memory usage went up instantly to 2319 MB, up by the expected 1 GB

 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
19,756 (3.49/day)
Processor Core i7-4790K
Memory 16 GB
Video Card(s) GTX 1080
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 7
So that is telling us it is reading what is in use, not what is 'allocated'... did I read that correctly?
not sure how clCreateBuffer is implemented internally, whether it "allocates" by your definition or does something else. But yes, it looks like it
 
Joined
Oct 28, 2018
Messages
565 (1.49/day)
Location
Zadar, Croatia
System Name SloMo
Processor G4560
Motherboard MSi H110-PRO-D
Cooling LC-CC-95 @ Arctic Cooling fan
Memory 2X Crucial DDR4 2400 4GB
Video Card(s) Integrated HD 610
Storage WD 500 GB + Seagate 500 GB + Toshiba 3 TB
Display(s) Lenovo D221
Case Corsair Carbide 100R
Audio Device(s) Manhattan Flex BT Headphones, Encore P-801 stereo speakers
Power Supply Corsair CX450M
Mouse microsoft office mouse
Keyboard Modecom mc-800m
Software Windows 10 Pro x64
Benchmark Scores gorstak @ hwbot.org
Well, I don't know about VRAM, but if you create a fixed-size swap file, the selected size is just that, a kind of upper bound being set. How much of it actually gets used and written/rewritten depends on a million small details.
 
Joined
Dec 31, 2009
Messages
15,698 (4.36/day)
not sure how clCreateBuffer is implemented internally, whether it "allocates" by your definition or does something else. But yes, it looks like it
I won't lie... I am not sure either. I was just using the analogies the poster mentioned when trying to describe how GPUz reads memory. I would have to dig up and quote the passage to be more accurate.

Well, I don't know about VRAM, but if you create a fixed-size swap file, the selected size is just that, a kind of upper bound being set. How much of it actually gets used and written/rewritten depends on a million small details.
This is about vRAM though. A static page file is a different beast entirely...that said.....

When I mentioned the page file earlier, it was in reference to how it manages itself when set to dynamic. Basically, once it grows past some line in the sand, it stays at that size until the line is crossed again. Now, these are two different things, but IIRC he mentioned the PF as a sort of analogy.
 
Joined
Feb 2, 2015
Messages
2,707 (1.55/day)
Location
On The Highway To Hell \m/
It fills up your VRAM as full as it thinks it needs to, with things it thinks it might need in the near future. At any given time it's actively calling on a much smaller subset of those things. Meanwhile it evicts things from VRAM that it no longer expects to need soon, and fills the space back up with other things it thinks it might need instead. Or not, if it decides nothing beyond what's currently in VRAM is worth staging. Keeping some space free is a good idea too: it wants to use as much as is useful, but filling VRAM beyond that serves no purpose. This is how it keeps your 3D apps running smoothly, and why the "allocated" amount of VRAM is ever-changing. It's being used in a highly efficient and highly effective manner.

Short answer: What's allocated is being used. It's just not all being used at once, because that's not even possible (in a typical scenario the allocated amount is far too big to be used in full at any given moment), and it would not be an efficient or effective way to use the VRAM.
 
Last edited:
Joined
Mar 10, 2015
Messages
2,239 (1.31/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
So that is telling us it is reading what is in use, not what is 'allocated'... did I read that correctly?
It tells me that allocated == used. Granted, I am relating everything back to CPU memory. When you allocate memory, it is not usable by another application. In that regard, it doesn't matter whether it holds a one or a zero, because another application cannot use the space.
 
Joined
Dec 31, 2009
Messages
15,698 (4.36/day)
Right... and I think overall I either misunderstood the point or the analogies the guy was trying to make... here is the support....


https://www.extremetech.com/gaming/...y-x-faces-off-with-nvidias-gtx-980-ti-titan-x

These tools don't "actually report how much VRAM the GPU is actually using" — instead, they report the amount of VRAM that a game has requested. We spoke to Nvidia’s Brandon Bell on this topic, who told us the following: “None of the GPU tools on the market report memory usage correctly... They all report the amount of memory requested by the GPU, not the actual memory usage. Cards with larger memory will request more memory, but that doesn’t mean that they actually use it. They simply request it because the memory is available."
I can refute the statement that 'cards with larger memory will request more memory' as a blanket statement. It seems to depend on the title and the amount of VRAM on the card. I've seen 4/8/11 GB cards allocate the same amount of memory in most titles... does everyone else share that sentiment?
 
Joined
Jan 8, 2017
Messages
4,546 (4.38/day)
System Name Good enough
Processor AMD Ryzen R7 1700X - 4.0 Ghz / 1.350V
Motherboard ASRock B450M Pro4
Cooling Scythe Katana 4 - 3x 120mm case fans
Memory 16GB - Corsair Vengeance LPX
Video Card(s) OEM Dell GTX 1080
Storage 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) 4K Samsung TV
Case Zalman R1
Power Supply 500W
Code:
#define size 1024*1024*1024/8 // 1 GB buffer size
1)        cl_mem buffer_A = clCreateBuffer(m_context, CL_MEM_READ_ONLY, sizeof(double) * size, 0, &res);
2)        res = clEnqueueWriteBuffer(commandQueue, buffer_A, CL_TRUE, 0, sizeof(double) * size, hA, 0, NULL, NULL);
It seems that for OpenCL, allocation and memory copies are distinct operations. From what I can gather, clCreateBuffer with the CL_MEM_READ_ONLY flag basically only checks whether the requested amount of memory is available and does nothing else: no copying, no initialization, nothing. It would make sense that you would not see any change in memory usage, because by that point (first line) you haven't touched the memory at all.

This line should result in an immediate increase in memory usage in one go (or at least that's what the CL_MEM_COPY_HOST_PTR flag should do):

cl_mem buffer_A = clCreateBuffer(m_context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(double) * size, hA, NULL);
 
Joined
Feb 2, 2015
Messages
2,707 (1.55/day)
Location
On The Highway To Hell \m/
Here's something I was reading the other day that speaks to my point.
If a game has allocated 3GB of graphics memory it might be using only 500MB on a regular basis with much of the rest only there for periodic, on-demand use. Things like compressed textures that are not as time sensitive as other material require much less bandwidth and can be moved around to other memory locations with less performance penalty. Not all allocated graphics memory is the same and inevitably there are large sections of this storage that is reserved but rarely used at any given point in time.
https://www.pcper.com/reviews/Graphics-Cards/NVIDIA-Discloses-Full-Memory-Structure-and-Limitations-GTX-970
 
Joined
Mar 10, 2015
Messages
2,239 (1.31/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
(quoting the ExtremeTech / Brandon Bell passage from the post above)
Your quote is a surefire way to identify a lazy developer. At the very least, it identifies a developer who doesn't follow the guideline of 'memory is precious: take only what you need and use what you take.' Or the developer eats at Chinese buffets.

Edited for poor typing.

Also, I don't like to call out developers, as I am a developer myself and we constantly run into time crunches. But it takes more time to figure out how much RAM is available and grab more than you need than it does to simply request what you need.

Edited again. Also, it doesn't matter whether the application is 'using' the allocated memory or not. Once you allocate it, it is yours; you are using it. This is why we need 12GB GPUs and 32GB of system RAM. Wonder if those devs happen to work at Bethesda....
 
Joined
Dec 31, 2009
Messages
15,698 (4.36/day)
Here's something I was reading the other day that speaks to my point.
I can see the difference...and maybe it's just me... but rarely used is still used at some point, yeah?

It feels safe to say that these applications give a pretty good idea, even if only at a high level, of what is going on with VRAM consumption. I'd imagine there would be some form of penalty if this data weren't already resident for "periodic" or even "rare" recalls of it. I did expect some fat to be left on the bone in the allocation versus what the GPU is actually crunching every clock cycle.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
19,756 (3.49/day)
Processor Core i7-4790K
Memory 16 GB
Video Card(s) GTX 1080
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 7
I think the issue is with your definition of "use"
 
Joined
Sep 3, 2017
Messages
162 (0.20/day)
Location
Russia
Processor FX 8320 @4.2 | i7 2600 @3.8 | Xeon W3670 @ 3.6
Motherboard Asus Sabertooth R2.0 | Asus P8Z77-V Deluxe | Gigabyte X58-UD7
Cooling Zalman Performa 10+ | Zalman Performa 11+ | Zalman Performa 10+
Memory Crucial Ballistix Sport XT 32GB @ 1866 | Corsair Vengeance 32GB @1866 | Samsung 24GB @ 1600
Video Card(s) XFX Radeon 390x | Zotac GTX 1070 AMP Extreme | Zotac GTX 980 AMP Extreme
Storage Intel SSD / SAS 15k Fujitsu | Intel SSD / Velociraptors / Hitachi 2TB | Intel SSD / Samsung 1TB
Display(s) Samsung 245T | HP ZR30w | IBM 20" 4x3
Case Chieftec | Corsair Graphite 600T | Thermaltake Xaser IV
Audio Device(s) SB Titanium HD | SB Titanium HD | SB X-fi Elite Pro
Power Supply Thermaltake 875W | Corsair 850W | Thermaltake 1500W
Mouse Logitech | Logitech | Logitech
Keyboard Mitsumi Classic | Microsoft |Microsoft
Software W7 x64 | W7 x64 |W7 x64 / XP x32
Two years ago, with a 2GB GTX 680, I had issues and hard lags (up to 10 seconds) in online games whenever the "usage" reported by monitoring programs got past 1980MB and some software (a browser, a game, or something else) tried to allocate a bit more. So it is definitely used VRAM: allocated on request for that particular software and not available to others, no matter how lightly that software actually touches it.

If you run a game, you will see how the amount of VRAM in use changes during different activity, like loading new levels. The same goes for regular software: open an additional tab or window and you will see the reported size increase; close it and less VRAM will be in use.
 
Joined
Dec 31, 2009
Messages
15,698 (4.36/day)
I think the issue is with your definition of "use"
Elaborate. :)

I think I have a handle on it now. The bottom line, to me, is that these tools show memory 'use' more accurately than not. Surely there is some static data in there that is rarely touched, but use is use.
 
Joined
Jan 8, 2017
Messages
4,546 (4.38/day)
System Name Good enough
Processor AMD Ryzen R7 1700X - 4.0 Ghz / 1.350V
Motherboard ASRock B450M Pro4
Cooling Scythe Katana 4 - 3x 120mm case fans
Memory 16GB - Corsair Vengeance LPX
Video Card(s) OEM Dell GTX 1080
Storage 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) 4K Samsung TV
Case Zalman R1
Power Supply 500W
I don't even know why would it even matter if the data loaded into memory is used frequently or not. It's there, that's all that matters, what's the point of this debate ?
 
Joined
Dec 31, 2009
Messages
15,698 (4.36/day)
Well, the first post explains exactly why this thread was created...as a refresher, a user made repeated claims about the inaccuracies of these applications in measuring vram use so I made a thread seeing what the actual story was.

Hopefully now, misinformation about these applications' supposed inaccuracies will stop being shared. :)

Capeesh? :)

Example... someone says "You can confirm by looking in MSI AB or GPUz, how much memory you are using...... "

and the reply is, "Actually you can not. I hesitate to say that people are wrong when this statement is made.... they have simply been misinformed, not because they aren't useful tools but because these utilities do not actually do all that folks think they do. These utilities measure VRAM allocation, not usage." or "but there is no way to measure actual usage as no tool is available which does this."

In the end, it seems like splitting hairs. Since use and allocation don't seem to differ by much ("rare" is still being used, yes?), saying users cannot get a good idea of their VRAM use/allocation from these applications seems incorrect.
 
Joined
Nov 13, 2007
Messages
7,633 (1.74/day)
Location
Austin Texas
System Name _
Processor 8700K @ 5.1 Ghz
Motherboard MSI Z370-A PRO
Cooling 120mm Custom Liquid
Memory 32 GB 3600 Mhz DDR4 16-16-16-36-380 trfc - 2T
Video Card(s) Gigabyte GTX 2080 Ti Windforce (Undervolted OC 1905MHz)
Storage 3x1TB SSDs
Display(s) Alienware 34" 3440x1440 120hz, G-Sync
Case Jonsbo U4
Audio Device(s) Bose Solo
Power Supply Corsair SF750
Mouse silent click gaming mouse
Keyboard tenkeyless
Software Windows 10 64 Bit
The question is not whether usage == allocation. The question is whether applications "FILL UP" or "USE" more RAM simply because they can, as a 'best practice' to speed up performance. Examples: Windows Superfetch and Microsoft SQL Server... if you have 32 GB, SQL Server will use all 32 GB and preload the most-accessed data into it.

What I understood Naylor to say was "you don't need as much ram as your applications say you do because programs just load extra stuff in there and use it because they can"

There is no evidence to suggest that games behave this way. It's not about use; it's about game/engine/driver behavior when more VRAM is available to them, and I don't see differences between VRAM sizes on cards. Differences come from resolution, texture quality, etc.

While this kind of caching is common for databases and database engines, if it were the case for games, then after several hours of gaming your VRAM would be pegged completely (as can be seen by looking at the RAM usage of any database server), and GPU-Z would always report 90-100% utilization after several hours of play. It seems that engines only put into VRAM what they will need for the immediate future, and only start nitpicking and paging to RAM when they run out of VRAM.
 
Joined
Dec 31, 2009
Messages
15,698 (4.36/day)
Some games will put more memory to use depending on the available memory, though. This isn't a terribly common thing, but it does happen. ;)

We already know there is fat on the bone. How much will vary.
 
Joined
Nov 13, 2007
Messages
7,633 (1.74/day)
Location
Austin Texas
System Name _
Processor 8700K @ 5.1 Ghz
Motherboard MSI Z370-A PRO
Cooling 120mm Custom Liquid
Memory 32 GB 3600 Mhz DDR4 16-16-16-36-380 trfc - 2T
Video Card(s) Gigabyte GTX 2080 Ti Windforce (Undervolted OC 1905MHz)
Storage 3x1TB SSDs
Display(s) Alienware 34" 3440x1440 120hz, G-Sync
Case Jonsbo U4
Audio Device(s) Bose Solo
Power Supply Corsair SF750
Mouse silent click gaming mouse
Keyboard tenkeyless
Software Windows 10 64 Bit
Some games will put more memory to use depending on the available memory, though. This isn't a terribly common thing, but it does happen. ;)

We already know there is fat on the bone. How much will vary.
More system ram possibly, but more vram? What game does that?

I know games like GTA 5 that will auto configure draw distances and some quality settings based on how much vram you have, but they always behave the same given the same settings regardless of your card size.
 
Joined
Dec 31, 2009
Messages
15,698 (4.36/day)
vRAM. I don't recall the titles, but I've seen it before in reviews. Again, it isn't common, but it happens.

The whole point here is the viability of GPUz and MSI AB for reading RAM use/allocation. And it looks like they are viable, contrary to what has been posted a couple of times around TPU.
 