
GPU-Z Shared Memory Layout

What happens if two or more GPUs are present?

- does the structure posted in the OP provide all the info?
- the OP is rather old; is the OP structure still up to date?

The structure just contains name-value pairs, and so does not limit itself to one GPU.
For more details, I suggest you contact the author of GPU-Z.
 
how to get free video card memory

Code:
#define SHMEM_NAME _T("GPUZShMem")
#define MAX_RECORDS 128

#pragma pack(push, 1)
struct GPUZ_RECORD 
{
	WCHAR key[256];
	WCHAR value[256];
};

struct GPUZ_SENSOR_RECORD
{
	WCHAR name[256];
	WCHAR unit[8];
	UINT32 digits;
	double value;
};

struct GPUZ_SH_MEM
{
	UINT32 version; 	 // Version number, 1 for the struct here
	volatile LONG busy;	 // Is data being accessed?
	UINT32 lastUpdate; // GetTickCount() of last update
	GPUZ_RECORD data[MAX_RECORDS];
	GPUZ_SENSOR_RECORD sensors[MAX_RECORDS];
};
#pragma pack(pop)

If you use this shared memory in your application please leave a short comment.

Hello! I want to know how much video card memory I have used.
My application uses C++. I want to get the free physical video card memory, like GPU-Z shows.
Do you have any suggestions?
 
I am guessing it's not possible to get someone to explain how GPU-Z gets the temps, but is it possible for it to start in minimized mode?

I am guessing that what SirReal says still holds true: one instance of GPU-Z now gives you the temps of all your GPUs.
 

There is not a single way, it depends on the card, OS, driver etc.

GPU-Z has a -minimized command line parameter you can use.
 
ok, thank you
 
Edit: W1zzard has already answered this in the other thread here: http://www.techpowerup.com/forums/t...-gpu-z-shared-memory-update-frequency.195840/

I am using the shared memory in my freeware project 'Remote Sensor Monitor' [currently with a DLL built from the code here: https://github.com/JohnnyUT/GpuzShMem , but a newer version is coming up with that dependency removed, using some code modified from here: http://www.techpowerup.com/forums/threads/gpu-z-shared-memory-class-in-c.164244/#post-2605403 ].

Details about the project can be found here:

http://www.hwinfo.com/forum/Thread-Introducing-Remote-Sensor-Monitor-A-RESTful-Web-Server

I am posting in this thread with reference to some 'discrepancies' I observed in the shared memory update frequency. I was assuming it could be polled every second. I have a Perl script running on a client machine that accesses the GPU-Z shared memory values over HTTP at this polling interval. A sample debug output of that script is below.

The values in the CSV file lines are: 'lastUpdate' from the GPU-Z shared memory converted from milliseconds to seconds, GPU load, and GPU power consumption. The line next to each has parameters from the client machine: 'Elapsed' is the time in seconds between sending the request over the network and the data coming back (this includes network delays etc.); it is usually between 10 ms and 250 ms, skewed towards lower delays. 'SleepInterval' is the time after which the next request to the shared memory is placed on the network.

Code:
Enqueueing to CSV File : , 331202, 0.002799, 0.046480
Elapsed: 0.022736, SleepInterval: 0.977264
Enqueueing to CSV File : , 331202, 0.002799, 0.046480
Elapsed: 0.015493, SleepInterval: 0.984507
Enqueueing to CSV File : , 331204, 0.001460, 0.047216
Elapsed: 0.025459, SleepInterval: 0.974541
Enqueueing to CSV File : , 331204, 0.001460, 0.047216
Elapsed: 0.010567, SleepInterval: 0.989433
Enqueueing to CSV File : , 331207, 0.000000, 0.000000
Elapsed: 0.01458, SleepInterval: 0.98542
Enqueueing to CSV File : , 331207, 0.000000, 0.000000
Elapsed: 0.056981, SleepInterval: 0.943019
Enqueueing to CSV File : , 331207, 0.000000, 0.000000
Elapsed: 0.097846, SleepInterval: 0.902154
Enqueueing to CSV File : , 331209, 0.001490, 0.046605
Elapsed: 0.104709, SleepInterval: 0.895291
Enqueueing to CSV File : , 331209, 0.001490, 0.046605
Elapsed: 0.019662, SleepInterval: 0.980338
Enqueueing to CSV File : , 331212, 0.000000, 0.000000
Elapsed: 0.016406, SleepInterval: 0.983594
Enqueueing to CSV File : , 331212, 0.000000, 0.000000
Elapsed: 0.018795, SleepInterval: 0.981205
Enqueueing to CSV File : , 331212, 0.000000, 0.000000
Elapsed: 0.018195, SleepInterval: 0.981805
Enqueueing to CSV File : , 331214, 0.000000, 0.000000
Elapsed: 0.019669, SleepInterval: 0.980331
Enqueueing to CSV File : , 331214, 0.000000, 0.000000
Elapsed: 0.019537, SleepInterval: 0.980463
Enqueueing to CSV File : , 331217, 0.000000, 0.000000
Elapsed: 0.027269, SleepInterval: 0.972731
Enqueueing to CSV File : , 331217, 0.000000, 0.000000
Elapsed: 0.020597, SleepInterval: 0.979403
Enqueueing to CSV File : , 331217, 0.000000, 0.000000
Elapsed: 0.019204, SleepInterval: 0.980796
Enqueueing to CSV File : , 331219, 0.000000, 0.000000
Elapsed: 0.224437, SleepInterval: 0.775563
Enqueueing to CSV File : , 331219, 0.000000, 0.000000
Elapsed: 0.022575, SleepInterval: 0.977425
Enqueueing to CSV File : , 331222, 0.000000, 0.000000
Elapsed: 0.022621, SleepInterval: 0.977379
Enqueueing to CSV File : , 331222, 0.000000, 0.000000

I would expect the update time provided by GPU-Z in the shared memory to advance in close to 1-second steps, but I see the parameters being repeated multiple times and the update time skipping by 2 or 3 seconds. I am wondering why I am unable to access updated values more frequently. I did face a similar issue (though with less skew) with HWiNFO, which I solved by setting the scan interval to 900 ms in the HWiNFO software. Any similar feature (or any way to ensure I can read updated GPU-Z shared memory values every second) would be awesome to have.
 
Is the shared memory layout on the first page still valid?

I've added GPU-Z support to TThrottle http://www.efmer.eu/boinc/

GPUZ_RECORD data[MAX_RECORDS]; seems to align properly.

The following record, GPUZ_SENSOR_RECORD sensors[MAX_RECORDS];, starts at address xxxx10 according to the debugger:

name 0x0000000002270010 "U Core Clock" wchar_t [256]

But it actually starts at 0x000000000227000B.

I can move the pointer 4 bytes back, but I'd like to know why. The only thing I can think of is that the data structure is somehow different, but that doesn't explain why the data array is aligned as it should be.

Code:
#define MAX_RECORDS 128

typedef struct GPUZ_RECORD
{
    WCHAR key[256];
    WCHAR value[256];
}GPUZ_RECORD;

typedef struct GPUZ_SENSOR_RECORD
{
    WCHAR name[256];
    WCHAR unit[8];
    UINT32 digits;
    double value;
}GPUZ_SENSOR_RECORD;

typedef struct GPUZ_SH_MEM
{
    UINT32 version;            // Version number, 1 for the struct here
    volatile LONG busy;        // Is data being accessed?
    UINT32 lastUpdate;        // GetTickCount() of last update
    GPUZ_RECORD data[MAX_RECORDS];
    GPUZ_SENSOR_RECORD sensors[MAX_RECORDS];
}GPUZ_SH_MEM, *LPGPUZ_SH_MEM;

It seems the alignment fails on the double value;

I changed it to BYTE bValue[8]; which holds the 64-bit double.

Code:
    double dValue;
    memcpy(&dValue, lpMem->sensors[2].bValue, sizeof dValue);

This generates the right double. It is a bit awkward, but Visual Studio aligns the whole block when a double is used.
 
- what happens if two or more GPUs are present? Does the structure posted in the OP provide all the info?
- the OP is rather old; is the structure still up to date?

I was able to read three NVidia 1070 Ti devices (two Dell, one non-Dell) by bringing up three instances of GPU-Z and selecting a different GPU for each instance. My app (which I am working on just for my own use) then counted the number of GPUs available using

Code:
theprocess.MainWindowTitle.Contains("TechPowerUp GPU-Z"))

I then "read" from the memory map for each occurrence of that app using example code from ascl's post (post #22 — thanks, ascl!)
Code:
 data = (GPUZ_SH_MEM)Marshal.PtrToStructure(map, typeof(GPUZ_SH_MEM));

I verified that my two Dell IDs and the one non-Dell ID were listed in the "data" items. I also set all three instances of GPU-Z to the same non-Dell GPU, and my app did not find any Dell device IDs in the three results returned. I assume this is how more than one device can be read. I want to plot the GPU load of multiple GPUs on the same graph. Alternately, I suspect I can simply log each app to disk and combine several logs into my chart, which is far easier since I don't need a real-time display.
 
Follow-up (I did not see how to edit the previous post):

More testing showed that each "read" from the memory map was just as likely to get GPU #1 three times in a row as it was to get GPUs 1, 2 and 3 respectively, on a 3-GPU system with three instances of GPU-Z running. In addition, I was unable to spot anything in the "data" records that could be used to identify which of two identical Dell GTX 1070 boards a sensor record belonged to, but I didn't spend a lot of time looking at that. A search for the name of the memory map in both the binary and the in-memory image failed; I had thought I could edit the binary to name the map differently.

I want to do a performance comparison of NVidia and ATI boards in PCIe slots x16, x8 and x1 with risers such as 4-in-1 etc. It will be easier to log results to files such as "log_nv0.txt", "log_nv1.txt", etc. for my study: when I name the log file I know which GPU was mining which project, which would be difficult to work out from data in the memory map. It would be helpful if the executable had command line options to select a device and give a log name, as I could then shell out to the GPU-Z app from my analysis program.

I appreciate getting info on the memory map and I learned quite a bit; the main lesson was to stick with C# and marshaled code rather than get into MFC/ATL to access the map natively. I have been spoiled by C#.
 