I am doing research on whether GPGPU technologies (CUDA in particular) are a good fit for video decoding. So far my results are promising. I've been using GPU-Z to measure GPU and Video Engine load while playing video with a regular CPU software codec (Microsoft DTV-DVD) and with a GPGPU-based codec (LAV Filters, which support CUDA hardware acceleration).

In the first case, with the regular codec, I get the following picture: the decoding is carried out on the CPU, and the GPU only participates in rendering the video frames to the screen.

In the second case, with GPU hardware acceleration, I get the following picture: the decoding is carried out on the GPU, and yet the GPU load is lower than in the first case. I find this very confusing. Could someone shed some light on this phenomenon?

Perhaps GPU load is reported relative to power consumption? For example, at power consumption X the GPU operates at Y percent of the maximum load possible at that power level; when decoding on the GPU, power consumption rises, the possible maximum rises with it, and so the reported GPU load looks lower even though it is actually higher than in the first case. Am I making any sense here? =)

All tests are carried out on an nVidia GTX 660 OC, if that matters. I suppose the Memory Controller load is lower in the second case because memory transfers are handled by the DXVA hardware-accelerated circuits of the Video Engine.
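
In case it helps, here is a minimal sketch (my own, not part of the original measurements) of how one could cross-check the GPU-Z readings programmatically with NVML, polling both the overall GPU/memory-controller utilization and the dedicated video-decode engine while the clip plays. It assumes the NVML headers and library that ship with the NVIDIA driver (link with -lnvidia-ml); nvmlDeviceGetDecoderUtilization may not be available on older drivers.

```c
/* Poll GPU, memory-controller and video-decoder utilization once a second. */
#include <stdio.h>
#include <unistd.h>
#include <nvml.h>

int main(void)
{
    nvmlReturn_t rc = nvmlInit();
    if (rc != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(rc));
        return 1;
    }

    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);      /* first GPU, e.g. the GTX 660 */

    for (int i = 0; i < 30; ++i) {
        nvmlUtilization_t util;               /* .gpu and .memory, in percent */
        unsigned int dec = 0, period = 0;     /* video-decode engine load     */

        nvmlDeviceGetUtilizationRates(dev, &util);
        nvmlDeviceGetDecoderUtilization(dev, &dec, &period);

        printf("GPU %3u%%  MemCtrl %3u%%  Decoder %3u%%\n",
               util.gpu, util.memory, dec);
        sleep(1);
    }

    nvmlShutdown();
    return 0;
}
```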