
GPU video encode Experience

jamsbong

New Member
Joined
Mar 17, 2010
Messages
83 (0.01/day)
System Name 2500Kjamsbong
Processor Core i5 2500K @ 4.6Ghz
Motherboard Asrock Extreme 4 Z68
Cooling Zalman Reserator (CPU and GPU)
Memory DDR3 8GB
Video Card(s) EVGA Nvidia 560Ti 1GB
Storage 60GB Kingston SSD
Display(s) 24" Dell IPS
Case CoolerMaster 690 Advanced II
Audio Device(s) on-board
Power Supply Zalman ZM-600HP modular 600watt
Software Windows 7
Hi Everyone,

I thought I'd share these interesting findings about GPU-accelerated video encoding. They may not be interesting to you, especially if you've already done some tweaking in the past.

I used Arcsoft's MediaConverter 7 trial version for the tests. To enable GPU encoding, you have to enable hardware encoding in the ATI driver menu. I used ATI Tray Tools to observe GPU usage and Task Manager for CPU usage.

I found that most of the time, the CPU is doing all the hard work. My quad-core is flexing its muscles even when GPU encoding is enabled. The most the GPU needed was about 20% (at an underclocked 240MHz).

The software only utilises the GPU for .mp4 conversions; I did not manage to find any other format that drives up GPU usage. Another piece of evidence is the quality: you can tell when it is the GPU converting the file, as the quality can be quite bad.

I don't have an Nvidia card yet, so I can't really tell whether CUDA can accelerate more file formats.

As for CPU and GPU utilisation, it seems that higher-resolution conversions maximise the hardware workload, and to me that is more efficient. Kinda like playing a game at high resolution to make the GPU work hard. At low DVD resolution, CPU usage was about 50% and GPU usage was nil; at HD resolution, the CPU was at 90% and the GPU at up to 20%.

As for quality: with CPU-only conversion, the quality at low bitrate is SO much better than with the GPU, and it is what one would expect. The right way to put it would be an efficient compromise in quality. With GPU+CPU, once you reduce the bitrate, the picture suddenly becomes pixelated and very ugly. At the recommended high bitrate, both GPU and CPU-only conversions had similar quality. This means that GPU encoding has a huge drop-off in quality when the bitrate is restricted.
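A rough way to see why a restricted bitrate hurts so much more at HD is to work out the bits available per pixel. Here's a quick back-of-the-envelope sketch; the 2 Mbps figure and frame rates are just illustrative assumptions, not the exact settings from my tests:

```python
# Rough bits-per-pixel comparison: why the same bitrate looks far worse at HD.
# The 2 Mbps target and 30fps are illustrative assumptions, not my exact settings.

def bits_per_pixel(bitrate_bps, width, height, fps):
    """Average bits available per pixel per frame at a given bitrate."""
    return bitrate_bps / (width * height * fps)

bitrate = 2_000_000  # assumed 2 Mbps target

dvd = bits_per_pixel(bitrate, 720, 480, 30)    # DVD-class resolution
hd = bits_per_pixel(bitrate, 1920, 1080, 30)   # 1080p HD

print(f"DVD: {dvd:.3f} bits/pixel, HD: {hd:.3f} bits/pixel")
# The HD encode has 6x fewer bits per pixel to work with, so any encoder that
# copes poorly with a starved rate budget falls apart there first.
```

Same bitrate, six times less data per pixel at 1080p, which fits what I saw: the GPU path holds up at the recommended high bitrate but collapses as soon as the budget gets tight.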

In regards to acceleration, there seems to be only a modest speed improvement. I was testing a 47-second video; I will try a 1-hour video to see if there is a significant speed improvement using the GPU. At the moment, the speeds are almost identical.

I hope this has been insightful.

Cheers

James
 
I would say that Arcsoft's MediaConverter 7 is not that good at using GPU encoding. I am using Badaboom 2.0 on an Nvidia GTX 460, and I must say the conversion speed is superb, and so is the quality. However, Badaboom only works on Nvidia, so I cannot speak for ATI (AMD) cards.
 
The problem with hardware video encoders is that they only work on media types that the hardware supports.

So while the H.264 codec may be supported, it may only be at certain resolutions, frame rates, and encoding settings (reference frames, B-frames, etc.).


Even in the best circumstances, encoding will be a mix between hardware and software, unless it's VERY specifically configured (for example, the clearly defined DVD and Blu-ray standards for set-top players).
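To illustrate the kind of gatekeeping involved, here's a hypothetical Python sketch of how an encoder might decide between the hardware and software paths. The capability table is entirely made up for illustration; real encoders query the driver or SDK for what the fixed-function hardware actually supports:

```python
# Hypothetical sketch of hardware-vs-software path selection in an encoder.
# The capability table below is invented for illustration only.

HW_CAPS = {
    "h264": {
        "max_width": 1920,
        "max_height": 1080,
        "max_fps": 30,
        "max_ref_frames": 2,
        "b_frames": False,
    },
}

def pick_encode_path(codec, width, height, fps, ref_frames, b_frames):
    """Return 'hardware' only if every setting fits the hardware's limits;
    otherwise fall back to the software (CPU) encoder."""
    caps = HW_CAPS.get(codec)
    if caps is None:
        return "software"  # codec not supported by the hardware at all
    if (width <= caps["max_width"] and height <= caps["max_height"]
            and fps <= caps["max_fps"]
            and ref_frames <= caps["max_ref_frames"]
            and (caps["b_frames"] or not b_frames)):
        return "hardware"
    return "software"      # supported codec, but settings exceed hardware limits

print(pick_encode_path("h264", 1280, 720, 30, 1, False))  # hardware
print(pick_encode_path("h264", 1280, 720, 30, 4, True))   # software: too many refs + B-frames
print(pick_encode_path("xvid", 720, 480, 25, 1, False))   # software: codec unsupported
```

The point being: change one encoding setting outside the hardware's narrow envelope and the whole job quietly falls back to the CPU, which matches the mixed CPU/GPU utilisation described above.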



The best methods that work at the moment don't directly use the GPU's fixed-function hardware to encode, but rather use GPGPU methods like CUDA and Stream. Not too many of those exist, but they tend to work better (with similar limitations to what I said above: limited output support).
 
Badaboom works very well. It only supports Nvidia though, as has already been said.
 
Thanks for the feedback.

I'm really glad that Nvidia took the initiative on GPGPU. I think it has, to some extent, triggered the OpenCL and DirectCompute initiatives. Both APIs are the way to go for software developers, as there is no restriction on the hardware.

I hope to see C++ coded programs ported to OpenCL soon. I'm told that the process is quite straightforward. It won't be long now, as there is more hardware support (APUs) and proper API availability.

Then, I hope, we can encode instantly and won't have to worry about how long it takes.
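For those curious what the porting actually looks like, here's a minimal sketch: the serial C-style loop becomes an OpenCL kernel where the loop index is replaced by the work-item ID. The kernel name and the pyopencl binding mentioned in the comments are my own illustration, not from any specific project:

```python
# Sketch of how a plain C-style loop maps onto an OpenCL kernel.
# The kernel source is standard OpenCL C; actually running it would need a
# runtime binding such as pyopencl, so here it is kept as a string next to a
# pure-Python reference of the same computation.

KERNEL_SRC = """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
    int i = get_global_id(0);   // one work-item per element replaces the loop
    out[i] = a[i] + b[i];
}
"""

def vec_add_cpu(a, b):
    """The original serial loop the kernel above was ported from."""
    return [x + y for x, y in zip(a, b)]

print(vec_add_cpu([1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [11.0, 22.0, 33.0]
```

The body of the loop carries over almost unchanged; the work is in the host-side boilerplate (buffers, queues, kernel launch), which the C++ wrapper hides a lot of.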
 

OpenCL and DirectCompute and the like are in their infancy. Now that both sides support them, we should see more and more software using them.
 
Yes, OpenCL is new; it is only at version 1.1 now. However, OpenCL has a direct C++ wrapper. I've seen the research community porting code to the GPU using OpenCL, and they have demonstrated that even a GF310 can outperform a Core i7 CPU by up to 30x.

Double precision is slower but still much faster than on the CPU. The reality is that OpenCL is supported by both CPUs and GPUs regardless of vendor, and it even works on mobile devices like smartphones and tablets. CUDA, on the other hand, only works with Nvidia GPUs, not even CPUs.

There are a whole lot of opportunities for software developers to start selling software with "instant completion time" processes. CUDA may be mature, but OpenCL will easily catch up by learning from CUDA. Sorry, Nvidia fans... that's reality. I personally praise Nvidia for making all these efforts.
 