
NVIDIA Adds AI Frame Generation Capability to Video Encoding and Decoding, Increasing Frame-Rates of Videos

btarunr

The defining feature of DLSS 3 is AI frame generation: the ability of NVIDIA GeForce RTX 40-series "Ada" GPUs to predict the frame that follows one rendered by the GPU, and to generate that frame without any involvement of the graphics rendering pipeline. NVIDIA is taking this concept to video encoding, too, letting you increase the frame-rate of your videos through the "magic" of frame generation. The Optical Flow Accelerator (NVOFA) in NVIDIA Ada GPUs can apply the same optical-flow logic to videos as it does to graphics rendering, predict the next frame, and raise the frame-rate by generating that frame with AI. NVIDIA refers to this as Engine-assisted Frame-rate Up Conversion (FRUC).
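To make the frame-rate math concrete, here is a toy sketch in plain C++ (our own illustration, not anything from NVIDIA's announcement) of where generated frames land in time when FRUC doubles a 30 fps video to 60 fps: one generated frame is slotted halfway between each pair of real frames.

```cpp
// Toy illustration (not NVIDIA code): timestamps when doubling 30 fps to 60 fps.
#include <cstdio>

int main() {
    const double srcFps = 30.0;         // source video frame rate
    const double dt = 1000.0 / srcFps;  // ~33.33 ms between real frames
    for (int i = 0; i < 3; ++i) {
        const double tReal = i * dt;           // timestamp of real frame i
        const double tGen  = tReal + dt / 2.0; // generated frame, halfway to frame i+1
        std::printf("real frame at %6.2f ms, generated frame at %6.2f ms\n",
                    tReal, tGen);
    }
    return 0;
}
```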

There's more to FRUC than the "smooth motion" feature your TV comes with: the NVOFA compares two real frames from a video, determines motion vectors between them, and sets up an optical flow stage, so the generated frames interpolated between the real frames are accurate. NVIDIA will be releasing FRUC as a library, so it can be integrated with popular content-creation and media-consumption applications on NVIDIA Ada GPUs. This lets people with Ada GPUs create higher frame-rate videos, and consume media at higher frame-rates as well.
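NVIDIA hasn't published the FRUC library's API in this announcement, but the pipeline it describes (two real frames, motion vectors, an optical-flow stage, an interpolated frame) can be approximated on the CPU with OpenCV's Farnebäck optical flow. The sketch below is a generic stand-in for the technique, not NVIDIA's implementation, and it skips the occlusion handling a production interpolator needs:

```cpp
// Generic optical-flow frame interpolation: a rough CPU-side sketch of the
// pipeline described above, NOT NVIDIA's FRUC library.
#include <opencv2/opencv.hpp>

// Synthesize a frame halfway in time between frame0 and frame1 by estimating
// per-pixel motion vectors and warping frame0 half-way along them.
cv::Mat interpolateMidFrame(const cv::Mat& frame0, const cv::Mat& frame1) {
    cv::Mat gray0, gray1, flow;
    cv::cvtColor(frame0, gray0, cv::COLOR_BGR2GRAY);
    cv::cvtColor(frame1, gray1, cv::COLOR_BGR2GRAY);

    // Dense optical flow between the two real frames: one 2-D motion
    // vector per pixel (a CV_32FC2 matrix).
    cv::calcOpticalFlowFarneback(gray0, gray1, flow,
                                 0.5, 3, 15, 3, 5, 1.2, 0);

    // Backward-warp frame0 by half the motion vector (t = 0.5), so the
    // result sits midway between the two real frames in time.
    cv::Mat mapX(flow.size(), CV_32FC1), mapY(flow.size(), CV_32FC1);
    for (int y = 0; y < flow.rows; ++y) {
        for (int x = 0; x < flow.cols; ++x) {
            const cv::Point2f f = flow.at<cv::Point2f>(y, x);
            mapX.at<float>(y, x) = x - 0.5f * f.x;
            mapY.at<float>(y, x) = y - 0.5f * f.y;
        }
    }
    cv::Mat mid;
    cv::remap(frame0, mid, mapX, mapY, cv::INTER_LINEAR);
    return mid;  // the "generated" frame to splice between frame0 and frame1
}
```

The NVOFA does the motion-vector stage in dedicated hardware, and NVIDIA presumably pairs it with an AI model rather than a simple warp, which is where the claimed quality advantage over TV-style "smooth motion" would come from.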



A video presentation by NVIDIA on the video encoding features of Ada follows.

View at TechPowerUp Main Site | Source
 
actually sounds pretty cool
 
This is neat, but if the reason DLSS Frame Generation can't work on Ampere and Turing is that those GPUs don't have powerful enough optical flow engines for real-time frame generation, then what's the excuse for not supporting it in offline video encoding? That doesn't need to be real-time.
 
Can't wait to watch Citizen Kane in 120fps :roll:
 
The whole reason is to force people to buy RTX 4xxx. People are as blind towards nVidia as they are towards Apple; the only difference is that nVidia's products are both overpriced and good.
 
I suspect the market for people who would re-encode a saved video before watching it locally is rather small.

The market for real-time re-encoding, however, is much, much bigger. Since the previous gen can't do it in real time at all, that's probably the reason, along with obviously marketing RTX 4xxx.
 
Yeah, video makers might find this useful. I personally use DAIN-App for getting rid of lag spikes in footage.
 
SVP + RIFE or optical flow. Works back to Turing.
 
This is not new; projects like SVP and RIFE have been doing this for years with varying results. RIFE can use Vulkan or CUDA, but I don't think it's as fast as what NVIDIA is promising.
Also, trying to use this in a multiplayer environment is insanity.
 
So that feature on TVs that every movie director and cinephile tells you to disable is now on NVIDIA GPUs. Neat.
 
So, essentially, is this the "soap opera effect" on steroids?

Curious, how many of you have been leaving the image interpolation function enabled on your TVs for the past decade?
 
To expand on my earlier question: what does "not powerful enough" mean? Not powerful enough to render a 3D scene? Fine, then give us the support in 2D video decoding. If Turing's OFA is too weak for that too, then why does it exist?

I can smell a big "actually, Turing can have it too" announcement 6-12 months down the line, when all the poor souls are already paying off the loans they took out to switch to Ada.

Edit: I can also smell AMD developing a similar technology and enabling it for everything above the RX 580.
 