Thursday, September 22nd 2022

NVIDIA Adds AI Frame Generation Capability to Video Encoding and Decoding, Increasing Frame-Rates of Videos

The defining feature of DLSS 3 is AI frame generation: the ability of NVIDIA GeForce RTX 40-series "Ada" GPUs to predict the frame that follows one rendered by the GPU, and to generate that frame without any involvement of the graphics rendering pipeline. NVIDIA is bringing this concept to video encoding, too, letting you increase the frame-rate of your videos through the "magic" of frame generation. The Optical Flow Accelerator (NVOFA) in Ada GPUs can apply the same optical-flow logic to video that it applies to graphics rendering: predict the next frame, and raise the frame-rate by generating that frame with AI. NVIDIA refers to this as Engine-assisted Frame-rate Up Conversion (FRUC).
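To make the idea concrete, here is a minimal, hypothetical sketch of optical-flow frame interpolation in Python. It uses OpenCV's CPU-based Farneback estimator purely as a stand-in for the hardware NVOFA; it is not NVIDIA's FRUC library or its API, and the helper name is our own.

```python
import cv2
import numpy as np

def interpolate_midpoint(frame_a, frame_b):
    """Synthesize a frame halfway between two real frames (hypothetical helper)."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Dense motion vectors mapping frame A onto frame B (stand-in for NVOFA).
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_y, grid_x = np.mgrid[0:h, 0:w].astype(np.float32)
    # Pull frame A forward and frame B backward by half the motion,
    # then blend -- a crude approximation of motion-compensated interpolation.
    mid_a = cv2.remap(frame_a, grid_x - 0.5 * flow[..., 0],
                      grid_y - 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    mid_b = cv2.remap(frame_b, grid_x + 0.5 * flow[..., 0],
                      grid_y + 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    return cv2.addWeighted(mid_a, 0.5, mid_b, 0.5, 0)
```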

There's more to FRUC than the "smooth motion" feature built into your TV: NVENC compares two real frames from a video, determines motion vectors, and sets up an optical-flow stage, so the generated frames interpolated between real frames are accurate. NVIDIA will release FRUC as a library, so it can be integrated with popular content-creation and media-consumption applications on NVIDIA Ada GPUs. This lets creators with Ada GPUs produce higher frame-rate videos, and lets viewers with Ada GPUs consume media at higher frame-rates.
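For the content-creation use case, here is a hedged sketch of how an application might double a video's frame-rate offline, reusing the hypothetical interpolate_midpoint helper from the sketch above. The real FRUC library's interface will differ; this only illustrates the real-frame/generated-frame interleaving.

```python
def double_frame_rate(src_path, dst_path):
    """Write a copy of the video at twice the frame-rate (illustrative only)."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps * 2, (w, h))
    ok, prev = cap.read()
    while ok:
        ok, nxt = cap.read()
        out.write(prev)  # real frame
        if ok:
            out.write(interpolate_midpoint(prev, nxt))  # generated frame
            prev = nxt
    cap.release()
    out.release()
```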

A video presentation by NVIDIA on the video encoding features of Ada follows.

Source: NVIDIA

13 Comments on NVIDIA Adds AI Frame Generation Capability to Video Encoding and Decoding, Increasing Frame-Rates of Videos

#1
ZoneDymo
actually sounds pretty cool
#2
konga
This is neat, but also if the reason DLSS Frame Generation can't work with Ampere and Turing is because those GPUs don't have powerful enough optical flow engines for real-time frame generation, then what's the excuse for them not supporting it in offline video encoding either? It doesn't need to be real-time.
#3
Chomiq
Can't wait to watch Citizen Kane in 120fps :roll:
#4
Unregistered
konga: This is neat, but also if the reason DLSS Frame Generation can't work with Ampere and Turing is because those GPUs don't have powerful enough optical flow engines for real-time frame generation, then what's the excuse for them not supporting it in offline video encoding either? It doesn't need to be real-time.
The whole reason is to force people to buy RTX 4xxx. People are as blind towards NVIDIA as they are towards Apple; the only difference is that NVIDIA's products are both overpriced and good.
#5
dgianstefani
TPU Proofreader
Xex360: The whole reason is to force people to buy RTX 4xxx. People are as blind towards NVIDIA as they are towards Apple; the only difference is that NVIDIA's products are both overpriced and good.
I suspect the market for people who would reencode a saved video before watching it locally is rather small.

The market for real-time reencoding, however, is much larger. Since the previous generation can't do real-time at all, that's probably the reason, along with obviously marketing RTX 4xxx.
#6
Arco
dgianstefani: I suspect the market for people who would reencode a saved video before watching it locally is rather small.

The market for real-time reencoding, however, is much larger. Since the previous generation can't do real-time at all, that's probably the reason, along with obviously marketing RTX 4xxx.
Yeah, video makers might find this useful. I personally use Dainapp for getting rid of lag spikes in footage.
#7
ZoneDymo
Chomiq: Can't wait to watch Citizen Kane in 120fps :roll:
Screw that noise, Battleship Potemkin it is!
#8
mouacyk
SVP + RIFE or optical flow. Supports back to Turing.
#9
Asni
I guess Ang Lee is now moving to 240fps.
#10
OneMoar
There is Always Moar
This is not new; projects like SVP and RIFE have been doing this for years with varying results. RIFE can use Vulkan or CUDA, but I don't think it's as fast as what NVIDIA is promising.
Also, trying to use this in a multiplayer environment is insanity.
#11
lightofhonor
So that feature on TVs that every movie director and cinephile tells you to disable is now on NVIDIA GPUs. Neat.
#12
Mistral
So, essentially it's the "soap opera effect" on steroids?

Curious: how many of you have been leaving the image interpolation function on your TVs for the past decade?
#13
AusWolf
konga: This is neat, but also if the reason DLSS Frame Generation can't work with Ampere and Turing is because those GPUs don't have powerful enough optical flow engines for real-time frame generation, then what's the excuse for them not supporting it in offline video encoding either? It doesn't need to be real-time.
Also, what does "not powerful enough" mean? Not powerful enough to render a 3D scene? Fine, then give us the support in 2D video decoding. If Turing's OFA is too weak for that too, then why does it exist?

I can smell a big "actually, Turing can have it too" announcement 6-12 months down the line, when all the poor souls are already paying off the loans they took out to switch to Ada.

Edit: I can also smell AMD developing a similar technology and enabling it for everything above the RX 580.