
NVIDIA Releases VRWorks SDK Update for "Pascal"

btarunr

Editor & Senior Moderator
NVIDIA today released a major update to its VRWorks SDK that enables game developers to implement new VR features introduced by the GeForce "Pascal" graphics processors, taking advantage of the new Simultaneous Multi-Projection Engine (SMP). The two major features introduced are Lens-Matched Shading and Single-Pass Stereo.

Lens-Matched Shading uses SMP to provide substantial performance improvements in pixel shading. The feature improves upon Multi-Res Shading by rendering to a surface that more closely approximates the lens-corrected image output to the headset display. This avoids the performance cost of rendering many pixels that are discarded during the VR lens-warp post-process. Single-Pass Stereo, on the other hand, removes the need for the GPU to render the geometry and tessellation of a 3D scene twice (once for each eye/viewport), and lets both viewports share one pass of geometry and tessellation, thereby halving the tessellation and vertex-shading workload.
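As a rough back-of-envelope sketch of the arithmetic involved (this is not VRWorks API code; the vertex count, per-eye resolution, and the fraction of pixels discarded by the lens warp are all illustrative assumptions), the following compares the geometry work of two-pass versus single-pass stereo and the pixel counts with and without a lens-matched render surface:

```cpp
// Back-of-envelope sketch, not VRWorks API code. All numbers are assumptions.
#include <cstdio>

int main() {
    // Hypothetical scene and headset parameters (illustrative only).
    const long long verticesPerFrame = 2000000;  // vertices submitted per frame
    const long long eyeWidth  = 1512;            // per-eye render target width
    const long long eyeHeight = 1680;            // per-eye render target height

    // Traditional stereo: geometry and tessellation run once per eye.
    long long twoPassVertexWork = 2 * verticesPerFrame;

    // Single-Pass Stereo: geometry and tessellation run once; SMP projects the
    // result to both viewports, so vertex-shading work is roughly halved.
    long long singlePassVertexWork = verticesPerFrame;

    // Lens-Matched Shading: assume (illustratively) that ~25% of a flat,
    // full-resolution eye buffer is discarded by the lens-warp post-process.
    double discardedFraction = 0.25;
    long long flatPixels        = 2 * eyeWidth * eyeHeight;
    long long lensMatchedPixels =
        (long long)(flatPixels * (1.0 - discardedFraction));

    std::printf("Vertex work  : %lld (two-pass) vs %lld (single-pass)\n",
                twoPassVertexWork, singlePassVertexWork);
    std::printf("Pixels shaded: %lld (flat) vs ~%lld (lens-matched)\n",
                flatPixels, lensMatchedPixels);
    return 0;
}
```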



View at TechPowerUp Main Site
 
They make it sound like multiple viewports are free. They just aren't. You still have to calculate each different angle, and that isn't free. But it is a lot cheaper than fully calculating two separate viewports in parallel.

What they are basically saying is that before, you were effectively running two instances of a game in parallel, each at a different angle (viewport), to render the 3D image for VR. Now you're running just one instance of the game, so to speak, and calculating the pixels for the two separate viewports within that one instance.

This means you still have to calculate the same number of pixels as before, but you've roughly halved the vertex and geometry work. It won't cut the total load in half compared to the old method, but it would certainly help the GPU and memory breathe easier.
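To put a number on that "less than half" point, here is a tiny Amdahl-style sketch (the 30/70 split of frame time between geometry and pixel work is purely an assumption for illustration, not benchmark data):

```cpp
// Illustrative sketch: halving only the geometry share of a frame
// saves much less than half of the total frame cost.
#include <cstdio>

int main() {
    // Hypothetical split of GPU frame time (assumption, not measured).
    double geometryShare = 0.30;  // vertex/tessellation work
    double pixelShare    = 0.70;  // pixel shading and everything else

    // Single-Pass Stereo roughly halves only the geometry portion.
    double newFrameCost = geometryShare * 0.5 + pixelShare;

    std::printf("Relative frame cost: %.2f of the original (~%.0f%% saving)\n",
                newFrameCost, (1.0 - newFrameCost) * 100.0);
    return 0;
}
```

With those assumed numbers the frame costs about 85% of what it did before, i.e. a ~15% saving rather than 50%.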
 