Tuesday, August 20th 2019

NVIDIA Beats Intel to Integer Scaling, but not on all Cards

NVIDIA, with its GeForce 436.02 Gamescom-special drivers, has introduced integer scaling among several new features and performance updates. Integer scaling is a resolution upscaling algorithm that enlarges extremely low-resolution visuals into more eye-pleasing, blocky, pixellated images by multiplying pixels in a "nearest-neighbor" pattern without changing their color. This is in contrast to bilinear upscaling, which blurs the image by attempting to add detail where none exists, altering the colors of the multiplied pixels.
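
For the curious, the nearest-neighbor idea can be sketched in a few lines of code. The NumPy snippet below is purely illustrative (the driver feature runs in hardware, not in Python), and the helper name is our own:

```python
# Minimal sketch of integer (nearest-neighbor) upscaling using NumPy.
# Illustrative only -- NVIDIA's feature is a hardware scaling filter, not this code.
import numpy as np

def integer_upscale(image: np.ndarray, factor: int) -> np.ndarray:
    """Repeat each pixel `factor` times along both axes, preserving its color exactly."""
    # np.repeat duplicates rows, then columns: every source pixel becomes a
    # crisp factor x factor block, with no color blending at all.
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

# Example: a 2x2 "retro" image scaled 3x becomes a sharp 6x6 block pattern.
src = np.array([[0, 255],
                [255, 0]], dtype=np.uint8)
dst = integer_upscale(src, 3)
print(dst.shape)  # (6, 6)
```

Bilinear upscaling, by contrast, interpolates between neighboring source pixels, which is what introduces the new, blended colors and the characteristic blur.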

Intel originally announced an integer upscaler this June that will be exclusive to the company's new Gen11 graphics architecture, since older generations of its iGPUs "lack the hardware requirements" to pull it off. Intel's driver updates that add integer scaling are set to arrive toward the end of this month, and even when they do, only a tiny fraction of Intel hardware will actually benefit from the feature (notebooks and tablets that use "Ice Lake" processors).
NVIDIA's integer upscaling feature has been added only to its "Turing" architecture GPUs (both the RTX 20-series and GTX 16-series), not to older generations. NVIDIA explains that this is thanks to a "hardware-accelerated programmable scaling filter" introduced with "Turing." Why is this a big deal? The gaming community has a newfound love for retro games from the '80s through the '90s, with the growing popularity of emulators and old games being played through DOSBox. Many small indie game studios are responding to this craze with hundreds of new titles adopting a neo-retro pixellated aesthetic (e.g., "Fez").

When scaled up to today's high-resolution displays, many of these games look washed out thanks to bilinear upscaling by Windows. This calls for something like integer upscaling. We're just surprised that NVIDIA and Intel aren't implementing integer upscaling on unsupported GPUs by leveraging programmable shaders. Shader-based upscaling technologies aren't new; the home-theater community is heavily invested in the development of MadVR, a custom video renderer that packs several shader-based upscaling algorithms which only need Direct3D 9.0c-compliant programmable shaders.
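
To illustrate how little work the per-pixel logic involves, here is a rough Python stand-in for what a nearest-neighbor scaling shader would compute for each output pixel (hypothetical helper, not MadVR's or any vendor's actual code):

```python
# Rough sketch of the per-pixel lookup a nearest-neighbor scaling shader performs.
# A real implementation would run this on the GPU, once per output fragment.
def sample_nearest(src, dst_x: int, dst_y: int, scale: int):
    """Return the single source texel a destination pixel maps to under integer scaling."""
    # No blending between texels -- which is exactly why the result stays sharp
    # instead of smearing the way bilinear filtering does.
    return src[dst_y // scale][dst_x // scale]

# Example: a 2x2 source scaled 4x; destination pixel (5, 6) reads source texel (1, 1).
src = [[10, 20],
       [30, 40]]
print(sample_nearest(src, 5, 6, 4))  # 40
```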

Update 12:57 UTC: The NVIDIA 436.02 drivers have now been released and are available for download.

54 Comments on NVIDIA Beats Intel to Integer Scaling, but not on all Cards

#51
las
I CAN'T LIVE without integer scaling.
#52
erocker
Why in the heck is this only enabled on Turing cards? Need to figure out a way around this.
#53
lexluthermiester
erocker: Why in the heck is this only enabled on Turing cards? Need to figure out a way around this.
Based on how it's done, it should be trivial to enable on all cards NVIDIA is still making drivers for.
Chrispy_: The ZSNES HQX option (above) would seem to be really well-suited to desktop UIs - certainly better than integer scaling (which some prefer to the default bilinear scaling).
If a GPU driver gave me an HQX option, I'd switch to it immediately - I'm just not sure what the GPU overhead would be, but you save so much by dropping from 4K rendering to 1080p rendering that I don't think it would matter on any GPU.
I have to say, I still like the smoothed out version. To me it just looks better.
#54
Chrispy_
Yep, there are loads of pixel scaling methods, but I showed ZSNES's HQX filter above because I feel it is the best-suited filter for mixed graphics and text (i.e., games and applications).

In the absence of scaling options, we've had to make do with blurry interpolation for 20 years. Integer scaling is an improvement for people like me who don't like blurring. The real issue is the lack of any options at all; we used to have all these options!

In the late '90s, as LCD displays started replacing CRTs, blurry interpolation filtering was mandatory because the hardware display scalers used simple bicubic resampling to handle non-integer scaling factors like 1024x768 on a 1280x1024 LCD screen. It was a one-size-fits-all solution that did the job adequately for the lowest price, requiring only a single fixed-function chip. 20 years later, we're still tolerating the lame result of this lowest-common-denominator fixed-function scaling, despite the fact that GPUs now have a myriad of optimisations for edge-detection, antialiasing, filtering, post-processing and other image-improving features.
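
To make that mismatch concrete, here is a quick illustrative sketch (editorial example, not anything shipping in a driver) of checking whether a source mode fits a display at a whole-number factor:

```python
# Sketch: decide whether a source resolution fits a display at a whole-number factor
# (suitable for integer scaling) or needs interpolation to fill the screen.
def integer_scale_factor(src_w: int, src_h: int, disp_w: int, disp_h: int):
    """Largest integer factor that still fits on the display, or None if even 1x doesn't fit."""
    factor = min(disp_w // src_w, disp_h // src_h)
    return factor if factor >= 1 else None

# 1024x768 on a 1280x1024 panel: only 1x fits, so filling the screen forces a
# non-integer stretch -- exactly the case the old fixed-function scalers handled with blur.
print(integer_scale_factor(1024, 768, 1280, 1024))  # 1

# 640x480 on a 3840x2160 panel: a clean 4x blow-up (2560x1920) fits with black borders.
print(integer_scale_factor(640, 480, 3840, 2160))   # 4
```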

I prefer this integer-scaled image (currently Turing only)

[screenshot: integer-scaled image]

over this usual blurry display-scaled or GPU-scaled image

[screenshot: bilinear display-scaled/GPU-scaled image]

only because nobody is currently offering this as an option.

The technology to make scaling better has existed for 20 years, yet Intel and Nvidia are willy-waving over the ability to turn off the bad filtering rather than giving us better filtering. Am I the only one who sees this as a dumb move - regression rather than progress?

I would like to see an options list like this; I mean, we already have a myriad of AA options for supersampling - why not apply the techniques developed over the last two decades to subsampling? There's FXAA, SMAA, Temporal AA (TXAA) - and probably more I've forgotten about - that are post-process filters which can be applied to any image. Games already apply these filters to upscaled images - that's what happens when you reduce the 'resolution scale' slider in conjunction with FXAA or TXAA etc.

It can't be too hard to enable that feature outside of games, can it? I mean, it already happens if I use a VR headset to view my Windows desktop and webpages, so it's not a technical challenge; it just needs someone at Intel/AMD/Nvidia to include it in a driver control panel for use outside of gaming rather than as a gaming-only option.