Saturday, July 13th 2019

Intel adds Integer Scaling support to their Graphics lineup

Intel's Lisa Pearce today announced on Twitter that the company has listened to user feedback from Reddit and will add nearest-neighbor integer scaling to its future graphics chips. Integer scaling is the holy grail for gamers using console emulators, because it lets them simply double, triple, or quadruple existing pixels without the loss in sharpness that is inherent to traditional upscaling algorithms like bilinear or bicubic. This approach also avoids the ringing artifacts that come with other, more advanced scaling methods.

In her Twitter video, Lisa explained that this feature will only be available on upcoming Gen 11 graphics and beyond - previous GPUs lack the hardware required to implement integer scaling. In terms of timeline, she mentioned that this will be part of the driver "around end of August", which also puts some constraints on the launch date of Gen 11, which, based on that statement, seems to be coming sooner rather than later.

It is unclear at this time whether the scaling method is truly "integer" or simply "nearest neighbor". While "integer scaling" is nearest neighbor at its core, i.e. it picks the closest pixel color and does no interpolation, the difference is that "integer scaling" uses only integer scale factors. For example, Zelda Breath of the Wild runs at 900p natively, which would require a 2.4x scaling factor for a 4K display. Integer scaling would use a scale factor of x2, resulting in a 1800p image with black borders around it - this is what gamers want. The nearest neighbor image would not have the black bars, but some pixels would be tripled instead of doubled to achieve the 2.4x scaling factor, resulting in a sub-optimal presentation.
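As a rough illustration of the difference, here is a minimal C++ sketch (our own example using the resolutions from above; the variable names are not from Intel) that computes the integer scale factor and the size of the resulting black borders:

#include <algorithm>
#include <cstdio>

int main() {
    // Source (900p) and display (4K UHD) resolutions from the example above
    const int srcW = 1600, srcH = 900;
    const int dstW = 3840, dstH = 2160;

    // Integer scaling: the largest whole-number factor that still fits the display
    const int scale = std::min(dstW / srcW, dstH / srcH); // -> 2

    const int outW = srcW * scale;         // 3200
    const int outH = srcH * scale;         // 1800
    const int borderX = (dstW - outW) / 2; // 320 px of black left and right
    const int borderY = (dstH - outH) / 2; // 180 px of black top and bottom

    std::printf("scale x%d, image %dx%d, borders %d px / %d px\n",
                scale, outW, outH, borderX, borderY);
}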

Update Jul 13: Intel has posted an extensive FAQ on their website, which outlines the details of their Integer Scaling implementation, and we can confirm that it is done correctly - the screenshots clearly show black borders all around the upscaled image, which is exactly what you would expect for scaling with integer scale factors. Intel does provide two modes, called "NN" (Nearest Neighbor) and "IS" (Integer Scaling).
Will Intel implement pure integer scaling with borders?

Yes, the driver being released in late August will provide users with the option to force integer scaling. The IS option will restrict scaling of game images to the greatest possible integer multiplier. The remaining screen area will be occupied by a black border, as mentioned earlier.
Sources: Twitter, FAQ on Intel Website

56 Comments on Intel adds Integer Scaling support to their Graphics lineup

#51
efikkan
I just find it puzzling how this is even news.
While it could be interesting to be able to enable this without application support, implementing this in an application has been possible "forever".
1) You just render your game to a framebuffer of the desired resolution, let's say 320x200.
2) Make another one to fill the screen, let's say 2560x1440.
3) Write a couple of lines of code to calculate your integer scaling factor, in this case 7.
4) Render the first framebuffer to a centered quad (2240x1400) with nearest neighbour texture filtering.
Super easy.
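For what it's worth, a minimal sketch of steps 3 and 4 along those lines, assuming OpenGL 3.0+ and that the finished game image already sits in a framebuffer object. It uses a framebuffer blit with nearest-neighbour filtering instead of a textured quad, which achieves the same result; the names and resolutions are only illustrative:

#include <algorithm>
#include <GL/gl.h> // or the loader of your choice (glad, GLEW, ...)

// Present a low-resolution game framebuffer centered on the screen,
// scaled by the largest integer factor that fits.
void presentIntegerScaled(GLuint gameFbo,
                          int srcW, int srcH,  // e.g. 320 x 200
                          int dstW, int dstH)  // e.g. 2560 x 1440
{
    // Step 3: largest integer factor that fits - 7 in this example
    const int scale = std::min(dstW / srcW, dstH / srcH);
    const int outW = srcW * scale;     // 2240
    const int outH = srcH * scale;     // 1400
    const int x0 = (dstW - outW) / 2;  // center horizontally
    const int y0 = (dstH - outH) / 2;  // center vertically

    // Step 4: copy the game framebuffer into the centered rectangle using
    // nearest-neighbour filtering; the cleared area around it stays black.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, gameFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glBlitFramebuffer(0, 0, srcW, srcH,
                      x0, y0, x0 + outW, y0 + outH,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}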

bug said:
I find it funny this has been blocked by lack of hardware support. Of all the scaling methods, this is the one that barely makes an impact if you implement it in software only.
I don't understand how this has been lacking in hardware support; it has been part of the OpenGL specification for as long as I can remember. But perhaps Intel always implemented it in software?

lexluthermiester said:
I actually prefer Bilinear and Trilinear "scaling" filters as they give a softer blending effect. I've never been a fan of the sharp-edge pixel look. But I digress...
Trilinear will only make things even more blurred.

lexluthermiester said:
Thing is, we never had that BITD. Because of the way TV CRTs worked, there was always a smoothing/blurring/blending effect caused by the way the electron gun beams scanned through the color masks and produced an image. Anyone who says they remember the "blocky" look is kinda fooling themselves, because it just never happened that way. I'm not saying at all that it's wrong to prefer that look, just that we never had it back then because of the physical limitations of the display technology of the time.

So this Integer Scaling thing, while useful to some, isn't all that authentic for many applications.
Integer scaling is better than a blurred stretched image, but as you are saying, it's not a correct representation of how graphics designed for CRTs looked. It's actually one of the things that has annoyed me about the retro indie gaming trend over the past years; those who made these games have either forgotten how old games looked or have only seen them in emulators. I usually call it "fake nostalgia", since most people have a misconception about how old games looked.

CRTs displayed pixels very differently from modern LCD, plasma and OLED displays. There were two main types of CRTs: shadow mask and aperture grille. Shadow masks were the worst in terms of picture quality and used many small dots to make a single "pixel", but had the advantage of "dynamic" resolutions. Aperture grille (Trinitron, as you probably know it) had grids of locked pixels and a much sharper picture. But even for these, pixels blended slightly together. One of the "wonderful" things about CRTs was how the pixels bled slightly at the edges, causing a slight blurring effect, but only on the pixel edges for Trinitrons. So the pixels appeared fairly sharp, not blurred the way they do if you scale a picture in Photoshop.

Additionally, if we're talking about console games, CRT TVs didn't even have square pixels. I don't know the exact ratio, but it was somewhere close to 5:4 or 4.5:4. So if you want to emulate the NES authentically, the image should be slightly stretched.

CRTs had a very good color range, but the precision was not so good. Many graphics artists exploited this to create gradients and backgrounds using dithering:
[Attached image: gradient.png - a dithered gradient]
On most CRTs this would look softer or even completely smooth.
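To illustrate the kind of ordered dithering being described (this is our own toy example, not the attached image), here is a small stand-alone C++ sketch that renders a two-colour gradient with a classic 4x4 Bayer threshold matrix as ASCII art:

#include <cstdio>

// Classic 4x4 Bayer (ordered dither) threshold matrix, values 0..15
static const int bayer4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5},
};

int main() {
    const int W = 32, H = 16;
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Horizontal gradient from 0 to 15, dithered down to two "colours"
            const int shade = x * 16 / W;
            std::putchar(shade > bayer4[y % 4][x % 4] ? '#' : '.');
        }
        std::putchar('\n');
    }
}

Viewed from a distance, or on a CRT that bleeds the pixels together, the dithered pattern reads as a smooth ramp between the two colours.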

And then there is the shading between the scanlines, which varied from CRT to CRT. Some emulators reproduce this effect.
#52
lexluthermiester
efikkan said:
But perhaps Intel always implemented it in software?
Intel didn't actually do anything; it's always been done in software by the devs who wanted the effect.
efikkan said:
Trilinear will only make things even more blurred.
Not really. Trilinear filtering only applies to MIP-mapping and works by applying a refined version of a bilinear filter to a MIP-mapped image.
efikkan said:
I don't know the exact ratio, but somewhere close to 5:4 or 4.5:4.
3:2
#53
efikkan
lexluthermiester said:
Intel didn't do anything actually, it's always been done in software by the devs who wanted the effect.
I don't think you understood me.
Texture filtering is exposed through APIs like OpenGL, Vulkan or Direct3D, but it's up to the driver to choose how to implement it. Textures are usually transformed and sampled by the TMUs in the GPU, so the texture filtering has to be tied to them, because the difference between nearest neighbor and linear filtering is where the texture is sampled. There are also other things to consider, like clamping and wrapping/repeating, which affect texture sampling.
This is done in hardware, at least on Nvidia and AMD GPUs. I know Intel has emulated some things in software in the past, but I would be very surprised if they did texture filtering in software. And it's not like nearest filtering is primarily an "emulator" thing; it's useful whenever you want to render a texture that is not interpolated, like in GUIs, fonts, etc.
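For reference, this is roughly what that looks like on the application side in OpenGL; the texture handle name is just a placeholder, and how the sampling is actually carried out is up to the driver and the TMUs:

#include <GL/gl.h> // any OpenGL loader works; these calls are core functionality

// Configure a texture (e.g. a GUI/font atlas) to be sampled without interpolation.
void useNearestFiltering(GLuint guiFontTexture)
{
    glBindTexture(GL_TEXTURE_2D, guiFontTexture);
    // Filtering mode is per-texture state applied whenever the texture is sampled
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Clamping/wrapping is configured the same way and also affects sampling
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}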

My point is, why all this fuss about nearest neighbor filtering? It's an essential feature for all graphics APIs, and Intel would be pretty stupid not to have it in hardware. I suspect this whole thing is more about technical details being lost in translation between technical staff and PR.

lexluthermiester said:
Not really. Trilinear filtering only applies to MIP-mapping and works by applying a refined version of a bilinear filter to a MIP-mapped image.
A mipmap is a lower resolution version of a texture. When you sample a texture from a lower resolution mipmap level, you effectively get an averaged value for that part of the texture. This is why trilinear filtering gives a more blurred picture than bilinear filtering.
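As a quick reference, the minification filters being discussed map to texture parameters like this in OpenGL (illustrative sketch; the texture also needs generated mipmaps for the trilinear mode to do anything):

#include <GL/gl.h>

// Select how a texture is sampled when it is drawn smaller than its native size.
void setMinificationFilter(GLuint texture, GLenum filter)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter);
}

// Usage (pick one per texture):
//   setMinificationFilter(tex, GL_NEAREST);              // nearest neighbour, no averaging
//   setMinificationFilter(tex, GL_LINEAR);               // bilinear: blends 4 neighbouring texels
//   setMinificationFilter(tex, GL_LINEAR_MIPMAP_LINEAR); // trilinear: bilinear on two mipmap
//                                                        // levels, then blended -> softer result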
#54
lexluthermiester
efikkan said:
My point is, why all this fuzz about nearest neighbor filtering? It's an essential feature for all graphics APIs. Intel would be pretty stupid to not have it in hardware. I suspect this whole thing is more about technical details being lost in translation between technical staff and PR.
You seem to be misunderstanding. "Integer Scaling" is not the same as "nearest neighbor filtering".
#55
efikkan
lexluthermiester said:
You seem to be misunderstanding. "Integer Scaling" is not the same as "nearest neighbor filtering".
I beg your pardon?
A few posts up, I detailed how easy it is to achieve this integer scaling at any resolution, and I assume Intel is talking about a feature which implements this without application support. I kindly suggest reading it again.
My confusion is over why Intel needs to "implement hardware support" - hardware support for what, precisely? The scaling itself is done by nearest neighbor filtering when rendering one framebuffer into another, which is already implemented in all the APIs, so the only thing they need to implement is using it to achieve "integer scaling". That sounds more like something that needs software support rather than hardware support, as this is an "application level" feature.
#56
lexluthermiester
efikkan said:
I beg your pardon?
Um, yes.
efikkan said:
A few posts up, I detailed how easy it is to achieve this integer scaling at any resolution, and I assume Intel is talking about a feature which implements this without application support. I kindly suggest reading it again.
I read it.
efikkan said:
My confusion is over why Intel needs to "implement hardware support" - hardware support for what, precisely? The scaling itself is done by nearest neighbor filtering when rendering one framebuffer into another, which is already implemented in all the APIs, so the only thing they need to implement is using it to achieve "integer scaling". That sounds more like something that needs software support rather than hardware support, as this is an "application level" feature.
That statement clearly demonstrates that you do not understand what is being done and why.