
Intel adds Integer Scaling support to their Graphics lineup

It's not entirely authentic, but it's as close to the original as possible.
When you display 9 or 16 pixels instead of one, any interpolation method will be way softer than the original.
Like you said, you prefer a softer look. It's just that others prefer it the other way around (imagine that :D )

Try Topaz Gigapixel AI for upscaling; you will be amazed at what it can do. Obviously this tech is years away from use in graphics cards, but you can achieve amazing results.
 
I just find it puzzling how this is even news.
While it could be interesting to be able to enable this without application support, implementing this in an application has been possible "forever".
1) You just render your game to a framebuffer of the desired resolution, let's say 320x200.
2) Make another one to fill the screen, let's say 2560x1440.
3) Write a couple of lines of code to calculate your integer scale factor, in this case 7.
4) Render the first framebuffer to a centered quad (2240x1400) with nearest-neighbour texture filtering.
Super easy.
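Roughly like this, as a minimal sketch in C/OpenGL (the function name, the game_fbo handle and the use of glBlitFramebuffer are my own placeholders; it's just one way to do the nearest-neighbour copy, not anything Intel specified):

#include <GL/glew.h>   /* or any other OpenGL 3.0+ function loader */

/* Blit a low-resolution framebuffer to the screen at an integer multiple
 * using nearest-neighbour filtering. `game_fbo` is assumed to already hold
 * the rendered 320x200 frame; all names here are placeholders. */
void present_integer_scaled(GLuint game_fbo,
                            int game_w, int game_h,      /* e.g. 320x200   */
                            int screen_w, int screen_h)  /* e.g. 2560x1440 */
{
    /* Largest whole-number factor that still fits the screen (7 here). */
    int sx = screen_w / game_w;
    int sy = screen_h / game_h;
    int scale = sx < sy ? sx : sy;
    if (scale < 1)
        scale = 1;

    int dst_w = game_w * scale;        /* 2240 */
    int dst_h = game_h * scale;        /* 1400 */
    int x0 = (screen_w - dst_w) / 2;   /* centre horizontally */
    int y0 = (screen_h - dst_h) / 2;   /* centre vertically   */

    glBindFramebuffer(GL_READ_FRAMEBUFFER, game_fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); /* default framebuffer */
    glBlitFramebuffer(0, 0, game_w, game_h,
                      x0, y0, x0 + dst_w, y0 + dst_h,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}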

I find it funny this has been blocked by lack of hardware support. Of all the scaling methods, this is the one that barely makes an impact if you implement it in software only.
I don't understand how this has been lacking hardware support; it has been part of the OpenGL specification for as long as I can remember. But perhaps Intel always implemented it in software?

I actually prefer Bilinear and Trilinear "scaling" filters as they give a softer blending effect. I've never been a fan of the sharp-edge pixel look. But I digress...
Trilinear will only make things even more blurred.

Thing is, we never had that BITD. Because of the way TV CRTs worked, there was always a smoothing/blurring/blending effect caused by the way the electron gun beams scanned through the color masks and produced an image. Anyone who says they remember the "blocky" look is kinda fooling themselves, because it just never happened that way. I'm not saying at all that it's wrong to prefer that look, just that we never had it back then because of the physical limitations of the display technology of the time.

So this Integer Scaling thing, while useful to some, isn't all that authentic for many applications.
Integer scaling is better than a blurred stretched image, but as you are saying, it's not a correct representation of how graphics designed for CRTs looked. It's actually one of the things that has annoyed me with the retro indie gaming trend over the past years: those who made these games have either forgotten how old games looked, or have only seen them in emulators. I usually call it "fake nostalgia", since most people have a misconception about how old games looked.

CRTs displayed pixels very differently from modern LCD, plasma and OLED displays. There were two main types of CRTs: shadow mask and aperture grille. Shadow masks were the worst in terms of picture quality and used many small dots to make a single "pixel", but had the advantage of "dynamic" resolutions. Aperture grille (Trinitron/Diamondtron, as you probably know it) had grids of locked pixels and a much sharper picture. But even on these, pixels blended slightly together. One of the "wonderful" things about CRTs was how the pixels bled slightly at the edges, causing a slight blurring effect, but only at the pixel edges on Trinitrons. So the pixels appeared fairly sharp, not blurred as if you had scaled up a picture in Photoshop.

Additionally, if we're talking about console games, CRT TVs didn't even have square pixels. I don't know the exact ratio, but somewhere close to 5:4 or 4.5:4. So if you want to emulate the NES authentically, the picture should be slightly stretched.

CRTs have a very good color range, but the precision was not so good. Many graphics artists exploited this to create gradients and backgrounds using dithering:
(attached image: gradient.png)
On most CRTs this would look softer or even completely smooth.
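For illustration, here is a rough sketch of the ordered-dithering trick in C (the 2x2 Bayer matrix and the 1-bit output are just a minimal textbook example, not how any particular game did it):

#include <stdint.h>

/* 2x2 ordered (Bayer) threshold matrix; a standard textbook example. */
static const int bayer2[2][2] = { { 0, 2 },
                                  { 3, 1 } };

/* Quantise an 8-bit grey value at pixel (x, y) to pure black or white.
 * Flat gradients turn into checker-like patterns whose average matches
 * the original shade, which a blurry CRT blends back together. */
uint8_t dither_to_1bit(uint8_t grey, int x, int y)
{
    int threshold = (bayer2[y & 1][x & 1] * 255) / 4;
    return (uint8_t)(grey > threshold ? 255 : 0);
}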

And then there is the shading between the scanlines, but this varied from CRT to CRT. There are some emulators which emulate this effect.
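A crude version of such a scanline effect could look something like this (the RGBA8 buffer layout and the 50% darkening factor are assumptions for illustration, not what any specific emulator does):

#include <stddef.h>
#include <stdint.h>

/* Darken every second row of an already-upscaled RGBA8 framebuffer. */
void apply_scanlines(uint8_t *pixels, int width, int height)
{
    for (int y = 1; y < height; y += 2) {
        uint8_t *row = pixels + (size_t)y * (size_t)width * 4;
        for (int x = 0; x < width; x++) {
            row[x * 4 + 0] /= 2;  /* R */
            row[x * 4 + 1] /= 2;  /* G */
            row[x * 4 + 2] /= 2;  /* B, alpha left untouched */
        }
    }
}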
 
But perhaps Intel always implemented it in software?
Intel didn't do anything actually, it's always been done in software by the devs who wanted the effect.
Trilinear will only make things even more blurred.
Not really. Trilinear filtering only applies to MIP-mapping and works by applying a refined version of a bilinear filter to a MIP-mapped image.
I don't know the exact ratio, but somewhere close to 5:4 or 4.5:4.
3:2
 
Intel didn't do anything actually, it's always been done in software by the devs who wanted the effect.
I don't think you understood me.
Texture filtering is exposed through APIs like OpenGL, Vulkan or Direct3D, but it's up to the driver to choose how to implement it. Textures are usually transformed and sampled by the TMUs in the GPU, so texture filtering has to be tied to them, because the difference between nearest-neighbor and linear filtering is where the texture is sampled. There are also other things to consider, like clamping and wrapping/repeating, which affect texture sampling.
This is done in hardware, at least for Nvidia and AMD GPUs. I know that Intel has emulated some things in software in the past, but I would be very surprised if they did texture filtering in software. And it's not like nearest filtering is primarily an "emulator" thing; it's useful whenever you want to render a texture which is not interpolated, like in GUIs, fonts, etc.
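For reference, this is roughly how an application asks for unfiltered sampling and clamping through OpenGL; the driver then maps this state onto the TMUs (the function name and the `tex` handle are placeholders):

#include <GL/glew.h>   /* or any other OpenGL function loader */

/* Request nearest (unfiltered) sampling and edge clamping for an existing
 * 2D texture object `tex`. */
void use_unfiltered_sampling(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Pick the single nearest texel instead of interpolating four. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* Clamp so samples at the borders never wrap to the opposite edge. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}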

My point is, why all this fuss about nearest neighbor filtering? It's an essential feature for all graphics APIs. Intel would be pretty stupid not to have it in hardware. I suspect this whole thing is more about technical details being lost in translation between technical staff and PR.

Not really. Trilinear filtering only applies to MIP-mapping and works by applying a refined version of a bilinear filter to a MIP-mapped image.
A mipmap is a lower-resolution copy of a texture. When you sample the texture from a lower-resolution mip level, you effectively get an averaged value for that part of the texture. This is why trilinear filtering gives a more blurred picture than bilinear filtering.
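To illustrate the difference being discussed, here is a small sketch of the relevant filter modes (placeholder names; assumes a GL 3.0+ loader such as GLEW and an already-uploaded texture):

#include <GL/glew.h>   /* or any other OpenGL 3.0+ function loader */

/* Switch an existing texture `tex` between bilinear (sample one mip level)
 * and trilinear (blend between two mip levels) minification; the extra
 * blend between levels is where the additional softening comes from. */
void set_minification_filter(GLuint tex, int trilinear)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glGenerateMipmap(GL_TEXTURE_2D);  /* build the lower-resolution levels */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    trilinear ? GL_LINEAR_MIPMAP_LINEAR
                              : GL_LINEAR_MIPMAP_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}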
 
My point is, why all this fuss about nearest neighbor filtering? It's an essential feature for all graphics APIs. Intel would be pretty stupid not to have it in hardware. I suspect this whole thing is more about technical details being lost in translation between technical staff and PR.
You seem to be misunderstanding. "Integer Scaling" is not the same as "nearest neighbor filtering".
 
You seem to be misunderstanding. "Integer Scaling" is not the same as "nearest neighbor filtering".
I beg your pardon?
A few posts up I detailed how easy it is to achieve this integer scaling with any resolution, and I assume Intel is talking about a feature which implements this without application support. I kindly suggest reading it again.
My confusion is why Intel needs to "implement hardware support". Implement hardware support for what, precisely? The scaling itself is done by nearest neighbor filtering when rendering one framebuffer into another, which is already implemented in all the APIs, so the only thing they need to implement is the usage of it to achieve "integer scaling". That sounds more like something that needs software support rather than hardware support, as this is an "application level" feature.
 
I beg your pardon?
Um, yes.
A few posts up I detailed how easy it is to achieve this integer scaling with any resolution, and I assume Intel is talking about a feature which implements this without application support. I kindly suggest reading it again.
I read it.
My confusion is why Intel needs to "implement hardware support". Implement hardware support for what, precisely? The scaling itself is done by nearest neighbor filtering when rendering one framebuffer into another, which is already implemented in all the APIs, so the only thing they need to implement is the usage of it to achieve "integer scaling". That sounds more like something that needs software support rather than hardware support, as this is an "application level" feature.
That statement clearly demonstrates that you do not understand what is being done and why.
 