
Marseille Commercializes the mCable Gaming Edition: HDMI-embedded Anti-Aliasing

It's not scaling. That would affect ALL edges. We can clearly see it's doing some sort of edge detection, because some things are hardly filtered at all, which shows problems detecting edges. That's a typical problem with post-processing methods (like FXAA, MLAA and SMAA) that have no depth information (Z-buffer) and work purely on the 2D output image.
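The kind of contrast-driven, depth-free pass described above can be sketched in a few lines. This is only an illustration of the general idea behind FXAA/MLAA-style filters, not the cable's actual algorithm; the function name and threshold are made up for the example.

```python
# Sketch of a post-process AA pass: detect edges from luminance contrast
# in the final 2D image (no depth buffer available), then blend only the
# pixels flagged as edges. Threshold and blend weights are illustrative.

def edge_blend_aa(img, threshold=0.1):
    """img: 2D list of luminance floats in [0,1]. Returns a filtered copy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            # local contrast: max difference to the 4 neighbours
            contrast = max(abs(c - img[y - 1][x]), abs(c - img[y + 1][x]),
                           abs(c - img[y][x - 1]), abs(c - img[y][x + 1]))
            if contrast > threshold:  # edge detected -> blend with neighbours
                out[y][x] = (c + img[y - 1][x] + img[y + 1][x]
                             + img[y][x - 1] + img[y][x + 1]) / 5.0
    return out

# A hard vertical edge gets softened; flat areas are left untouched.
stair = [[0.0] * 3 + [1.0] * 3 for _ in range(4)]
smoothed = edge_blend_aa(stair)
```

Note how flat regions pass through unchanged, which matches the observation that some areas are "hardly filtered": anywhere the contrast heuristic fails, no smoothing happens.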

It would be interesting to hook up a PlayStation 2 through a PS2-to-HDMI adapter using this cable. The PS2 outputs at a low resolution, so this could help quite a lot.

Or, ya know, just run a damn PS2 emulator with AA on. Save yourself $100 too.
 
Actually, the answer is fairly simple: is this AA only applied to 3D games, or is it used all the time, even in 2D scenarios like movies? If it's used all the time, then it's naturally using supersampling to double (triple?) the resolution and then scale it down. That would be good, since it would also make movies look a little better (in theory).
If the AA only works in 3D games, then obviously it's just a filter applied to the final image, something like SMAA or FXAA, but I doubt it tbh...

Supersampling works because the sampler (renderer) can generate more data (pixels) to be used for the process. Anything beyond the final framebuffer (heck, perhaps even earlier than that) cannot do that data over-generation. It can extrapolate extra data, but that would be pointless, since all that would do is blur the image further.

The edge detection/sharpening theory sounds the most plausible.
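The point above can be made concrete with a toy renderer: the extra samples exist only because the renderer is asked to produce them. This is a minimal sketch with an invented `shade` function standing in for any scene; a device sitting after the framebuffer never gets to call it.

```python
# Sketch of supersampling: shade ss*ss subsamples per output pixel and
# box-filter them down. The intermediate edge values come from extra
# renderer invocations, not from the finished image.

def shade(x, y):
    # toy "scene": white where x > y (a diagonal edge), black elsewhere
    return 1.0 if x > y else 0.0

def render(width, height, ss=1):
    """Render with ss*ss subsamples per pixel, averaged down."""
    img = []
    for py in range(height):
        row = []
        for px in range(width):
            acc = 0.0
            for sy in range(ss):
                for sx in range(ss):
                    # sample positions inside this pixel's footprint
                    acc += shade(px + (sx + 0.5) / ss, py + (sy + 0.5) / ss)
            row.append(acc / (ss * ss))
        img.append(row)
    return img

aliased = render(4, 4, ss=1)       # hard 0/1 staircase along the diagonal
supersampled = render(4, 4, ss=2)  # diagonal pixels get intermediate values
```

With `ss=1` every pixel is purely black or white; with `ss=2` the pixels crossing the diagonal average to fractional values, which is exactly the information a cable operating on the finished 1080p frame cannot reconstruct.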
 
In this case I meant that the small chip is actually re-sampling the video stream up to 4K, for example, and then outputting it at 1080p or the original source resolution, using very little processing power and adding very little delay.
 

That's the thing: you can't generate true 4K video out of an already rendered 1080p one. What you can do is guess the extra pixels, but any method you use for that (save for straightforward nearest-neighbour, which would be redundant here) introduces blurriness that absolutely kills detail in the picture, even before we get to downscaling it back to the display resolution.
We are talking about chaining two lossy processes for negative gains.
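A toy 1-D round trip shows the lossy chain in action. The upscale/downscale helpers here are deliberately simple stand-ins (linear interpolation up, box filter down), invented for the example, not a claim about what the chip does.

```python
# Toy illustration of the lossy chain: linearly interpolate a signal up
# 2x (guessing the missing samples), then box-filter it back down. The
# hard edge comes back softened; the round trip cannot add detail.

def upscale_linear(sig):
    """Double the sample count by interpolating between neighbours."""
    out = []
    for i in range(len(sig) - 1):
        out.append(sig[i])
        out.append((sig[i] + sig[i + 1]) / 2.0)  # interpolated guess
    out.append(sig[-1])
    out.append(sig[-1])  # pad to exactly 2x length
    return out

def downscale_box(sig):
    """Halve the sample count by averaging adjacent pairs."""
    return [(sig[i] + sig[i + 1]) / 2.0 for i in range(0, len(sig), 2)]

edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]  # a perfectly sharp edge
round_trip = downscale_box(upscale_linear(edge))
# the step is now smeared across an extra intermediate sample
```

The single-sample step in `edge` comes back with a 0.25 transition value in it: two individually reasonable resampling steps, chained, have only removed sharpness.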

Oddly enough, Linus just posted a review on this cable yesterday. It's not snake oil!

Whether it's worth the $150 is of course up to you guys.

I think the question we should be asking here is whether whatever technique is used in this cable could be implemented in software, or even incorporated directly into drivers, in a FreeSync/G-Sync kind of scenario.

Very informative video, though. The cable does remove the jaggies well, but on the other hand, it amplifies temporal aliasing. And boy, does it screw up colours!
 
Given the very small amount of change I see in the higher-resolution Hitman comparison (in all fairness, I'm not really seeing it even when comparing pixels fully zoomed in), I get the impression this is mostly valuable for really low-res source material, such as 480p and 576i.

It really was too good to be true, I guess. It looks like the processing is not true AA, but just a rounding method for jaggies, or to put it bluntly, a very expensive edge-blur filter.

There's also a noticeable contrast increase there.
Can't see it? Maybe you need glasses, no offence. There's definitely a difference in the Hitman comparisons. Look again.
 

Well, my comment reflects what Linus could see too: a heavily reduced effect at higher resolutions, and the greatest benefits at 720p and lower. So I think I'm fine ^^

Sure, you could probably find a set of pixels on the screenshot that are actually different, but to call it vastly improved or even 'visibly improved' in the case of Hitman, no. Look at the curb of the street in the back, for example. The most obvious jaggies, and the most disruptive ones, remain.

But yes, with 3-4x zoom on top of the monitor, it's distinguishable. I just don't watch my content that way. It's still impressive, don't get me wrong.
 

It's great for people with a PS3, Xbox 360, Xbox One or PS4 running on a 1080p TV with content that isn't true 1080p.
 