
AMD-made PlayStation 5 Semi-custom Chip Has Ray-tracing Hardware (not a software solution)

Domus, you know, I looked at the picture and I noticed something funny: tessellation looks better from a distance but worse up close, while POM looks better up close but worse at a distance.
 
Well, sorry, but POM is not a screen space effect. I have a feeling you don't know how these things work.
yeah, um... having released my own game on Steam and written shaders, I'm sure I have no clue at all how they work...

Whereas tessellation generates tons of extra geometry (which is why it has remained prohibitively expensive to this day and will continue to be, as RT probably will), POM works by displacing existing geometry to create depth in textures. It's much faster and indiscernible from tessellation the vast majority of the time.

[Attachment 133704]

As you can see, there are also no artifacts, since it's not a screen space effect. You've probably seen POM countless times in games without even knowing it; maybe you even mistook it for tessellation.



What's absolutely hilarious about this is that you'd be amazed at the lengths developers go to in order to make DXR function in real time, because it never works out of the box; they still need to find hacks to make it feasible.

POM is a screen space effect lol. POM doesn't displace existing geometry; it's a pixel shader that creates an illusion of depth. What you're describing is displacement mapping. Take a flat surface, apply POM to it, and look at it from an angle almost parallel to the surface: all of the depth disappears. It creates an illusion of depth, unlike tessellation. The cleverest use of POM is for bullet impact holes and similar details. Using it for ground textures is not a great idea unless the ground is almost completely flat (i.e. tile floors, wood floors, etc.), since you will still run into artifacting at a bad viewing angle.
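The per-pixel trick described above can be sketched as a texture-coordinate offset driven by a height map. This is the basic parallax mapping idea (per the usual textbook formulation, not any specific engine's shader), written in plain Python instead of shader code for clarity; all names are illustrative:

```python
# Basic parallax mapping: shift the texture lookup along the view
# direction by an amount proportional to the sampled height.
# This runs per-pixel in a fragment shader; no geometry is moved --
# only the UV used for the texture fetch changes.

def parallax_uv(u, v, view_dir, height, scale=0.05):
    """Offset texture coordinates (u, v) based on a height sample.

    view_dir: (x, y, z) view vector in tangent space, z > 0 toward viewer.
    height:   sampled height in [0, 1] from the height map.
    scale:    artist-tuned depth strength.
    """
    vx, vy, vz = view_dir
    # Divide by z: grazing angles produce larger offsets, which is
    # exactly where the classic parallax artifacts come from.
    offset_u = vx / vz * height * scale
    offset_v = vy / vz * height * scale
    return u - offset_u, v - offset_v

# Viewed head-on (view_dir ~ (0, 0, 1)) the offset vanishes:
print(parallax_uv(0.5, 0.5, (0.0, 0.0, 1.0), 1.0))  # (0.5, 0.5)
# At a grazing angle the offset grows, hence the edge artifacts:
print(parallax_uv(0.5, 0.5, (0.7, 0.0, 0.2), 1.0))
```

The division by the view vector's z component is why the illusion breaks down at near-parallel angles, exactly as described above.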

POM definitely has its uses, however if you're trying to use as few hacks as possible, you have to use proper geometry instead of hacks/illusions.
Also, I seriously suggest you drop this smug, arrogant "I'm right, you're wrong" attitude. It makes you look like a total buffoon when you think you're right but aren't.
 
Good thing it's already coming in the next consoles.

Now it's just coming to AMD's GPUs for PC, and finally the bashing of a good technology can be over!
 
This is a test bed for it.
 
so, a texel effect then?

Look at my previous reply. The point is it's not a screen space effect, unlike kernel effects and other screen space magic.
 
yeah, um... having released my own game on Steam and written shaders, I'm sure I have no clue at all how they work...

That's pretty cool but no, unfortunately that doesn't necessarily mean you know how they work.

POM doesn't displace existing geometry,

It doesn't, yes; it has to do with textures. I mentioned that but didn't phrase it in the best way, I guess.

I seriously suggest you drop this smug, arrogant "I'm right, you're wrong attitude". it makes you look like a total buffoon when you think you're right, but aren't.

But here's the problem: you're still actually wrong; it's not a screen space effect, as you so adamantly believe. So it gets rather awkward when you try to call me a buffoon while also proudly claiming to make games (not that it would really matter here, but still). I'd say it's pretty arrogant to assume that just because you've worked with something, you're automatically an authority on anything related to the field. And you clearly aren't, not to the extent you claim, at least.

Your remark is kinda cringy and an obvious contradiction; you can probably understand why.

Here: https://learnopengl.com/Advanced-Lighting/Parallax-Mapping

Nothing is used from screen space to construct anything. It's not a screen space effect, and it generates no (obvious) artifacts if implemented correctly.

Think about this: the Xbox One and PS4 have first-gen GCN, which is known to be horrible at tessellation, yet in most games you can observe that a good amount of surfaces have depth to them. How do you think they do that? It's POM and adjacent methods, mostly. I think the only time tessellation is used is when you need to simulate the deformation of terrain that also accommodates other geometry (walking over snow, mud, etc.), and I've only seen it in a few games like ROTTR and GOW; the rest don't use tessellation for much, it's simply too expensive.

You can either accept that some features will always remain too expensive to be put into practice on a large scale, or insist that they're everywhere. Whatever floats your boat.
 
I haven't seen anything in screenshots or vids that makes me excited for it, but I also haven't seen it in person, so I'll have to reserve final judgement until that day. Until then, whoopdeedoo.
 
Yeah this is why most people, including the most dedicated AMD fans, are waiting for RDNA 2 with RT hardware. 5700XT and the rest of the Navi lineup will become obsolete the moment the PS5 drops next year!
 
There is a big difference between buying a first-generation, arguably overpriced RTX card and hoping that software will pop out of nowhere... and a closed platform like the PS5 that will stay around for years and that devs can optimize for with more ease. Then a good amount of RT games will bleed over from the PS and XB to PC, and eventually you'll have plenty of software where RT doesn't feel tacked on.

I just think RTX is a gimmick like PhysX was. Yeah, both looked kind of nice, but I still don't really care, to be honest. It's the gameplay I care about, which only indie games like Dead Cells, Wizard of Legend, Stardew Valley, Slay the Spire, etc. have been able to satisfy.
 
This says otherwise

Bruh.

It says exactly what I said, that it runs on the shaders as I explained.

There are two options:

- shaders, which means you can use them for anything
- dedicated RT units, which means you can't use them for anything else

That demo uses the shaders.
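Ray tracing on general-purpose shaders basically means running the ray/triangle intersection math on the regular compute units. Here is the standard Möller–Trumbore intersection test, the core arithmetic that either dedicated RT units or ordinary shaders have to evaluate, sketched in plain Python (this illustrates the general algorithm, not CryEngine's actual code):

```python
# Moller-Trumbore ray/triangle intersection. A GPU implementation
# would evaluate this per ray, massively in parallel, whether on
# dedicated RT units or on ordinary compute shaders.

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Return hit distance t along the ray, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    t0 = sub(origin, v0)
    u = dot(t0, p) * inv          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t0, e1)
    v = dot(direction, q) * inv   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv          # distance along the ray
    return t if t > eps else None

# Ray fired straight down -z toward a triangle in the z = 0 plane:
tri = ((-1.0, -1.0, 0.0), (1.0, -1.0, 0.0), (0.0, 1.0, 0.0))
print(intersect((0.0, 0.0, 5.0), (0.0, 0.0, -1.0), *tri))  # 5.0
```

Nothing here needs special hardware; dedicated RT units just evaluate this (plus the acceleration-structure traversal around it) in fixed function instead of burning shader cycles on it.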
 
the fact that you associate ray-tracing with RTX is pretty fucked


I wonder if you said the same about tessellation when it first became available on first-gen cards lol


this guy gets it


I associate it with lower fps actually, that's about it.
 
This says otherwise
Ran on Vega 56
Using shaders is a viable and working approach. The problem is performance. Neon Noir always keeps coming up around this. The demo runs on Vega56 at 1080p 30fps with only reflections done with RT. For comparison, Battlefield 5 with DXR runs about the same fps on a GTX1080 which is roughly as fast a GPU.
 
Looks promising and gives us an idea about Big Navi.
 
Using shaders is a viable and working approach. The problem is performance. Neon Noir always keeps coming up around this. The demo runs on Vega56 at 1080p 30fps with only reflections done with RT. For comparison, Battlefield 5 with DXR runs about the same fps on a GTX1080 which is roughly as fast a GPU.

Yes but it comes without most of the drawbacks of RTX. No hardware on die requirement, meaning more space to pack more cores. No noise is introduced, so you don't need tensor cores to denoise and quality is better.

I would much rather go down this path than eat away die space just for ray tracing and AI. Just look at how big Nvidia's die sizes are.
 
Yes but it comes without most of the drawbacks of RTX. No hardware on die requirement, meaning more space to pack more cores. No noise is introduced, so you don't need tensor cores to denoise and quality is better.

I would much rather go down this path than eat away die space just for ray tracing and AI. Just look at how big Nvidia's die sizes are.
My comparison was deliberate. GTX1080 is not an RTX card and has no hardware on die. It only has DXR implementation in drivers. AMD could have done the same for direct comparison but has not done so for obvious reasons.

Noise is introduced and denoised in Neon Noir; it's just not done on Tensor cores. With regards to RTX and games with RT effects, Nvidia has been strangely quiet about Tensor-core denoising, and several developers have said they used their own denoising. It seems that Tensor cores might not (always) be the best way to do that.

By the way, RT cores add about 3-4% die cost. That is not much.
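The noise in question comes from Monte Carlo sampling: each pixel averages a handful of randomly jittered rays, and the error only shrinks with the square root of the sample count, so halving the noise costs roughly four times the rays. That economics is why every real-time RT pipeline leans on a denoiser. A toy illustration of the effect (not any engine's actual code; the "true" pixel value and noise model are made up):

```python
import random

# Estimate a pixel's brightness by averaging N noisy samples around a
# "true" value of 0.5. The standard error falls like 1/sqrt(N), so
# brute-forcing a clean image is far more expensive than denoising.

def estimate_pixel(n_samples, rng):
    true_value = 0.5
    total = 0.0
    for _ in range(n_samples):
        total += true_value + rng.uniform(-0.5, 0.5)  # noisy sample
    return total / n_samples

rng = random.Random(1)
for n in (1, 16, 256):
    err = abs(estimate_pixel(n, rng) - 0.5)
    print(f"{n:4d} samples -> error {err:.4f}")
```

A denoiser (on Tensor cores or on plain shaders) is just a smarter way to spend that budget: filter a few noisy samples per pixel instead of tracing hundreds.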
 
lel xD don't forget to mention DLSS :)
..."you get blur and you get blur and everyone gets more blur"
This has been discussed time and time again.
Ray tracing will make graphics look more realistic.
Yes, the image will be less sharp in badly lit areas or in the out-of-focus range (as RT can give us realistic depth of field).

Moreover: proper shadows will mean difficulties in seeing some things.

You're trying to criticize Nvidia's implementation, but you're actually just neglecting the whole idea of rendering by ray tracing.
As a result, you won't be pleased by competing solutions either (AMD, Intel, consoles, etc.).

For people who don't care about realism, but expect crisp image, good visibility and max fps (like e-sport, fast-paced shooters etc), ray tracing will only bring troubles. Don't use it.
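On the depth-of-field point above: ray tracers get realistic DoF by jittering ray origins across a lens aperture and aiming every ray through the focal plane, so out-of-focus geometry genuinely blurs instead of being faked in post. A minimal thin-lens sketch (illustrative names and conventions, not any engine's API):

```python
import math
import random

# Thin-lens camera: pick a random point on the lens disc, then aim the
# ray so that all rays for this pixel converge at the focal plane.
# Objects at focal_dist render sharp; everything else blurs naturally.

def lens_ray(pixel_dir, aperture, focal_dist, rng):
    """pixel_dir: normalized (x, y, z) pinhole ray direction, z > 0."""
    # Focus point: where the pinhole ray crosses the plane z = focal_dist.
    t = focal_dist / pixel_dir[2]
    focus = tuple(c * t for c in pixel_dir)
    # Random origin on the unit disc (rejection sampling), scaled by aperture.
    while True:
        lx, ly = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if lx * lx + ly * ly <= 1.0:
            break
    origin = (lx * aperture, ly * aperture, 0.0)
    d = tuple(f - o for f, o in zip(focus, origin))
    norm = math.sqrt(sum(c * c for c in d))
    return origin, tuple(c / norm for c in d)

rng = random.Random(7)
origin, direction = lens_ray((0.0, 0.0, 1.0), aperture=0.1, focal_dist=5.0, rng=rng)
# Every such ray passes through the focus point (0, 0, 5):
t = (5.0 - origin[2]) / direction[2]
print(tuple(o + t * c for o, c in zip(origin, direction)))  # ~ (0.0, 0.0, 5.0)
```

Averaging many such rays per pixel is what produces the soft, physically plausible blur; it also produces exactly the sampling noise discussed earlier.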
 
That's pretty cool but no, unfortunately that doesn't necessarily mean you know how they work.



It doesn't, yes; it has to do with textures. I mentioned that but didn't phrase it in the best way, I guess.



But here's the problem: you're still actually wrong; it's not a screen space effect, as you so adamantly believe. So it gets rather awkward when you try to call me a buffoon while also proudly claiming to make games (not that it would really matter here, but still). I'd say it's pretty arrogant to assume that just because you've worked with something, you're automatically an authority on anything related to the field. And you clearly aren't, not to the extent you claim, at least.

Your remark is kinda cringy and an obvious contradiction; you can probably understand why.

Here: https://learnopengl.com/Advanced-Lighting/Parallax-Mapping

Nothing is used from screen space to construct anything. It's not a screen space effect, and it generates no (obvious) artifacts if implemented correctly.

Think about this: the Xbox One and PS4 have first-gen GCN, which is known to be horrible at tessellation, yet in most games you can observe that a good amount of surfaces have depth to them. How do you think they do that? It's POM and adjacent methods, mostly. I think the only time tessellation is used is when you need to simulate the deformation of terrain that also accommodates other geometry (walking over snow, mud, etc.), and I've only seen it in a few games like ROTTR and GOW; the rest don't use tessellation for much, it's simply too expensive.

You can either accept that some features will always remain too expensive to be put into practice on a large scale, or insist that they're everywhere. Whatever floats your boat.
I really wish you would stop talking to me like I have no idea what I'm talking about, and that I can't separate POM from tessellation.
 
Then a good amount of RT games will bleed over from the PS and XB to PC and eventually you'll have plenty of software where RT doesn't feel tacked-on.
They could bleed over, assuming Nvidia adopts or finds a way to make the AMD/Sony implementation work on their hardware.
AMD simply does not have the market share to set de facto standards on PC.
 
I really wish you would stop talking to me like I have no idea what I'm talking about

Unfortunately, you didn't do a very good job of portraying yourself as someone who does. I did my best not to be condescending, but there is only so much I can do when you're calling me a buffoon or whatnot.

AMD simply does not have the market share to set de facto standards on PC.

And Nvidia can? I'll remind you that even with their 80% market share, or whatever it is, they still struggle to make RTX a thing.
 
And Nvidia can? I'll remind you that even with their 80% market share, or whatever it is, they still struggle to make RTX a thing.
?
RTX is their solution to RTRT. Making RTX a standard was never a goal. It's a proprietary technology.
Initially so many people were skeptical, and AMD was distancing itself from the whole idea.
One year later RTRT is the hottest topic in game graphics. Everyone is working on an answer to RTX.
How is that not a success?

If you don't agree about Nvidia's huge role in bringing RTRT to the masses, you should also not praise AMD for boosting core counts. Clearly, Intel uses different cores. ;-)

And BTW: CUDA?
 
?
RTX is their solution to RTRT. Making RTX a standard was never a goal. It's a proprietary technology.
Initially so many people were skeptical, and AMD was distancing itself from the whole idea.
One year later RTRT is the hottest topic in game graphics. Everyone is working on an answer to RTX.
How is that not a success?

If you don't agree about Nvidia's huge role in bringing RTRT to the masses, you should also not praise AMD for boosting core counts. Clearly, Intel uses different cores. ;-)

And BTW: CUDA?

Maybe Nvidia should bring back PhysX for old times' sake. Kappa
 