
AMD Patents Provide Early UDNA Insights - "Blackwell-esque" Ray Tracing Performance Could be Achievable

Worse in every feature? Are you sure? Better take your green badge out and show it :) Fanboys will be fanboys forever.
Please name me a feature AMD has done BETTER or introduced FIRST in modern (i.e. DX Ultimate compliant) GPUs. To be clear, I've already acknowledged power efficiency with RDNA 2 (which was provided by a better node, not the architecture), and DP 2.1 support with RDNA 3. Other people have also mentioned Adrenalin being better than Nvidia's software, which I'll acknowledge, even though Nvidia has consolidated to the Nvidia app now.

Upscaling, later and worse. Frame gen, later and worse. RT, later and worse. Compute (AI or standard), later and worse, etc.
 
Please name me a feature AMD has done BETTER or introduced FIRST in modern (i.e. DX Ultimate compliant) GPUs. To be clear, I've already acknowledged power efficiency with RDNA 2 (which was provided by a better node, not the architecture), and DP 2.1 support with RDNA 3. Other people have also mentioned Adrenalin being better than Nvidia's software, which I'll acknowledge, even though Nvidia has consolidated to the Nvidia app now.

Upscaling, later and worse. Frame gen, later and worse. RT, later and worse. Compute (AI or standard), later and worse, etc.
You can add TruForm to the list. Doesn't change the fact that ATI/AMD only shone 3-4 times over the past quarter of a century.
Not saying Nvidia is flawless or anything. But they are the ones who pushed the industry forward, for better or worse.
 
So last gen RT? Is this supposed to be impressive?

UDNA can be interesting, but for me it boils down to software more than hardware.
A single architecture might mean that ROCm and HIP get much-needed attention.
Productivity is AMD's greatest weakness for me; the 9070 XT shouldn't be competing with a 3060 Ti in Blender! I bought a 3060 Ti for 330€ a couple of years ago, and the cheapest 9070 XT I can get is 800€. For me it's just insane.
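For anyone who hasn't touched it: HIP is AMD's CUDA-style C++ API under ROCm, and the gap is mostly tooling and library maturity rather than the programming model itself. A minimal, purely illustrative vector-add sketch (assumes a working ROCm install and hipcc; error checking omitted for brevity):

// Minimal HIP vector add. Illustrative sketch only.
// Build (assuming ROCm is installed): hipcc vadd.cpp -o vadd
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);
    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);
    // Same triple-chevron launch syntax as CUDA, which is HIP's whole point.
    vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]); // expect 3.0
    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}

The API itself is fine; it's the ecosystem on top of it (Blender's Cycles backend, libraries, profilers) that needs the attention.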
 
So last gen RT? Is this supposed to be impressive?
For "I don't skate to where the puck is, I skate to where the puck will be" Wayne Gretzky, no.
For AMD, pretty much.
 
For "I don't skate to where the puck is, I skate to where the puck will be" Wayne Gretzky, no.
For AMD, pretty much.
I completely disagree with you in this case.

As a consumer, I care about the results, not the brand. I'm only impressed when the results warrant it; I might be interested in the progress made from the standpoint of better competition, but that's it.

AMD started RT a gen behind, and next gen they will still be a gen behind. Plus, this gen they didn't make products for every segment, and one of the focuses was RT. That's not impressive in any sense!
Unless you assume that AMD is much more incompetent than Nvidia and this is an impressive feat for them.
If you have a long history with Radeon GPUs, that shouldn't be something you expect; and if it is, that just means they are too far behind Nvidia to be competitive.
 
I genuinely want to know if they're still pushing towards multichip or staying on the monolithic path.
 
I genuinely want to know if they're still pushing towards multichip or staying on the monolithic path.
You can rest assured AMD will always go the cheaper way with TSMC. The 9070 XT is made on TSMC's N4C node, the lower-cost version of 4 nm, whereas Nvidia is on the N4P node: a higher-performance, more complex, and more expensive node.
 
I genuinely want to know if they're still pushing towards multichip or staying on the monolithic path.
If I can't have AMD's MCM for cheaper than Nvidia's monolith, who cares?
 
I genuinely want to know if they're still pushing towards multichip or staying on the monolithic path.

Well, Navi 31 was a learning experience there. It badly underperformed, and even with driver maturity the XTX still can't touch its originally intended target, the 4090. Still, all things considered, it could have been worse; it's just that they ended up giving up on good margins and had to depreciate their own product to keep it viable.

Pre-launch price cuts and the unplanned announcement of "we're targeting their second-tier card instead!" are certainly things AMD does not wish to repeat; they never made such grand claims for the 9070 XT.

I think we'll see chiplets on the GPU again... once AMD is back on their feet. Maybe with big UDNA.
 
Better late than never, AMD

So the PS6 era will become the RT-mainstream era, at long last!

Exciting times, for sure <3
 
If I can't have AMD's MCM for cheaper than Nvidia's monolith, who cares?
Because if it works and scales even reasonably well, it will become cheaper by design: smaller dies have a lower cost, as defects are less impactful to yield on the same process node.
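A quick back-of-the-envelope sketch of that yield argument, using the classic Poisson model Y = exp(-D * A) with an assumed, purely illustrative defect density of 0.1 defects/cm²:

// Yield comparison: one 300 mm^2 die vs one 75 mm^2 chiplet, Poisson model.
// D = 0.1 defects/cm^2 is an assumed, illustrative number, not real fab data.
#include <cmath>
#include <cstdio>

int main() {
    const double D = 0.1;                    // assumed defects per cm^2
    const double mono_area = 300.0 / 100.0;  // 300 mm^2 in cm^2
    const double chip_area = 75.0 / 100.0;   // 75 mm^2 in cm^2

    double y_mono = std::exp(-D * mono_area);  // ~74% of monolithic dies good
    double y_chip = std::exp(-D * chip_area);  // ~93% of chiplets good

    printf("300 mm^2 monolithic die yield: %.1f%%\n", y_mono * 100.0);
    printf("75 mm^2 chiplet yield:         %.1f%%\n", y_chip * 100.0);
    // A defect scraps only one small chiplet instead of a whole big die,
    // and packages get assembled from known-good chiplets after binning.
    return 0;
}

Note the raw silicon yield works out similar either way (needing four good chiplets multiplies out to roughly the monolithic figure); the real saving is that a defect costs you 75 mm² of silicon instead of 300 mm², and you bin before packaging.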

The translation is that it's lower cost to put the same amount of silicon on a package. Performance isn't likely to be linear, though (apples to apples, tech-wise): 4x 75 mm² chiplets are never going to equal a single 300 mm² die, due to inter-chip latency.

However, monolithic hits the wall of process physics faster than chiplets, if for no other reason than heat. For a rough (admittedly imperfect) example: the 5090 is more than twice the size of the 5070 Ti, not twice as fast, but draws double the power (more or less).

Chiplets that can be spaced out even a little make for an easier dispersion of the same amount of wattage/heat.

A few hurdles to get over, but parallelism (imho) is ultimately the way forward, and for now, chiplets are the most likely contender.
 