
AMD "Navi 31" Memory Cache Die Has Preparation for 3D Vertical Cache?

In case you forgot, AMD is still on top in terms of performance/price and Nvidia at the bottom, so this is complete nonsense.


Such as?


They have HIP and OpenCL. HIP is not supported on Windows, but I doubt this is relevant to the vast majority of users.

You missed my point. The problem is not the performance, yet you posted a performance metric chart.
The problem is the package compared to the performance.
Everything else apart from the performance is worse.
Image reconstruction tech, RT performance, driver issues, no CUDA alternative, no NVENC alternative (at least the 7900s have AV1), etc.

So they are not cheap enough for the package they offer.
But we are off topic.

On topic: the 3D V-Cache, even if it works with the GPUs, will give something that AMD does not necessarily need. More performance.
 
More latency because of fewer lanes?
I'm not an expert in this, but usually serial implies one after the other instead of all at once. In other words, I don't think all lanes are accessed simultaneously like in parallel connections, so latency should increase as you require more lanes for more bandwidth, if latency is even a factor at all.

Someone with more knowledge can correct me or provide more detail.
 
I think the technology is not so strictly serial if you look at it closely; there are also elements of parallel data transfer.
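
A rough back-of-envelope sketch (purely illustrative link parameters, not actual Infinity Fanout specs): each lane streams bits serially, but the lanes run side by side, so adding lanes mostly buys bandwidth, while the fixed per-hop latency barely changes.

```cpp
// Sketch of a lane-based link: payload bits are striped across the lanes, so the
// serialization time shrinks as lanes are added, while the fixed link overhead
// (SerDes, protocol) stays constant. All numbers are assumptions for illustration.
#include <cstdio>

int main() {
    const double fixed_link_latency_ns = 10.0;    // assumed per-hop overhead
    const double per_lane_gbps         = 16.0;    // assumed per-lane data rate
    const double payload_bits          = 64 * 8;  // one 64-byte transfer

    const int lane_counts[] = {1, 2, 4, 8};
    for (int lanes : lane_counts) {
        double serialize_ns = payload_bits / (per_lane_gbps * lanes); // bits / (Gbit/s) = ns
        std::printf("%d lane(s): %.1f ns transfer + %.1f ns fixed = %.1f ns\n",
                    lanes, serialize_ns, fixed_link_latency_ns,
                    serialize_ns + fixed_link_latency_ns);
    }
    return 0;
}
```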
 
I like the design, but it barely checks out. It may save quite a bit of cost, but that's not good enough. The memory/cache side of things seems fine, which makes me wonder whether those cores really are any better. 35-40% faster with 20% more cores doesn't seem that good for a claimed "50% perf/watt"; it should have meant either less power or higher performance than we got... It just seems to me those cores are again (as always?) too weak, or there are too few of them. Going from the 3090 Ti to the 4090 added roughly 50% more cores, all of them faster as well.
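
Quick math on that scaling claim (rounded, illustrative figures only, taking ~37% faster with 20% more CUs):

```cpp
// Rough per-core scaling estimate from the rounded figures quoted above.
#include <cstdio>

int main() {
    const double speedup = 1.37;  // ~35-40% faster than the previous flagship
    const double cores   = 1.20;  // ~20% more cores (96 CUs vs 80 CUs)
    std::printf("per-core uplift: ~%.0f%%\n", (speedup / cores - 1.0) * 100.0);
    // 1.37 / 1.20 = ~1.14, i.e. only ~14% more throughput per core.
    return 0;
}
```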

Perhaps AMD should try making a bigger Navi: 128 CUs instead of 96, two more MCDs, that sort of thing. It might be too power hungry, but unless there are core bugs (which is likely) it would get to 4090 level...
 
Image reconstruction tech, RT performance, driver issues, no CUDA alternative, no NVENC alternative (at least the 7900s have AV1), etc.

Define "competitive image reconstruction tech", FSR works fine. RT performance comes with a colossal performance hit on every card, including Nvidia's, you can't seriously use this as some kind of big selling point when even the 4090 can't hit 60fps at 4K natively in some games, not to mention RT is still not exactly wide spread and it has been almost 5 years since RTRT became a thing.

I already said there is an alternative to CUDA, and VCN is comparable to NVENC. And you still failed to mention what those driver issues are.
 
X3D was only good because Intel was being incompetent for 4 straight years.
Nvidia, however, never sits back and relaxes.
You are right that Nvidia is not sitting back currently. (I don't think we can say "never", as they have had eras where they were crappy, like the GeForce FX series, which was a huge flop. They just didn't aim far enough and suffered for it.)

But the fact that X3D is good on CPUs has nothing to do with Intel; it's about how CPUs work. CPUs are very latency dependent, whereas GPUs are designed to hide memory latency. Having the 3D V-Cache on a CPU helps reduce trips to memory, so the CPU can continue its computation sooner. The code that runs on CPUs also tends to be loops that reuse the same instructions and data frequently.

On GPUs, the cache is there to increase the effective bandwidth, not to reduce latency. The benefit is that you can use a smaller bus. A 4080 has a smaller bus than a 3080 but still manages to perform way better thanks to the large L2 cache.

With RDNA3, they increased the bus width for the top-end SKU from 256-bit to 384-bit but reduced the cache from 128 MB to 96 MB. The thing is, getting to 192 MB of cache (as rumored) with 3D V-Cache might not increase the hit rate that much, and not much of an increase in cache hit rate means not much of an increase in effective bandwidth.
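
To put that in rough numbers, here is a very crude sketch; the hit rates are made-up illustrations (not measured RDNA3 figures), and the only real figure is the ~960 GB/s raw bandwidth of a 384-bit bus at 20 Gbps:

```cpp
// Very crude model: if a fraction h of memory requests hit the on-die cache,
// DRAM only has to serve the remaining (1 - h), so the request bandwidth the
// chip can sustain is roughly dram_bw / (1 - h). Hit rates below are invented
// purely to illustrate diminishing returns from doubling the cache.
#include <cstdio>

int main() {
    const double dram_bw_gbs = 960.0;               // 384-bit GDDR6 at 20 Gbps
    const double hit_rates[] = {0.50, 0.58, 0.65};  // e.g. 96 MB vs a hypothetical 192 MB

    for (double h : hit_rates) {
        std::printf("hit rate %.0f%% -> effective bandwidth ~%.0f GB/s (%.2fx raw)\n",
                    h * 100.0, dram_bw_gbs / (1.0 - h), 1.0 / (1.0 - h));
    }
    return 0;
}
```

Doubling the cache only helps to the extent that it actually moves the hit rate, and hit-rate curves flatten out as capacity grows.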

But anyway, is RDNA 3 really bandwidth starved? I am not sure.
 
RDNA3's problem is ray tracing performance, and I doubt extra cache will help there.

And just to be clear, my point of view is that it doesn't matter if rasterization is still king. Ray tracing is what helps (if performance is great), or doesn't (if it isn't), in marketing those chips TODAY.
 
At least this explains the missing performance.
 
Ray tracing is what helps (if performance is great), or doesn't (if it isn't), in marketing those chips TODAY.

Does it? Nvidia seems to be plastering everything with "get 100x more FPS with DLSS4637", and they rarely, if ever, advertise anything related to RT by itself, because they know it still sucks.
 
RDNA3's problem is ray tracing performance, and I doubt extra cache will help there.

And just to be clear, my point of view is that it doesn't matter if rasterization is still king. Ray tracing is what helps (if performance is great), or doesn't (if it isn't), in marketing those chips TODAY.
How is this a problem on RDNA3? They have roughly 3090-level RT performance, and I did not see many people calling the 3090's RT performance a "problem".
Yes, the 4090 increased this performance, but that does not mean that older cards somehow got worse.

And yes, AMD is still one generation behind in RT performance, but that's not too bad considering they started years after Nvidia. Nvidia's RT is not better because of some "magic". It is better because they were the first to bring it to dGPUs.
 
Does it? Nvidia seems to be plastering everything with "get 100x more FPS with DLSS4637", and they rarely, if ever, advertise anything related to RT by itself, because they know it still sucks.
DLSS was created both to cheat performance and to make ray tracing something more than a laughable slideshow. But considering that Nvidia has reached 90% market share while AMD is offering superior options in rasterization, at least under $500, I believe people really are looking at RT performance when buying.

How is this a problem on RDNA3? They have roughly 3090-level RT performance, and I did not see many people calling the 3090's RT performance a "problem".
Yes, the 4090 increased this performance, but that does not mean that older cards somehow got worse.

And yes, AMD is still one generation behind in RT performance, but that's not too bad considering they started years after Nvidia. Nvidia's RT is not better because of some "magic". It is better because they were the first to bring it to dGPUs.
I can see your point and have used it in arguments in the past. What was "amazing RT performance" 9 months ago is considered "crap performance" today? It doesn't make sense. I agree.

But the thing is that AMD needs a strong win to start selling GPUs, and the only way to do that is to close the gap with Nvidia in RT. One generation behind doesn't help when, for a decade or longer, people and the press have been relegating AMD GPUs to the value option with the crappy drivers. People will prefer to buy an old-gen Nvidia card, even one inferior in rasterization, over a brand-new AMD card if AMD isn't clearly beating Nvidia at something. Also, one generation behind means AMD not only has to compete with the 4000 series but also with the 3000 series, for the reason I just mentioned. People will buy the Nvidia card that is inferior in rasterization performance because of AMD's online reputation.
AMD needs to be better in EVERYTHING compared to Nvidia's last gen and start closing the gap with Nvidia's latest gen. RX 6000 was a step, even two steps, forward. RX 7000 is more like half a step backwards. Maybe going chiplets didn't give them enough time to improve in other areas. I don't know.
 
I believe people really are looking at RT performance when buying.

Dude, I personally know several people who have Nvidia GPUs and have no clue what either RT or DLSS is. The vast majority of users are completely clueless about these things; they look at the charts, but they rarely have any idea what they even mean.

Some of them might spot the option in the menus, enable it, see that performance falls off a cliff with hardly any visual improvement, and never touch it again.
 
Dude, I personally know several people who have Nvidia GPUs and have no clue what either RT or DLSS is. The vast majority of users are completely clueless about these things; they look at the charts, but they rarely have any idea what they even mean.

Some of them might spot the option in the menus, enable it, see that performance falls off a cliff with hardly any visual improvement, and never touch it again.
Vya Domus is right. The vast majority of gamers buy consoles, use mobile, buy pre-built PCs, use laptop integrated graphics, etc. The sliver of users who buy a discrete video card for a DIY build is comparatively quite small. The number of educated DIY buyers is even smaller than that. And of that share, AMD has between 10% and 20%.

Nvidia's only selling point is brand. But to become the top brand, they had to deliver and keep delivering, even if buyers are unaware of why it's better. AMD has its work cut out to topple Nvidia as the top brand, and better RTRT performance is only one of many top priorities for reaching that goal. That's why AMD iterates on RTRT but hasn't doubled down yet: it's still a distant-future must-have tech. Rasterization is still king, and I personally give AMD the benefit of the doubt, as they are close to toppling Intel in the CPU space.
 
Dude, I personally know several people who have Nvidia GPUs and have no clue what either RT or DLSS is. The vast majority of users are completely clueless about these things; they look at the charts, but they rarely have any idea what they even mean.

Some of them might spot the option in the menus, enable it, see that performance falls off a cliff with hardly any visual improvement, and never touch it again.
Ask them why they chose Nvidia. See how many of them will reply with "A friend told me to avoid AMD", or "Someone told me they are faster in modern games", and stuff like that.
I doubt there will be many replying "It was cheaper".

The vast majority of gamers buy consoles, use mobile, buy pre-built PCs, use laptop integrated graphics, etc.
Nvidia. 90% market share.
RTX 3050 outselling RX 6600.

Look, both of you, and anyone else reading this thread: I am an AMD fan, but then I see people buying the RTX 3050 over the RX 6600 and Nvidia grabbing 90% of the market.

Now ask yourselves, not me, yourselves: if people don't care, why is Nvidia selling everything at any price?

I don't think it's just brand recognition. I can understand it when a company is at 60% and the other at 40%, or even 70% vs 30%. But 90% vs 8%?
 
Ask them why they chose Nvidia. See how many of them will reply with "A friend told me to avoid AMD", or "Someone told me they are faster in modern games", and stuff like that.
I doubt there will be many replying "It was cheaper".
Exactly, so it has nothing to do with RT performance.
 
RDNA3's problem is ray tracing performance, and I doubt extra cache will help there.

And just to be clear, my point of view is that it doesn't matter if rasterization is still king. Ray tracing is what helps (if performance is great), or doesn't (if it isn't), in marketing those chips TODAY.

Well, at the moment it does not seem to suffer in RT performance as such; it seems to suffer in Nvidia's take on RT implementation (which, I know... shocking).
RDNA3 seems pretty darn close in Unreal Engine 5's RT with the Fortnite patch (according to the Hardware Unboxed video).
 
Exactly, so it has nothing to do with RT performance.
That "Someone told me it's faster in modern games", was meant to mean RT. Because AMD cards are usually faster in many price points over Nvidia in rasterization. But people buy Nvidia. Why?
I have posted something in the past.

What would someone choose?
A card that produces 200 fps in raster and 50 in ray tracing,
or a card that produces 150 fps in raster and 75 in ray tracing?

I could have gone with the second option.
 
So I guess those 3D V-Cache models, if they exist, will be the better choice for gaming, if that's what you do with your GPU. Just like their 3D V-Cache CPU brothers.
 
That "Someone told me it's faster in modern games", was meant to mean RT.
But it's not, because in the end they still don't actually check if that's true, it's just random crap people say.

What would someone choose?
A card that produces 200 fps in raster and 50 in ray tracing,
or a card that produces 150 fps in raster and 75 in ray tracing?
You're assuming an average consumer would even know or care enough to look into it at that level, which they don't. You may base your choices on those metrics, but most people don't. As I said, a lot of them don't even know what those things are; how would they ever check what the RT performance is?

The main metric by which people choose what to buy is actually price. I recently saw a news article saying the RX 7900 XT and 4070 Ti are the best sellers. Why do you think that is? It's because they're the cheapest new GPUs, that's it.

Nvidia has a huge advantage in the mobile space; practically every laptop that ships with a dedicated GPU is Nvidia, so there people don't even have a choice, they have to buy Nvidia. And people buy a ton of laptops. I don't know if there are any market-share statistics for that, but it wouldn't surprise me if a massive chunk of Nvidia's market share comes from mobile parts.
 
In case you forgot, AMD is still on top in terms of performance/price and Nvidia at the bottom, so this is complete nonsense.

[attached performance/price chart]


They have HIP and OpenCL. HIP is not supported on Windows, but I doubt this is relevant to the vast majority of users.

The problem is that the RTX 4070 Ti performs notoriously poorly at 4K due to its narrow memory bus and relatively low memory capacity. If you use a more reasonable 1440p target for that segment, the story changes as performance per dollar breaks even:

[chart: performance per dollar, 2560x1440]


It will pull ahead at 1080p as well:

[chart: performance per dollar, 1920x1080]


Yes... the GPU market is so bad that I just had to bring up that this filth they call the 4070 Ti actually seems to offer "good value for money", and it's ridiculous: this is a low-end card they are selling in the premium segment. It's just... man, what a time...

The other issue, the compute problem, is a long-standing one, though. No one sane uses OpenCL (in fact, it has been deprecated by Apple since 2018; they have never been ones to shy away from pulling the plug on technologies they don't believe in). Both AMD and Intel will need to develop and offer a comparable GPU C compiler if they want any chance of challenging CUDA's almost two-decade-long reign. Intel might be in the best position for this; their software R&D capabilities are vast and they can pour resources into it... but they need a viable product first, and I have a feeling that is what Intel's graphics division is currently busy with.

I must say I'm fascinated by the idea that integrating 3D V-Cache onto every product will radically alter their characteristics to the point it could be considered a holy band-aid for every woe that would affect one of AMD's compute architectures... I'm not sure I believe in outmuscling existing problems, not against a competitor such as NVIDIA.
 
You're assuming an average consumer
I am assuming, and you are assuming. You are ALSO ASSUMING. Your whole post reads like "I KNOW WHAT PEOPLE WANT AND HOW THEY CHOOSE". Well, no, you don't.

In the end, I repeat.

RTX 3050 outsells RX 6600.
Nvidia at 90% market share.

You are all avoiding the huge elephant in the room. Is this just "brand recognition"? I don't think so. If it were brand recognition alone, Ryzen would have failed in CPUs, because AMD's brand recognition back then was of the "they made Bulldozer, avoid at any cost" kind. But they offered something unbeatable back then: 8-core CPUs that could also perform. They were still far behind in single thread, but in multi thread they were killing it.

GPUs? NOT beating the current gen in raster, offering last-gen RT, and also having to suffer that "crap drivers" reputation.
 
The problem is that the RTX 4070 Ti performs notoriously poorly at 4K due to its narrow memory bus and relatively low memory capacity. If you use a more reasonable 1440p target for that segment
Buying close-to-$1,000 GPUs for 1080p and even 1440p is comical though.

The other issue, the compute problem, is a long-standing one, though. No one sane uses OpenCL (in fact, it has been deprecated by Apple since 2018; they have never been ones to shy away from pulling the plug on technologies they don't believe in). Both AMD and Intel will need to develop and offer a comparable GPU C compiler if they want any chance of challenging CUDA's almost two-decade-long reign. Intel might be in the best position for this; their software R&D capabilities are vast and they can pour resources into it... but they need a viable product first, and I have a feeling that is what Intel's graphics division is currently busy with.

As I already said, this is completely irrelevant for the vast, overwhelming majority of consumers. AMD has also moved from OpenCL to HIP, which can actually be compiled for both AMD and Nvidia.
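
For context, this is roughly what that portability looks like; a minimal sketch using the standard HIP runtime API (the kernel and sizes here are just an example), which hipcc can compile for a ROCm backend or, through the CUDA path, for Nvidia hardware:

```cpp
// Minimal HIP example: the same source builds for AMD (ROCm) or Nvidia (CUDA)
// backends with hipcc; the saxpy kernel and problem size are illustrative only.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx = nullptr, *dy = nullptr;
    hipMalloc((void**)&dx, n * sizeof(float));
    hipMalloc((void**)&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // y = 2*x + y on the GPU
    hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0, n, 2.0f, dx, dy);
    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);

    std::printf("y[0] = %f\n", hy[0]); // expect 4.0
    hipFree(dx);
    hipFree(dy);
    return 0;
}
```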

Intel is absolutely not in the best position to compete with CUDA; they have been completely out of the loop on the GPGPU front, and to this day they don't have any GPU compute products in the hands of their customers, to my knowledge. And this is with their vast resources; they have simply failed to deliver anything.

RTX 3050 outsells RX 6600.
And you think it's because of RT performance? Both of those cards are worthless for RT; actually, the RX 6600 is sometimes faster. Your theory falls apart.

NOT beating the current gen in raster
What does that even mean? The 7900 XTX is faster than the 4080 in raster and cheaper. Let's assume AMD had some 600 mm² RX 7950 XTX XXX that was faster than the 4090 in raster, which was entirely feasible; they could have done it if they wanted to. Why would that have made any difference? Those GPUs account for a minuscule portion of the market.

having to suffer that "crap drivers" reputation.
And what do you want them to do about that? Their drivers work fine; that "reputation" mostly comes from trolls who have never owned AMD cards and talk utter nonsense.

Ryzen would have failed in CPUs.
They did kind of fail for a while. It took a long time for them to gain significant market share and for people to realize that a lot of Intel's products were asinine, and that it was not a good idea to pay hundreds of dollars for overclocked quad cores every new generation and have to change the platform on top of that.
 
And you think it's because of RT performance? Both of those cards are worthless for RT; actually, the RX 6600 is sometimes faster. Your theory falls apart.
My theory doesn't fall apart because you say so. Especially when you base that conclusion on nothing.

People buying the RTX 3050 over the RX 6600 do it because Nvidia is winning the charts and its cards are considered to offer the best performance in everything, especially the feature most talked about over the last 2-3 years at least: ray tracing. It has been common knowledge practically forever, and not just my opinion, that the best high-end card also sells the low-end cards. So people go and buy an RTX 3050 because Nvidia offers the fastest cards and also because its cards are considered the fastest in ray tracing. That's what they have heard or been told. People who don't know about hardware will go and buy the Nvidia card because "Nvidia is faster". The RX 6600 is the better card at a lower price point, but people either choose Nvidia or avoid AMD.

Buying close-to-$1,000 GPUs for 1080p and even 1440p is comical though.
Buying close to $1000 and ignoring RT performance could also be described as comical.

But yeah, the RTX 4070 Ti does have, as usual for this category of Nvidia models, just enough memory capacity to perform and sell today, but fall apart tomorrow.
 
People buying the RTX 3050 over the RX 6600 do it because Nvidia is winning the charts
Which charts? Like I said, the 6600 is faster.

that the best high-end card also sells the low-end cards.
So basically your theory is that if AMD had some hypothetical top performer in literally anything, then they could sell every heaping pile of crap they come up with, because people would just buy it?

Why would that be advantageous to you, the consumer?

Buying close to $1000 and ignoring RT performance could also be described as comical.
Why wouldn't you ignore it? Cards at that price point work fine in RT at 1080p and even 1440p, but at 4K RT performance is crap on everything, and in my opinion you'd be an idiot to buy any card for that purpose in particular, because if games are barely playable at that resolution right now, they're going to run like absolute crap in a few years.
 