Tuesday, October 6th 2020

AMD Big Navi GPU Features Infinity Cache?

As we near the launch of AMD's highly hyped, next-generation RDNA 2 GPU codenamed "Big Navi", more details are emerging. Rumors suggest that the card will be called the AMD Radeon RX 6900 and that it will be AMD's top offering. Using a 256-bit bus with 16 GB of GDDR6 memory, the GPU will not use any type of HBM memory, which has historically been rather pricey. Instead, it looks like AMD will compensate for the narrower bus with a new technology it has developed. Thanks to new findings by @momomo_us on the Justia Trademarks website, we have information about the alleged "Infinity Cache" technology the new GPU uses.
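For a rough sense of why the bus width matters, peak GDDR6 bandwidth follows directly from bus width and per-pin data rate. A minimal sketch (the 16 Gbps data rate below is an assumption for illustration, not a confirmed spec):

```python
def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bits times the
    per-pin data rate in Gbit/s, divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# Hypothetical configurations for comparison:
print(gddr6_bandwidth_gbs(256, 16))  # 512.0 GB/s -- the rumored Big Navi bus
print(gddr6_bandwidth_gbs(384, 16))  # 768.0 GB/s -- a wider 384-bit design
```

The gap between those two numbers is what any "Infinity Cache" scheme would have to make up.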

VideoCardz reports that the internal name for this technology is not Infinity Cache; however, it seems that AMD could have changed it recently. What does it do exactly, you might wonder? Well, that is a bit of a mystery for now. It could be a new cache technology that allows L1 GPU cache sharing across the cores, or some interconnect between the caches found across the whole GPU. This information should be taken with a grain of salt, as we have yet to see what this technology does and how it works when AMD announces its new GPU on October 28th.
Source: VideoCardz

141 Comments on AMD Big Navi GPU Features Infinity Cache?

#1
JAB Creations
I've been stuck on a 290X for a few years now and I can't wait to get the 6900XT or if they make the liquid cooled version 6900XTX. Now that AMD has beaten back the anti-capitalist crony Intel and made enough money to really push R&D:
  • The drivers are rumored to be solid for this release.
  • There will actually be stock because unlike Nvidia they're not trying to artificially drive up prices.
  • It's not going to be a watt-sucking heat-producing beast.
  • I'll finally stop running out of video memory (browsers use GPU memory).
#2
okbuddy
1 GB cache = going from 512-bit down to 128-bit bandwidth

wow

how about a 6 GB cache we couldn't possibly need
#3
Vayra86
Good comedy, this

Fans desperately searching for some argument that 256-bit GDDR6 will do anything more than hopefully pull even with a 2080 Ti.

History repeats.

Bandwidth is bandwidth, and cache is not new. Also... elephant in the room... Nvidia has needed expanded L2 cache since Turing to cater for their new shader setup with RT/tensor cores in it... yeah, I really wonder what magic Navi is going to have with a similar change in cache sizes... surely they won't copy over what Nvidia has done before them like they always have, right?! Surely this isn't history repeating, right? Right?!

:lovetpu:
JAB Creations
I've been stuck on a 290X for a few years now and I can't wait to get the 6900XT or if they make the liquid cooled version 6900XTX. Now that AMD has beaten back the anti-capitalist crony Intel and made enough money to really push R&D:
  • The drivers are rumored to be solid for this release.
  • There will actually be stock because unlike Nvidia they're not trying to artificially drive up prices.
  • It's not going to be a watt-sucking heat-producing beast.
  • I'll finally stop running out of video memory (browsers use GPU memory).

Let's revisit those assumptions post-launch ;) That'll be fun, too. I'll take a bet... drivers will need hotfixing, which will likely come pretty late or create new issues along the way (note: Nvidia has fallen prey to this just as well; that alone should say enough); things will be out of stock shortly after launch, it's going to pull an easy 250-300 W just as well, and yes, you do get 16 GB on the top model.

If I'm wrong, I'll buy it :p
#4
robb
Vayra86
Good comedy, this

Fans desperately searching for some argument that 256-bit GDDR6 will do anything more than hopefully pull even with a 2080 Ti.

History repeats.

Bandwidth is bandwidth, and cache is not new. Also... elephant in the room... Nvidia has needed expanded L2 cache since Turing to cater for their new shader setup with RT/tensor cores in it... yeah, I really wonder what magic Navi is going to have with a similar change in cache sizes... surely they won't copy over what Nvidia has done before them like they always have, right?! Surely this isn't history repeating, right? Right?!

:lovetpu:

Let's revisit those assumptions post-launch ;) That'll be fun, too. I'll take a bet... drivers will need hotfixing, which will likely come pretty late or create new issues along the way (note: Nvidia has fallen prey to this just as well; that alone should say enough); things will be out of stock shortly after launch, it's going to pull an easy 250-300 W just as well, and yes, you do get 16 GB on the top model.

If I'm wrong, I'll buy it :p
You have to be a special kind of stupid to think their top card will only match the 2080 Ti, considering the 2080 Ti is 50% faster than the 5700 XT. It does not take a genius to realize that doubling the cores of the 5700 XT, increasing IPC, and running higher clocks would result in a MUCH higher gain than 50%. FFS, even the Xbox Series X has a GPU as fast as or faster than the 2080 Super, and the 6900 XT will be a hell of a lot bigger GPU.
#5
Frick
Fishfaced Nincompoop
robb
You have to be a special kind of stupid to think their top card will only match the 2080 Ti, considering the 2080 Ti is 50% faster than the 5700 XT. It does not take a genius to realize that doubling the cores of the 5700 XT, increasing IPC, and running higher clocks would result in a MUCH higher gain than 50%. FFS, even the Xbox Series X has a GPU as fast as or faster than the 2080 Super, and the 6900 XT will be a hell of a lot bigger GPU.
It's less about being stupid and more about managing expectations. High tier AMD cards have burned people in the past because they expected too much. The only sensible thing to do is to wait for reviews.
#6
john_
I don't think cache can replace bandwidth, especially when games ask for more and more VRAM. I might be looking at it the wrong way and the following example could be wrong, but hybrid HDDs NEVER performed like real SSDs.

I am keeping my expectations really low after reading about that 256bit data bus.
#7
Valantar
Regardless of the veracity of this, there is definitely something weird about the rumored specifications for these GPUs. 256-bit and 192-bit bus widths for a high-end GPU in 2020 with no new tricks to counteract this would be a significant bottleneck. And AMD obviously knows this. They do, after all, design GPUs for a living. They have the resources to, say, make a 512-bit test chip + PCB and benchmark it with varying numbers of memory controllers enabled, identifying when and how bottlenecks appear. And while 512-bit buses aren't really commercially viable (huge, hot, expensive, and at that point HBM is a better alternative at likely the same price), 384-bit buses are. So if they've chosen to go 256-bit for their highest end GPU, there has to be some reason for it.
#8
nguyen
robb
You have to be a special kind of stupid to think their top card will only match the 2080ti considering the 2080ti is 50% faster than the 5700xt. It does not take a genius to realize that doubling the cores of the 5700xt, increasing IPC, and running higher clocks would result in a MUCH higher gain than 50%. FFS even the XBOX series X has a gpu as fast or faster than the 2080 super and the 6900xt will be a hell of a lot bigger gpu.
Let's say the 6900 XT is 20-30% faster than the 2080 Ti in "specific" rasterization workloads that don't require massive bandwidth, but slower than the 2080 Ti in ray tracing workloads. Does that make the 6900 XT a faster GPU?
"But you don't need ray tracing" is not an excuse for a >500 USD GPU.
And before you say there are other API alternatives for ray tracing: not having dedicated RT cores will just hammer performance. Just look at Crysis Remastered as an example (the game can leverage the RT cores).

#9
Vya Domus
Vayra86
Fans desperately searching for some argument to say 256 bit GDDR6 will do anything more than hopefully get even with a 2080ti.
I've noticed you are quite dead set on saying some pretty inflammatory and, to be honest, quite stupid things as of late. What's the matter?

A 2080 Ti has 134% the performance of a 5700 XT. The new flagship is said to have twice the shaders, likely higher clock speeds, and improved IPC. Only a pretty avid fanboy of a certain color would think that such a GPU could only muster some 30% higher performance with all that. GPUs scale very well; you can expect it to land between 170-190% of the 5700 XT's performance.
Vayra86
Bandwidth is bandwidth and cache is not new.
Caches aren't new; caches as big as the ones rumored are. I should also point out that bandwidth and the memory hierarchy are completely hidden from the GPU cores. In other words, whether it's reading at 100 GB/s from DRAM or at 1 TB/s from a cache, a GPU core doesn't care; as far as it is concerned, it's just operating on memory at an address.

Rendering is also an iterative process where you need to go over the same data many times a second; if you can keep, for example, megabytes of vertex data in some fast memory close to the cores, that's a massive win.

GPUs hide memory bottlenecks very well by scheduling hundreds of threads. Another thing you might have missed is that, over time, the ratio of DRAM GB/s per GPU core has been getting lower and lower. And somehow performance keeps increasing; how the hell does that work if "bandwidth is bandwidth"?

Clearly, there are ways of increasing the efficiency of these GPUs such that they need less DRAM bandwidth to achieve the same performance, and this is another one of those ways. By your logic, we should have had GPUs with tens of TB/s by now, because otherwise performance wouldn't have gone up.
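The point above, that a cache intercepting repeated accesses reduces the DRAM bandwidth a GPU needs, can be sketched with a simple blended-bandwidth model (all numbers are illustrative assumptions, not Big Navi specs):

```python
def effective_bandwidth(hit_rate: float, cache_bw: float, dram_bw: float) -> float:
    """Blended bandwidth when a fraction `hit_rate` of traffic is served
    from cache. Time per byte is what averages, so the blend is a weighted
    harmonic mean, not a simple linear mix."""
    if not 0.0 <= hit_rate <= 1.0:
        raise ValueError("hit_rate must be in [0, 1]")
    return 1.0 / (hit_rate / cache_bw + (1.0 - hit_rate) / dram_bw)

# With an assumed 512 GB/s of DRAM and a 2 TB/s on-die cache,
# a 50% hit rate already lifts effective bandwidth well past 512 GB/s:
print(round(effective_bandwidth(0.5, 2000, 512)))  # ~815 GB/s
```

The higher the hit rate on repeatedly-touched data (vertex buffers, textures), the less the narrow external bus matters.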
JAB Creations
  • There will actually be stock because unlike Nvidia they're not trying to artificially drive up prices.

They won't have much stock; most wafers are going to consoles.
JAB Creations
  • It's not going to be a watt-sucking heat-producing beast.

While performance/watt must have increased massively, perhaps even beyond Ampere's, the highest-end card will still be north of 250 W.
#10
fynxer
john_
I don't think cache can replace bandwidth, especially when games ask for more and more VRAM. I might be looking at it the wrong way and the following example could be wrong, but hybrid HDDs NEVER performed like real SSDs.

I am keeping my expectations really low after reading about that 256bit data bus.
Why do you think we have caches in CPUs, GPUs, SSDs and more?

Because it works, and it does replace bandwidth: information that the GPU uses repeatedly is stored in and fetched from cache and thus does not have to travel over the memory bus each time. The memory bandwidth saved by using cache can instead be used for other data. So a 256-bit bus with a large, very effective cache equals MORE MEMORY BANDWIDTH. Nvidia already uses this approach on all their cards.
#12
londiste
Vya Domus
A 2080ti has 134% the performance of a 5700XT.
At 1080p. At 1440p it's 142%, and at 2160p it's 152%.
More notably though, 3080 is twice as fast.
#13
Vya Domus
londiste
At 1080p. At 1440p, its 142% and at 2160p its 152%.
You're probably right; I went off the comparison tool thingy when you browse different GPUs, and that one says the 2080 Ti is 134% the performance of a 5700 XT.

Based on TPU review data: "Performance Summary" at 1920x1080, 4K for 2080 Ti and faster.
#14
ZoneDymo
It always pains me to see people overhyping products; it can pretty much only lead to disappointment.
That said, let's not forget this GPU was pretty much made with the help of Sony and Microsoft because their consoles use RDNA2. That is a lot of (smart) people working on a product, so I do have faith that it will be good.

And personally I care little for "beating" Nvidia in "performance".
If it delivers good frames while going easy on the power consumption and costing, finally again, a reasonable amount of money instead of the obscene prices being asked as of late, it's a winner in my book.

Heck, I would REALLY love it if we had a new RX 460/470/480 moment, where all games could be lifted up and everyone could upgrade and get with the times.

This would also be really good for the evolution/implementation of ray tracing; the industry can only really make use of it if the whole world can use it.
#15
Sithaer
ZoneDymo
And personally I care little for "beating" Nvidia in "performance".
If it delivers good frames while going easy on the power consumption and costing, finally again, a reasonable amount of money instead of the obscene prices being asked as of late, it's a winner in my book.

Heck, I would REALLY love it if we had a new RX 460/470/480 moment, where all games could be lifted up and everyone could upgrade and get with the times.

This would also be really good for the evolution/implementation of ray tracing; the industry can only really make use of it if the whole world can use it.
Yup, this is what I would also love to see and what I mainly care about when upgrading.

Those RX cards were a godsend for me; it was a solid upgrade from my previous card without breaking the bank/my wallet.

Looking at the prices lately, most likely my only option will be the second-hand market again if I want the same performance uplift as last time (I went from a GTX 950 to an RX 570).
#16
M2B
For the sake of comparison, the RTX 2080 Ti has exactly twice as many shaders as the RTX 2060 Super with very similar real-world clocks, and performs about 63.3% better at 4K according to TPU's average framerate in 20+ games.
Based on Xbox Series X performance scaling over the X1X, it doesn't seem like RDNA2 has much in the way of IPC improvements over RDNA.
So at similar clocks I expect the top-end 80 CU RDNA2 part to be 55-65% faster than the 5700 XT, depending on the resolution (assuming there is no bandwidth bottleneck).
But as we all know, RDNA2 will have noticeably higher clocks than RDNA1. I expect the average clocks of the 80 CU part to be in the 2-2.1 GHz range, which is a decent 10-13% above the 5700 XT. Assuming semi-linear scaling, this clock boost alone puts RDNA2 10-12% above RDNA1; now, with the addition of that massive shader count increase, it's probably reasonable to expect the top-end RDNA2 to be 75-85% faster than the 5700 XT, as Vya Domus predicted.
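The estimate above can be put into a small, hedged calculation. The CU counts, clocks, and the ~63% scaling efficiency (taken from the 2080 Ti vs 2060 Super data point in this thread) are all rumored or assumed figures, not confirmed specs:

```python
def naive_perf_multiplier(cu_ratio: float, clock_ratio: float, cu_eff: float) -> float:
    """Rough performance multiplier vs a baseline card: clocks assumed to
    scale linearly, extra CUs scale sub-linearly (cu_eff < 1 models the
    imperfect shader scaling seen in the 2080 Ti / 2060 Super comparison)."""
    return clock_ratio * (1.0 + (cu_ratio - 1.0) * cu_eff)

# 80 CU vs the 5700 XT's 40, ~2.05 GHz vs ~1.85 GHz, ~63% CU scaling:
mult = naive_perf_multiplier(80 / 40, 2.05 / 1.85, 0.633)
print(f"~{(mult - 1) * 100:.0f}% faster than a 5700 XT")
```

With these inputs the result lands around 81%, inside the 75-85% range estimated above.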

Expecting flagship RDNA2 to be only as fast as a 3070/2080 Ti is not realistic; it will probably beat them both comfortably.
#17
R0H1T
When did the X1X have an RDNA-based GPU :wtf:

Also, don't extrapolate RDNA2 performance from console numbers. They're not exactly comparable; it's more like comparing cashews to figs.
#18
M2B
R0H1T
When did the X1X have an RDNA-based GPU :wtf:
Nobody said it did.
#19
R0H1T
M2B
Based on Xbox Series X performance scaling over the X1X, it doesn't seem like RDNA2 has much in the way of IPC improvements over RDNA.
You said this, how can it be interpreted any differently?
#20
M2B
R0H1T
You said this, how can it be interpreted any differently?
I agree, that part of my comment was a bit confusing, but I didn't mean the X1X has RDNA.
Just that the real-world performance increase didn't suggest higher IPC than RDNA1 to me, based on how RDNA performs in comparison to the console.
#21
Calmmo
You say Infinity Cache, I hear "we have chiplets on GPUs now".
#22
laszlo
me love to read comments!

#23
delshay
So no Nano card this time around, as you need HBM for that.
#24
bug
Ok, who the hell calls Navi 2 "Big Navi"?
Big Navi was a pipe dream of AMD loyalists left wanting a first-gen Navi high-end card.
#25
Valantar
M2B
I agree, that part of my comment was a bit confusing, but I didn't mean the X1X has RDNA.
Just that the real-world performance increase didn't suggest higher IPC than RDNA1 to me, based on how RDNA performs in comparison to the console.
That comparison is nonetheless deeply flawed. You're comparing a GCN-based console (with a crap Jaguar CPU) to a PC with an RDNA-based GPU (unknown CPU, though assuming it's not Jaguar-based) and then that again (?) to a yet-to-be-released console with an RDNA 2 GPU and a Zen 2 CPU. As there are no XSX titles out yet, the only performance data we have for the latter is from running in backwards-compatibility mode, which bypasses most of the architectural improvements even in RDNA 1 and delivers IPC on par with GCN. The increased CPU performance also helps many CPU-limited X1X games perform better on the XSX. In other words, you're not even comparing apples to oranges; you're comparing an apple to an orange to a genetically modified pear that tastes like an apple but only exists in a secret laboratory.

Not to mention the issues with cross-platform benchmarking due to most console titles being very locked down in terms of settings etc. Digital Foundry does an excellent job of this, but their recent XSX back compat video went to great lengths to document how and why their comparisons were problematic.