
AMD’s new RDNA GPU Architecture has Been Exclusively Designed for Gamers

I would be fine if Navi just moved the needle on AMD's mid-range. Vega 56 isn't cheap to make, and the RX 590 is not efficient enough. If Navi can run with the 2070 and do it at RX 590 prices without resorting to extreme power consumption, I'll take it.
 
I'd like to ask who spends $700 on a card to play at 1080p nowadays?

I mean, I guess? Going by TPU's tests, the VII is basically dead even with the 2070 and 16% slower than the 2080 at 1080p, 6% faster than the 2070 and 14% slower than the 2080 at 1440p, and 10% faster than the 2070 and 10% slower than the 2080 at 4K. I guess that is competing at 4K? At least not being blown out.

I'd like to ask who spends $700 on a card to play at 1080p nowadays?

Anyone who wants to use RTX?

I would be fine if Navi just moved the needle on AMD's mid-range. Vega 56 isn't cheap to make, and the RX 590 is not efficient enough. If Navi can run with the 2070 and do it at RX 590 prices without resorting to extreme power consumption, I'll take it.

Didn't their own slides say $499 for Navi that competes with 2070?
 
Didn't their own slides say $499 for Navi that competes with 2070?
I didn’t see the price on the 5700, but maybe I missed it. Even then, we don’t know how they will fill out the product line. Maybe there will be a 5600 that would take on the 1660 as well.
 
I can only imagine the PSU cables' temps.
Do you have a pic by any chance?

I was young and digital cameras were expensive (plus my money had vanished into the video cards). So no. I fried one of them eventually. I don't remember how.
 
Nope... no price has been given yet... link, or you misread.

That's why I asked... why would I give a link when I was asking a question?
 
I mean, I guess? Going by TPU's tests, the VII is basically dead even with the 2070 and 16% slower than the 2080 at 1080p, 6% faster than the 2070 and 14% slower than the 2080 at 1440p, and 10% faster than the 2070 and 10% slower than the 2080 at 4K. I guess that is competing at 4K? At least not being blown out.

Is that 10% a noticeable difference at a 60 fps average? 80 fps? 100 fps? Show me someone who could actually perceive such a difference and I'll show you a liar.

Anyone who wants to use RTX?

There are three price points one could hit before reaching $700 to get RTX at 1080p, so my point still stands. Also, if turning on RTX tanks framerates, why bring it up if you're comparing performance?


Didn't their own slides say $499 for Navi that competes with 2070?

Are you making a statement, or asking a question whose answer no one outside of AMD could know? Besides that, no comment; I'll wait for benches before offering my two cents.
 
Are you making a statement or asking a question for an answer no one outside of AMD could know? Besides that no comment, I'll wait for benches before offering my 2 cents.
Absolutely. 66 fps vs 60 fps is noticeable. The more intensive the scene (e.g. fighting in Assassin's Creed Odyssey), the more pronounced the difference is. In regular world exploration you can see it, but it doesn't really have that much impact.
Anyway, it doesn't matter whether you can see it or not. Card A is slower, has no RTX acceleration, and runs hotter and louder at the same price as card B; that means you're not getting a product of the same quality.
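For what it's worth, the frame-time arithmetic behind this argument is easy to check; a quick sketch (illustrative numbers, not tied to any particular game or benchmark):

```python
# Frame time in milliseconds for a given average fps.
def frame_time_ms(fps):
    return 1000.0 / fps

# A fixed 10% fps gap shrinks in absolute frame-time terms as fps rises,
# which is why the same percentage gap feels smaller at higher framerates.
for base in (60, 80, 100):
    faster = base * 1.10
    delta = frame_time_ms(base) - frame_time_ms(faster)
    print(f"{base} fps vs {faster:.0f} fps: "
          f"{frame_time_ms(base):.2f} ms vs {frame_time_ms(faster):.2f} ms "
          f"(difference {delta:.2f} ms per frame)")
```

At 60 vs 66 fps the gap is about 1.5 ms per frame; at 100 vs 110 fps it is under 1 ms.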


But only with AMD are you getting 16 GB of VRAM and PCIe 4.0, "up to 69% faster", much wow.


[Image: amd768.jpg]



Boy, for those who mocked the RTX demo, will your faces be red now that the most hyped-up GPU series since the launch of Vega presented you with... whatever this is.

[Image: maxresdefault.jpg]



I get why you'd make PCIe 4.0 a big thing for X570/Ryzen 3: the connectivity capabilities of the chipset are just insane. But making your mid-range card all about PCIe 4.0? Have they completely forgotten what actually sells cards?
 
That's a PCI Express benchmark, though. If you hammer the bus in order to test its performance, of course you're going to detect a difference. In a realistic graphics rendering workload, though, it doesn't help. The biggest bottleneck is GPU -> VRAM -> GPU, not PCIe.
 
That's a PCI Express benchmark, though. If you hammer the bus in order to test its performance, of course you're going to detect a difference. In a realistic graphics rendering workload, though, it doesn't help. The biggest bottleneck is GPU -> VRAM -> GPU, not PCIe.
Outside that one benchmark, in any other real-world use case it's a joke, like that Toyota commercial where they tout their truck pulling the space shuttle at 2 mph. Just because you can show it doesn't mean anything to the real-world person who buys the truck, because they won't ever try to pull that much with it. Same thing with GPUs: I doubt even a top-end 2080 Ti can fully saturate PCIe 3.0 on its own in any gaming test, outside a benchmark designed to do that one and only thing, which serves no purpose except being a shallow claim.
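A back-of-the-envelope check on that saturation claim (assuming the nominal ~15.75 GB/s usable bandwidth of a PCIe 3.0 x16 link; real-world throughput is somewhat lower):

```python
# Approximate usable bandwidth of a PCIe 3.0 x16 link (8 GT/s per lane,
# 128b/130b encoding), in GB/s.
PCIE3_X16_GBPS = 15.75

# How much data could cross the bus per frame at a given frame rate, in MB.
def per_frame_budget_mb(fps, link_gbps=PCIE3_X16_GBPS):
    return link_gbps * 1000.0 / fps

print(f"{per_frame_budget_mb(60):.0f} MB/frame at 60 fps")
print(f"{per_frame_budget_mb(144):.0f} MB/frame at 144 fps")
```

Even at 144 fps that's over 100 MB per frame; once assets are resident in VRAM, a game streams nowhere near that much, which is why the bus is rarely the bottleneck in games.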
 
I remember when AMD launched the MI50 and MI60 cards, and how they are supposed to communicate with the CPU (Epyc) via Infinity Fabric through PCIe 4.0, and I wonder... what if RDNA cards do just that with Zen 2 CPUs?

It may be far-fetched, I know, but if these cards were to have one level of performance with Intel's CPUs and a whole new, higher level with Zen 2 CPUs, that would be quite the breakthrough, wouldn't it?
 
I remember when AMD launched the MI50 and MI60 cards, and how they are supposed to communicate with the CPU (Epyc) via Infinity Fabric through PCIe 4.0, and I wonder... what if RDNA cards do just that with Zen 2 CPUs?

It may be far-fetched, I know, but if these cards were to have one level of performance with Intel's CPUs and a whole new, higher level with Zen 2 CPUs, that would be quite the breakthrough, wouldn't it?
What purpose would that serve?

[Image: DavidWang_NextHorizon_09.jpg]
 
Yup: that's what I was referring to.

Now imagine just one RDNA GPU, but in an AM4 socket with a Zen 2 CPU.
One RDNA GPU communicating with one RDNA GPU?
 
No: communicating with a Zen 2 CPU, via IF through PCIe 4.0.
Can you read graphs?
The MI60 uses PCIe 4.0 for communication with the CPU and IF for GPU-GPU communication.
What would even be the point of using IF for CPU-GPU communication in the mainstream? Driving costs higher for no additional performance?
 
Yeah, you can see the PCB fingers on the picture for the Infinity Fabric links.
 
Can you read graphs?
The MI60 uses PCIe 4.0 for communication with the CPU and IF for GPU-GPU communication.
What would even be the point of using IF for CPU-GPU communication in the mainstream? Driving costs higher for no additional performance?

Perhaps reducing or eliminating some bottlenecks in CPU-GPU communication, thus improving performance?
 
You can see the Infinity Fabric PCB in some pictures of the MI60 installed in servers:
[Image: MI60IF.png]


Has absolutely nothing to do with PCI Express.
 
You can see the Infinity Fabric PCB in some pictures of the MI60 installed in servers:
[Attachment: 124142]

Has absolutely nothing to do with PCI Express.
NVLink 2.0 on Turing can do the same 100 GB/s, which helps with GPU-GPU scaling.
But for CPU-GPU we're not in any danger of running into bottlenecks on PCIe 3.0, and if we do, PCIe 4.0 is already here.
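Putting the numbers quoted in this thread side by side (nominal figures, not measured throughput):

```python
# Nominal link bandwidths mentioned in the thread, in GB/s (approximate).
links = {
    "PCIe 3.0 x16": 15.75,
    "PCIe 4.0 x16": 31.5,
    "NVLink 2.0 (as quoted)": 100.0,
}

base = links["PCIe 3.0 x16"]
for name, bw in links.items():
    print(f"{name:<24} {bw:>6.2f} GB/s ({bw / base:.1f}x PCIe 3.0 x16)")
```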
 
In a realistic graphics rendering workload, though, it doesn't help. The biggest bottleneck is GPU -> VRAM -> GPU, not PCIe.

It helps if you want to do compute. One of the reasons games don't use the GPU much for accelerating things besides graphics is that it requires extra memory transfers, and those usually destroy the real-time performance games require. Many wonder why GPUs aren't used extensively for physics, AI, etc.; it's mostly because of this.

This stuff is useful, and only now is it becoming somewhat feasible. The extra bandwidth was never meant to improve graphics workloads; look at what is actually tested in the benchmark: compute performance for physics calculations.
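The round-trip cost being described can be put into a toy model (illustrative numbers only; `offload_worth_it` and its inputs are made up for the sake of the example):

```python
# Toy model: offloading work (e.g. physics) to the GPU only pays off if the
# CPU time saved exceeds the cost of shipping the data there and back.
def offload_worth_it(data_mb, cpu_ms_saved, link_gbps):
    # Round trip over the bus: data out to the GPU, results back.
    # MB divided by GB/s conveniently comes out in milliseconds.
    transfer_ms = 2 * data_mb / link_gbps
    return cpu_ms_saved > transfer_ms

# 50 MB of simulation state, 4 ms of CPU time saved per frame:
print(offload_worth_it(50, 4.0, 15.75))  # PCIe 3.0 x16: transfer alone ~6.3 ms
print(offload_worth_it(50, 4.0, 31.5))   # PCIe 4.0 x16: transfer ~3.2 ms
```

Doubling the bus bandwidth flips the answer in this example, which is exactly the kind of workload the extra bandwidth is meant to help.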
 