How would you transfer data between the GPU and an RT accelerator? Context being games and real-time RT.
When talking about things like movies, where a lot of time can be spent per frame, the considerations are completely different.
A lot of data is, and needs to be, common between shaders and RT, and there are precious few milliseconds to spend on moving it. PCIe latency on the pure transfer layer was IIRC ~0.5 ms; that will increase with contention, and the total cost depends on the data - how much of it there is and how many transfers are needed (for a single frame that ideally needs to fit into 8 ms).
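For scale, here's a back-of-envelope sketch. Both numbers come straight from the post above (the 0.5 ms is my recollection, not a datasheet figure):

```python
# Back-of-envelope: how many synchronous PCIe round trips fit into a frame
# budget if each one costs ~0.5 ms of latency alone, before any bandwidth
# or contention effects even enter the picture.
FRAME_BUDGET_MS = 8.0      # target frame time from the post (125 fps)
PCIE_ROUND_TRIP_MS = 0.5   # assumed pure transfer-layer latency per trip

max_trips = FRAME_BUDGET_MS / PCIE_ROUND_TRIP_MS
print(f"At best {max_trips:.0f} blocking transfers per frame,")
print(f"each eating {PCIE_ROUND_TRIP_MS / FRAME_BUDGET_MS:.1%} of the budget")
```

So even a handful of blocking hops per frame burns a noticeable slice of the budget, which is the whole problem.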
To reduce reliance on PCIe we could move the RT accelerator to a dedicated bus - Infinity Fabric or NVLink being nice candidates, for example. But this leads to a bunch of questions, like: why a separate card at all? Let's put the RT accelerator chip on the graphics card. Once it's on the graphics card, there are still power and latency penalties for moving data between chips (or off-die in general). So why not put it into the GPU itself? What then gets weighed is the number of units / the performance needed for the current introductory stage of the technology against the area cost of whatever the designer comes up with. It seems what they concluded was that the needed level costs a couple percent of die area, and that this was an acceptable tradeoff.
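To make the "why on-die" argument concrete, a toy latency-plus-bandwidth model. Every number here is assumed purely for illustration, not measured:

```python
# Toy model (illustrative numbers only) for moving the same shared scene
# payload over different links: transfer_time = fixed latency + size/bandwidth.
# The point: the fixed per-hop cost dominates small frequent transfers,
# which is what pushes the RT units on-die.
def transfer_ms(payload_mb: float, latency_ms: float, bandwidth_gbs: float) -> float:
    return latency_ms + (payload_mb / 1024) / bandwidth_gbs * 1000

payload = 64  # MB of shared data per frame, purely hypothetical
links = {
    "PCIe 4.0 x16 (separate card)": (0.5,   32),    # assumed numbers
    "NVLink-class (dedicated bus)": (0.1,  100),    # assumed numbers
    "on-die (shared caches)":       (0.001, 2000),  # assumed numbers
}
for name, (lat, bw) in links.items():
    print(f"{name}: {transfer_ms(payload, lat, bw):.3f} ms")
```

With these made-up inputs the separate card costs milliseconds, the dedicated bus a fraction of one, and on-die is effectively free - the same ordering the paragraph above argues from.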
I feel like you're trying to argue with me, but I never pretended to have a clue what I'm talking about. I asked because I don't. Perhaps I also misread your tone, who knows...
Anyway, your reasoning makes sense. I don't know how much sense, because I'm no expert, but sense exists.
I try to keep an open mind until I get bit for it
And I keep a respectable amount of skepticism until I'm shown it actually was cool and I was wrong to question someone's move. Never get disappointed this way.
It's constantly being fixed with new drivers.
It was broken from the start, and that matters more. What matters even more is that it's still worse than nVidia's numbers despite 40 (!) months having passed since people noticed AMD are behind in that department, and 13 (!) months since it became a total casino.
Who cares if you only game?
No one, but if device B has fewer purposes than device A, then it doesn't deserve an equal price.
Everyone who bought a game only to realise it only has DLSS/XeSS and not FSR. And at the desired quality settings they don't get satisfactory performance, and native rendering sucks as well. Missing feature = should be discounted.
That's an opinion not everyone agrees with (I don't).
You can just measure the artifacting and noise levels of DLSS, XeSS, and FSR. It's a fact that FSR is not only behind DLSS, it's already behind XeSS too. Both DLSS and XeSS are evolving at at least twice FSR's pace. I also dare to remind you AMD haven't introduced frame generation yet. Yes, FSR3 technically exists and a couple of games no one cares about support it, but popular games are still only getting 3rd-party mods - which is great, because the games actually get these mods and they work decently well, but it sucks because AMD are at least 14 months late to this party, by which point even the isopropyl alcohol has been drunk. That alone is worth another couple bucks off.
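One crude way to put a number on "just measure it" - a minimal sketch, assuming you've captured a native-res reference frame and an upscaler's output as same-size uint8 RGB arrays (real comparisons use SSIM/FLIP and full video sequences, but the principle is the same):

```python
# PSNR between a native reference frame and an upscaled frame.
# Higher PSNR = closer to native; run the same frame through
# DLSS/XeSS/FSR captures to compare them on equal footing.
import numpy as np

def psnr(reference: np.ndarray, upscaled: np.ndarray) -> float:
    diff = reference.astype(np.float64) - upscaled.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```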
True, but not really an issue.
Do I need to remind you that not everyone has cheap/free electricity? Do I need to remind you that not everyone is willing to spend additional money on a PSU? This is one more thing to discount for.
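Quick arithmetic on the electricity side, with every input a placeholder rather than a measured figure:

```python
# Rough yearly cost of a sustained power gap between two cards.
# All three inputs are assumptions for illustration only.
watt_gap = 60          # assumed extra draw of the less efficient card, W
hours_per_day = 3      # assumed gaming time
price_per_kwh = 0.30   # assumed electricity price, $/kWh

yearly_cost = watt_gap / 1000 * hours_per_day * 365 * price_per_kwh
print(f"~${yearly_cost:.0f} extra per year")  # ~$20/yr with these inputs
```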
every card in every tier is £50-100 cheaper than the equivalent Nvidia one?
Untrue. The RTX 4060 is priced nearly identically to the RX 7600 and it's better in EVERYTHING: it's faster, it has DLSS, it eats less power, it's more compact, and it handles non-gaming loads with more ease. The RTX 4070 is becoming cheaper than the RX 7800 XT and, mind you, the 4070 has access to CUDA, DLSS, way faster RT and way lower power usage, just like the 4060 vs the 7600 - all while not being noticeably slower in non-RT in the first place. And I'm pointing at the most pro-AMD price segment possible. At $700+, AMD GPUs make very little sense, because raster performance is at the "very few people care if it's 200 or 220 FPS" level and the feature set is nowhere near there.
No, they're trying to be cheaper, which is something.
This is a bad thing. AMD should've tried to become more interesting. That means making their GPUs exactly the same price as nVidia's but MUCH faster/better in at least a couple of things. Just imagine having the RX 7700 XT's performance under the name RX 7600, at the RX 7600's wattage, for $300. The 4060 would've been completely outclassed and AMD would've made this market more competitive. Yet they didn't. Prices only go down because demand itself is at a record low after the mining and COVID crises. The 7800 XT's "success" is only made of low production volume. We'd see Great Chinese Walls of 7800 XTs if they were produced in the same amounts as 7700 XTs.
I don't wanna put on a tinfoil hat preemptively, but it all feels like AMD is a subsidiary of nVidia that exists only for anti-monopoly reasons.
The new connector is not an Nvidia thing, they just adopted it first.
So they could've said "buzz off, this is a cancerous idea", but they did what they did.