
ASUS Radeon RX 9060 XT Prime OC 16 GB

Maybe not now, but I do expect to see some 'choice' silicon from recent and/or near-future generations end up that way.

Esp. if nVidia wants to exit the consumer graphics market, I could see nV 'backporting' their last gen. of AI/MI GPGPUs onto a less 'congested' production node.
AI/MI tools internal to nVidia would probably make short work of an otherwise very complicated process. As long as a 'sufficient' node had ample production capacity, I could see it happening.
If they did, people would complain about "old nodes" or "bait and switch" etc. Has happened before.
 
Esp. if nVidia wants to exit the consumer graphics market, I could see nV 'backporting' their last gen. of AI/MI GPGPUs onto a less 'congested' production node.
That would just be impossible. Older node = lower power efficiency. And power consumption (including heat production) is the primary limitation and cost for datacenters. It would take something like a 95% price cut to make an older, less efficient node worth it over the long term for major customers. Plus, then regulators would be up in arms over more carbon production and water usage.
 
TSMC isn't even the actual monopoly. They're just the best foundry, so they're always a few years ahead of Samsung and Intel on smaller nodes, and their quality and yields are better. ASML is the actual monopoly: they're the ONLY manufacturer of the photolithography machines for small nodes.
Correct. There are fabs worldwide still making power ICs and the like on ancient lithography. IIRC, there are even 6x86-class SoCs still being manufactured for industrial control, etc.

No hard feelings towards ASML, but I find it absolutely insane they're a global monopoly. One would think there'd be 'national security interests' in facilitating more suppliers.

If they did, people would complain about "old nodes" or "bait and switch" etc. Has happened before.
Not wrong. TBF though, nVidia has already earned a reputation all on their own.
 
If they did, people would complain about "old nodes" or "bait and switch" etc. Has happened before.
Which would only be an issue if there's no communication. Personal opinion, but if AMD came out with a 9050, or NV with a 5050, EXPLICITLY saying it was produced on older tech but still with the current architecture so it'd be cheaper to produce and to market, I'd applaud, even if I wouldn't personally buy one. But hey, that wouldn't leave smiles on the shareholders' faces.
 
I understand, but at the same time, what is the limit?

Where does it stop mattering?

I don't believe that the human eye will perceive changes ad infinitum.

We have hard limits everywhere, so the question persists: up to what point do these numbers matter?
At the moment, it depends on the physical capabilities of the user. I myself can very much tell the difference between 120Hz and 240Hz in an FPS game, so much so that black frame insertion would be noticeable to me, but for others not so much.

I can agree that 240+ Hz is diminishing returns, depending on how well the game handles latency and input lag. Same with 2000Hz+ polling rates on devices.
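To put rough numbers on that diminishing-returns point (my own back-of-the-envelope sketch, not from anyone in the thread): every doubling of refresh or polling rate halves the corresponding interval, so each step buys less and less in absolute terms.

```python
# Frame time per refresh rate, and worst-case delay contributed by polling alone.
# Each doubling of the rate halves the interval, so the absolute gains shrink fast.
for hz in (60, 120, 240, 480, 1000):
    print(f"{hz:>4} Hz refresh -> {1000 / hz:6.2f} ms per frame")

for hz in (500, 1000, 2000, 8000):
    print(f"{hz:>4} Hz polling -> up to {1000 / hz:5.3f} ms added by the poll interval")
```

Going from 120 to 240 Hz saves about 4.2 ms per frame, 240 to 480 Hz only about 2.1 ms, and 2000 Hz+ polling is shaving fractions of a millisecond.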
 
This is hypothetical testing and merely a thought experiment.
Handicapping a modern processor to show the disadvantage of PCIe 3.0 is just silly.
Like putting 1980s tires on a 2020 Ferrari...

CPU+mobo combos that only offer PCIe 3.0 are now, what, 5 to 7 years old?
Intel 10th gen and AMD Ryzen 2000 series (Zen+)?
These systems have CPU constraints outside of just PCIe bandwidth.
You can put a 5800X3D in any old A320 motherboard...
 
The 6800XT has similar performance to the 7800XT because it has significantly more compute units.
The 7800XT should not be thought of as the 6800 XT's real successor.
I've always considered the 6800XT as the vanilla 6900, since it has far too much in common with the 6900XT to be anything else.
 
No hard feelings towards ASML, but I find it absolutely insane they're a global monopoly. One would think there'd be 'national security interests' in facilitating more suppliers.
I'm sure China is not happy about ASML being the sole supplier for EUV machines, but it's too complicated for them to steal and replicate, so they're stuck lol. And for the US and EU, it's better for their national security to keep ASML as the sole supplier, under their control. Blacklisting China and Russia from advanced EUV sales has crippled their domestic semiconductor industry. China has to resort to smuggling in whatever Nvidia chips they can get. So it works for the US and EU natsec.

You can put a 5800X3D in any old A320 motherboard...
Sure, but how many people are in the position of getting a $350 5800X3D or 5700X3D, a $400 5060-Ti, but can't spend another $80 for a cheap B550 mobo to get PCIe 4.0?
 
That would just be impossible. Older node = lower power efficiency. And power consumption (including heat production) is the primary limitation and cost for datacenters. It would take something like a 95% price cut to make an older, less efficient node worth it over the long term for major customers. Plus, then regulators would be up in arms over more carbon production and water usage.
nVidia has long decided that we plebeian consumers do not need any datacenter-class graphics/capabilities. nVidia would already be cutting down the uArch for production/cost efficiency.

TBQH, with what we've seen with GDDR7, nV can just eat up the newest nodes' production capacity for datacenter products and produce 'consumer graphics' on the same (prev-gen) node(s) as AMD and Intel, which would further pressure their competitors.
 
Sure, but how many people are in the position of getting a $350 5800X3D or 5700X3D, a $400 5060-Ti, but can't spend another $80 for a cheap B550 mobo to get PCIe 4.0?
Those already running their 300- or 400-series systems just fine? Why swap the motherboard if theirs supports the X3Ds?
 
Sure, but how many people are in the position of getting a $350 5800X3D or 5700X3D, a $400 5060-Ti, but can't spend another $80 for a cheap B550 mobo to get PCIe 4.0?
Fair, but... There's something not often brought up about A320 and A520:
(largely) Because the boards are so 'stripped down', they POST and load into Windows faster than any other contemporary. At least you get a full x16 lanes, even if only Gen3.

I was legitimately upset by the fact my Asus A320M-K + Air Cooled 5700X3D was faster to POST+boot, and 'snappier' than my Asus Tuf X570 + AIO'd 5800X3D.
 
Those already running their 300- or 400-series systems just fine? Why swap the motherboard if theirs supports the X3Ds?
Because they would lose GPU performance on PCIe 3.0 and it's still cheaper than switching to AM5.
 
Because they would lose GPU performance on PCIe 3.0 and it's still cheaper than switching to AM5.
Gen4 x4 was sufficient for Navi 24, but it only had x4 lanes.
On paper, this is half the chip as the Navi 48 powering the Radeon RX 9070 series, but with one key change that sets it apart from its predecessors, Navi 33 and Navi 23—the chip comes with a full PCI-Express 5.0 x16 host interface, just like Navi 48, and not a truncated PCI-Express 5.0 x8.
RX 9060 XT @ Gen3x16 is still similar in bandwidth to Gen5x4.

I think we need some PCIe testing on Navi 44 and 48...
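Quick sanity check of the bandwidth point (my own sketch, using the commonly cited theoretical per-direction figures after encoding overhead; the helper function is just for illustration):

```python
# Approximate per-direction PCIe bandwidth: GB/s per lane (post-encoding) times lane count.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    return GBPS_PER_LANE[gen] * lanes

print(f"Gen3 x16: {pcie_bandwidth_gbps(3, 16):.1f} GB/s")  # ~15.8 GB/s
print(f"Gen5 x4 : {pcie_bandwidth_gbps(5, 4):.1f} GB/s")   # ~15.8 GB/s, same ballpark
print(f"Gen3 x8 : {pcie_bandwidth_gbps(3, 8):.1f} GB/s")   # ~7.9 GB/s, an x8 card on an old board
```

So a full x16 link keeps a Gen3 board at roughly Gen5 x4 levels of bandwidth, while an x8 card on the same board only gets half of that.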
 
(largely) Because the boards are so 'stripped down', they POST and load into Windows faster than any other contemporary.

That is an operating system issue with Windows 11 Pro, not a hardware problem. Other live CDs and operating systems boot much faster.
A live CD is a fair comparison. It's quite a bit faster than the installed Windows 11 Pro on my desktop and laptops 1, 2, and 3.
 
Didn't expect much from the 9060XT, and AMD proved it again: they are fast asleep behind the wheel. Mid-range GPUs have become a joke...

 
Because they would lose GPU performance on PCIe 3.0 and it's still cheaper than switching to AM5.
Personal opinion: I'd rather take the slight performance loss than the financial loss of this sidestep. Case in point: I run a 7900XTX on an X470 board with a 5700X3D.
 
Didn't expect much from the 9060XT, and AMD proved it again: they are fast asleep behind the wheel. Mid-range GPUs have become a joke...

Quoting Clownus Tech Tips?

I mean, it's not like his reputation went down the drain alongside his apology video for misreporting information, testing methodology and work ethics.

Next argument, please.
 
At what point does it stop mattering from a player's physical point of view?

I understand, but at the same time, what is the limit?

Where does it stop mattering?

I don't believe that the human eye will perceive changes ad infinitum.

We have hard limits everywhere, so the question persists: up to what point do these numbers matter?
Not sure if you want a serious answer, but hypothetically? Up to 1000 FPS at 1000Hz of refresh. That’s where any remaining persistence motion blur would be removed to the human eye in terms of displayed frames, and it would also provide input that is as responsive as possible within the limits of computer I/O. So… yeah. Obviously, this comes with the caveat of extreme diminishing returns, but high-level players absolutely can and will feel and see the difference - just this year ESL switched to using 600Hz Zowie monitors for their tournaments starting from Katowice, and all the players commented on it being a very significant improvement compared to the 360Hz ones used previously.
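A rough persistence-blur calculation (my own sketch, assuming a non-strobed sample-and-hold display and an object tracked by the eye at a hypothetical 1000 px/s pan speed) shows where that 1000 FPS / 1000 Hz figure comes from:

```python
# On a sample-and-hold display, an eye-tracked object smears across roughly
# (speed * frame_time) pixels per frame; ~1 px of smear is effectively blur-free.
speed_px_per_s = 1000  # hypothetical pan speed across the screen

for hz in (60, 120, 240, 360, 600, 1000):
    blur_px = speed_px_per_s / hz  # perceived smear width in pixels
    print(f"{hz:>4} Hz -> ~{blur_px:5.2f} px of persistence blur")
```

At 240 Hz that is still about 4 px of smear, at 600 Hz about 1.7 px, and only around 1000 Hz does it drop to roughly a single pixel.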
 
That is an operating system issue with Windows 11 Pro, not a hardware problem. Other live CDs and operating systems boot much faster.
A live CD is a fair comparison. It's quite a bit faster than the installed Windows 11 Pro on my desktop and laptops 1, 2, and 3.
I disagree. It was consistently faster to POST and boot, even into Windows 10 IoT Enterprise LTSC 21H2.
 
Wonder why they didn't want to remove Doom as well, since it's also a huge outlier. Do you have any idea why they are fine with Doom but not the other outliers?

(Doom Eternal 1920x1080 benchmark chart)

Simply because W1zzard had already said he’d replace it with The Dark Ages. CS2 is just too lightweight to be a meaningful GPU benchmark.

 
PCIe x16 is something this card has in its favor compared to its competitors.

Yes, this is one of its biggest draw cards. Most people with 7800X3D/9800X3D, 14900, or 285K systems would not be purchasing this card. It's better for older systems like AM4/LGA1200 and previous sockets.

Look at the 5800X3D, with 95.9% of the 9600X performance. It runs on motherboards from the Zen 1 era. I'm running a 5700X3D on a B350 board. I could get that performance, or get a 5060 Ti and see a huge performance loss from PCIe 3.0 x8.

Agreed.

We have to remember, all of these cards are slower or barely faster than a 5-year-old 3070.

Very much doubt I would buy a mined-on, 5-year-old 8GB 3070 over any of those new cards you listed, but yes, performance is very similar.
 
Well, if I can get the Sapphire Pulse 9060 XT 16GB for £320 plus postage tomorrow, I'm in there like swimwear, considering the cheapest 5060 Ti is £400 plus postage! Overclockers put this page up for ~1 hour the other day by mistake, so I'm hoping them's the real prices...
 

Attachments

  • 9060xt.png
Very much doubt I would buy a mined-on, 5-year-old 8GB 3070 over any of those new cards you listed, but yes, performance is very similar.
Not suggesting one should, but lots of people have cards like the 3070, 60 Ti, etc., and there is nothing that's an upgrade 5 years later that doesn't cost 600 euros.
 
Sure, but how many people are in the position of getting a $350 5800X3D or 5700X3D, a $400 5060-Ti, but can't spend another $80 for a cheap B550 mobo to get PCIe 4.0?

The question is why spend $350 on a 5800X3D when you can buy a 13600K and a brand-new mobo for $350 and get your shiny new PCIe slots.

Ah yeah, ngreedia, I guess.
 