
AMD Radeon RX 9060 XT "Updated" Spec Leak Indicates PCIe 5.0 x16 Interface

T0@st

News Editor
Late last week, Amazon's prematurely published product pages provided an early insight into XFX's alleged implementation of a 3320 MHz boost clock—90 MHz above AMD's speculated reference setting for not-yet-official Radeon RX 9060 XT 16 GB and 8 GB graphics cards. A couple of hours ago, momomo_us shared a fresher set of leaked specifications. The tenured PC tech industry observer outlined two separate tranches of unnamed hardware clocks—presumably linked to factory overclocked Radeon RX 9060 XT variants: "up to 3320 MHz, Game: 2780 MHz" and "up to 3290 MHz, Game: 2700 MHz." As interpreted by VideoCardz, these numbers align with older RDNA 4 rumors—previously, insiders claimed that Team Red's Navi 44 GPU was capable of clocking higher than the larger "Navi 48" sibling. According to their investigative article: "new information (indicates) that the RX 9060 XT's game clock is actually higher than the boost clock of the RX 9070 (non-XT)...The highest boost clock reported for the RX 9060 XT so far is 3320 MHz, which significantly exceeds the RX 9070 XT's 3100 MHz found on its fastest variants."

The latest momomo_us bulletin contains mostly repeated information, but graphics card analysts were confounded by the mention of a PCIe 5.0 x16 bus interface. Prior disclosures pointed to PCIe 5.0 x8, as listed within TechPowerUp's GPU database entry. VideoCardz lent the new figure some credence: "small GPUs like Navi 44 typically don't require wider interfaces, as performance gains are minimal. However, with PCIe 5.0, using only 8 lanes would result in slightly reduced performance on PCIe 4.0 systems, something already seen with the GeForce RTX 5060 Ti 8 GB variant. By choosing a wider interface, AMD may be attempting to avoid similar issues." Leading up to AMD's debut of its first wave of RDNA 4 gaming cards, speculation was split over whether the company would opt for the PCIe 4.0 or PCIe 5.0 standard. A public delivery of finalized Radeon RX 9060 XT specifications is likely due on May 21, during Computex 2025.
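For a rough sense of why the lane count matters more on older platforms, here is a minimal back-of-the-envelope sketch (plain Python; the per-lane figures are nominal post-encoding rates, not measured throughput) comparing a hypothetical x8 card against an x16 card in a PCIe 4.0 slot:

# Nominal per-lane throughput in GB/s after 128b/130b encoding overhead
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate one-direction PCIe link bandwidth in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# A PCIe 5.0 x8 card dropped into a PCIe 4.0 system only negotiates Gen 4 x8:
print(round(link_bandwidth(4, 8), 1))   # ~15.8 GB/s
# The same card with a full x16 interface in that PCIe 4.0 system:
print(round(link_bandwidth(4, 16), 1))  # ~31.5 GB/s
# And x8 at full PCIe 5.0 speed, for comparison:
print(round(link_bandwidth(5, 8), 1))   # ~31.5 GB/s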



View at TechPowerUp Main Site | Source
 
Wait, what? Wouldn't it be cheaper to just ship a 16GB-only model on an x8 interface instead of an 8GB model with an x16?
 
128 bits in this day and age is abysmal.
 
Wait, what? Wouldn't it be cheaper to just ship a 16GB-only model on an x8 interface instead of an 8GB model with an x16?
How much more does a PCIe x16 interface cost versus an x8? They probably decided the savings weren't worth it and chose to eliminate any potential issues by keeping the same x16 connection. No need for AIBs to spend time redesigning boards from the 9070s. Reduce the cooling mass and VRMs and they already have a 9060 XT-ready board.
 
Interesting to see AMD isn't cheaping out on the PCIe interface; a full x16 interface makes it more suitable for older motherboards.
 
Seems like PCIe 5.0 is less important than the x16, IMO, as such a card might go into an older system on a slower bus. The extra lanes will be preferred over the bus speed, but maybe that doesn't require much extra cost or effort on the GPU side.
128 bits in this day and age is abysmal.
I dunno. Infinity Cache helps offset a lack of raw bandwidth, and increasing GDDR speeds reduce the need for a wider bus. Even then, it's not like total bandwidth is everything. The 6600 XT has 256 GB/s of bandwidth on a 128-bit bus, and that card outperformed the 5700 XT with its 448 GB/s, 256-bit bus. There's always a right-sizing of bus width for a given GPU, and what we saw with the 4060 Ti is that 128-bit isn't a problem for the target market. I think the bigger concern is memory capacity, and that's where 128-bit creates the uneasy choice between a cramped 8 GB and an overkill 16 GB. That's where, hopefully, 3 GB GDDR7 chips will help down the road.
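To put those numbers in context, here's a quick sketch of the usual peak-bandwidth math (plain Python; the per-pin data rates are the stock GDDR6 speeds of those two cards):

def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gbs(128, 16.0))  # 256.0 GB/s -> RX 6600 XT: 128-bit bus, 16 Gbps GDDR6
print(mem_bandwidth_gbs(256, 14.0))  # 448.0 GB/s -> RX 5700 XT: 256-bit bus, 14 Gbps GDDR6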
 
Interesting to see AMD isn't cheaping out on the PCIe interface; a full x16 interface makes it more suitable for older motherboards.
Of course you would have something positive to say about an 8GB AMD card lol :rolleyes:

Carry on..
 
128 bits in this day and age is abysmal.
Memory bus width isn't a parameter that's expected to just keep growing with time.

The memory bus width that's practical to implement is largely dependent on die size. Memory controllers account for a small proportion of the die on large dies, so high end cards with large dies tend to have wider memory buses because it boosts performance and is not much extra cost to add, relatively speaking. Lower end cards have smaller dies, so they tend to end up with narrower memory buses.

You can see for example, on the big GK110 die, there's plenty of space along the edge for memory controllers, so it ended up with a 384-bit memory bus.
Or scroll down to near the bottom of this article for a comparison of TU116 and TU106.

Increasing memory bandwidth requirements have been met by increases in memory speed (i.e. newer GDDR and HBM memory) and better memory compression (e.g. p10-p11 in this Maxwell whitepaper https://www.techpowerup.com/gpu-specs/docs/nvidia-gtx-980.pdf), not by jacking up bus width on small chips.
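As a rough illustration of that point (a sketch using nominal per-pin data rates, not any particular card's spec), holding the bus at 128-bit while stepping through GDDR generations already multiplies bandwidth several times over:

def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    # (bus width in bits / 8) bytes per transfer * per-pin data rate in Gbps
    return bus_width_bits / 8 * data_rate_gbps

for name, rate in [("GDDR5 @ 8 Gbps", 8.0), ("GDDR6 @ 16 Gbps", 16.0), ("GDDR7 @ 28 Gbps", 28.0)]:
    print(f"{name}: {mem_bandwidth_gbs(128, rate):.0f} GB/s on a 128-bit bus")
# 128 -> 256 -> 448 GB/s, all without ever widening the bus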
 
Why do new low-end cards even have reduced lanes? Does it save that much die space, or is it just to make the cards even worse than they already are? I mean, when PCIe came out, even the lowest of the low end had full x16 bandwidth.

Memory bus width isn't a parameter that's expected to just keep growing with time.

The memory bus width that's practical to implement is largely dependent on die size. Memory controllers account for a small proportion of the die on large dies, so high end cards with large dies tend to have wider memory buses because it boosts performance and is not much extra cost to add, relatively speaking. Lower end cards have smaller dies, so they tend to end up with narrower memory buses.

You can see for example, on the big GK110 die, there's plenty of space along the edge for memory controllers, so it ended up with a 384-bit memory bus.
Or scroll down to near the bottom of this article for a comparison of TU116 and TU106.

Increasing memory bandwidth requirements have been met by increases in memory speed (i.e. newer GDDR and HBM memory) and better memory compression (e.g. p10-p11 in this Maxwell whitepaper https://www.techpowerup.com/gpu-specs/docs/nvidia-gtx-980.pdf), not by jacking up bus width on small chips.
Yeah, and low-end cards aren't saved just by higher bandwidth. At least they aren't gimped with a 64-bit bus (with a few exceptions) like they used to be.
 
Why do new low-end cards even have reduced lanes? Does it save that much die space, or is it just to make the cards even worse than they already are? I mean, when PCIe came out, even the lowest of the low end had full x16 bandwidth.


Yeah, and low-end cards aren't saved just by higher bandwidth. At least they aren't gimped with a 64-bit bus (with a few exceptions) like they used to be.
I dunno, the link speeds are so much faster now that more lanes only matter if you're on a slower link speed. I'd bet it doesn't matter on these tiers of cards, especially the models with more VRAM.
 
A PCIe 5.0 x16 interface for this card!? What's the point of that?
 
I dunno, the link speeds are so much faster now that more lanes only matter if you're on a slower link speed. I'd bet it doesn't matter on these tiers of cards, especially the models with more VRAM.
True, but on the other hand, I find it more typical for mid-range cards to end up in older systems as a budget upgrade, and with an older interface AND fewer lanes, that could be a problem. With full x16, I doubt that even PCIe 3.0 would be a problem.
 
True, but on the other hand, I find it more typical for mid-range cards to end up in older systems as a budget upgrade, and with an older interface AND fewer lanes, that could be a problem. With full x16, I doubt that even PCIe 3.0 would be a problem.
The 6600 and 7600 were 8x, weren’t they? If the 9060 goes 16x, that will be a departure.
 
But it's still an 8GB GPU :confused:

Not really sure what there is to be excited about :kookoo:
 
The 6600 and 7600 were 8x, weren’t they? If the 9060 goes 16x, that will be a departure.
Yes they were.

But it's still an 8GB GPU :confused:

Not really sure what there is to be excited about :kookoo:
AMD could've pulled a wiser move than NV and just not released an 8GB model, but there they go... why? I could see an RX 9060 GRE or some similarly named cut-down card being an 8GB one, but the XT..? Hell, with the 7600 XT, the XT's whole purpose was to be a 16GB card.
 
AMD could've pulled a wiser move than NV and just not released an 8GB model, but there they go... why? I could see an RX 9060 GRE or some similarly named cut-down card being an 8GB one, but the XT..? Hell, with the 7600 XT, the XT's whole purpose was to be a 16GB card.

This is the new 9600XT :D
 
This is the new 9600XT :D
At least this time the doubled VRAM matters, unlike 20+ years ago :D I wonder why AMD didn't reuse the legendary 9000-series model names; it would be better than these.
 
Why do new low-end cards even have reduced lanes? Does it save that much die space, or is it just to make the cards even worse than they already are? I mean, when PCIe came out, even the lowest of the low end had full x16 bandwidth.


Yeah, and low-end cards aren't saved just by higher bandwidth. At least they aren't gimped with a 64-bit bus (with a few exceptions) like they used to be.
It reduces power draw in the PCIe lanes.
 
How much more does a PCIe x16 interface cost versus an x8? They probably decided the savings weren't worth it and chose to eliminate any potential issues by keeping the same x16 connection. No need for AIBs to spend time redesigning boards from the 9070s. Reduce the cooling mass and VRMs and they already have a 9060 XT-ready board.
I have no idea, but people have been saying RAM is super cheap, like $20 or less. They could have shipped it with 16 GB, but I guess greed got the better of them. Fake MSRP, outdated 8-pin connectors, and cheaping out on VRAM: someone inside the company must be sabotaging them with the goal of shutting down the Radeon department.
 
outdated 8-pin connectors
What's wrong with traditional PCIe connectors? I'd rather have as many 8-pins as needed instead of even one of those fire hazard connectors.

If it works, don't fix it.
 
Fake MSRP
No less fake than Nvidia's MSRPs.
outdated 8-pin connectors
The 5060 Ti also uses the "outdated" 8-pin connector.
I'd rather have a connector that isn't prone to melting than the newer connector.
cheaping out on VRAM
And like I said you can thank the market leader for it.
shutting down the Radeon department
I know the diehard Nvidia fans wish for no competition at all, but a complete monopoly wouldn't be a good thing. I mean, look at the ridiculous BS Nvidia already pulls with a melting power connector, pushing prices up for everyone, and releasing broken drivers because they don't care about the gaming market.
 