
NVIDIA's Third Largest Ada GPU, the AD106, Features PCIe x8 Interface

Regarding raster, a 384-bit bus with 24 Gbps GDDR6X supports AD102 to the same degree that a 128-bit bus with 18 Gbps GDDR6 supports AD106.
If the 4060 is a full AD106 with 18 Gbps GDDR6, it will draw 200 W in the worst case (like the 3060 Ti), and its FHD performance should be at least at 3070 Ti level!
We should wait for PCI-Express x8 scaling tests, but the performance should essentially be at the level of an overclocked RTX 2080 Ti, which runs at PCI-Express 3.0 x16, offering similar bandwidth to 4.0 x8 (not that we can conclude anything concrete from that, though).
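A quick back-of-the-envelope check in Python (just a sketch: bandwidth in GB/s = bus width in bits * data rate in Gbps / 8; the 18432 and 4608 full-die CUDA counts are the commonly reported figures, not something from this thread):

    # VRAM bandwidth in GB/s = bus width (bits) * data rate (Gbps) / 8
    ad102_bw = 384 * 24 / 8    # 1152 GB/s (384-bit, 24 Gbps GDDR6X)
    ad106_bw = 128 * 18 / 8    #  288 GB/s (128-bit, 18 Gbps GDDR6)
    print(ad102_bw / ad106_bw)    # 4.0x the bandwidth...
    print(18432 / 4608)           # ...feeding 4.0x the CUDA cores,
                                  # so bandwidth per core comes out identical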
 
This will have no effect on performance. Current graphics cards still can't even max out PCIe 3.0 x16 bandwidth, much less 4.0 or 5.0.
 
This will have no effect on performance. Current graphics cards still can't even max out PCIe 3.0 x16 bandwidth, much less 4.0 or 5.0.

This one is x8, not x16.
 
AD102
AD103
AD104
AD106
AD107

Third largest would be AD104. AD106 at 203 mm² is one third of the 4090's die, but sadly carries only one quarter of the CUDA cores: 4608, or even fewer at 3840. Still, considering how late into the cycle the 3050 was released, why are they even talking about AD106 so soon? If true, the 4050 is at least a 3060/3060 Ti.
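The ratios, for anyone checking (a quick sketch using the figures above, plus the commonly reported ~608 mm² AD102 die size):

    # AD106 vs. AD102, figures as cited above plus ~608 mm2 for AD102
    print(203 / 608)       # ~0.33 -> about one third of the die area
    print(4608 / 18432)    # 0.25  -> exactly one quarter of the full CUDA cores
    print(3840 / 18432)    # ~0.21 -> barely over one fifth if cut to 3840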
 
By the way, is PCIe x12 dead forever? It's part of the standard and it would be useful if bifurcation to 12 + 4 were possible, so one more M.2 port could be added.
Is x12 part of the standard? I've looked specifically for any mention of that across various sources previously, and not found any mention of it (including PCI-SIG documentation). I did discover the existence of PCIe x24 while looking, but from what I could find the supported lane widths were 1, 4, 8, 16 and 24.
 
They should also make an x4 one, at a lower price, for those with PCIe 5.0.
So whoever has an 'older' PCIe 4.0 board will pay more in order to get the full throughput.

I have more bad suggestions for NV, but everything in due time.
I'm still waiting for an RX 7400/7500 with PCIe Gen 5 x2; with any luck, they may decide to offer users PCIe Gen 5 x1 this time (that would be an outstanding product :roll:)

:)
 
The 3050 doesn't care if you're running it at x8 @ 3.0:

Can't imagine why the falloff for the 4050 would be any higher, while the 4060 may be around 5% slower. At least they aren't castrating the AD107 GPU like AMD continues to do with the RX 7500!
[Image: relative performance chart, 1920x1080]
 
Narrower memory bus, now x8 PCIe. It seems they had problems with higher clock speeds on consumer-grade PCBs.
 
Is x12 part of the standard? I've looked specifically for any mention of that across various sources previously, and not found any mention of it (including PCI-SIG documentation). I did discover the existence of PCIe x24 while looking, but from what I could find the supported lane widths were 1, 4, 8, 16 and 24.
I saw it first on Wikipedia
Up to and including PCIe 5.0, x12 and x32 links were defined as well but never used.
then in various other sources, old rather than new, with passing mention:

I can't access the original source of PCI-SIG documents but here is another source for PCIe 2.0 base specification, and it mentions x12:

What version of the documentation were you looking at? Perhaps the 6.0 spec leaves out x12 and introduces x24?

Edit: Surprise, you can buy PCIe 5.0 x24 connectors from Amphenol; they have 230 pins and would hang over the edge of a mini-ITX motherboard:
Does anyone know where these are used?
 
Is x12 part of the standard? I've looked specifically for any mention of that across various sources previously, and not found any mention of it (including PCI-SIG documentation). I did discover the existence of PCIe x24 while looking, but from what I could find the supported lane widths were 1, 4, 8, 16 and 24.

Aren't lanes just lanes? There's no mechanical x12 slot, but I don't see why they can't just handshake with 12 lanes, just like they would in a lower or higher lane count slot.

Boards will waste lanes, but NVIDIA still gets the die space savings from a smaller PCIe PHY.
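Something like this naive negotiation model is what I have in mind (purely illustrative; real LTSSM link training is far more involved, and the width sets here are made up for the example):

    # Naive model: the link trains to the widest width both ends support.
    # Pre-6.0 spec widths: x1, x2, x4, x8, x12, x16, x32.
    def negotiate(dev_widths, host_widths):
        common = dev_widths & host_widths
        return max(common) if common else None

    gpu  = {1, 2, 4, 8, 12}        # hypothetical x12-capable GPU
    host = {1, 2, 4, 8, 12, 16}    # hypothetical host that allows x12
    print(negotiate(gpu, host))    # 12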
 
This one is x8, not x16.
I think you misunderstood my point. Current GPUs can't even max out the bandwidth of 16 lanes of PCIe 3.0, so 8 lanes of 4.0 (which is the same bandwidth as 16 lanes of 3.0) or 5.0 will not even come close to being a problem or causing any kind of bandwidth bottleneck. This is a non-issue.
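The raw numbers back that up (per-direction throughput; 3.0, 4.0 and 5.0 all use 128b/130b encoding, with each generation doubling the transfer rate):

    # Per-direction PCIe throughput in GB/s = GT/s * lanes * (128/130) / 8
    for gen, gts in (("3.0", 8), ("4.0", 16), ("5.0", 32)):
        for lanes in (8, 16):
            print(f"PCIe {gen} x{lanes}: {gts * lanes * 128 / 130 / 8:.2f} GB/s")
    # PCIe 3.0 x16 and 4.0 x8 both land at ~15.75 GB/s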
 
I think you misunderstood my point. Current GPUs can't even max out the bandwidth of 16 lanes of PCIe 3.0, so 8 lanes of 4.0 (which is the same bandwidth as 16 lanes of 3.0) or 5.0 will not even come close to being a problem or causing any kind of bandwidth bottleneck. This is a non-issue.

You're assuming everyone has a 4.0 mobo. That's the issue here.
 
NVIDIA still gets the die space savings from a smaller PCIe PHY.
This! The PCIe PHY is a big chunk of exceedingly complex analogue electronics, even more so if it's 5.0. Pretty sure it takes up a significant part of the die, and may even be a cause of lower yields.
 
I saw it first on Wikipedia

then in various other sources, old rather than new, with passing mention:

I can't access the original source of PCI-SIG documents but here is another source for PCIe 2.0 base specification, and it mentions x12:

What version of the documentation were you looking at? Perhaps the 6.0 spec leaves out x12 and introduces x24?

Edit: Surprise, you can buy PCIe 5.0 x24 connectors from Amphenol; they have 230 pins and would hang over the edge of a mini-ITX motherboard:
Does anyone know where these are used?
x24 has some adoption in servers of various kinds.

Aren't lanes just lanes? There's no mechanical x12 slot, but I don't see why they can't just handshake with 12 lanes, just like they would in a lower or higher lane count slot.

Boards will waste lanes, but NVIDIA still gets the die space savings from a smaller PCIe PHY.
No. Lanes come from controllers, which group lanes in various ways. Very few PCIe controllers consist of a collection of individually addressable lanes, as that's rather inefficient in terms of die space when you'll be running them grouped. Bifurcation support depends on how these lanes are grouped in hardware, and how the controller(s) are able to sync and split these configurations. Starting with current consumer CPU PEG lanes as an example, that's "a x16 controller" that's internally made up of two x8 hardware blocks in order to facilitate CF/SLI bifurcation. On modern platforms these x8 blocks can again be split into x4+x4. My guess is that there's some problem with half the lanes from one controller being paired with the lanes from the other one, rather than them running fully synced or not at all. Though it might just be a case of "this is such a niche use case, we can't spend our budget on R&D and QC for this".
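A toy model of the grouping I'm describing (illustrative only; the two-x8-block layout is how I understand current consumer PEG roots, not something out of a datasheet):

    # Toy model: a consumer PEG "x16 controller" built from two x8 blocks.
    # Each block runs whole or splits into x4+x4; both can also fuse to x16.
    # Note that no combination ever yields a 12-lane link, since lanes
    # can't be regrouped unevenly across the block boundary.
    def peg_configs():
        yield (16,)                       # both x8 blocks fused
        block_modes = [(8,), (4, 4)]      # each x8 block: whole or split
        for a in block_modes:
            for b in block_modes:
                yield a + b

    print(list(peg_configs()))
    # [(16,), (8, 8), (8, 4, 4), (4, 4, 8), (4, 4, 4, 4)]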
 
On modern platforms these x8 blocks can again be split into x4+x4. My guess is that there's some problem with half the lanes from one controller being paired with the lanes from the other one, rather than them running fully synced or not at all. Though it might just be a case of "this is such a niche use case, we can't spend our budget on R&D and QC for this".
One more thing. PCIe x12 would send 3 bytes in 2 transfers, and everything is more difficult (if only a little bit) in the binary world if you have to split anything into units whose size isn't a power of 2.
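To put a number on it (a deliberately simplified picture of byte striping, one byte per lane per transfer, ignoring real PCIe framing; the 64-byte payload is just an example):

    # Bytes are striped one per lane per transfer, so each transfer moves
    # `lanes` bytes. Power-of-2 payloads end exactly on a transfer boundary
    # for power-of-2 lane counts, but not for x12.
    import math
    payload = 64    # bytes, e.g. one cache line
    for lanes in (8, 12, 16):
        transfers = math.ceil(payload / lanes)
        idle = transfers * lanes - payload
        print(f"x{lanes}: {transfers} transfers, {idle} idle byte-slots")
    # x8: 8 transfers, 0 idle | x12: 6 transfers, 8 idle | x16: 4 transfers, 0 idle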
 
"Regardless, having a PCIe lane count of 8 could possibly impact performance of the GPU on systems with PCI-Express Gen 3"
That's absolute nonsense. There isn't a GPU on the market, nor will there be anytime in the near future, that could possibly be bottlenecked by a PCIe Gen 3 x8 slot. Countless people right now are running the highest-end GPUs in a Gen 3 x8 slot with zero performance loss, so there's no chance a next-gen mid-range card could be bottlenecked by doing the same.
 