
NVIDIA's Third Largest Ada GPU, the AD106, Features PCIe x8 Interface

btarunr

Editor & Senior Moderator
It looks like NVIDIA is finally taking AMD's route in the mid-range by giving the third-largest silicon in its next-generation GeForce "Ada" RTX 40-series a narrower PCI-Express host interface. The AD106 silicon will be NVIDIA's third-largest client GPU based on the "Ada" architecture, and succeeds the GA106 powering the likes of the GeForce RTX 3060. This chip reportedly features a narrower PCI-Express x8 host interface. At this point we don't know whether the AD106 comes with PCI-Express Gen 5 or Gen 4. Regardless, a lane count of just 8 could impact performance of the GPU on systems limited to PCI-Express Gen 3, such as 10th Gen Intel "Comet Lake," or even AMD's Ryzen 7 5700G APU.
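As a rough sanity check on that bandwidth argument, the theoretical per-direction link rates are easy to work out. The sketch below uses the per-lane transfer rates and 128b/130b encoding of PCIe Gen 3 and later; the generation/lane combinations chosen for comparison are my own, not from the article:

```python
# Per-lane raw transfer rate in GT/s; Gen 3 and later use 128b/130b encoding.
GT_PER_S = {"Gen3": 8, "Gen4": 16, "Gen5": 32}

def bandwidth_gbps(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe link."""
    return GT_PER_S[gen] * lanes * (128 / 130) / 8  # GT/s -> GB/s after encoding overhead

for gen, lanes in [("Gen3", 16), ("Gen3", 8), ("Gen4", 16), ("Gen4", 8), ("Gen5", 8)]:
    print(f"PCIe {gen} x{lanes}: {bandwidth_gbps(gen, lanes):.2f} GB/s")
```

Note that a Gen 4 x8 link matches Gen 3 x16 (~15.75 GB/s), and a Gen 5 x8 link matches Gen 4 x16 (~31.5 GB/s) — which is why an x8 card only becomes a concern when dropped into an older Gen 3 board, where the link falls to ~7.9 GB/s.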

Interestingly, the same leak also claims that the AD107, the fourth-largest silicon powering lower mid-range SKUs, and which succeeds the GA107, features the same x8 PCIe lane count. This is unlike AMD, which gives the "Navi 24" silicon a PCI-Express 4.0 x4 interface. Lowering the PCIe lane count simplifies PCB design, since there are fewer PCIe lanes to be wired out in precise trace-lengths to avoid asynchrony. It also reduces the pin-count of the GPU package. NVIDIA's calculation here is that there are now at least two generations of Intel and AMD platforms with PCIe Gen 4 or later (Intel "Rocket Lake" and "Alder Lake," AMD "Zen 2" and "Zen 3"), and so it makes sense to lower the PCIe lane count.



View at TechPowerUp Main Site | Source
 
A mid-range card with an 8-pin and a 6-pin power connector... excellent.
 
Expecting 3070 performance for the 4060 and a 220-250 W TDP. What I'm not sure about yet is the price, but for sure it will be $400+. Let's see what AMD brings to the table this time in perf/price ratio.
And whether Ngreedia goes with the new power connector for all cards or not.
 
I can understand it on lower-end products that cost $180 or less, but having a narrower PCIe bus on a $400 mid-range card is absurd!
Try to read and you'll understand... "Lowering the PCIe lane count simplifies PCB design, since there are fewer PCIe lanes to be wired out in precise trace-lengths to avoid asynchrony. It also reduces the pin-count of the GPU package."

So it's all about cutting costs.
 
Woaa! They gonna save us 0.5 dollar per PCB!
Nah... it'll save nvidia some money, but we'll pay the same regardless. Whether it matters or not, we'll see when the products come out.
 
Guess the 30 series will be the better choice, since it uses the full x16 PCIe Gen 4 lanes compared to the 40 series' "supposed" x8 Gen 4/5 lanes.
 
These are the fourth and fifth largest GPUs. AD102 (4090), AD103 (4080), and AD104 (4070) are first, second, and third.
 
Woaa! They gonna save us 0.5 dollar per PCB!
you did the math quick .. some kind of electrical engineer you are .. lol
Guess the 30 series will be the better choice, since it uses the full x16 PCIe Gen 4 lanes compared to the 40 series' "supposed" x8 Gen 4/5 lanes.
What's the point of having x16 Gen 4 if you don't lose performance with x8?
 
If performance is above the 3070 Ti, then the 4060 is a good GPU. If not, then meh.
 
A mid-range card with an 8-pin and a 6-pin power connector... excellent.
You know that's not a 4000-series card in the picture?
 
If it doesn't hinder the card it's fine; if it does, it's absurd.
 
I don't expect any performance impact on Gen 5 boards, but as the article mentions, these "mid-range" cards are often bought as an upgrade for older systems and could be used even in Gen 3 boards, where the performance impact could be very noticeable.

Together with a steep rise in board power, supposedly in price too, and a smaller GPU with a narrower memory interface (256-bit instead of 320/384-bit on the x080, 192/160-bit instead of 256-bit on the x070/x060 Ti), and still only an average 50% increase in VRAM capacity instead of 100%, this again gives the impression that NV is giving us less for our money than in past generations.
 
Lowering the PCIe lane count simplifies PCB design, since there are fewer PCIe lanes to be wired out in precise trace-lengths to avoid asynchrony.
PCIe is not very strict about trace lengths; the tolerances are quite broad to account for different lengths. So is that really an issue?
I don't see how Nvidia or AMD could cut costs by more than a couple $ this way.
 
Oh man, I'm really going to wait and see on this one it seems, once again.

So far there isn't a single product in the Nvidia ADA stack I'm really excited for. It all looks... handicapped.
 
By the way, is PCIe x12 dead forever? It's part of the standard, and it would be useful if bifurcation into x12 + x4 were possible, so one more M.2 port could be added.
 
Oh man, I'm really going to wait and see on this one it seems, once again.

So far there isn't a single product in the Nvidia ADA stack I'm really excited for. It all looks... handicapped.
... and overpriced, not to mention hungry as hell.
 
Expecting 3070 performance for the 4060 and a 220-250 W TDP. What I'm not sure about yet is the price, but for sure it will be $400+. Let's see what AMD brings to the table this time in perf/price ratio.
And whether Ngreedia goes with the new power connector for all cards or not.
The 3070 is already a ~215 W TDP. If this is true, then you're getting the same card years later, hence a worse product.
 
The 3070 is already a ~215 W TDP. If this is true, then you're getting the same card years later, hence a worse product.
If the price is right, which I doubt, it could be an interesting upgrade for me, at least from a 2060.
I will upgrade my 2060 only if: => 3070 performance, 200 W TDP max, and $300-350.
But Nvidia's tactics over the last two years kinda make me turn to the red team again. Not that the red team is a saint...
Waiting for reviews from both vendors to make a decision based on my expectations.

P.S. The 2060 is also a 170 W card, but it runs just fine at 125 W with minimal loss.
 
I don't expect any performance impact on Gen 5 boards, but as the article mentions, these "mid-range" cards are often bought as an upgrade for older systems and could be used even in Gen 3 boards, where the performance impact could be very noticeable.

That depends on whether the card is Gen 5 or Gen 4; you can't draw those conclusions just yet.
 
They should also make an x4 one, at a lower price, for those with PCIe 5.
So whoever has an "older" PCIe 4 board will pay more to get the full bandwidth.

I have more bad suggestions for NV, but everything in due time.
 
For mid-range GPUs, PCIe 4.0 x16 or maybe even PCIe 3.0 x16 would be far more useful than PCIe 5.0 x8. I don't know how many of these cards will be combined with top and/or latest-gen CPUs and motherboards. Most systems waiting (and waiting and waiting... :banghead:) for a GPU upgrade are still on PCIe 3.0.
 