
NVIDIA Cancels GeForce RTX 4090 Ti, Next-Gen Flagship to Feature 512-bit Memory Bus

btarunr

Editor & Senior Moderator
NVIDIA has reportedly shelved near-term plans to release the rumored GeForce RTX 4090 Ti flagship graphics card, according to Kopite7kimi, a reliable source for NVIDIA leaks. The card had been extensively leaked over the past few months as featuring a cinder block-like 4-slot thickness and a unique PCB that sits along the plane of the motherboard rather than perpendicular to it. From the looks of it, sales and competition in the high-end/halo segment are too slow to justify it: the current RTX 4090 remains the fastest graphics card you can buy, and the company seems unfazed by the alleged Radeon RX 7950 series, given that AMD has already maxed out the "Navi 31" silicon and there are only so many things the red team can try to beat the RTX 4090.

That said, the company is reportedly planning more SKUs based on the AD103 and AD106 silicon. The AD103 powers the GeForce RTX 4080, which nearly maxes it out. The AD104 has been maxed out by the RTX 4070 Ti, and there could be a gap between the RTX 4070 Ti and the RTX 4080 that AMD could try to exploit by competitively pricing its RX 7900 series and certain upcoming SKUs. This creates scope for new SKUs based on a cut-down AD103 and the GPU's 256-bit memory bus. The AD106 is nearly maxed out with the RTX 4060 Ti; however, there's still room to unlock its last remaining TPC, use faster GDDR6X memory, and narrow the wide gap between the RTX 4060 Ti and the RTX 4070.



In related news, Kopite7kimi also claims that NVIDIA's next-generation flagship GPU could feature a 512-bit wide memory interface, in what could be an early hint that the company is sticking with GDDR6X (currently as fast as 23 Gbps), and not transitioning over to the GDDR7 standard (starts at 32 Gbps), which offers double the speeds of GDDR6.
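As a rough sanity check on what that would mean (the 512-bit width and the per-pin rates below are the rumored/quoted figures, not confirmed specs), peak memory bandwidth is simply bus width times per-pin data rate:

```python
# Theoretical peak memory bandwidth: bus width (bits) x per-pin rate (Gbps) / 8 bits-per-byte
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gb_s(384, 21))  # 1008.0 GB/s - RTX 4090 (384-bit, 21 Gbps GDDR6X)
print(peak_bandwidth_gb_s(512, 23))  # 1472.0 GB/s - rumored flagship on 23 Gbps GDDR6X
print(peak_bandwidth_gb_s(512, 32))  # 2048.0 GB/s - same bus with 32 Gbps GDDR7
```

Even without GDDR7, a 512-bit bus at 23 Gbps would land roughly 46% above the RTX 4090's 1008 GB/s.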

View at TechPowerUp Main Site | Source
 
512-bit wide GDDR6X sounds great. Hopefully the next gen can also get 48 GB or more of VRAM; it would benefit the local LLM community massively.
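For rough context on why that capacity matters (a back-of-the-envelope sketch; the parameter counts and quantization levels are illustrative assumptions, not any particular model), the weights alone take roughly parameters times bits per parameter:

```python
# Rough VRAM footprint of model weights alone (ignores KV cache, activations, overhead).
def weights_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * bits_per_param / 8  # billions of params x bytes each = GB

print(weights_gb(70, 16))  # 140.0 GB - a 70B model at FP16, far beyond any single card
print(weights_gb(70, 4))   # 35.0 GB  - same model 4-bit quantized, fits in 48 GB
print(weights_gb(34, 8))   # 34.0 GB  - a 34B model at 8-bit would also squeeze in
```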
 
Maybe Nvidia figured no one wanted to pay $3,000 for a consumer graphics card?
 
Still want to see that cooler and PCB design on other cards in the future; it's too interesting.
 
In related news, Kopite7kimi also claims that NVIDIA's next-generation flagship GPU could feature a 512-bit wide memory interface, in what could be an early hint that the company is sticking with GDDR6X (currently as fast as 23 Gbps), and not transitioning over to the GDDR7 standard (starts at 32 Gbps), which offers double the speeds of GDDR6.

There is 20 Gbps GDDR6 memory. 32 is not double of 20.
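Working the ratios with the figures quoted in the thread:

```python
gddr6, gddr6x, gddr7 = 20, 23, 32  # Gbps per pin, as quoted above
print(gddr7 / gddr6)   # 1.6   - 60% faster than 20 Gbps GDDR6, not double
print(gddr7 / gddr6x)  # ~1.39 - ~39% faster than 23 Gbps GDDR6X
```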
 
It depends on AMD. If it releases the long-awaited Radeon RX 7950 XTX, then NVIDIA will have to do something.


 
Doesn't matter for me. Just means my RTX 4090 will stay the fastest card until NVIDIA's or AMD's next-gen/refresh cards come out.
 
If GDDR6X remains, perhaps Ada-Next is an 800 mm² die, still on N4. They are going for an extremely big die, 2080 Ti-style, and 600 W to max out the 12VHPWR connector.
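On that last point, the connector math is simple (assuming the full 600 W is drawn at 12 V through the 16-pin's six current-carrying pins):

```python
# Current through the 12VHPWR connector at its 600 W rating
power_w, volts, power_pins = 600, 12, 6   # six 12 V pins share the load
total_amps = power_w / volts              # 50.0 A total
amps_per_pin = total_amps / power_pins    # ~8.33 A per pin
print(total_amps, amps_per_pin)
```

Per-pin terminal ratings are commonly cited around 9.5 A, so a 600 W card leaves little margin for a poorly seated contact.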
 
It depends on AMD. If it releases the long-awaited Radeon RX 7950 XTX, then NVIDIA will have to do something.

the company seems unfazed by the alleged Radeon RX 7950 series, given that AMD has already maxed out the "Navi 31" silicon and there are only so many things the red team can try to beat the RTX 4090.
Wiz's review shows you can gain about 7% from an overclock on the 7900 XTX, so I'm not sure how they're going to pull off a 30% uplift when they've already run out of cores. Here's hoping, though!
 
Wiz's review shows you can gain about 7% from an overclock on the 7900 XTX, so I'm not sure how they're going to pull off a 30% uplift when they've already run out of cores. Here's hoping, though!

Fermi is a great example: GF110 vs. GF100. New revision, vastly improved performance, and even lower power consumption.

(Attached: charts for GF100 vs. GF110, the GTX 480, and the GTX 580.)
 
Fermi is a great example: GF110 vs. GF100. New revision, vastly improved performance, and even lower power consumption.

(Attached: charts for GF100 vs. GF110, the GTX 480, and the GTX 580.)
Those efficiency and IPC gains are a great point, but in that case NVIDIA also added more shaders and TMUs between those two cards, whereas AMD is already using its biggest die with all components enabled. I don't think we'll see a revision like that in a mid-cycle refresh, especially since the RX 6x50 cards were also just an overclock, with no increase in core count or IPC.
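For the Fermi comparison, the published numbers back this up (a simplified model that assumes throughput scales linearly with enabled units and clocks):

```python
# GTX 480 (GF100, one SM disabled) vs GTX 580 (GF110, fully enabled) - published specs
cores_480, clock_480 = 480, 700   # CUDA cores, core clock in MHz
cores_580, clock_580 = 512, 772

unit_gain  = cores_580 / cores_480   # ~1.07x from enabling the last SM
clock_gain = clock_580 / clock_480   # ~1.10x from higher clocks
print(unit_gain * clock_gain)        # ~1.18x theoretical throughput
```

Navi 31 is already fully enabled, so a hypothetical RX 7950 XTX would have only the clock and memory levers to pull.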
 
Perhaps they didn't want to take an even bigger crack at the power connector. These would pull the most current through the connector and be even more likely to fail from a bad connection.
 
Was looking forward to the 4080 Ti. Really no talk about that. I wanted a cheaper AD102 card to buy.
 
Was looking forward to the 4080 Ti. Really no talk about that. I wanted a cheaper AD102 card to buy.

Unless they were planning a $200+ price cut to the 4080, a 4080 Ti doesn't make a lot of sense. Currently it could slot in around $1,400; I doubt that would be overly appealing to most.
 
That 4-slot abomination must be the winner of the World's Ugliest Video Card.

That reminded me of another monster, but winner of the World's Coolest Video Card:
(image attachment)
 
Maybe Nvidia figured no one wanted to pay $3,000 for a consumer graphics card?

Nah, they figured they could charge double for an AI accelerator card for enterprise customers.
 
What's sad is that GPUs are getting more expensive. They're also getting larger and larger, while cell phones get smaller and smaller!
It's a shame we don't see a good redesign compress the modern-day GPU. At this rate, every time GPUs are updated (two years, max), you have to get a new case and PSU.
 