
NVIDIA Cancels GeForce RTX 4090 Ti, Next-Gen Flagship to Feature 512-bit Memory Bus

What is sad is that GPUs are getting more expensive. GPUs are getting larger and larger, while cell phones are getting smaller and smaller!
It is sad that we don't see a good redesign that compresses the modern-day GPU. At this rate, every time the GPU lineup is updated (every two years at most) you have to get a new case and PSU.
Increasing performance requires additional size and power as we reach the limit of shrinking the silicon. There are options available that consume less power and/or come in a smaller form factor.
 
I have resigned myself to the fact that I will probably be buying a used GPU, when those who upgrade every gen sell off their 4000-series cards after the 5000 series is out. If Nvidia does an end-of-gen, 1080 Ti-style discount, it might make me buy new at the end of this gen though, as I do need a VRAM upgrade.

Shame AMD has no SGSSAA, else I would have hopped over.
 
This should be tagged as a rumor by an honest publication.
But nah, anything for a few more clicks, amirite?
 
This should be tagged as a rumor by an honest publication.
But nah, anything for a few more clicks, amirite?
It does say "reportedly".
 
if Navi was starved for bandwidth though.
It's EXTREMELY starved. It loses way more performance at 4K compared to competing Team Green GPUs. You don't notice this starvation at obsolete and tiny resolutions such as 720p and 1080p, but at 3440x1440 and beyond it becomes obvious that all AMD GPUs of the last two generations (RX 6800 non-XT excluded) are suffering badly from insufficient VRAM bandwidth. The 6700 XT, for example, is very close to the RTX 3070 at 1080p and is slower than the 3060 Ti at 4K. Their "super dooper cache" can only help when the resolution is low. High resolutions require REAL bandwidth, yet AMD hasn't provided it.
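To put a rough number on that intuition, here is a toy model of how a big on-die cache stretches DRAM bandwidth, and why the benefit fades as resolution rises. The hit rates and the 384 GB/s baseline (roughly 6700 XT class: 192-bit bus at 16 Gbps) are illustrative assumptions, not measured figures:

# Toy model: traffic served from the on-die cache never touches DRAM, so
# effective bandwidth scales as dram_bw / (1 - hit_rate).
# Hit rates below are assumptions for illustration only.

def effective_bandwidth(dram_bw_gbs: float, hit_rate: float) -> float:
    """Effective bandwidth when a fraction `hit_rate` of memory traffic is absorbed by the cache."""
    return dram_bw_gbs / (1.0 - hit_rate)

dram_bw = 192 / 8 * 16.0  # 192-bit bus * 16 Gbps GDDR6 = 384 GB/s

for res, hit in [("1080p", 0.65), ("1440p", 0.55), ("2160p", 0.40)]:
    print(f"{res}: assumed hit rate {hit:.0%} -> ~{effective_bandwidth(dram_bw, hit):.0f} GB/s effective")

The bigger the frame buffer and working set, the lower the hit rate, so the amplification shrinks exactly where you need bandwidth the most.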

As for the RTX 4090 Ti, launching it would've made some sort of "sense" only if AMD or anyone else had something faster than the plain 4090, and that never happened. Jacketguy doesn't have to put any effort into Ada. Focusing on extracting as much profit as inhumanly possible from the RTX 5000 series is the only sensible way for him to invest his time as of now.
 
It's EXTREMELY starved. It loses way more performance at 4K compared to competing Team Green GPUs. You don't notice this starvation at obsolete and tiny resolutions such as 720p and 1080p, but at 3440x1440 and beyond it becomes obvious that all AMD GPUs of the last two generations (RX 6800 non-XT excluded) are suffering badly from insufficient VRAM bandwidth. The 6700 XT, for example, is very close to the RTX 3070 at 1080p and is slower than the 3060 Ti at 4K. Their "super dooper cache" can only help when the resolution is low. High resolutions require REAL bandwidth, yet AMD hasn't provided it.

No it's not. The most recent GPU review on TPU: https://www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-16-gb/30.html

The 6700 XT is a little ahead of the 3060 Ti at 1080p and 1440p by a similar margin and falls slightly behind at 4K, where both cards are useless at 46 fps. The 3070 is ~10% faster at both 1080p and 1440p. The 6700 XT is a well-balanced card at the resolutions it's targeted at.

If you want to look at the 6800 XT and 3080, they are separated by less than 3% at all resolutions. The 3080 scales slightly better to 4K but we're talking about a few frames, which is nothing you will notice while playing a game. These are very small differences.
 
No it's not. The most recent GPU review on TPU: https://www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-16-gb/30.html

The 6700 XT is a little ahead of the 3060 Ti at 1080p and 1440p by a similar margin and falls slightly behind at 4K, where both cards are useless at 46 fps. The 3070 is ~10% faster at both 1080p and 1440p. The 6700 XT is a well-balanced card at the resolutions it's targeted at.

If you want to look at the 6800 XT and 3080, they are separated by less than 3% at all resolutions. The 3080 scales slightly better to 4K but we're talking about a few frames, which is nothing you will notice while playing a game. These are very small differences.
Why doesn't the 7900 XTX have a greater performance delta over the 4080? It has a LOT more cache and bandwidth, but TPU's performance metric only has it pegged at 2% faster? The 7900 XTX even has more RT cores than the RTX 4080.
 
Why doesn't the 7900 XTX have a greater performance delta over the 4080? It has a LOT more cache and bandwidth, but TPU's performance metric only has it pegged at 2% faster? The 7900 XTX even has more RT cores than the RTX 4080.

Not owning a 7900 XTX, I really don't know, but the OC potential of some of the XTXs suggests that AMD is clocking them conservatively, resulting in lower performance; see the OC Cyberpunk performance here:


There are good guesses as to why this is happening, but nobody knows for sure outside of AMD. Hell, they don't seem to know either, hence the missing 7700 and 7800 series.
 
Flagship? Those are cool I guess, but I care about the 5060 Ti price range much more. Maybe that'll give me a reason to upgrade my 3060 Ti.
 
Why doesn't the 7900 XTX have a greater performance delta over the 4080? It has a LOT more cache and bandwidth, but TPU's performance metric only has it pegged at 2% faster? The 7900 XTX even has more RT cores than the RTX 4080.

The XTX gets more cache, bandwidth and memory, but the 4080 uses faster memory... possibly one of the contributing factors?
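For what it's worth, the per-pin speed and the bus width pull in opposite directions here. A quick back-of-envelope check (spec-sheet numbers, so treat it as approximate):

# Raw DRAM bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps -> GB/s
def dram_bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(f"RX 7900 XTX: {dram_bandwidth_gbs(384, 20.0):.1f} GB/s (384-bit GDDR6 @ 20.0 Gbps)")
print(f"RTX 4080:    {dram_bandwidth_gbs(256, 22.4):.1f} GB/s (256-bit GDDR6X @ 22.4 Gbps)")

So the 4080's memory is faster per pin, but the XTX still ends up with more raw bandwidth (roughly 960 vs 717 GB/s), which suggests the ~2% result comes down to cache behaviour, architecture, drivers and game optimisation rather than raw bandwidth alone.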

Outside of architectural advances/hardware prowess, there's no doubt some games are just better optimised for Nvidia cards. There are several contributing factors that put Nvidia in a more rewarding position with developers/game engines. The obvious one: Nvidia's market share (gaming) is MASSIVE!! Almost the whole cake! The bigger the player, the greater the influence (IMO). Nvidia uses this influence through dedicated dev-interaction departments (or think tanks), translating to increased developer/game engine relations, shared proprietary tech for testing/implementation (some games are better optimised on Nvidia tech), and then there's sponsored favouritism: bigger pockets, greater reach. AMD's no different, but with smaller pockets on a smaller scale (they've got a long way to go to catch up with the king of the hill). In short, a ~2% margin is best ignored.

Nowadays I don't concern myself with dev/GE interactions but question whether there's some level of premeditated conformity between both manufacturers in playing the market. "You scratch my back and I'll scratch yours" is good business sense (under the table, of course). I think I'd better shut up and go back to being a good law-abiding consumer :respect:
 
@wheresmycar
I'm guessing your "premeditated conformity" between GPU manufacturers only applies to the duopoly right? Because Intel is far too small a player now in the GPU market to shoulder them aside.
 
Why doesn't the 7900 XTX have a greater performance delta over the 4080? It has a LOT more cache and bandwidth, but TPU's performance metric only has it pegged at 2% faster? The 7900 XTX even has more RT cores than the RTX 4080.
The rumour mill suggests the card was gimped because of an artifacting issue. The story goes that AMD released higher-than-actual performance estimates before launch, expecting the artifacting issue to be resolved. It wasn't resolved in time, so they released the card with lower performance to avoid a major drama. Personally I give this little credence, as I would have expected such an issue to be resolved by now. Unless you believe another conspiracy theory that goes like this...
 
@wheresmycar
I'm guessing your "premeditated conformity" between GPU manufacturers only applies to the duopoly right? Because Intel is far too small a player now in the GPU market to shoulder them aside.

Yep, a 2-prong hunch. Hope Intel goes the full mile this time around.
 
Was looking forward to the 4080Ti. Really no talk about that. I wanted a cheaper AD102 card to buy.
I'm waiting for the RTX 4080 Ti as well, hopefully in Q4 2023. Something should hopefully be said by the Intel 14th Generation launch party in September.

Cheers
 
No it's not. The most recent GPU review on TPU: https://www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-16-gb/30.html

The 6700 XT is a little ahead of the 3060 Ti at 1080p and 1440p by a similar margin and falls slightly behind at 4K, where both cards are useless at 46 fps. The 3070 is ~10% faster at both 1080p and 1440p. The 6700 XT is a well-balanced card at the resolutions it's targeted at.

If you want to look at the 6800 XT and 3080, they are separated by less than 3% at all resolutions. The 3080 scales slightly better to 4K but we're talking about a few frames, which is nothing you will notice while playing a game. These are very small differences.
I play fine at 4K60 with my 6700 XT OC, though I use FSR Quality if needed.
 
So, what about the 4080ti?
 
So, what about the 4080ti?
I think there are a lot of us that want an entry-level AD102 GPU... save a few hundred bucks with similar performance to an RTX 4090...

Since there's a huge gap and Nvidia's timeline now shows no new GPUs until 2025, I would expect a refresh launch in Q4 this year, or maybe at CES, probably a Super series or something like that.

Cheers
 
Would love to trade my 4090 in for one.

Nvidia's ability to iterate on PCB and cooler design in such a short period of time is seriously impressive. I hope they keep this momentum going forward.
 
It's EXTREMELY starved. It loses way more performance at 4K compared to competing Team Green GPUs. You don't notice this starvation at obsolete and tiny resolutions such as 720p and 1080p, but at 3440x1440 and beyond it becomes obvious that all AMD GPUs of the last two generations (RX 6800 non-XT excluded) are suffering badly from insufficient VRAM bandwidth. The 6700 XT, for example, is very close to the RTX 3070 at 1080p and is slower than the 3060 Ti at 4K. Their "super dooper cache" can only help when the resolution is low. High resolutions require REAL bandwidth, yet AMD hasn't provided it.

As for the RTX 4090 Ti, launching it would've made some sort of "sense" only if AMD or anyone else had something faster than the plain 4090, and that never happened. Jacketguy doesn't have to put any effort into Ada. Focusing on extracting as much profit as inhumanly possible from the RTX 5000 series is the only sensible way for him to invest his time as of now.
What is unique about the 6800 non-XT? It has the same memory setup as the other Navi 21 cards.
 
Possible reason: the optics of marketing a 600 to 900 watt card might be more detrimental than any benefit from releasing it.
According to the TPU GPU DB: "Being a triple-slot card, the NVIDIA GeForce RTX 4090 Ti draws power from 2x 16-pin power connectors, with power draw rated at 600 W maximum."

600 Watts isn't that much more than what 4090s can draw now. My 4090, when playing Dishonored 2 at maxed DSR resolution, posted the following peak power draw stats:
"aftrbrner: 574.9 Watts, GPU 16-pin HVPWR power max.: 551.2 Watts, GPU PCIe +12V Input Power max.: 15.6W"
 
What is unique about the 6800 non-XT?
Its core: it's too weak to actually benefit from having more than ~500 GB/s of VRAM bandwidth (the gain is real, yet far from linear) and too strong to get by with less than ~400 GB/s (the losses are very close to linear). The actual bandwidth is 512 GB/s minus the latencies. All other Navi 21 GPUs, on the contrary, are able to make use of more VRAM bandwidth than they already have. The greater the resolution, the greater the effect.
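For reference, the 512 GB/s figure is just the spec-sheet math for the RX 6800's 256-bit bus and 16 Gbps GDDR6 (the ~400-500 GB/s sweet spot is a rough estimate, not a measured number):

# RX 6800 raw DRAM bandwidth: 256-bit bus * 16 Gbps GDDR6 per pin
bus_bits, data_rate_gbps = 256, 16.0
bandwidth_gbs = bus_bits / 8 * data_rate_gbps  # bits -> bytes
print(f"RX 6800: {bandwidth_gbs:.0f} GB/s raw")  # 512 GB/s before latencies/overheads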
 