
NVIDIA GeForce RTX 4090 Founders Edition

Where's the 4080? Cheeky... or sneaky?

A solid card. Good uplift in rasterization, reasonable power draw (and, according to GN, a nifty power adapter included), and good temperatures. Not a lot not to like, except for the halo pricing.

But is it enough to be top dog?
 
Yes, a 5800X was a bad choice for a GPU review, but it's understandable. Every reviewer would have to retest every GPU with the latest and greatest CPU, and there isn't always time for that.
W1zzard will do that, obviously, when the 13900K arrives. And this time it would be great if we had GPU usage at some point, so we know whether or not there is a CPU bottleneck.
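For anyone wanting to log that themselves, here's a minimal sketch (not TPU's methodology) that polls GPU utilization via nvidia-smi during a benchmark run; a GPU sitting well below ~95% at a given resolution usually points to a CPU bottleneck.

```python
# Sample GPU utilization with nvidia-smi (ships with the NVIDIA driver).
import subprocess
import time

def sample_gpu_utilization(samples: int = 60, interval_s: float = 1.0) -> list[int]:
    """Poll nvidia-smi and return one utilization percentage per sample."""
    readings = []
    for _ in range(samples):
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        readings.append(int(out.strip().splitlines()[0]))  # first GPU only
        time.sleep(interval_s)
    return readings

if __name__ == "__main__":
    util = sample_gpu_utilization(samples=10)
    avg = sum(util) / len(util)
    print(f"avg GPU utilization: {avg:.0f}%"
          + ("  (possible CPU bottleneck)" if avg < 95 else ""))
```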

I don't think the performance loss at 2160p is more than 5% compared to Core i9-12900K / Ryzen 7 5800X3D.

Look at a review with Core i9-12900K:
GeForce RTX 4090: 4K Gaming Performance - Nvidia GeForce RTX 4090 Review: Queen of the Castle | Tom's Hardware (tomshardware.com)
 
Small typo on page 35: you can enable DLAA + FG.
DLAA is a real technology: it's NVIDIA's Deep Learning Anti-Aliasing, similar to DLSS but without rendering at a lower resolution and then upscaling. FG is Frame Generation.

DLAA is better than TAA, but doesn't offer the performance benefits of DLSS.
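To make the distinction concrete, here's a small sketch using the commonly cited DLSS 2.x per-axis scale factors (the exact values are an assumption, and outputs are rounded); DLAA is effectively the same pipeline at scale 1.0.

```python
# Internal render resolution per mode: DLSS upscales from a lower internal
# resolution, while DLAA runs at native resolution (scale 1.0), so it
# improves image quality without the performance win.
MODES = {
    "DLAA":              1.000,   # native render, AA only
    "DLSS Quality":      0.667,
    "DLSS Balanced":     0.580,
    "DLSS Performance":  0.500,
    "DLSS Ultra Perf.":  0.333,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    s = MODES[mode]
    return round(out_w * s), round(out_h * s)

for mode in MODES:
    w, h = internal_resolution(3840, 2160, mode)
    print(f"{mode:18s} renders {w}x{h}, outputs 3840x2160")
```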
 
Whose 4nm is this, TSMC or Samsung?

From what I'm reading, TSMC 5nm is 88% more dense than TSMC 7nm.
TSMC 4nm is like an in-between node, a facelift of 5nm with around a 6%-22% increase in density compared to 5nm, plus some power and frequency tweaks for high performance.
So around 94%-108%+ more dense vs Ampere, because that was on Samsung's 8nm.
It seems like NVIDIA is just brute forcing everything. ¯\_(ツ)_/¯
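A quick back-of-envelope check of those figures (taking the poster's numbers at face value, which are claims rather than measured values) shows the chain should be compounded multiplicatively rather than added:

```python
# Compound the density steps above: N5 vs N7, then N4 vs N5.
n5_vs_n7 = 1.88              # "5nm TSMC is 88% more dense than 7nm"
n4_vs_n5 = (1.06, 1.22)      # "6%-22% increase in density compared to 5nm"

for step in n4_vs_n5:
    total = n5_vs_n7 * step  # N4 vs N7, compounded
    print(f"N4 is ~{(total - 1) * 100:.0f}% more dense than N7")
# -> ~99% to ~129% vs N7; the comparison vs Ampere's Samsung 8nm would
#    additionally depend on how 8LP compares to TSMC N7.
```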
 
Whose 4nm is this, TSMC or Samsung?

From what I'm reading, TSMC 5nm is 88% more dense than TSMC 7nm.
TSMC 4nm is like an in-between node, a facelift of 5nm with around a 6%-22% increase in density compared to 5nm, plus some power and frequency tweaks for high performance.
So around 94%-108%+ more dense vs Ampere, because that was on Samsung's 8nm.
It seems like NVIDIA is just brute forcing everything. ¯\_(ツ)_/¯

It uses the TSMC N4 node.
 
It's now time to take aim at the CPU duopoly, pull your fingers out and give us faster CPUs.
 
It uses the TSMC N4 node.
Thanks. Listening to LTT saying the clocks are 35% higher makes this card look kind of meh.
 
Thanks. Listening to LTT saying the clocks are 35% higher makes this card look kind of meh.

Yeah, architecturally speaking, this isn't exactly out of the ordinary. There are more execution units and significantly faster clocks, in addition to the power limit being raised very high (something that most Ampere models struggle with due to their very conservative power limits, unless you have ROG Strix/KPE GPUs or the 3090 Ti).

Still, the jump is very healthy, especially at 4K. I would wager the RTX 4090 Ti (aka this GPU, perfected) will be one to remember :)
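For a rough sense of "more units plus faster clocks", here's the paper math from published shader counts and boost clocks; board power and memory effects are ignored, so this is an upper bound, not measured FPS.

```python
# Theoretical FP32 throughput from published specs.
cards = {
    "RTX 3090 Ti": {"shaders": 10752, "boost_ghz": 1.86},
    "RTX 4090":    {"shaders": 16384, "boost_ghz": 2.52},
}

def fp32_tflops(c):
    # shaders * 2 FLOPs per clock (FMA) * clock in GHz -> TFLOPS
    return c["shaders"] * 2 * c["boost_ghz"] / 1000

old, new = cards["RTX 3090 Ti"], cards["RTX 4090"]
print(f"3090 Ti: {fp32_tflops(old):.1f} TFLOPS, 4090: {fp32_tflops(new):.1f} TFLOPS")
print(f"theoretical ratio: {fp32_tflops(new) / fp32_tflops(old):.2f}x")
# -> ~2.06x on paper; real-game scaling at 4K is lower, as the review shows.
```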
 
Less than a 15% improvement vs. the 3090 Ti at 1080p if you don't care about that useless RT BS. Amazing! So much power draw and so many transistors wasted on RT and Tensor cores... BTW, the article would attract more people if it included CS:GO and Overwatch 2.

Amazing to see this GPU with 78 BILLION transistors get fewer FPS than a 3080 (about 28B transistors) in some games that don't use RT at 1080p, all while drawing up to 600 WATTS!!! This has to be a new record of stupidity.
Someone would have to be brain dead to purchase this card for gaming at 1080p.

Competitive players are at 1080p on 24-inch monitors that do 240 or 360 Hz, and they don't enable G-Sync/FreeSync. Many of them don't have to pay for their stuff, either.
Not sure what games you're talking about. Most competitive FPS gamers with sponsors who play games such as Warzone play at 1440p.
 
Whose 4nm is this, TSMC or Samsung?

From what I'm reading, TSMC 5nm is 88% more dense than TSMC 7nm.
TSMC 4nm is like an in-between node, a facelift of 5nm with around a 6%-22% increase in density compared to 5nm, plus some power and frequency tweaks for high performance.
So around 94%-108%+ more dense vs Ampere, because that was on Samsung's 8nm.
It seems like NVIDIA is just brute forcing everything. ¯\_(ツ)_/¯

N4 isn't even an in-between 'half node'. It's more like N5+, with 6% greater density.

But Samsung's 8LP was really just an enhancement of their 10LP node. It was only 12% more dense.

For comparison, 8LP is not even a full node jump over Intel's 14nm node; it's closer to a half-node jump, and it's not even 2/3 the density of Intel 7 (61 MT/mm² vs 106 MT/mm²).

So in fact TSMC N5 and N4 are almost 3x more dense than Samsung 8LP (180 MT/mm² vs 61 MT/mm²).
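The ratios are easy to sanity-check from the quoted peak-density figures (which are claims about process peaks, not measured chip densities):

```python
# Verify the "under 2/3 of Intel 7" and "almost 3x Samsung 8LP" claims.
densities = {"Samsung 8LP": 61, "Intel 7": 106, "TSMC N5/N4": 180}  # MT/mm^2

print(densities["Samsung 8LP"] / densities["Intel 7"])     # ~0.58 -> under 2/3
print(densities["TSMC N5/N4"] / densities["Samsung 8LP"])  # ~2.95 -> "almost 3x"
```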
 
N4 isn't even an in-between 'half node'. It's more like N5+, with 6% greater density.

But Samsung's 8LP was really just an enhancement of their 10LP node. It was only 12% more dense.

For comparison, 8LP is not even a full node jump over Intel's 14nm node; it's closer to a half-node jump, and it's not even 2/3 the density of Intel 7 (61 MT/mm² vs 106 MT/mm²).

So in fact TSMC N5 and N4 are almost 3x more dense than Samsung 8LP (180 MT/mm² vs 61 MT/mm²).
That makes this card even worse than what I just said.
 
N4 isn't even an in-between 'half node'. It's more like N5+, with 6% greater density.

But Samsung's 8LP was really just an enhancement of their 10LP node. It was only 12% more dense.

For comparison, 8LP is not even a full node jump over Intel's 14nm node; it's closer to a half-node jump, and it's not even 2/3 the density of Intel 7 (61 MT/mm² vs 106 MT/mm²).

So in fact TSMC N5 and N4 are almost 3x more dense than Samsung 8LP (180 MT/mm² vs 61 MT/mm²).
Unfortunately, GPUs don't reach this reported density... this number is probably based on the best possible scenario: a small chip with little cache.
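Indeed; plugging approximate published die figures into a quick calculation shows how far real GPUs land below quoted peak logic density, since real chips mix logic, SRAM, and analog/IO:

```python
# Achieved transistor density from (approximate) published die figures.
chips = {
    "GA102 (Samsung 8nm)": (28.3e9, 628),  # transistors, die area in mm^2
    "AD102 (TSMC N4)":     (76.3e9, 608),
}
for name, (xtors, area) in chips.items():
    print(f"{name}: {xtors / area / 1e6:.0f} MT/mm^2 achieved")
# -> ~45 vs ~125 MT/mm^2: a big jump, but well below quoted peak density.
```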
 
@W1zzard glad we've got a larger number of games to compare and as always appreciate the reviews.

Out of the 3 titles "I play" @ 1440p, all 3 are showing a 60-100% increase in FPS over my current 2080 Ti (yep, I'm comparing 2 gens behind as it best reflects personal relevance). I have to admit, that's pretty impressive. A 4th played title sees the 4090 performing below AMD's top 3 cards, which was a little odd, but we'll let it pass.

2 of the titles "I play" shown in the ray tracing chart easily see a 2.5x-3x increase in perf. I haven't bothered with ray tracing, as taxing performance ain't my thing... but the 40 series with RT enabled easily shoots beyond my display's 144 Hz max refresh rate... I have to admit, that's impressive!!!

Needless to say, DLSS sees equally impressive gains in the titles I'm playing.

Oh well, performance looks great, it's one hell of a teaser, BUT nah... sorry NVIDIA, I ain't wasting $1600 on a graphics card to over-fill your already full pockets. IMO, for gaming it's utter madnessss to even consider forking out this sort of cash for a GPU. I'll pay you half ($800) for an on-par-but-a-couple-of-pegs-down 4070 or similar, unless RDNA3 pulls the rug and steals your boots first. I can't get over it... even $800 IMO is way too much to invest in graphics muscle, but I'm willing to commit if the numbers shine. No wonder SLI was shot in the knees... you naughty buggers always planned for this type of 2x-and-north-of-it increase in cost, seeing there was a small market for it. Flagship cards are no longer flagships but "pocketpits".

I read some comments suggesting "oh, but it's a flagship card, you don't need to buy into it, so why complain" - something along those lines. Wrong! These are price pre-engineered "pocketpit" cards, a reference point for NVIDIA to set the stage for higher MSRPs across the lower-tier board. We've seen this since the 1000 series, with each gen commanding wildly hungrier premiums at each launch. Going by the current trajectory, soon we'll be on a 5090 @ $1800-$2000 and then a 6090 @ $2300-$2500... with the 1-2-3-peg-down SKUs correspondingly following these fattened-up reference points, which is tough to stomach for the average, neglected, miles-wider majority consumer base. It's downright tier-down chaos!! I'm so close to pulling the finger at NVIDIA and accepting anything AMD drops for the sake of it (providing it's a decent lift over my current 2080 Ti). EDIT: actually no, a little more frankness: after years of buying into NVIDIA and kinda feeling snubbed, even if AMD's RDNA3 trails marginally and completely cocks up RT (or other features) by comparison... I think I'm well on my way to the red team already. What's with these colours anyway... Red AMD, Green NV and Blue Intel... spells out RGB, and I've had enough of that too.
 
Will it melt down a Corsair SF Series 750W Platinum PSU, or will it be OK?

It'll probably be fine. The SF series has pretty robust protection IIRC, so it should shut down before melting down.
 
yeah, but identical launch day prices
Not really, MSRP isn't a real price. The real price is what you buy it for; a good starting point is average prices in different markets.
I actually think the 4090 is a good price; you need to think of it as the new Titan, though. This card isn't meant for average gamers, it's meant for 4K stuff and 4K gamers pretty much exclusively. I honestly am not sad at all it's out of my price range. I just hope RDNA3 can get me to 165 FPS at 1440p 165 Hz in games like Cyberpunk 2077 with ray tracing turned off.

That's all I want. lol
Except it's not a Titan. Had it been one, it would be a steal; a true Titan is more akin to a Quadro than a GeForce. But people are just confused (probably on purpose by NVIDIA, even though they never say it's a Titan).
 
You show us the LTT benchmark where they forgot to disable DLSS.
This is not a real result.

They said it was FSR.
They have issued a 'correction' and promised an update to the video.
 
N4 isn't even an in-between 'half node'. It's more like N5+, with 6% greater density.

But Samsung's 8LP was really just an enhancement of their 10LP node. It was only 12% more dense.

For comparison, 8LP is not even a full node jump over Intel's 14nm node; it's closer to a half-node jump, and it's not even 2/3 the density of Intel 7 (61 MT/mm² vs 106 MT/mm²).

So in fact TSMC N5 and N4 are almost 3x more dense than Samsung 8LP (180 MT/mm² vs 61 MT/mm²).
TSMC N5 has been exposed; it's 137 MT/mm², not 180. Plus, Samsung 8LP uses high-density cells, not high-performance ones; that's why it doesn't clock high.
 
Rasterization performance is great. Price? Crap. RT performance? Not that great. DLSS 3? Cool, I guess, but I'm not interested in interpolated frames vs. the real deal.

Now my question is, why isn't there a separate card for RT, like there was a PhysX card?
 
TSMC N5 has been exposed; it's 137 MT/mm², not 180. Plus, Samsung 8LP uses high-density cells, not high-performance ones; that's why it doesn't clock high.

None of them actually reach the calculated densities, largely because the calculated density is based on transistor size only. The size of the interconnects/traces hasn't advanced much. Also, there's no standard way of measuring more complex circuits, and so on.

Nevertheless, the 'predicted' max density based on transistor dimensions is still useful for comparing relative size.
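For reference, one proposed standardized metric (Intel's, from Mark Bohr) weights two standard cells instead of measuring a real design, which is exactly why it says nothing about interconnect limits. A sketch with the commonly cited 60/40 weighting; the input values below are purely hypothetical:

```python
# Bohr-style weighted transistor density metric (60% NAND2, 40% scan FF).
def bohr_density(nand2_per_mm2: float, scanff_per_mm2: float) -> float:
    """Weighted density in MT/mm^2 from the two cells' transistor densities."""
    return 0.6 * nand2_per_mm2 + 0.4 * scanff_per_mm2

# Hypothetical cell densities, purely for illustration:
print(bohr_density(nand2_per_mm2=200.0, scanff_per_mm2=150.0))  # -> 180.0
```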
 
(attached screenshot)

So I like AMD as much as the next guy, but I’m fairly certain this is a mistake.
 
I think W1z is correct with this one, but I agree it's not obvious, as it's not explained clearly.
The 4090 does indeed make the highest jump between new generations since (at least) the 6xx series. (I didn't check older generations, but it probably goes back to the 8xxx series; TPU reviews only.)

This 'gap' is between the fastest new-generation card 'at launch' and the fastest old-generation card at that time.

So it's between the 4090 and the 3090 Ti, and it's 45% at 4K.
It was close to this (43%) between the 3090 (launched 1 week after the 3080, but it still counts I guess) and the 2080 Ti.
39% between the 2080 Ti and the 1080 Ti.
Only 23% between the 1080 and the Titan X (the 1080 Ti released 1 year later).
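The arithmetic behind these gaps is just the new flagship's relative-performance index converted to a percentage; the index values in this sketch are placeholders for illustration, not TPU's exact numbers:

```python
# Launch gap from a relative-performance index (old flagship = 100).
def launch_gap(new_index: float, old_index: float = 100.0) -> float:
    return (new_index / old_index - 1) * 100

print(f"{launch_gap(145):.0f}%")  # e.g. 4090 at 145 vs 3090 Ti at 100 -> 45%
```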
Yes, the "gap" referenced is a new flagship vs last generation's flagship. No, the conclusion in this review is not warranted by the data. It's straight up nonsense that only makes sense if you let Nvidia's marketing department dictate how you evaluate their "generations" of GPUs.

Here are the actual "generations" of Nvidia GPUs, by architecture:
Fermi (GTX 4xx and 5xx)
Kepler (GTX 6xx and 7xx)
Maxwell (GTX 9xx)
Pascal (GTX 10xx)
Turing (RTX 20xx)
Ampere (RTX 30xx)
Ada (RTX 40xx)

Using any other criteria to determine a "generation" of GPU is ignorant and misleading. When one speaks of "generations" of graphics cards, one speaks of GPU architectures, not specific SKUs.

And here are the performance jumps from the top flagship card in each generation to the next, based on TPU's own data:

Tesla 2.0 (GTX 285) --> Fermi (GTX 580) = 67% performance jump gen on gen
Fermi (GTX 580) --> Kepler (GTX 780 Ti) = 104% performance jump gen on gen
Kepler (GTX 780 Ti) --> Maxwell (Titan X) = 45% performance jump gen on gen
Maxwell (Titan X) --> Pascal (Titan Xp) = 72% performance jump gen on gen [Titan Xp and 1080 Ti were very similar, 1080 Ti a couple points faster]
Pascal (GTX 1080 Ti/Titan Xp) --> Turing (RTX 2080 Ti) = 39% performance jump gen on gen
Turing (RTX 2080 Ti) --> Ampere (RTX 3090 Ti) = 56% performance jump gen on gen
Ampere (RTX 3090 Ti) --> Ada (RTX 4090) = 45% performance jump gen on gen

If you assume the "true" Ada flagship will be a 4090 Ti, and that card will be 10% faster than the 4090, then Ada is a 59% performance jump gen on gen.

In no way is this remarkable. It's decidedly unremarkable. Ordinary. Expected. Typical. If you exclude Kepler, which WAS extraordinary and remarkable, the average gen on gen performance jump is 54% per generation.
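The quoted 54% average checks out directly from the listed jumps:

```python
# Average gen-on-gen performance jump from the figures listed above.
jumps = {
    "Fermi": 67, "Kepler": 104, "Maxwell": 45, "Pascal": 72,
    "Turing": 39, "Ampere": 56, "Ada (4090)": 45,
}
without_kepler = [v for k, v in jumps.items() if k != "Kepler"]
print(sum(without_kepler) / len(without_kepler))  # -> 54.0, the quoted average
```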
 
DLSS 3.0 frame insertion does indeed wait for frame 2 in your example before generating the intermediary frame.

I'm sorry, but the only way I can picture this and how it happens is the scene in Fight Club where Tyler splices a single frame of porn into a family movie... just a magical frame that's not really needed, but it shows up without you even really noticing.
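Joking aside, a toy timeline shows why waiting for frame N+1 before presenting the generated frame adds latency; this is purely illustrative, not NVIDIA's actual pipeline:

```python
# Interpolated frames can only be shown once the *next* real frame exists.
def present_order(real_frames):
    shown = []
    for prev, nxt in zip(real_frames, real_frames[1:]):
        shown.append(prev)                  # real frame N
        shown.append(f"gen({prev},{nxt})")  # needs frame N+1 already rendered
    shown.append(real_frames[-1])
    return shown

print(present_order(["F1", "F2", "F3"]))
# -> ['F1', 'gen(F1,F2)', 'F2', 'gen(F2,F3)', 'F3']
```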
 
Wow, this makes the RTX 3090 Ti look like an RTX 3060: the 3090 Ti vs. the RTX 4090 is like a 3060 vs. the 3090 Ti.
 
Someone would have to be brain dead to purchase this card for gaming at 1080p.


Not sure what games you're talking about. Most competitive FPS gamers with sponsors who play games such as Warzone play at 1440p.
Warzone is not a competitive game. I'm talking about CS:GO, Quake Live/Quake Champions, Overwatch, Valorant, StarCraft 2.
 