Thursday, December 28th 2023

NVIDIA RTX 4080 SUPER Sticks with AD103 Silicon, 16GB of 256-bit Memory

Recent placeholder listings of unreleased MSI RTX 40-series SUPER graphics cards seem to confirm that the RTX 4070 Ti SUPER is getting 16 GB of memory, likely across a 256-bit memory interface, as NVIDIA taps into the larger "AD103" silicon to create it. The company had already maxed out the "AD104" silicon with the current RTX 4070 Ti. What's also interesting is that the listings point to the RTX 4080 SUPER having the same 16 GB of 256-bit memory as the RTX 4080. NVIDIA carved the current RTX 4080 out of the "AD103" by enabling 76 out of 80 SMs (38 out of 40 TPCs). So it will be interesting to see whether NVIDIA manages to achieve the performance goals of the RTX 4080 SUPER by simply giving it 512 more CUDA cores (from 9,728 to 10,240). The three other levers NVIDIA has at its disposal are GPU clocks, power limits, and memory speeds. The RTX 4080 uses a 22.4 Gbps memory speed, which NVIDIA could raise to 23 Gbps.
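The arithmetic behind those figures is simple; here is a quick back-of-the-envelope sketch in Python (the 128-cores-per-SM figure is the Ada architecture constant, assumed here rather than stated in the article):

```python
# Back-of-the-envelope math for the core counts and bandwidth quoted above.
CORES_PER_SM = 128  # FP32 CUDA cores per SM on Ada Lovelace

def cuda_cores(sms: int) -> int:
    """Total CUDA cores for a given number of enabled SMs."""
    return sms * CORES_PER_SM

def bandwidth_gb_s(bus_bits: int, speed_gbps: float) -> float:
    """Theoretical memory bandwidth: bus width in bytes times data rate."""
    return bus_bits / 8 * speed_gbps

print(cuda_cores(76))             # RTX 4080: 76 of 80 SMs -> 9728 cores
print(cuda_cores(80))             # full AD103 -> 10240 cores
print(bandwidth_gb_s(256, 22.4))  # 22.4 Gbps on 256-bit -> 716.8 GB/s
print(bandwidth_gb_s(256, 23.0))  # 23.0 Gbps -> 736.0 GB/s
```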

The current RTX 4080 has a TGP of 320 W, compared to the 450 W of the AD102-based RTX 4090, and RTX 4080 cards tend to include an NVIDIA-designed adapter that converts three 8-pin PCIe connectors into a 12VHPWR connector whose signal pins denote 450 W of continuous power capability. In comparison, RTX 4090 cards include a 600 W-capable adapter with four 8-pin inputs. Even with the 450 W-capable adapter, NVIDIA has plenty of room to raise the TGP of the RTX 4080 SUPER above the 320 W of the older RTX 4080 and to increase GPU clocks, in addition to maxing out the "AD103" silicon. NVIDIA is expected to announce the RTX 4070 Ti SUPER and RTX 4080 SUPER on January 8, with the RTX 4080 SUPER scheduled to go on sale toward the end of January.
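Those adapter ratings line up with the 150 W continuous rating of each 8-pin PCIe input; a minimal sketch of the arithmetic (the per-connector figure comes from the PCIe specification, not the article):

```python
# Each 8-pin PCIe power input is rated for 150 W continuous.
PCIE_8PIN_W = 150

def adapter_rating_w(n_8pin_inputs: int) -> int:
    """Power capability the adapter's 12VHPWR sense pins signal to the card."""
    return n_8pin_inputs * PCIE_8PIN_W

print(adapter_rating_w(3))  # RTX 4080 adapter: 450 W
print(adapter_rating_w(4))  # RTX 4090 adapter: 600 W
```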
Source: VideoCardz

60 Comments on NVIDIA RTX 4080 SUPER Sticks with AD103 Silicon, 16GB of 256-bit Memory

#51
Dahita
gmn 17: Also an AD102 die with a 20 GB card is a possibility
A 4095 with 30 GB and an integrated automatic coffeemaker is also possible, if you're talking about adding a 4th reference...
#52
gmn 17
Dahita: A 4095 with 30 GB and an integrated automatic coffeemaker is also possible, if you're talking about adding a 4th reference...
This is from TPU GPU DB
#53
Dahita
gmn 17: This is from the TPU GPU DB
It's in the title: "NVIDIA RTX 4080 SUPER Sticks with AD103 Silicon". No Ti in sight.
#54
Beginner Micro Device
gmn 17: Also an AD102 die with a 20 GB card is a possibility
Technically yes, but practically it would damage NVIDIA's profits unless marketed correctly, and they don't want to spend much more time adjusting their marketing. That's why we won't see 20 GB Ada GPUs.
#55
gmn 17
Beginner Micro Device: Technically yes, but practically it would damage NVIDIA's profits unless marketed correctly, and they don't want to spend much more time adjusting their marketing. That's why we won't see 20 GB Ada GPUs.
I agree it's in dreamland for now.
#56
f0ssile
Beginner Micro Device: I did not. It's incorrect to compare the raw VRAM bandwidth of GPUs of different architectures. Both GPUs handle this resource differently, and I can't say which GPU does the job better. When I don't know what is right, I decide not to spread fallacy but rather keep silent.

Because I don't know how they work, or how Ada vs. RDNA3 fares in this department.

Yes, I'm biased towards balance and value. The RX 7800 XT lacks the former yet offers the latter; the 4070 offers both. You're talking as if I'm a huge fan of naming what would otherwise be a 4060 Ti at best a "4070," charging 600 USD before taxes for it, and granting it a "godlike" 12 GB. No, I'm not; this GPU is an awful rip-off.

And the 7800 XT is a great GPU in isolation: fast, furious, a great overclocker, loads of VRAM for the money, but... it's not enough, because the 4070 is just a smarter product. Just like the 7700 XT is a better item than the 4060 Ti. Just like buying a 6800 non-XT is a no-brainer at <400 USD if obtainable brand new.

Why do I need to? One GPU dies of insufficient VRAM, the other dies of insufficient RT performance, and no one can assure you the latter won't be more severe. Just check it out: www.techpowerup.com/review/avatar-fop-performance-benchmark/5.html

I'm not saying things will go exactly the way of increased RT loads in newer games. What I'm saying is that it's more than possible. It's a gamble, but at this win probability I'd place my bet on a 4070.
But did you re-read what I quoted?
It made the 7800 XT sound hopeless...
its pros were buried under the cons, including things that have nothing to do with gaming...
...if you only use CUDA, get the 4070 even if it's half as good, right?
The 6900 XT sucks even more; it's the epitome of crap.

Before, bandwidth mattered more than quantity (they are different things, but you only draw the distinction when it suits you)...
Then both count for nothing; the important thing is to have CUDA and 50 watts less power draw.
Who knows whether power draw was so important back in the 3080's day; there the past would suddenly matter...

I bet the average 12% more RT will be important; no fortune tellers needed there, eh...
I repeat: the 4070 is worth it as long as it doesn't saturate its VRAM...
So we could end up with the 4070 falling short through its own fault; imagine that prediction.

The test is nice and coincidentally very favorable to Nvidia, and in it you can also see what happens at 4K.
In the future the same could happen below 4K, but what can I tell you...
In your opinion, why does the 4070 drop at 4K from its clear advantage at 1440p?
Is it the CUDA cores being mischievous at 4K, or maybe quantity and bandwidth joining the dance?

Interesting to digress on differences in VRAM and/or normal-map occupancy. Would you like to go from 12 GB to 16? It's worse than I thought...
But you make a 5-year prediction that the 4070's 12 GB is more than enough; there you go too far, eh...
Hasn't history already shown that barely-enough VRAM won't last 5 years?
Do textures and normal maps become sub-optimal when the 4070 falls short?
But I bet you'll play with low textures just to enable RT; what a luxury.

Reread my post; it still holds up. In fact, it's better than before... :)

PS: Guess which GPU I'm looking at this screen from.
PS2: In the meantime, I've spotted an even more interesting one... :ohwell:
#57
efikkan
f0ssile: But you make a 5-year prediction that the 4070's 12 GB is more than enough; there you go too far, eh...
Hasn't history already shown that barely-enough VRAM won't last 5 years?
May I ask: where is the evidence that the RTX 4070 has "barely enough" VRAM?
If anything, the evidence points to it running out of steam long before VRAM. Just look at the scaling at 4K vs. 1440p: even in RT, the RTX 4070 and 4070 Ti continue to scale pretty much the same. If the 4070 Ti continues to scale with only 12 GB, there is no reason to think its little brother wouldn't. ;)

I've participated in discussions about extra VRAM for "future proofing" for probably close to 15 years, and if the past predicts the future, then "future proofing" with more VRAM is probably going to be irrelevant in >95% of cases. Yet we often see the claim repeated; even some video reviewers keep repeating the anecdote about extra VRAM. But you will only find some fringe edge cases where it really matters, and the explanation is simple: new, more advanced graphics tends to tax computational performance first, then bandwidth second, before VRAM capacity ever becomes an issue. There are also physical limitations: with e.g. 504 GB/s and a target FPS of >60, the game wouldn't have enough time to access more than 8.4 GB in total even in theory (in practice it's much less than this), so having 16+ GB of VRAM without a lot more bandwidth and good resource management isn't necessarily going to be truly accessible. We've heard this complaint with pretty much every Nvidia generation going back to the 400 series or before, but I've yet to notice any upper mid-range or high-end GPUs that turned out to be bad performers long-term due to lack of VRAM.
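That ceiling is just bandwidth divided by frame rate; a minimal sketch (illustrative only, assuming the unrealistic best case of full bus utilization):

```python
# Theoretical ceiling on VRAM touched per frame: bandwidth / frame rate.
# Real frames reuse caches and never sustain full bus utilization.
def vram_per_frame_gb(bandwidth_gb_s: float, fps: float) -> float:
    return bandwidth_gb_s / fps

print(vram_per_frame_gb(504, 60))  # 8.4 GB per frame at 504 GB/s, 60 FPS
```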

And do you know which team is known for dropping driver support first? Team red. Long-term driver support is actually important if you're looking to get the most mileage out of a purchase, and AMD usually drops normal driver support before the 5-year mark. And don't trick yourself into thinking a card with much more (sometimes slower) VRAM is going to let you play the most demanding games 5+ years from now at higher settings (assuming games continue to advance).

So the bigger question is really: what are you giving up to get some more VRAM? (generally speaking)
Are you picking an alternative that is more expensive or sacrifices some performance or features, or are you complaining just to find something to criticize? :)
f0ssile: Do textures and normal maps become sub-optimal when the 4070 falls short?
But I bet you'll play with low textures just to enable RT; what a luxury.
Normal maps, along with most other types of maps (including the reflectivity maps used with RT), are often easily compressed, and most buffers (stencil buffers, etc.) even more so.
Whenever we talk about memory compression in GPUs, we're not talking about the files stored on your PC, but about the lossless, transparent compression done by the GPU's memory controller.

In addition, modern GPUs are incredibly advanced when it comes to resource management compared to back when I first learned OpenGL and DirectX in the early 2000s. Now with dynamic tiled rendering, clever scheduling, memory compression and caches, they can achieve much more throughput than the specs alone will lead you to believe.
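As a rough illustration of why that compression matters: effective bandwidth scales with the achieved compression ratio (the 2:1 ratio below is an assumed example, not a measured figure):

```python
# Illustrative only: lossless, transparent compression in the memory
# controller raises *effective* bandwidth; the physical speed is unchanged.
def effective_bandwidth_gb_s(physical_gb_s: float, ratio: float) -> float:
    return physical_gb_s * ratio

print(effective_bandwidth_gb_s(504, 2.0))  # 1008 GB/s at an assumed 2:1
```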
As I always say, the truth is revealed in good benchmarking, not "good specs".
#58
f0ssile
You're mixing too many things; they all seem related to you, except the ugly duckling, the amount of VRAM.
Trying to weigh the value of quantity only partially, the way you might with RAM between 16 GB and 32 GB, is inappropriate to say the least.
Would you make the same argument in that case? If not, why?

Even though 5 minutes isn't enough to weigh differences in quantity, whether the VRAM saturates immediately is your guess to make.
I get déjà vu from the conversation with the other guy: the facts are his, and the other side has to explain the origin of the universe to prove anything...

In 25 years you've never seen future-proofing limited by the amount of VRAM? Are we in the same universe as above?
I, on the other hand, remember more than 20 years of conversations calibrated on quantity being useless without power, as if higher-resolution textures had to weigh on the GPU.
Coincidentally, Nvidia's VRAM seems optimal to you too, on closer inspection also on the 3080, so much so that it even rectifies the other user's strange statement.

Allocation always being meaningless is another constant, as if it couldn't indicate an optimized use of resources.
It's like hearing about the uselessness of caches in general, even on Windows, etc...

As with RAM, quantity has very little to do with frequency/bandwidth.
When you need the quantity and don't have it, you choke. It's ridiculous even to think of mixing the two up like it's nothing.
Normal maps were the other user's example of something that takes up space, also related to textures.
What does compressibility prove? If complexity grows, occupancy grows; the concept seemed simple to me...

Lossless color compression is another thing that has little to do with it, yet another excuse to digress.
With 8 GB of VRAM there are cases of heavy stuttering at 1080p. Are you saying you have to be a fortune teller to predict the end of the 4070?

I remember the 770 2 GB well; it was upper mid-range, I think, or am I wrong?
From a future-proofing perspective, didn't the 4 GB version last just a hair longer?
No, what am I making up; it wasn't like the mid-range 960 2 GB, whose 4 GB version can still be seen holding up in the minimums...

Wow, we were missing the recycled history of AMD drivers; we needed the astral peak of stereotypes.
Besides, you didn't last long before changing tack: first it was CUDA, now it's the drivers...

I thought I saw fresh drivers for the old RX 480, which a friend of mine still uses effectively:
www.amd.com/en/support/graphics/radeon-400-series/radeon-rx-400-series/radeon-rx-480

Nothing doing; AMD won't even get to 5 years, eh. There you took the quadratic mean of an isolated case, I imagine.

Who knows whether it will still be able to raise texture quality at 1080p?
Is that future-proofing, or an inconvenient past because it's not convenient? With hindsight...
It should also make clear that the various optimizations can't squeeze memory too far. But 8 GB and 12 GB are always current, eh...

There's no point digressing about the rest; you seem to muddy the water to make it seem deep (cit. 1).
The same old story, the eternal return of the same (cit. 2).
You think I don't know what you're talking about, but if I do know, let's change the subject; that's better...

Ah, perhaps I asked too much by asking you to guess...

Ask me whenever you want; maybe you think I'd only do it away from home.
I've always had both AMD and Nvidia (often at the same time); I don't think I'm the one who misjudges substance.
#60
theouto
If the 4070 SUPER is replacing the current 4070 in its price bracket, as it says there, and they adjust the 4070's pricing accordingly, maybe this launch won't be so worthless after all!