
NVIDIA GeForce RTX 4060 Ti Possible Specs Surface—160 W Power, Debuts AD106 Silicon

Joined
Dec 31, 2020
Messages
766 (0.65/day)
Processor E5-2690 v4
Motherboard VEINEDA X99
Video Card(s) 2080 Ti WINDFROCE OC
Storage NE-512 KingSpec
Display(s) G27Q
Case DAOTECH X9
Power Supply SF450
The GTX 780 used the big 110 chip (GK110, ~561 mm²). The 980 then got smaller at 398 mm² (GM204), the 1080 smaller still (GP104, ~314 mm²), and the 2080 went back up to a big die (TU104, ~545 mm²). The 3080 was lucky to get the 102 because the 103 (7680 shaders / 320-bit) was scrapped, and it made sense to carve the 3080 out of defective 102s instead of fully functional 103s; that is a rarity, and a 103 would have been too weak to compete with the 6800 XT anyway. The RTX 2070 was the first 106-based 70-tier card. So nothing can be set in stone forever.
 
Joined
Jun 14, 2020
Messages
2,678 (1.93/day)
System Name Mean machine
Processor 13900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Not all of them. The sub-800 W units mostly don't have one. You don't need an ATX 3.0 PSU for a mid-range or lower GPU; it's just a waste of money.
I thought all new PSUs, even the non-ATX 3.0 ones, came with 16-pin connectors. Maybe I was wrong.

We have that.

https://www.techpowerup.com/review/zotac-geforce-gtx-660/17.html
The GTX 660 from 10 years ago gets 43 FPS at 1280x800 "very high" in Metro 2033.

The RTX 3060 has no problem running the exact same game at 1080p "ultra" at 144Hz without dropping a single frame. It can manage the same feat at 100-120FPS in the Redux edition which is a far more demanding HD remaster with better lighting, shadows, volumetric effects etc.

My point is that not only are games themselves getting more demanding over time, but that our expectations of resolution and framerate are also increasing year-on-year. A lot of the GPU reviews from 10+ years ago are testing at 1024x768. That's only 85% of 720p, and back in those days, antialiasing was a luxury that you could only enable if you had framerate to spare. The convention of "FullHD, 60fps" used to be high-end, and now it's entry-level, regardless of what game you're looking at from any decade.
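Those pixel-count comparisons are easy to sanity-check; here's a quick sketch (plain arithmetic, nothing vendor-specific):

```python
# Compare total pixel counts of common GPU test resolutions.
resolutions = {
    "1024x768": (1024, 768),
    "1280x720 (720p)": (1280, 720),
    "1920x1080 (FullHD)": (1920, 1080),
    "2560x1440 (1440p)": (2560, 1440),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}

# 786432 / 921600 ≈ 0.853, i.e. roughly 85% of 720p's pixel count
ratio = pixels["1024x768"] / pixels["1280x720 (720p)"]
print(f"1024x768 is {ratio:.0%} of 720p's pixel count")
```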
Yeah, people don't get that. A 3060 Ti can hit an average of 100 FPS at 1440p ultra according to TPU's review. Go back 10 years and even high-end GPUs couldn't hit 100 FPS at 1080p in the games of the time.
 
Joined
Feb 20, 2019
Messages
7,193 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
You're still missing the point: that classification indicates where the GPU sits in terms of performance. If a product gets a GPU higher or lower on that rung, its performance tier has moved accordingly.
It's just an internal naming scheme. You're trying to spot patterns in something that is changed arbitrarily from generation to generation at Nvidia's whim.
The biggest consumer graphics dies have ranged wildly from 104 to 110 to 200 to 102 over the last decade. The smallest have ranged from 119 to 208 to 117 to 107.

There's simply no precedent for trying to create a rule and apply it to GPU die internal codenames. They're constantly in flux, and both AMD and Nvidia have proven, quite consistently, that they'll mix and match multiple dies to single products on a whim, while often doing something differently from the previous generation. The only pattern that really holds any truth is that lower last numbers mean bigger dies within each generation.

This classification corresponding with a performance tier is entirely arbitrary and barely conforms to any pattern because within a couple of generations the tentative pattern is broken.
 
It's just an internal naming scheme. You're trying to spot patterns in something that is changed arbitrarily from generation to generation at Nvidia's whim.
The biggest consumer graphics dies have ranged wildly from 104 to 110 to 200 to 102 over the last decade. The smallest have ranged from 119 to 208 to 117 to 107.

There's simply no precedent for trying to create a rule and apply it to GPU die internal codenames. They're constantly in flux, and both AMD and Nvidia have proven, quite consistently, that they'll mix and match different dies to single SKUs on a whim, while often doing something differently from the previous generation. The only pattern that really holds is that lower last numbers mean bigger dies in each generation.

This classification corresponding with a performance tier is entirely arbitrary and barely conforms to any pattern because within a couple of generations the tentative pattern is broken.
Nvidia is doomed anyway. Even if they release a 5070 that's three times as fast as the 4090, people will go nuts because it uses a smaller die than last year. Of course, all those arguments usually come from specific people supporting a specific company (cough cough) that releases beta products and charges them more than Nvidia does, but the outcry will be about Nvidia, even though they have the cheaper products. It is absolutely insane.
 
Joined
Jan 8, 2017
Messages
8,860 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
It's just an internal naming scheme. You're trying to spot patterns in something that is changed arbitrarily from generation to generation at Nvidia's whim.

That's just not true. They don't change this stuff on a whim; that much is self-evident.
 

64K

Joined
Mar 13, 2014
Messages
6,104 (1.66/day)
Processor i7 7700k
Motherboard MSI Z270 SLI Plus
Cooling CM Hyper 212 EVO
Memory 2 x 8 GB Corsair Vengeance
Video Card(s) MSI RTX 2070 Super
Storage Samsung 850 EVO 250 GB and WD Black 4TB
Display(s) Dell 27 inch 1440p 144 Hz
Case Corsair Obsidian 750D Airflow Edition
Audio Device(s) Onboard
Power Supply EVGA SuperNova 850 W Gold
Mouse Logitech G502
Keyboard Logitech G105
Software Windows 10
I don't think I've ever gotten through to anyone that Nvidia pulled some shit with the GTX 680 long ago. The specs clearly shouted that it was an upper-midrange Kepler, and it has been confusing people ever since. Read W1zzard's review of the 680; he flat out says it was a midrange GPU on the conclusion page.

Meanwhile, knowledgeable people on tech sites were calling the 680 a high-end GPU just because it had the 8 in its name, which in the past signified high-end, as with the 280/285/480/580. Some of those people bought a Kepler Titan because they thought it was the Kepler flagship. Buyer's remorse was painful when the 780 Ti came out and was not only faster than the Titan, with more shaders, but also $300 less.

The 3080/3080 Ti were both high end GPUs but the 3080 was a gimped 3080 Ti.
 
I don't think I've ever gotten through to anyone that Nvidia pulled some shit with the GTX 680 long ago. The specs clearly shouted that it was an upper-midrange Kepler, and it has been confusing people ever since. Read W1zzard's review of the 680; he flat out says it was a midrange GPU on the conclusion page.
And yet, in that specific review, the 680 beat AMD's flagship, the 7970, at every single resolution, in both performance and efficiency. So why the heck does it matter what type of die it used?
 

64K

And yet, in that specific review, the 680 beat AMD's flagship, the 7970, at every single resolution, in both performance and efficiency. So why the heck does it matter what type of die it used?

It matters because that is when Nvidia started muddying the water in order to overcharge. They set an MSRP of $500 for an upper-midrange GPU, and they have been overcharging for upper-midrange GPUs ever since.
 
It matters because that is when Nvidia started muddying the water in order to overcharge. They set an MSRP of $500 for an upper-midrange GPU, and they have been overcharging ever since.
How did they overcharge when their card was not only CHEAPER than the 7970, but also faster and more power efficient? What the heck is your definition of overcharging? Because it sure as hell doesn't seem to align with mine. If anything, they undercharged.
 

64K

What AMD's high end was at the time has nothing to do with it. Nvidia's upper-midrange GPUs have been a rip-off ever since the 680. Look at the upper-midrange Ada 4080 at $1,200 MSRP. If no one says anything and just forks over the money, Nvidia will keep boosting prices to even more ridiculous levels.
 
What AMD's high end was at the time has nothing to do with it. Nvidia's upper-midrange GPUs have been a rip-off ever since the 680. Look at the upper-midrange Ada 4080 at $1,200 MSRP. If no one says anything and just forks over the money, Nvidia will keep boosting prices to even more ridiculous levels.
So you are saying that, even though their card was faster at every single resolution and more power efficient, they should have charged less because it's a smaller die?

Okay, does that work in reverse as well? Say they release the 5090, the full x02 die, but it's slower than AMD's small 8600 XT; should they keep asking $2k for it even though performance is terrible, because it uses the big die? How does that make any sense to you? Are you paying for performance or for die sizes? What difference does the die size make to you? I don't care if they sell me the smallest die humanly possible; if the performance is good enough, that's all that matters.
 
That's just not true. They don't change this stuff on a whim; that much is self-evident.
I disagree entirely. It is true, and here's the proof: all the die numbers/codenames going back to Nvidia's entry into the GPU market in 1995.
Nvidia are not consistent today, and historically they have changed every aspect of their silicon codenames. Any pattern lasts 2-3 architectures at most before being altered beyond recognition.
 
Joined
Sep 1, 2020
Messages
2,015 (1.55/day)
Location
Bulgaria
How did they overcharge when their card was not only CHEAPER than the 7970, but also faster and more power efficient? What the heck is your definition of overcharging? Because it sure as hell doesn't seem to align with mine. If anything, they undercharged.
I have some memories of the 7970 GHz Edition from Sapphire with 6 GB VRAM, which was a very capable card for its time.
 
Joined
Jan 8, 2017
Messages
8,860 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
I disagree entirely. It is true, and here's the proof: all the die numbers/codenames going back to Nvidia's entry into the GPU market in 1995.
Nvidia are not consistent today, and historically they have changed every aspect of their silicon codenames. Any pattern lasts 2-3 architectures at most before being altered beyond recognition.

They’ve used the 04, 06, 02, etc. classification for over a decade, so no, they clearly don’t change this on a whim. You’re being obtuse on purpose by going back to 1995.
 
I have some memories of the 7970 GHz Edition from Sapphire with 6 GB VRAM, which was a very capable card for its time.
Sure, nobody said they weren't good cards. I had a 7870 back then, a huge upgrade over the 6850 I had before.
 
They’ve used the 04, 06, 02, etc. classification for over a decade, so no, they clearly don’t change this on a whim. You’re being obtuse on purpose by going back to 1995.
I'm not denying that. I'm denying that their naming of cards and dies is consistent.

The Ti models in particular are even less consistent than other models, if there's a Ti at all; and then how does "Super" fit into all this?!

You're insisting that a 4060 Ti should use AD104, but on what precedent? It happened for Ampere, but it didn't happen for either the RTX or GTX lines of Turing, and there wasn't one at all for Pascal or Maxwell.

Your point is that the xx60 Ti should use the '104' die, but there's no pattern in the last decade to support that. Why are you trying to force this '104' rule onto something that has quite literally happened only once before (Ampere) in the last 8.5 years?

I'm only bringing up older cards to prove that there is no pattern; it's arbitrary, based on Nvidia's own choices, with very little consistency. Nvidia has become more consistent since Pascal, at least, but only if you ignore Ti models, Supers, and the fact that Turing was split into two distinct RTX and GTX lines.


GeForce4 Ti 4600 = NV25
GeForce FX 5600 = NV31
GeForce 6600 = NV43
GeForce 7600 = G73
GeForce 8600 GT = G84
GeForce 9600 = G94 (a and b revisions)
GeForce GTX 260 = GT200-100 and -103
GeForce GTX 460 = GF104
GeForce GTX 560 = GF114
GeForce GTX 660 = GK106
GeForce GTX 760 = GK104
GeForce GTX 960 = GM206
GeForce GTX 1060 = GP106
GeForce GTX 1660 = TU116
GeForce RTX 2060 = TU106
GeForce RTX 3060 = GA106
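For what it's worth, the drift in that list can be checked mechanically. A minimal sketch (die codenames recalled from public spec databases, so treat individual entries as my recollection rather than gospel):

```python
# xx60-class cards and their dies across generations
# (codenames recalled from public spec databases; illustrative only).
xx60_dies = {
    "GTX 460": "GF104",
    "GTX 560": "GF114",
    "GTX 660": "GK106",
    "GTX 760": "GK104",
    "GTX 960": "GM206",
    "GTX 1060": "GP106",
    "RTX 2060": "TU106",
    "RTX 3060": "GA106",
}

# The last two digits are the within-generation "class"; count how
# often that class changes from one generation to the next.
classes = [die[-2:] for die in xx60_dies.values()]
changes = sum(a != b for a, b in zip(classes, classes[1:]))
print(classes)  # ['04', '14', '06', '04', '06', '06', '06', '06']
print(f"class changed {changes} times across {len(classes) - 1} transitions")
```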
 

64K

I'm not denying that. I'm denying that their naming of cards and dies is consistent.

The Ti models in particular are even less consistent than other models, if there's a Ti at all; and then how does "Super" fit into all this?!

The Super is a refresh of the original GPU, kind of like the Kepler 770 was a refresh of the 680.
 
The Super is a refresh of the original GPU, kind of like the Kepler 770 was a refresh of the 680.
Not quite. The GTX 770 was a straight-up rebrand of the same silicon and board with a minute factory overclock of about 2%.

The Supers changed the core configuration. The 2060 Super gained CUDA cores, ROPs, bus width, etc. The 2070 Super was entirely different silicon, using the 104 die instead of the 106.
Unlike the GTX 680 -> GTX 770 rebrand, the 20-series Supers had different core/ROP/bus/RT/VRAM configurations across the entire product stack, with far more than a 30 MHz clock bump and a new sticker on the box. Yes, they used the same silicon, but it was chopped up very differently, in configurations that didn't match any of the prior vanilla 20-series models.
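To put numbers on that, here's a sketch of the 2060 vs 2060 Super deltas (figures recalled from public spec sheets, so treat them as illustrative):

```python
# Rough spec deltas between RTX 2060 and RTX 2060 Super
# (figures recalled from public spec databases; illustrative only).
specs = {
    "RTX 2060":       {"die": "TU106", "cuda": 1920, "rops": 48, "bus_bits": 192, "vram_gb": 6},
    "RTX 2060 Super": {"die": "TU106", "cuda": 2176, "rops": 64, "bus_bits": 256, "vram_gb": 8},
}

base, sup = specs["RTX 2060"], specs["RTX 2060 Super"]
for key in ("cuda", "rops", "bus_bits", "vram_gb"):
    delta = sup[key] - base[key]
    print(f"{key}: {base[key]} -> {sup[key]} ({delta:+d})")

# Same TU106 silicon, but a clearly different core configuration --
# far more than the GTX 680 -> 770 rebrand's ~2% clock bump.
```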
 
Joined
Sep 17, 2014
Messages
20,775 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
160-bit bus, $599, raster at about 3070 Ti levels at best. AD104 should have been for the 4060 Ti, AD103 for the 4070/4070 Ti, AD102 for the 4080/4090, and AD106 for the 4050.
Yep. And they can't because...............
R
T

How are we liking the price of RT yet? Glorious, innit?

Not quite. The GTX 770 was a straight-up rebrand of the same silicon and board with a minute factory overclock of about 2%.
And faster VRAM, which is what held the 680 back. I owned a 770; it was a pretty neat card, basically GK104 in optimal form. The experience was better than the SLI GTX 660s I had before it, and that was a free 'upgrade', so to speak :D Good times... You actually had choices. Now it's 'okay, where does my budget land in the stack, and how much crap am I getting along with it that I really don't want to pay for but still do?'
 
Joined
Oct 1, 2021
Messages
105 (0.12/day)
System Name Phenomenal1
Processor Ryzen 7 5800x3d
Motherboard MSI X570 Gaming Plus
Cooling Noctua NH-D15s with added NF-A12x25 fan on front
Memory 32 GB - 2 x 16 GB Ripjaws V CL16 @ 3600
Video Card(s) Dell RTX 3080 10GB
Storage Boot SSD: SATA 500GB - 1tb pcie3 nvme / Spinning Drives: 1tb + 1tb + 1tb + 2tb + 6 tb
Display(s) Gigabyte M27Q 27" 1440P 170 Hz IPS BGR monitor
Case Montech X3 Mesh - Black
Audio Device(s) Realtek ALC1220P
Power Supply 750 Watt Antec Earthwatts - 4 rail
Mouse Razer Viper
Keyboard Corsair K70 LUX RGB
Software Windows 11
Not sure why you think it's a disaster just because prices are high...
Perhaps you can explain why people, such as yourself, continue to defend these high prices. Thanks.
 
The GTX 780 used the big 110 chip (GK110, ~561 mm²). The 980 then got smaller at 398 mm² (GM204), the 1080 smaller still (GP104, ~314 mm²), and the 2080 went back up to a big die (TU104, ~545 mm²). The 3080 was lucky to get the 102 because the 103 (7680 shaders / 320-bit) was scrapped, and it made sense to carve the 3080 out of defective 102s instead of fully functional 103s; that is a rarity, and a 103 would have been too weak to compete with the 6800 XT anyway. The RTX 2070 was the first 106-based 70-tier card. So nothing can be set in stone forever.
I view the shifting of a stack in a big way as a manufacturer resorting to bigger measures to keep its promises afloat.

That's literally what Nvidia has been doing since they implemented RTX. To even sell something that looks like we need it, they have to push past their old boundaries every gen. It's a clear break from the norm of several decades, where you might have had the occasional 'step away from the norm' because of competition, etc. The norm now is that things escalate further every gen, because die size is at a feasible max at ~600 mm², shrinks are not enough, raster performance is basically maxed out per mm² of die space, and stronger RT eats away at it directly for Nvidia. They're quickly heading for a brick wall, and I think they know it.

It's just an internal naming scheme. You're trying to spot patterns in something that is changed arbitrarily from generation to generation at Nvidia's whim.
The biggest consumer graphics dies have ranged wildly from 104 to 110 to 200 to 102 over the last decade. The smallest have ranged from 119 to 208 to 117 to 107.

There's simply no precedent for trying to create a rule and apply it to GPU die internal codenames. They're constantly in flux, and both AMD and Nvidia have proven, quite consistently, that they'll mix and match multiple dies to single products on a whim, while often doing something differently from the previous generation. The only pattern that really holds any truth is that lower last numbers mean bigger dies within each generation.

This classification corresponding with a performance tier is entirely arbitrary and barely conforms to any pattern because within a couple of generations the tentative pattern is broken.

Numbers might change, but the stack order doesn't, and the stack order relates to chip SKUs. The 104, for example, was for a long time the second-biggest chip, generally populating the x70/x80 slots, with only one other chip above it. Now there's a 103 slotted in between, so what this reads as is that the performance delta across the whole stack has expanded a bit. The same thing happened when Nvidia added the Titan to Kepler: we just got a new performance level above the 104. At the same time, the 104 was clearly not all Kepler gave; they also sold a dual-chip 690 with two 104s inside. The gist: the 104 was never truly the top of an Nvidia card stack. Ever.

When Pascal released, they also started off with maxed-out 104s; not much had changed since Kepler or Maxwell. The 1080 Ti came later, to succeed the 980 Ti, and both were cut-down 'Titans', another 'rule' that has existed ever since Titan silicon got cut down into the 780.

This principle still holds true, and you can literally see Nvidia's struggle when they tried to position Ada. The immense gap from the 102 to the 103 is annoying. That's why they had trouble selling an even-smaller-than-103 chip as an x80, but in fact that's what they really strive to do: 104 = x80. It's called the x70 Ti now, but it's clear Nvidia miscalculated here. And they still do, given the price points.

I consider Samsung 8nm the odd one out, because it's very clear now that those chips/that node were pretty crappy. The foundry basically forced that reality on Nvidia; you can bet your ass they didn't plan on pushing the x80 onto the 102 there, with an abysmal ~1700 MHz clock and still-horrible perf/W. The 3090/3090 Ti story is similarly ridiculous; the Ti barely has a right to exist.

The waters do become very muddy below the 106s, but at and above that there is a pattern, and it's pretty clear.
 
I'm denying that their naming of cards and dies is consistent.
Over a decade and six consecutive architectures with almost the same naming scheme, with a few exceptions here and there, seems pretty consistent to me.

What's your timeframe at which point you'd consider this consistent? 20, 30 years? Centuries?
 
Joined
Nov 27, 2022
Messages
38 (0.08/day)
I think there is one thing that is constant across generations: power consumption. AIBs buy graphics cards based on price and TDP, not codenames or memory bandwidth and such.
Based on this metric you can compare the cards and tell which one is which one's successor in Nvidia's mind, so to speak. Whoever said the GTX 680 was supposed to be a midrange card (at least based on the TDPs of previous generations) is, imo, right, but market circumstances "promoted" it into the 80 class (AMD was slower).
As it happens I made an excel table based on TDP. I have another one with AMD cards if you're interested.
I think this table shows that the cards above 250 W are more expensive now exactly because they perform relatively better than the old high-end SKUs (those had 250 W TDPs).
As you can see, Nvidia moved the numbers up, so at the same TDP you either get a "lower-class" GPU for the same price, or you have to pay more for the "same" class with the same naming. People usually upgrade within the same class, so when they see the lower-class card for the same money, they'd rather buy the more expensive card. Brilliant and very simple marketing by Nvidia.
But this is just my thinking.
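The TDP-succession idea can be sketched in a few lines. The TDPs below are approximate reference-card values recalled from public spec pages, so treat them as illustrative:

```python
# Group Nvidia cards into a similar TDP band and compare class names
# across generations at similar power. TDPs are approximate reference
# values recalled from public spec pages -- illustrative only.
cards = [
    ("GTX 680", 195, "Kepler"),
    ("GTX 980", 165, "Maxwell"),
    ("GTX 1080", 180, "Pascal"),
    ("RTX 2070", 175, "Turing"),
    ("RTX 3060 Ti", 200, "Ampere"),
    ("RTX 4070", 200, "Ada"),
]

# Everything here sits in roughly the same 165-200 W band, yet the
# class name drifts from x80 down to x70/x60 Ti over the generations.
for name, tdp, arch in sorted(cards, key=lambda c: c[1]):
    print(f"{tdp:>3} W  {name:<12} ({arch})")
```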

nv_succession.png
 
Look at that progress! I also love how deserted those bottom rows are in the last two columns. Such progress.

I scribbled some lines.

1674156204421.png
 
