
4070 Ti Super 16GB - 66SM and sadly non-superfluous?

Joined
May 13, 2008
Messages
669 (0.11/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
Did y'all see this?

Again, I don't know if people look at these things and instantly see potential reasons behind them like I do, but this one was a big ol' example of exactly that.

It's amusing because the obvious midway point between 4070 Token Insufficient and 4080 is 68SM, and they look to be deliberately avoiding that. But, you say, 'tis but a few percent. Yeah, but it's such a 'nVIDIA' few percent.

I haven't messaged them, but perhaps Kopite sees what I do, which is perhaps why he said he doesn't believe and/or agree with it. I don't agree with it (in practice) either, but that's (potentially) marketing (and market separation), folks.

Let's break it down:

4060Ti is a 1080p GPU (until its 8GB buffer relegates it to the landfill soon, relatively speaking). It is 34 SMs, half of what one would hope a 4070Ti 16GB would be (sufficient use of RAM).
4070Ti is a 1440p GPU (unless you use it for 1080p RT features and its 12GB buffer relegates it to 1080p / 4k 'Performance' DLSS soon, relatively speaking).
4080 is a 4k GPU (unless you use it for 1080p/1440p RT features and/or the future relegates it to 1440p / 4k 'Quality' DLSS soon, relatively speaking).
4090 is whatever. It doesn't matter because it's the fastest thing. You already know it doesn't usually hit 4k120 (down to 4k 'Balanced' in Alan Wake) in many of the latest titles. Use that as a reference when you think about this.

So, the interesting thing is that if you look at not only the performance of the cards but generally how games have landed lately, you can see that the 4070 Ti 12GB is not long for the highest settings and maintaining any kind of >1080p native resolution, because ram WILL eventually crowd it out. It will very likely be your eventual 4k 'Performance' (1080p) card by the time of the PS5 Pro, if not Blackwell. If you still don't agree, give me a minute.

The thing is, with 68 SMs and, more importantly, 16GB (so it won't be a limiting factor), that *coulda/shoulda/woulda* put such a card into 4k 'Quality' in nVIDIA-sponsored titles (where nVIDIA's arch is ~9-10% stronger per flop). In AMD-sponsored, or what I consider 'neutral', titles it could be ~10% behind what AMD *could* do with Navi 4's absolute performance (or if you like, use a 7900xt as a general reference). So, IOW, in some titles one would be 4k 'Quality' and the other 4k 'Balanced', while in others it could be reversed; a tie. Give or take what you think of FSR/DLSS, it would be a good matchup. Competition. People keep overspending on nVIDIA cards with similar/less raster, the world keeps turning, life is good.

The difference between 4k Performance and 4k Balanced is ~10% performance, when ram is not a factor.
The difference between 4k Quality and 4k Balanced is ~10% performance, when ram is not a factor.
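A quick sketch of what those mode labels mean in pixels (standard DLSS 2 per-axis scale factors; note the ~10% figures above are empirical frame-rate deltas, which compress relative to raw pixel counts because some per-frame cost is resolution-independent):

```python
# DLSS 2 per-axis render scales (publicly documented defaults)
SCALES = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.50}

def render_res(out_w, out_h, mode):
    """Internal render resolution for a given DLSS mode."""
    s = SCALES[mode]
    return round(out_w * s), round(out_h * s)

for mode in SCALES:
    w, h = render_res(3840, 2160, mode)
    print(f"4k {mode}: renders at {w}x{h} ({w * h / 1e6:.1f} MP)")
```

Balanced renders roughly 24% fewer pixels than Quality, yet the observed frame-rate gap is only ~10%, which is consistent with a chunk of frame time not scaling with resolution.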

So, because of this, we can now assume nVIDIA very much wants this to be a 4k 'Balanced' card. One, because 66 is 10% more than 60. Two, because they limited the SMs so it wouldn't encroach on the 4080's territory, since the ram is sufficient.
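For reference, the SM ratios being implied here (counts as cited in this thread; 66 for the Ti Super is still a rumor):

```python
# SM counts as cited in the thread (66 for the Ti Super is rumored)
sms = {"4070 Ti": 60, "4070 Ti Super (rumored)": 66,
       "'midway' config": 68, "4080": 80}

base = sms["4070 Ti"]
for name, n in sms.items():
    print(f"{name}: {n} SM ({100 * (n / base - 1):+.1f}% vs 60)")
```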

It's not an extremely massive revelation (if 66SM is true) or anything, but I just found it a funny way to reorient the stack (to be oddly coherent). Sure, there's always a limitation somewhere (but 68 would have been a damn good match for 16GB given nVIDIA's clocks), and sure, you can turn down one setting and probably maintain 60 at whatever 4k resolution it lands near, but we're talking about stock review charts/perception dammit, and this is just sooo nVIDIA. Watch those 4k DLSS charts with settings cranked when this card releases (if not in games a little later, built towards the PS5 Pro): it will be ~58fps (or ~59 capable with overclocking; only mildly exaggerating), so you question whether you should splurge on a 4080...or a GB205.

If you want the *really* condensed version: they probably don't want the product to have a guaranteed 50TF, which is actually a pretty important number wrt now and probably the future (consoles). nVIDIA just refuses to give people a good, long-lasting product they can't obsolete ever again. Those days are over; they replaced them with money and tears.

I don't want to speak for him, but I could imagine Kopite or someone else may think "Why would nVIDIA put themselves in a position where potentially AMD could be FSR Quality and nVIDIA below DLSS Balanced?"

The answer is because nVIDIA (generally) makes the market. They *clearly* intend for that to continue in the PC space.

4070 Ti will probably be the de facto card for the 4k 'Performance' upscaling resolution at some point.
4070 Ti Super will almost certainly be the de facto card for the 4k DLSS 'Balanced' resolution at some point.
4080 will be (is?) the de facto card for 4k DLSS 'Quality'.

You may or may not understand why this matters.

Depending upon where nVIDIA clocks the part, they *may* be able to screw AMD out of 'Balanced' FSR in nVIDIA-sponsored titles at stock.
Observational math: 8192 × 3200 (guess) / 1.0909 (nVIDIA's lead in nVIDIA titles) = 8448 × ?
In this case it would be ~2845mhz (not oddly at all; essentially where guaranteed clock yields stop on 5nm. It's almost like nVIDIA wants to make the cheapest thing possible that will compete but not actually be 'great' for longevity.)
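Spelling that observational math out (every figure here is the thread's guess, not a confirmed spec; SPs = SMs × 128 for Ada, FP32 TFLOPS = SPs × 2 ops × clock):

```python
# Sketch of the observational math above (all figures are guesses from
# the thread, not confirmed specs)
amd_sp, amd_mhz = 8192, 3200   # guessed Navi 4 shader count and clock
nv_sp = 66 * 128               # 66 SM x 128 FP32 lanes = 8448
nv_lead = 1.0909               # assumed ~9% per-flop lead in sponsored titles

# Clock at which 8448 SPs match 8192 SPs x 3200 MHz after that lead
nv_mhz = amd_sp * amd_mhz / (nv_lead * nv_sp)
print(f"break-even clock: ~{nv_mhz:.0f} MHz")            # ~2844 MHz

# The 50 TF threshold mentioned above: clock needed at 8448 SPs
clock_for_50tf = 50e12 / (nv_sp * 2) / 1e6
print(f"clock needed for 50 TF: ~{clock_for_50tf:.0f} MHz")  # ~2959 MHz
```

The break-even clock lands at the post's ~2845 MHz figure, and 50 TF would require nearly 3 GHz at 8448 SPs; at a more typical ~2.6 GHz boost the card would sit around 44 TF, comfortably under that line.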

That's a very fine line.

Thus, in other words, they turned a relative defeat into a victory by moving the goalposts from 'Quality' to 'Balanced', saving the usefulness of the 4080's raster in the process...just with that little (potential) change, if it wasn't the plan all along.

So, the ball is in AMD's court. Will they do the same split with Navi 4 (where the 7900xt is the FSR 'Quality' card and the 7800xt is aimed at native 1440p)? I doubt it, because the flop/ram potential (if it is what's expected) of their next product at 375W just makes too much sense to limit it. It's also (supposedly) the highest product of their 'next' generation, so it wouldn't make sense to limit it versus the previous one (they should give it ~40% uplift over the 7800xt, with an absolute potential of perhaps ~50%). This is also a highly contentious market where they could use a win/stalemate, even if they're kind of relegated to working around what nVIDIA does to an extent (outside of scaling from their own console chips).

TBH, I wonder if AMD has even begun measuring cards' relative performance this way (upscaling) to this day. Maybe, but perhaps not, given that 7800xt compute perf is aimed at 'straight' 1440p, not 4k upscaling, and FSR appeared to be put together fairly hastily, while nVIDIA clearly planned this out (perhaps with contingency). Upscaled 4k is indeed the future (so they should take note if they haven't already). Maybe they'll be lucky with yields and shoot for giving people 4k FSR 'Quality' OOTB. nVIDIA may think they're aiming for ~20% (although that might just be the PS5 Pro) if they want a clean victory. I hope, and believe (given the theoretical potential), it will be better than that.

If I were AMD, I would be hoping to hit 3200mhz with 8192sp OOTB, because nVIDIA is clearly expecting lower (down to 7680/3ghz). If AMD does somehow meet the maximum potential of the stock ram with good yields, which I think will be ~3264-3265mhz, nVIDIA could still clock the part at something like ~2910mhz OOTB to screw them regardless, even with literally zero OC room, because that's what nVIDIA does. Trust me, when you crunch the numbers, nVIDIA sometimes clocks their stuff to win in weirdly obscure metrics/factors by the smallest amount possible, but still win (on paper) where they can when comparing (sometimes even different-market) products. It's absurdly well thought-out and planned. I don't see AMD doing this nearly as much; perhaps that says something about the companies. Huang is competitive AF(MF), while simultaneously cheap and conniving. It's quite an impressive mix of traits.

It'll be a shame if Navi 4 is clocked/configured lower (at stock); I was hoping we could have a really stupid argument (without involving overclocking) about which is better: 4k FSR 'Quality' or DLSS 'Balanced'.
Now, sadly, it will probably have to involve overclocking. It will still be stupid, though.
The answer, fwiw, is they're both (generally) fine imho; neither of them started at 1080p (or lower).

In my mind, it just went from an even race to AMD likely having a *very* slightly better raster card in a worse short-term market for them, probably. It all depends if games scale off the PS5(pro) or DLSS; both are clearly posturing for that win.

Maybe you have a different opinion, like they/you just want to have something that can do 'Quality' where a PS5 Pro is 'Balanced' at the same settings...which is valid. Maybe you don't care. Also valid.

The important lesson is that the lengths nVIDIA will go to tilt the market, both now and in the future, are incredible. Most people don't think about all the different facets, though.

All that from 256 shader processors. A few nVIDIA percent. For what feels like the 20th time. Just food for thought!
 
Last edited:
Joined
Feb 24, 2023
Messages
2,240 (5.22/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / RX 480 8 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / Viewsonic VX3276-MHD-2
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / FSP Epsilon 700 W / Corsair CX650M [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
I understand this might just be my misperception, but it feels like you're selling us a tale of nVidia being the only ones doing this, or doing it more than anyone else.

Let's roll back to the mid-10s: the RX 480 and 470 arrived utterly VRAM-bandwidth-starved despite what seemed to be a healthy ~7 GHz on top of a 256-bit bus. Their competition, the GTX 1060, had somewhat faster memory at 8 GHz but a 192-bit bus. Yet any VRAM performance tweak yields an almost linear overall performance boost in those AMD GPUs, whereas the GTX 1060 (6 GB version) seemed to be perfectly balanced. I don't see a way AMD didn't know it, and I don't see a way having more than 6 GB VRAM could help the RX 480 in any way, so cropping VRAM to 6 GB but increasing its bandwidth 1.5 times would have let them feast on 1060s like breakfast pancakes.
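The bandwidth comparison can be sanity-checked directly (peak bandwidth = bus width × per-pin data rate; the 'GHz' figures in the post are effective data rates in Gbps, and for the record the 8 GB RX 480 shipped at 8 Gbps, with 7 Gbps only on the 4 GB card):

```python
def bandwidth_gbs(bus_bits, gbps):
    """Peak memory bandwidth in GB/s: bus width in bytes x per-pin data rate."""
    return bus_bits / 8 * gbps

rx480 = bandwidth_gbs(256, 8)     # 8 GB RX 480 reference: 256 GB/s
gtx1060 = bandwidth_gbs(192, 8)   # GTX 1060 6 GB: 192 GB/s
print(rx480, gtx1060, rx480 / gtx1060)  # the 480 has ~1.33x the bandwidth
```

So even as shipped the RX 480 had a third more bandwidth than the 1060; the post's point is that Polaris scaled with memory tweaks anyway, suggesting it wanted still more.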

Late 2010s, RX 5600 XT: once again, a VRAM-castrated, nonsensical GPU with no real purpose at its price range. It fails to outperform the 1660s by a significant enough margin to be worth the additional cost. It fails to match the 2060 6 GB at raw raster despite already being far behind feature-wise (DLSS and RT were exclusive to the RTX 2000 series back then). The RX 5700 non-XT, though, seemed a really solid chunk of hardware.

Earliest 2020s, RX 6700 XT: AMD knew their RX 6800 non-XT (60 CUs) doesn't really make the RTX 3070 crumble, especially considering the latter was cheaper. And still is. Cutting it down to 40 CUs and calling it an RX 6700 XT was a disaster, as this GPU couldn't even theoretically match the 3070 in raw raster no matter how close to the clocking ceiling you go; it couldn't even outperform the way cheaper RTX 3060 Ti. Why not give it 44 to 48 CUs so this GPU would at least not be behind in everything but VRAM capacity?

RX 6500 XT: nuff said.

Recently, RX 7600: the product nobody wanted (for $250, there's a truckload of last-gen quality performers), which failed to be better than its direct predecessor (regarding GPU complexity) and competitor (6650 XT). Why not use a cut-down higher-tier GPU instead? Say, a 40 CU + 10 GB variant, like a mix of the RX 6700 and 6700 XT but based on RDNA3. Even if more expensive, that would've been a success. And what we all know as the RX 7600 could've been launched with slightly lower clocks and significantly lower TBP as a 7500 XT or 7600 Lite, or whatever, when the supply of sub-200 USD GPUs comes to an end.

RX 7700 XT: why did they cut the VRAM if not to upsell the 7800 XT? With 16 GB VRAM, it would've been a really great GPU. At 12 GB, it has way less longevity than the 7800 XT and should be much cheaper to justify this cut-down.

RX 7900 XT: why did they cut Navi 31 by 12.5% but only cut the price by 10%? XTX upselling?

And I won't even mention the fact that AMD are still behind, sometimes irrecoverably, in terms of feature set. FSR is heavily inferior to DLSS; RT performance is significantly lower than in nVidia GPUs of the same calibre; Frame Generation is only present in a couple of no-one-plays-it-anyway games and works worse than nVidia's; AFMF is cool, but it's a gimmick, and even AMD don't try to pretend it's not; and DLSS will never be supported on AMD GPUs.

All this leads to the situation where nVidia are free to commit to anything and they will be excused, 95% chance at least. Just because Intel are total rookies and AMD don't live in the real world.

66 SMs instead of 68? Yes, that's evil, but at 16 GB, this will be a GPU with no real competition from AMD. At $850, it will be substantially cheaper than the 7900 XTX and will deliver way more performance than the 7900 XT (RT and FG performance also considered). At anything below $1000, it will be in demand. Like you stated, nVidia create the market. And they created it such that they could even have stayed at the same 60 SM count, only adding the VRAM, and they would still be 100% fine. Just like the 1660 Super, which only differed from the plain 1660 in VRAM speed (14 GHz vs 8). And nobody was upset, because it was more than enough to put AMD behind once again.

RDNA4 has to be a miracle to turn the tables. At least +50% bang per watt VS RDNA3 and at least 10% less RT performance penalty is needed, otherwise we'll have even more of the same: AMD GPUs are only superior in basic raw raster and lightest RT gaming whilst nVidia ones are pulling more and more ahead in the rest (FPS per W, RT, upscaling quality, frame generation, anti-lag, etc).

I hate nVidia for being that greedy but I hate AMD even more for appreciating this greed.

What I'm really waiting and hoping for is an RTX 4050 or something like that at <70 W TBP so I could use it as the GPU in my briefcase PC. It's been TOO long without improvements in this segment, with the 1650 D6 and RX 6400 still being the best choices for such PCs. Hilarious.
 
Joined
Sep 17, 2014
Messages
20,953 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Did y'all see this?

Again, I don't know if people look at these things and instantly see potential reasons behind them like I do, but this one was a big ol' example of exactly that.

All that from 256 shader processors. A few nVIDIA percent. For what feels like the 20th time. Just food for thought!
Yes, there is a reason for every square mm of hardware on a product these days... and none of those reasons are 'let's make the best thing we can make here at X price'.

The reasons are primarily 'how can we manipulate and nudge people to an upsell while we actually sell as little as possible'. It's not even about competition directly... Nvidia is its own competition. Pascal was the best and worst GPU gen for them. It immediately prompted a longer delay for the next gen, and they had to chase the RT card to even make a dent performance-wise, in a market/space that was not ready for it at the time of Turing. The Maxwell 980ti 6GB was already the writing on the wall in that respect - it lasted almost as long, equalling a 1070.

But we also have to appreciate that was the end of a time of long-term stagnation on the node. We were stuck on 28nm for a loooong time, and consoles plateaued gaming performance in that same space for a similar time. It was not until the PS4 matured that things really started moving forward. Today we're finally at the dawn of a somewhat new and more advanced level of graphics. Too bad in most games it barely pays off (even 'new raster' like UE5.x) versus the performance hit, when compared to ye olde DX11 world.

I think right now we're in very uncertain times. GPU raw performance advances are definitely cutting back a little. Shrinks are finite. RT is a performance hog. Non-proprietary and engine driven RT is moving forward. The looks per $ per FPS are moving around wildly between games. The market for gaming will eventually settle on any number of these advancements. Right now though? If you want to game proper, don't ride the cutting edge, don't overspend on GPU and live with the notion anything current gen won't max things out.

As for what battle is next for AMD/Nv... interesting thoughts, but irrelevant in the grand scheme of things. Gaming is about the games, in the end. 'Killer app' is the key word to sell any piece of hardware. It's funny to see how desperately Nvidia is trying to elevate games to those killer apps. Alan Wake 2, Cyberpunk, TW3... It's even more hilarious to see the sheep following that rationale, ignoring that the overwhelming majority of games doesn't give a rat's behind about RT, PT and upscaling, doesn't need them, and looks razor sharp. Talk about digging your own hole of future disappointment... while wondering why Nvidia keeps pulling an even more extreme stunt the next gen.

RDNA4 has to be a miracle to turn the tables. At least +50% bang per watt VS RDNA3 and at least 10% less RT performance penalty is needed, otherwise we'll have even more of the same: AMD GPUs are only superior in basic raw raster and lightest RT gaming whilst nVidia ones are pulling more and more ahead in the rest (FPS per W, RT, upscaling quality, frame generation, anti-lag, etc).
The most important thing is AMD keeps pace, the idea they need to beat Nvidia at every metric is nonsense. It has never gained AMD market share. AMD loses share when they deliver GPUs too late or when they stall in top end performance (Vega times). And those dips have long lasting effects. It took them 2 generations of RDNA and a long pause prior to it to recover there.

They have a volume market with the consoles and a tight grasp on the gaming market through there as well. In the end, basic raw raster is still the floor underneath every game's performance. Nvidia is just trying to pull the RT/PT train ahead as far as possible so they can hammer their GPUs with so much non-raster workload that you won't be able to hit that raw raster floor limitation on team green. It's a masterful game of deception, and it's already falling apart in some engines. Nvidia is definitely going to adjust, and AMD might never get to the RT perf we want but will still dominate gaming. Interesting times.
 
Joined
Dec 31, 2020
Messages
772 (0.64/day)
Processor E5-2690 v4
Motherboard VEINEDA X99
Video Card(s) 2080 Ti WINDFORCE OC
Storage NE-512 KingSpec
Display(s) G27Q
Case DAOTECH X9
Power Supply SF450
66 or 68 makes no difference for longevity. 100 vs 103 FPS. And it doesn't cement it decisively in 'Performance' or 'Quality' mode.
It's the same relation as the 2070 Super's 5 GPCs vs the 2080 Super's 6 GPCs, or 20% more. The 4070 TiS at 5.5 GPCs' worth of SMs vs the 4080 Super's 7 GPCs is ~21% (80 vs 66 enabled SMs). This is where it comes from: segmentation.
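The ~20%/~21% figures follow from enabled SM counts rather than raw GPC counts (2070 Super: 40 SM; 2080 Super: 48 SM; 4080 Super: 80 SM vs the rumored 66) — a quick check:

```python
# Enabled SM counts: Turing Supers are known; 66 for the 4070 TiS is rumored
pairs = {
    "2080 Super vs 2070 Super": (48, 40),
    "4080 Super vs 4070 Ti Super": (80, 66),
}
for name, (big, small) in pairs.items():
    print(f"{name}: {100 * (big / small - 1):.0f}% more SMs")
```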
 
Joined
Jan 5, 2006
Messages
17,859 (2.67/day)
System Name AlderLake / Laptop
Processor Intel i7 12700K P-Cores @ 5Ghz / Intel i3 7100U
Motherboard Gigabyte Z690 Aorus Master / HP 83A3 (U3E1)
Cooling Noctua NH-U12A 2 fans + Thermal Grizzly Kryonaut Extreme + 5 case fans / Fan
Memory 32GB DDR5 Corsair Dominator Platinum RGB 6000MHz CL36 / 8GB DDR4 HyperX CL13
Video Card(s) MSI RTX 2070 Super Gaming X Trio / Intel HD620
Storage Samsung 980 Pro 1TB + 970 Evo 500GB + 850 Pro 512GB + 860 Evo 1TB x2 / Samsung 256GB M.2 SSD
Display(s) 23.8" Dell S2417DG 165Hz G-Sync 1440p / 14" 1080p IPS Glossy
Case Be quiet! Silent Base 600 - Window / HP Pavilion
Audio Device(s) Panasonic SA-PMX94 / Realtek onboard + B&O speaker system / Harman Kardon Go + Play / Logitech G533
Power Supply Seasonic Focus Plus Gold 750W / Powerbrick
Mouse Logitech MX Anywhere 2 Laser wireless / Logitech M330 wireless
Keyboard RAPOO E9270P Black 5GHz wireless / HP backlit
Software Windows 11 / Windows 10
Benchmark Scores Cinebench R23 (Single Core) 1936 @ stock Cinebench R23 (Multi Core) 23006 @ stock
Joined
Dec 31, 2020
Messages
772 (0.64/day)
Processor E5-2690 v4
Motherboard VEINEDA X99
Video Card(s) 2080 Ti WINDFROCE OC
Storage NE-512 KingSpec
Display(s) G27Q
Case DAOTECH X9
Power Supply SF450
4070 ti super or 5070. There's no other way
 
Joined
Sep 17, 2014
Messages
20,953 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
4070 TiTi. Make it happen
 
Joined
Jan 5, 2006
Messages
17,859 (2.67/day)
System Name AlderLake / Laptop
Processor Intel i7 12700K P-Cores @ 5Ghz / Intel i3 7100U
Motherboard Gigabyte Z690 Aorus Master / HP 83A3 (U3E1)
Cooling Noctua NH-U12A 2 fans + Thermal Grizzly Kryonaut Extreme + 5 case fans / Fan
Memory 32GB DDR5 Corsair Dominator Platinum RGB 6000MHz CL36 / 8GB DDR4 HyperX CL13
Video Card(s) MSI RTX 2070 Super Gaming X Trio / Intel HD620
Storage Samsung 980 Pro 1TB + 970 Evo 500GB + 850 Pro 512GB + 860 Evo 1TB x2 / Samsung 256GB M.2 SSD
Display(s) 23.8" Dell S2417DG 165Hz G-Sync 1440p / 14" 1080p IPS Glossy
Case Be quiet! Silent Base 600 - Window / HP Pavilion
Audio Device(s) Panasonic SA-PMX94 / Realtek onboard + B&O speaker system / Harman Kardon Go + Play / Logitech G533
Power Supply Seasonic Focus Plus Gold 750W / Powerbrick
Mouse Logitech MX Anywhere 2 Laser wireless / Logitech M330 wireless
Keyboard RAPOO E9270P Black 5GHz wireless / HP backlit
Software Windows 11 / Windows 10
Benchmark Scores Cinebench R23 (Single Core) 1936 @ stock Cinebench R23 (Multi Core) 23006 @ stock
4070 TiTi. Make it happen
:D
 
Joined
Feb 22, 2016
Messages
1,493 (0.50/day)
Processor Intel i5 8400
Motherboard Asus Prime H370M-Plus/CSM
Cooling Scythe Big Shuriken & Noctua NF-A15 HS-PWM chromax.black.swap
Memory 8GB Crucial Ballistix Sport LT DDR4-2400
Video Card(s) ROG-STRIX-GTX1060-O6G-GAMING
Storage 1TB 980 Pro
Display(s) Samsung UN55KU6300F
Case Cooler Master MasterCase Pro 3
Power Supply Super Flower Leadex III 750w
Software W11 Pro
Rather wordy exposition on the theme of someone's kid getting a holographic sticker out of a coin-operated bubble toy machine and forcing them to encounter it.

From straight on, you mostly see 4xxx.
From any other angle, you very clearly see 4x(-2)x.
Best exemplified by the large jump in performance and price at the 4070 level this generation.

Any adult and most children know one-coin machines are rarely, if ever, half as good as two-coin machines. SUPERsized plastic bubbles (fun in their own right) aside.
 