Wednesday, December 14th 2022
NVIDIA GeForce RTX 4060 Ti to Feature Shorter PCB, 220 Watt TDP, and 16-Pin 12VHPWR Power Connector
While NVIDIA has launched the high-end GeForce RTX 4090 and RTX 4080 GPUs from its Ada Lovelace family, middle and lower-end products are brewing to satisfy the rest of the consumer market. Today, according to kopite7kimi, a well-known leaker, we have potential information about the configuration of the upcoming GeForce RTX 4060 Ti graphics card. Featuring 4352 FP32 CUDA cores, the GPU is powered by an AD106-350-A1 die carrying 32 MB of L2 cache. It is paired with 8 GB of GDDR6 18 Gbps memory, which should be enough to power games at the 1440p resolution this card is aiming for.
The card's reference PG190 PCB design is supposedly very short, making it ideal for the ITX-sized designs we could see from NVIDIA's AIB partners. Interestingly, with a TDP of 220 Watts, the reference card is powered by the infamous 16-pin 12VHPWR connector, capable of supplying 600 Watts of power. The reasoning behind this choice of connector is unclear; however, it could be NVIDIA's push to standardize its usage across all products in the Ada Lovelace family stack. While the card should not need the full potential of the connector, it signals that the company may use only this type of connector in all of its future designs.
Source:
@kopite7kimi (Twitter)
82 Comments on NVIDIA GeForce RTX 4060 Ti to Feature Shorter PCB, 220 Watt TDP, and 16-Pin 12VHPWR Power Connector
This is what happens when people don't listen to those who call out their marketing BS. If we are extremely lucky, we might see normalized prices again two generations from now. Otherwise, I suggest you start taking up another hobby.
For my needs (mainstream gaming, not afraid to tone down some settings), cards with 128-bit memory buses typically didn't cut it. I tend not to have high hopes for them, but I will buy depending on what the benchmarks say, not the sticker on the box.
Now the 60 series is at that wattage, which I see as backwards progression.
I'm gonna have to start paying attention to performance per watt as the true sign of whether a series of GPUs is an upgrade or not - it's like the GeForce FX days all over again (high wattages, screaming fans, products worse than their previous-gen counterparts).
Frames don't mean anything on their own. One can always add more cores, and bam... behold a faster GPU!
Fix.
#sarcasm
And that's Ampere's first lineup in a nutshell, for sure. With Ada it seems memory is in a better place, but the bandwidth similarity between two cards with very different core power is an interesting one. How good is that cache and where will it fall short...
On the positive side, RTX 40 series cards will probably be very valuable to collectors 20 years from now… :rolleyes:
(sarcasm)
I hope at least some RTX 40 series AIB cards have standard 8-pin plugs.
And BTW, where did all the new PSUs with ATX 3.0 go? It's been awfully quiet for a while. I wonder if they're holding them back, revising the standard, or if something else is going on.
Sooner or later even AMD will adopt it, unless they're expecting people to use four 8-pin connectors on the RX 8900 XTX OC...
No amount of waiting makes high-wattage cards any more efficient, and historically the most loathed cards are the highest-wattage ones (especially above 250 W) - always remembered as hot, noisy and failing early (as the cooling weakened; general users rarely even replace thermal paste, let alone thermal pads on VRMs and VRAM).
This is what we should be seeing at these super high prices:
4080 Ti - cut down AD102, 320-bit, 20 GB
4080 - full AD103, 256-bit, 16 GB (G6X)
4070 + Ti - cut down AD103, 256-bit, 16 GB (G6 non-X)
4060 + Ti - AD104, 192-bit, 12 GB (G6 for 4060, G6X for Ti)
4050 - AD106, 128-bit, 8 GB
While the L2 cache might help with the narrow memory buses, the VRAM capacity will be a huge problem in the coming years, especially with ray tracing.
People should really buy the RTX 30 series and the RX 6000 series while they are still available at good prices. Ignore this new garbage and wait 2 years for the next generation.
But I don't believe Nvidia will launch "bad" cards in the RTX 4060 / 4060 Ti segments.
I do not remember people complaining about the 30 series, at least the 3080 and 3070. I mean I pre-ordered the 3080 on day 1. For $700 it gave me 80% more performance (100% in RT) over my 2070 Super that I paid $500 for a year before. It is one of NVIDIA's best cards ever, incredible value.
The lower tier cards were not as impressive, though. But the bigger problem was that you could not buy them at MSRP. And now that you can, Radeon cards are sooo much cheaper.
If you look at the GTX 1060, you can realize how badly NVIDIA is screwing over the mid-range segment these days. That was 980 performance with more VRAM for just $250.
Even the 2060 offered 1080 performance on average at $350. But the 3060 only offered 2070 (non-Super) performance at "$330". And the 4060 is supposed to offer 3070 performance at $400 according to leaks. Not only is the price going up (which can be understandable), but the performance gains are going down (which is not). It should be one or the other.
While I agree that prices these days are too high, this mostly applies to the high-end models. The mid-range models, albeit a little pricey, are nowhere near that bad. (This is also where there is some competition.)
But my bigger point to what THU31, xorbe and many others said: they are basically hung up on specs. It's very common for people to prejudge products this way based on specs, even though none of us knows the final spec or performance level. And I'm pretty sure many of the people like this will change their tune once the products arrive, realizing they are "better than expected", etc. I've followed this too long not to notice the pattern.
So to restate what I said in #70, I don't think Nvidia will launch "bad" cards in this segment; they cannot afford to. These cards will make up a good share of their revenue.
And VRAM capacity is an actual problem. 8 GB was already problematic on the 3070.
660 -> 960 - 37% performance increase with a 15% price decrease (I am ignoring the 760 because it was a refresh)
960 -> 1060 - 100% performance increase with a 25% price increase (there was also a slightly slower model with 3 GB at just $200, same as 960)
1060 -> 2060 - 56% performance increase with a 40% price increase
2060 -> 3060 - 21% performance increase with a 6% price decrease
3060 -> 4060 - ~45% performance increase with a 21% price increase (if leaks are true)
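The generation-over-generation comparisons above boil down to a simple performance-per-dollar calculation. Here is a quick sketch of that math; the performance and price deltas come from the list, while the underlying MSRPs are the commenter's approximations (and the 4060 figure is a leak, not a confirmed spec):

```python
# Value change per x60-class generation: (1 + perf gain) / (1 + price change) - 1.
# Deltas taken from the list above; the 3060 -> 4060 row is leaked/unconfirmed.
gens = [
    ("660 -> 960", 0.37, -0.15),
    ("960 -> 1060", 1.00, 0.25),
    ("1060 -> 2060", 0.56, 0.40),
    ("2060 -> 3060", 0.21, -0.06),
    ("3060 -> 4060", 0.45, 0.21),
]

results = {}
for name, perf_gain, price_change in gens:
    # Relative performance per dollar versus the previous generation.
    value = (1 + perf_gain) / (1 + price_change) - 1
    results[name] = value
    print(f"{name}: {value:+.0%} perf per dollar vs previous gen")
```

By this metric the 960 -> 1060 jump (+60% perf per dollar) towers over the later generations, which is the point being made.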
It seems pretty clear why the 1060 was the number 1 card on Steam for almost 6 years. And it is beyond my comprehension why they do not want to achieve the same success with any card after that. Corporations' obsession with margins instead of overall profits is incredible (I experienced that myself working for one).
RTX 4060/4060 Ti is mostly going to be a 1080p card, or 1440p at medium details.
We've had this discussion with every generation; people make up subjective anecdotes about how much VRAM a card actually needs, when benchmarks show these cards still scale pretty well even at 4K resolutions. I wouldn't worry about 8 GB on such a card, both because I know how graphics actually works, but most importantly because benchmarks show they run out of memory bandwidth and computational power long before VRAM under normal (realistic) use cases. Sure, you can load ridiculous custom texture packs which eat VRAM, but that's an unbalanced and unrealistic edge case, which shouldn't be the basis for buying recommendations.
And please don't bring up the future-proofing argument; it's BS and it hasn't held true in the past. If you take into account inflation over the product's lifespan, then it doesn't look that bad at all. Most years it's ~1.5-2.5%, but as we know, the past couple of years combined are ~15% in the official numbers (~25%+ if we look at real inflation since the old models launched). So using the same assumptions as you, 3060 -> 4060, ~45% extra performance at roughly the same real price - then why complain?
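The inflation adjustment being argued here is easy to check. A minimal sketch, assuming the ~15% cumulative official inflation figure from the comment above (an assumption, not official CPI data) and the $330 / rumored $400 MSRPs cited earlier in the thread:

```python
# Adjust the 3060's $330 MSRP into 4060-launch dollars, assuming
# ~15% cumulative inflation (the figure claimed in the comment above).
msrp_3060 = 330
cumulative_inflation = 0.15  # assumption, not official CPI data

adjusted = msrp_3060 * (1 + cumulative_inflation)
print(f"3060 MSRP in inflation-adjusted dollars: ${adjusted:.0f}")
# The rumored $400 for the 4060 is then within ~5% of the same real price.
```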
(Not trying to spawn a political discussion here, just trying to look at things from a real-world perspective.) The GTX 1060 was undoubtedly one of the greatest card deals of all time (possibly the greatest seller too?), and its dominance has several contributing factors:
- The card was good (obviously), and despite the critics the cheap 3 GB version was plenty for most buyers in this segment.
- The supplies were great most of the time, could often be found below MSRP, especially the 3 GB version.
- The competition, RX 480/580, was in really short supply.
- The successor, Turing, was late (~2.5 years), and supplies weren't great initially.
I've built three systems with this card myself, and recommended it many times. The Pascal generation has held up pretty well overall too. The solution is more competition. AMD (and Intel) need to produce good cards in the mid-range and have good availability in all markets to really push prices down. Otherwise the current trend will continue.