
Palit GeForce RTX 4060 Ti GPU Specs Leaked - Boost Clocks of Up to 2685 MHz & 18 Gbps GDDR6 Memory

Isn't Ada a dual-issue design just like Ampere? That would mean it's actually 8192 units that double up on FP32 workloads (for example, the 3090 is marketed as having 10496 CUDA cores but actually contains 5248 shader units, spread across 82 of the 84 SM blocks/compute cores present in a full die such as the 3090 Ti's).
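The arithmetic behind that claim, as a quick sketch (my own back-of-the-envelope, assuming the usual reading of Ampere's dual-issue SMs: the marketed "CUDA core" count is twice the number of independently scheduled units):

```python
# Marketed CUDA core count vs. schedulable shader units under the
# dual-issue interpretation of Ampere/Ada SMs.
def core_counts(sms, fp32_lanes_per_sm=128):
    marketed = sms * fp32_lanes_per_sm   # the number on the box
    dual_issue_units = marketed // 2     # units in the dual-issue view
    return marketed, dual_issue_units

print(core_counts(82))  # RTX 3090 (82 of 84 GA102 SMs) -> (10496, 5248)
print(core_counts(84))  # RTX 3090 Ti (full GA102)      -> (10752, 5376)
```

The same doubling applied to a hypothetical 8192-unit Ada part would give the 16384 cores marketed for the full AD102.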

In any case, while Navi 31's physical die area is smaller than AD102's, it doesn't contain the humongous L2 cache its competitor has, nor any of its specialty features such as tensor cores, which should bring the die area Navi 31 dedicates to shader processing significantly closer to what AD102 uses for the same purpose. I genuinely believe AMD designed it to target AD102, judging from their earliest narrative and claimed performance gains over the 6950 XT... except that they failed so miserably to achieve that goal that I actually pity them.

I don't understand why the RX 7900 XTX turned out to be such a disaster. It must contain very severe and potentially unfixable hardware errata, because if you look at it objectively, it's architected really well. I am no GPU engineer, but I don't see any major problem with the way RDNA 3 is architected or with how its resources are managed and positioned internally. Even its programmability seems to be at least as flexible as the competition's. At first glance, it looks like a really well-thought-out architecture from programmability to execution, but it just doesn't pull its weight when put next to Ada. I refuse to believe AMD's drivers are that bad, not after witnessing first hand the insane amount of really hard work the Radeon team put into them, even if I sound somewhat unappreciative of said efforts sometimes (but trust me, I am not). It's a really good read, and even a layman should be able to come away with a rough understanding of the hardware's inner workings:


Despite my often harsh tone towards AMD, I really think they have the potential to reverse this and make an excellent RDNA 4 that will be competitive with Blackwell. Regardless, I don't think I'll end up with a 5090 in my system if NVIDIA keeps its pricing scheme the way it is.

Dunno. Wouldn't be the first time something turned out unexpectedly bad. The 2900 XT quickly pops into my mind. The follow-up HD 3870, which came out a few months later, had 30% less memory bandwidth and the same number of shaders, ROPs, and TMUs, but performed identically and drew less power.
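That bandwidth gap checks out roughly, assuming the usual launch specs (512-bit GDDR3 at ~1.656 Gbps per pin for the 2900 XT, 256-bit GDDR4 at 2.25 Gbps for the HD 3870); a quick sanity check:

```python
# Peak memory bandwidth = (bus width in bytes) * (per-pin data rate in Gbps)
def bandwidth_gbps(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin  # GB/s

hd2900xt = bandwidth_gbps(512, 1.656)  # ~106 GB/s
hd3870 = bandwidth_gbps(256, 2.25)     # 72 GB/s
print(f"deficit: {1 - hd3870 / hd2900xt:.0%}")  # ~32%, i.e. roughly 30% less
```

The same formula puts the rumored 4060 Ti (128-bit at 18 Gbps) at 288 GB/s.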
 
Great, a 4060 Ti with a 128-bit bus and 8 GB (like the 3050) for $499 :laugh:
 
Could you repeat what you said, in English this time?
For you, no.

Now that I know your general attitude, I wouldn't piss on you if you were on fire. By the way, it's "would", not "could". There's that proper English you require.
 
Oof, things got spicy in here :eek: calm down boys we're all mates here :)

Dunno. Wouldn't be the first time something turned out unexpectedly bad. The 2900 XT quickly pops into my mind. The follow-up HD 3870, which came out a few months later, had 30% less memory bandwidth and the same number of shaders, ROPs, and TMUs, but performed identically and drew less power.

It seems as such. I have no idea, really. Nvidia had something similar with the GTX 480 and 580; the 580 released only some nine months later. It's not that the 480 performed inadequately, but rather that its GF100 core's power draw and thermals were horrible, a problem they rectified with the GF110 chip used in the latter.

Despite it not being at all the kind of product I'm usually interested in buying, I'm eagerly awaiting the reviews to see how the 7600 XT (N33) scales vs. the 6600 XT (N23). If the monolithic design indeed shows large gains over the previous generation, I would guess something related to the chiplet architecture is the root cause of the 7900 XTX's underwhelming performance. Interconnect bandwidth, perhaps? I suppose we'll know sooner or later.
 