Sunday, August 13th 2023
NVIDIA Blackwell Graphics Architecture GPU Codenames Revealed, AD104 Has No Successor
The next-generation GeForce RTX 50-series graphics cards will be powered by the Blackwell graphics architecture, named after American mathematician David Blackwell. kopite7kimi, a reliable source of NVIDIA leaks, revealed what the lineup of GPUs behind the series could look like. It will reportedly be led by the GB202, followed by the GB203, then the GB205 and GB206, and finally the GB207 at the entry level. What's surprising here is the lack of a "GB204" succeeding the AD104, GA104, TU104, and a long line of successful performance-segment GPUs from NVIDIA.
The GeForce Blackwell ASIC series begins with "GB" (GeForce Blackwell) followed by a 200-series number. The last time NVIDIA used a 200-series ASIC number for GeForce GPUs was with "Maxwell," as those GPUs ended up being built on a more advanced node, and with a few more advanced features, than the architecture was originally conceived for. For "Blackwell," the GB202 logically succeeds the AD102, GA102, TU102, and a long line of "big chips" that have powered the company's flagship client graphics cards. The GB203 succeeds the AD103 as a high SIMD-count GPU with a narrower memory bus than the GB202, powering the #2 and #3 SKUs in the series. Curiously, there is no "GB204." NVIDIA's xx04 ASICs have powered a long line of successful performance-through-high-end SKUs, such as the TU104 behind the RTX 2080, and the GP104 behind the immensely popular GTX 1080 and GTX 1070 series. That denominator has been missing the mark for the past two generations, however. The "Ampere"-based GA104 powering the RTX 3070 may have sold in volume, but its maxed-out variant in the RTX 3070 Ti hasn't sold in comparable numbers, and missed the mark against the similarly priced Radeon RX 6800. Even with Ada, while the AD104 powering the RTX 4070 may be selling in numbers, the maxed-out chip powering the RTX 4070 Ti misses the mark against the RX 7900 XT at a similar price. This prompted NVIDIA to introduce the AD103, a high CUDA core-count chip with a mainstream 256-bit memory bus, into the desktop segment to justify high-end pricing, an approach that will continue in the GeForce Blackwell generation with the GB203.
As with the AD103, NVIDIA will leverage the high SIMD power of the GB203 for high-end mobile SKUs. The introduction of the GB205 ASIC could indicate that NVIDIA's performance-segment GPU will come with a feature set that avoids the kind of controversy the company faced when trying to carve out the original "RTX 4080 12 GB" from the AD104 and its narrow 192-bit memory interface.
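Based strictly on the chips named in the leak, the reported Blackwell-to-Ada succession could be summarized as in the Python sketch below. This is speculative: the GB206 and GB207 pairings are inferred from NVIDIA's usual numbering, and no specifications are implied.

# Reported GeForce "Blackwell" lineup and the "Ada" chip each part would loosely follow
# (speculative summary of the leak above; names only, no specs implied).
blackwell_lineup = {
    "GB202": {"follows": "AD102", "segment": "flagship / big chip"},
    "GB203": {"follows": "AD103", "segment": "high-end desktop and mobile"},
    "GB205": {"follows": "AD104 (loosely)", "segment": "performance"},
    "GB206": {"follows": "AD106 (inferred)", "segment": "mainstream"},
    "GB207": {"follows": "AD107 (inferred)", "segment": "entry level"},
}
# Note the gap: there is no "GB204" anywhere in the reported stack.
for chip, info in blackwell_lineup.items():
    print(f"{chip}: follows {info['follows']} ({info['segment']})")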
Given NVIDIA's 2-year cadence for new client graphics architectures, one can expect Blackwell to debut toward Q4-2024, to align with mass-production availability of the 3 nm foundry node.
Source:
VideoCardz
71 Comments on NVIDIA Blackwell Graphics Architecture GPU Codenames Revealed, AD104 Has No Successor
There are technologies like chiplets and vertical stacking that can mitigate some of these fundamental limitations and allow us to eke a little more life out of copper and silicon, but those are really just band-aids and side-steps.
GTX 900 - 28nm
GTX 1000 - 16nm/14nm
RTX 2000 - 12nm
RTX 3000 - 8nm
RTX 4000 - 4nm
RTX 5000 - 3nm if 2024, which may not be worth it if TSMC has managed to get 2nm to volume production by end of 2024 (assuming Apple doesn't snipe all that capacity)
What, exactly, is going to be the socio-economic impact when we're told that the chips we've been used to getting faster and cheaper for decades can only keep getting faster if we're willing to pay orders of magnitude more for them, because faster is only possible with those new, vastly more expensive materials and methods?
By Turing you're already fully compliant with DirectX 12 Ultimate, so you're looking strictly at RT performance and general efficiency gains, and Ada's advanced features like SER aren't used by games yet.
So for the eSports segment, GPUs have been good enough for the past 6 years. For AAA flagships, the past 4. And Ada all but made currently released RT games easy to run, so it makes more sense than ever to charge this premium if you want to go above and beyond in performance... Because otherwise your experiences are probably going to be the same.
2. Technology that gets rid of traditional polygon rasterisation: a transition to unlimited detail with unlimited zoom, using points/atoms in 3D instead of polygons and triangle strips.
Something close to this is actually used in Unreal Engine 5's Nanite.
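For context, the core idea behind Nanite-style virtualized geometry is picking detail by screen-space error instead of by fixed polygon budgets: render the coarsest representation whose error projects to less than about a pixel. A minimal, generic sketch of that selection logic follows; all names and numbers are illustrative, not Unreal Engine API.

import math

def projected_error_px(geometric_error_m, distance_m, fov_y_deg, screen_height_px):
    # World-space size s at distance d covers s * screen_height / (2 * d * tan(fov/2)) pixels.
    return geometric_error_m * screen_height_px / (
        2.0 * distance_m * math.tan(math.radians(fov_y_deg) / 2.0)
    )

def pick_lod(lods, distance_m, fov_y_deg=60.0, screen_height_px=2160, threshold_px=1.0):
    # lods: list of (name, geometric_error_in_meters), ordered coarse -> fine.
    # Return the coarsest level whose projected error stays under ~1 pixel.
    for name, err in lods:
        if projected_error_px(err, distance_m, fov_y_deg, screen_height_px) <= threshold_px:
            return name
    return lods[-1][0]  # nothing coarse enough is acceptable: fall back to the finest level

lods = [("coarse", 0.50), ("medium", 0.05), ("fine", 0.005)]
for d in (2.0, 20.0, 200.0):
    print(f"{d:6.1f} m -> {pick_lod(lods, d)}")

Run as-is, nearby objects resolve to the fine level and distant ones drop to medium or coarse, which is the property that makes "unlimited" source detail tractable on screen.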
Instead of using one big monolithic GPU, why not use multiple small ones, which are easier and faster to produce due to better yields? See the sketch after this comment.
They call it "chiplets" nowadays, so why would AMD abandon the high-end and enthusiast segment if you can stack 4 or more of those, basically doubling the performance of the video card with each pair added? I thought they were going to smoke enVideea with this...
Confusing times.
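The "better yields" point can be made concrete with the classic Poisson defect-yield model, Y = exp(-D0 * A). The sketch below uses illustrative assumptions only; the defect density and die areas are not figures for any actual product.

import math

def poisson_yield(area_mm2, defect_density_per_mm2):
    # Poisson model: probability that a die of the given area contains zero defects.
    return math.exp(-defect_density_per_mm2 * area_mm2)

D0 = 0.001          # assumed defect density: 0.1 defects/cm^2 = 0.001 per mm^2 (illustrative)
big_die = 600.0     # one large monolithic GPU die, mm^2 (illustrative)
chiplet = 150.0     # one of several smaller chiplets, mm^2 (illustrative)

print(f"monolithic {big_die:.0f} mm^2 die yield: {poisson_yield(big_die, D0):.1%}")
print(f"single {chiplet:.0f} mm^2 chiplet yield: {poisson_yield(chiplet, D0):.1%}")
# A defect kills only the small chiplet it lands on, not a whole 600 mm^2 die,
# so far less good silicon is thrown away per wafer.

With these assumed numbers the monolithic die yields about 55% while each chiplet yields about 86%, which is the economic argument for chiplets, even before counting the extra cost of the packaging and interconnect that tie them together.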
The Radeon 6800 XT launch date was November 2020, so it took 3 years for the 7800 XT to replace it. It's probably fair to say it's not just COVID; we are on a 3-year cycle.
I'm glad I got my 7900 XT for the price I did; I'm set for many years. I was playing Assassin's Creed Brotherhood at 160 fps on ultra settings last night (one of the few AC games that supports high refresh) and the fans on the GPU didn't even kick on, because the card itself is so powerful. Absolutely blew me away. lol
Not a single frame drop either; it was 160 the entire time. Same as in Uncharted 4. Unbelievable the power this thing has. Now, in Uncharted 4 the fans do kick into high gear and she gets got. It's about the only game that makes my card run extra hot.
Most users are still stuck on 1080p, which GPUs from two generations back can handle.
Something like a 6800 XT can handle 4K 60 and be found for circa 500 these days.
After the leaps ahead of that gen, looks like we're in for stagnation for a while?