I take issue with Blackwell being called a "clean" extension of Ada. Sure, Nvidia was hamstrung by TSMC's lack of progress in developing new nodes. Even so, normalized testing between Ada and Blackwell found no performance improvement according to
https://www.techpowerup.com/338264/...lackwell-vs-ada-lovelace-amd-rdna-4-vs-rdna-3 .
What Blackwell did improve: better ray tracing (which I cheer, because there are effects that cannot feasibly be done without it), video encode/decode hardware improvements including support for 4:2:2 chroma video (great for pros), full DisplayPort 2.1a/b support (AMD, you need to catch up to Nvidia on this in your consumer lines!), and PCIe 5.0 support (get a non-Founders Edition for best results, because the ribbon cable inside the FE appears to degrade signal integrity).
It also adds multi-frame generation, which is useful for turn-based RPGs but worse than useless for real-time action games, especially those that rely on instant reads and reactions. Landing a perfect parry in Street Fighter 6, for example, has a 2-frame window, and success is latency dependent. (Fighting games have standardized on counting frames at 60 Hz to describe motion and which frames do what, so 2 frames is about 33 ms.) Frame generation adds latency because it uses real frames to predict intermediate frames, so the real frames must be delayed while the AI-generated frames are inserted. The RTX 50 series also adds hardware flip metering.
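To put rough numbers on that, here is a minimal sketch of the arithmetic in C++; the 60 fps base render rate and the one-real-frame hold-back are illustrative assumptions on my part, not measured figures for any specific game or driver:

```cpp
#include <cstdio>

int main() {
    // Fighting games count frames at a fixed 60 Hz simulation rate.
    const double sim_hz = 60.0;
    const double sim_frame_ms = 1000.0 / sim_hz;        // ~16.7 ms per frame

    // A 2-frame perfect-parry window in those terms:
    const double parry_window_ms = 2.0 * sim_frame_ms;  // ~33.3 ms

    // Assumption: interpolation-style frame generation holds the newest
    // real frame back roughly one render interval so the generated frames
    // can be slotted in between. Assume a 60 fps base render rate.
    const double render_fps = 60.0;
    const double added_latency_ms = 1000.0 / render_fps; // ~16.7 ms extra

    printf("Parry window:      %.1f ms\n", parry_window_ms);
    printf("Frame-gen penalty: %.1f ms (~%.0f%% of the window)\n",
           added_latency_ms, 100.0 * added_latency_ms / parry_window_ms);
    return 0;
}
```

Even under these charitable assumptions, the hold-back alone eats roughly half the parry window, before display and input latency are even counted.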
However, there are some serious downgrades. First is the removal of 32-bit CUDA and everything that depends on it, like 32-bit PhysX and 32-bit OpenCL, both of which Nvidia translates into CUDA via thunks. Unless a thunk translating 32-bit CUDA to 64-bit CUDA is developed, all three are useless in 32-bit applications. This also breaks old CUDA code that was intentionally compiled as 32-bit because the smaller pointers gave greater code density, letting it fit better in caches; that code is now useless without a rebuild to 64-bit, messing up several use cases.
Second, the 12V-2x6 power connector is over-rated and pushed well past its real safety margin by the RTX 5090 and RTX 5080, especially Founders Edition cards, which draw current mostly from one side instead of spreading it more evenly like partner cards do. At the connector's 600 W rating, the six 12 V pins carry about 8.3 A each against a roughly 9.5 A contact rating, so there is little headroom even when the load is balanced, and none when it bunches onto a few pins. The RTX 4090 pushed the real limits; the RTX 5090 blew past them. The RTX 5080 is more frugal with power, so its partner cards are safer, but it lacks the VRAM to handle some of the most demanding but beautiful games that the RTX 4090 handled at 4K native, like Indiana Jones and the Great Circle in ultra full path tracing mode.
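As a minimal sketch of why those 32-bit builds were denser: Node below is a hypothetical pointer-heavy structure of my own, and the sizes assume typical ILP32 vs. LP64 layouts.

```cpp
#include <cstdio>

// Hypothetical pointer-heavy node, like those common in older CUDA-era
// host code (linked lists, trees, indirection tables).
struct Node {
    Node*  next;
    float* payload;
    int    key;
};

int main() {
    // 32-bit build: pointers are 4 bytes, so Node is 12 bytes and five
    // nodes fit in a 64-byte cache line. 64-bit build: pointers are
    // 8 bytes, Node pads out to 24 bytes, and only two fit per line.
    printf("sizeof(void*) = %zu bytes\n", sizeof(void*));
    printf("sizeof(Node)  = %zu bytes\n", sizeof(Node));
    return 0;
}
```

That density win is why a simple rebuild to 64-bit is not always free: the same data structures get bigger and cache hit rates drop, on top of the porting work itself.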
I haven't seen anything about AMD or Intel GPUs losing the ability to run 32-bit applications within a 64-bit operating system, so AMD now holds the crown as the 32-bit legacy game king. I don't know if Intel is good enough at that to share the crown.
At best, the RTX 50 series is more of a side-grade than a clean extension; at worst, it is a serious downgrade due to the loss of useful features.
EDIT: Added why multi-frame generation is useless for real-time action games.