The key bit in 2 is that new techniques may have different tradeoffs, work with different/new APIs, or work in entirely new ways, all of which will cause friction. Ray-tracing and ML are just recent examples of the same thing. If we want to keep using the same standard FP32 compute, there really is not that much left on the table in terms of architecture.

There are two main ways to speed up image rendering:

- Increase the number of transistors in the GPU chip.
- Develop new algorithmic techniques (AI is also one of them; a rough sketch below illustrates the idea).

So, what's the result? More transistors mean exponentially higher costs, making the resulting GPUs unaffordable for many users. On the other hand, improving the architecture and optimizing algorithms only requires time investment, without significantly increasing manufacturing costs.
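To make the second bullet a bit more concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the thread): an algorithmic technique like rendering at a lower internal resolution and then upscaling cuts the number of shaded pixels without adding a single transistor. The resolutions, the per-pixel cost model, and the 15% upscaler overhead are all assumptions purely for illustration.

```python
# Back-of-the-envelope comparison: brute-force native rendering vs. an
# algorithmic shortcut (render at a lower internal resolution, then upscale).
# All numbers are illustrative assumptions, not measurements.

def shading_cost(width: int, height: int, cost_per_pixel: float = 1.0) -> float:
    """Toy cost model: shading work scales with the number of pixels shaded."""
    return width * height * cost_per_pixel

native = shading_cost(3840, 2160)      # brute-force 4K
internal = shading_cost(2560, 1440)    # render at 1440p instead
upscale_overhead = 0.15 * native       # assumed fixed cost of the upscaling pass

print(f"native 4K cost:       {native:,.0f}")
print(f"1440p + upscale cost: {internal + upscale_overhead:,.0f}")
print(f"speedup:              {native / (internal + upscale_overhead):.2f}x")
```

Whether that trade is worth it depends entirely on how good the upscaler is, which is exactly where the friction and the software investment come back in.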
I think you might not be completely correct on needing "only" the time investment. Time investment is cost. If you think about all the things GPU hardware vendors do - especially Nvidia with their software focus - then they spend a lot of time and money in that area.