| System Name | Project Kairi v0.5 (temporary downgrade) |
| --- | --- |
| Processor | Intel Xeon E5-4669 v3 |
| Motherboard | Gigabyte X99-Ultra Gaming rev 1.0 |
| Cooling | ID-Cooling Frostflow X 360 + Gelid GC-Extreme |
| Memory | 64 GB (4x 16 GB) Corsair Dominator Platinum DDR4-2133 @ 9-11-11-21 1.35 V |
| Video Card(s) | ASUS TUF Gaming OC GeForce RTX 3090 24 GB GDDR6X |
| Storage | WD Green SN350 480 GB |
| Display(s) | Samsung The Frame 2022 32-inch (1080p60) |
| Case | Cooler Master MasterFrame 700 |
| Audio Device(s) | EVGA Nu Audio (classic) |
| Power Supply | EVGA 1300 G2 1.3 kW 80+ Gold |
| Mouse | Logitech G305 Lightspeed K/DA + Logitech G840 XL K/DA |
| Keyboard | Logitech G Pro K/DA with GX Brown switches |
| Software | Windows 11 Pro for Workstations 22H2 |
| Benchmark Scores | Older build pic (with R9 5950X installed): https://i.imgur.com/yxc0HrZ.jpg |
You're taking this (very old, pre-RDNA2) slide quite out of context. AMD isn't going to be running raytracing server farms for Radeon owners; that cloud-computing pitch is aimed at the application-specific market.
And not for me? Not sure what you mean by that. I know I'm only a hobo still running an RTX 3090 (smh, I don't even have a 4090 yet, what am I, poor?), but I enjoy raytraced gaming even if my wooden GPU only gets 100 fps or whatever at 1080p, without frame generation. How am I this poor?
I see people talking about cost per mm² of production. Sure, that's increased. But did you actually look at the die area each GPU uses?
RTX 3080: 628.4 mm²
RTX 4080: 379 mm²
Even if the per-mm² cost has increased, the die area has shrunk drastically, by about 40%. That does NOT justify the massive price increase on the cards. Nvidia is being greedy; it's a corporation, after all, so we expect that. The problem is that AMD isn't being competitive, and neither is Intel in this high-end space. Nvidia has the market by the balls: you don't buy AMD because the cards aren't very future-proof, and you don't buy Nvidia (but you will) because they're too expensive.
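To put numbers on that: a quick sketch using the die areas quoted above. The 50% cost-per-mm² increase below is a made-up figure purely for illustration, not an actual foundry price:

```python
# Die areas quoted above (mm^2).
ga102_mm2 = 628.4  # RTX 3080 (full GA102 die)
ad103_mm2 = 379.0  # RTX 4080 (AD103 die)

# The 4080's die is roughly 40% smaller.
shrink = 1 - ad103_mm2 / ga102_mm2
print(f"Die area reduction: {shrink:.0%}")

# Hypothetical: even if cost per mm^2 rose by 50% on the newer node,
# the raw silicon cost of the smaller die would still come out lower.
relative_cost = (ad103_mm2 * 1.5) / (ga102_mm2 * 1.0)
print(f"Relative silicon cost at +50% per mm^2: {relative_cost:.2f}x")
```

So even under a pessimistic (and invented) 50% wafer-cost hike, the smaller die doesn't obviously cost more silicon money than the old one, which is the point about pricing.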
To the people who say they don't believe in ray tracing: go live on Intel integrated graphics and tell me you're still fine with it for gaming. Graphics moves forward. Ray tracing solves problems that typical shader-based raster programs find difficult to scale, problems we've been struggling to remedy for a decade without dedicated hardware. Traditional triangle rasterization is at its limit of efficiency, and counterintuitive as it sounds, going the ray-tracing route is about making certain effects MORE efficient: otherwise you have to brute-force them with traditional shader programs, which end up slower (grab a GTX 1080 and run Quake 2 RTX on it; there, the entire ray-tracing stack runs in shaders).
The explanation for the die sizes is quite straightforward: the RTX 4080 is built on a much more advanced lithography node and uses a lower-segment ASIC (AD103) rather than the top-tier die (AD102), while the RTX 3080 used a heavily cut-down GA102: only 68 of the 84 compute units present in GA102 are enabled on an RTX 3080. That number rises to 70 on the 3080 12GB, 80 on the 3080 Ti, and 82 on the 3090, with the 3090 Ti getting a fully enabled processor. Shipping large harvested dies (lower-quality silicon!) with several units disabled to salvage yield tends to hurt power efficiency. Add first-generation GDDR6X memory and it's no wonder the original Ampere has seen better days.
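The GA102 harvesting ladder above is easier to see as percentages; a quick sketch using only the counts quoted in this post:

```python
# Enabled compute units per GA102-based SKU, per the figures above.
GA102_TOTAL = 84
skus = {
    "RTX 3080":      68,
    "RTX 3080 12GB": 70,
    "RTX 3080 Ti":   80,
    "RTX 3090":      82,
    "RTX 3090 Ti":   84,
}

for name, enabled in skus.items():
    print(f"{name}: {enabled}/{GA102_TOTAL} units ({enabled / GA102_TOTAL:.0%})")
```

The base 3080 ships with roughly 81% of the die active, which is exactly the kind of heavy harvesting described above.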
But indeed, I agree, Jensen has raised prices quite a bit this generation. Not only that, they've also left an insane amount of room for refresh SKUs, including a comfortable gap above the RTX 4090 for an eventual 4090 Ti, or perhaps a 30th-anniversary Titan Ada or something, since the RTX 4090 has only 128 of the 142 units on the AD102 processor enabled.
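Same arithmetic for the headroom left on AD102, using the 128/142 figure quoted above:

```python
# AD102 unit counts quoted above.
ad102_total = 142
rtx4090_enabled = 128

headroom = ad102_total - rtx4090_enabled
print(f"RTX 4090: {rtx4090_enabled}/{ad102_total} units "
      f"({rtx4090_enabled / ad102_total:.1%} of the die)")
print(f"Headroom for a refresh SKU: {headroom} disabled units")
```

That's 14 units, roughly 10% of the die, still dark on the flagship, which is the space a 4090 Ti or Titan-class refresh would slot into.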