
Calling all low and mid GPU owners - shall we swap RT for more performance or lower prices?

Would you be open to sacrificing the capability to run Ray Tracing?

  • Yes, for 30% lower price. Votes: 31 (48.4%)
  • Yes, for 30% more performance. Votes: 21 (32.8%)
  • No, I love RT even with low performance. Votes: 12 (18.8%)
  • Total voters: 64

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.94/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
I think the poll sort of says everything. A whopping 11% actually care about RT, whereas there is an even split among the rest between wanting more performance and more cost effectiveness. To be honest, that doesn't surprise me. I couldn't give two craps about RT. All I want is a cost-effective upgrade from my Vega 64, should I ever be in a position to build a new machine. That means a reasonable price and more performance than what I have now. If it comes with RT, fine, but it's not a requirement by any stretch of the imagination. This difference probably becomes even more stark when you start talking about mobile GPUs.

All in all, I still think RT is a niche market and nVidia is really trying to change that. I can't say it's succeeding, though.
 
Joined
Nov 27, 2023
Messages
1,075 (6.98/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original)
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (23H2)
Ultra settings don't make as much sense at 4K as they do at 1080p or even 1440p. You don't need that much antialiasing, for example. With DLSS/FSR/XeSS advancements, it's also no longer mandatory to play at native 4K, as native occasionally looks worse than DLSS (and I expect upscaling to become widespread by the late 2020s), and yeah, the performance gains are much more visible than the supersampling artifacts.
Antialiasing is the least of the problems, actually. We are no longer using forward renderers, and MSAA is dead. Modern temporal methods are almost free performance-wise, but you do pay in ghosting and artifacts, depending on the implementation. The expensive settings are now mostly stuff like volumetrics, global illumination, shadows, complex AO and so on. Whether those look better at 4K, and whether lowering them makes sense, is subjective.
 
Joined
Jan 14, 2019
Messages
9,902 (5.13/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
As long as it takes. What we are fighting for is the future of "gaming" design, not AI design disguised as gaming "advancement". If we are not lied to, we might support it.
The only choice you have is where you spend your money, unfortunately. If you disagree with a GPU's design or philosophy (although I'd argue against buying PC hardware based on philosophy), don't buy it. Although, this won't stop the masses from spending a month's salary on the latest Ultra Super RT GPU just because its model number is one higher than the one they currently have.
 
Joined
Dec 25, 2020
Messages
4,650 (3.81/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Razer DeathAdder Essential Mercury White
Keyboard Redragon Shiva Lunar White
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
It seems to me that you are very good at constructing long texts devoid of any argument, other than saying that everyone who thinks for themselves and questions things is a fanboy, and that absolutely everything Nvidia does is right, the future, undeniable, unquestionable. A suggestion: you might as well replace the text with a photo of you kissing a Huang statue, achieving the same effect!

RT is inefficient, consuming die space, energy, and resources, particularly in the context specified. With manufacturing processes becoming increasingly expensive and cache no longer shrinking substantially, proportional advancements are unlikely in the foreseeable future. Given where we stand today, the imminent challenges make it apparent that there's insufficient margin for genuine RT viability in average consumer GPUs. There is no counter-argument other than generic texts, because no one refutes reality.

Thanks for proving my point. It's just clubist resentment after all. Mind you, I'm the same guy who openly points out that the 4090 itself is a low-quality die, but nah, I'm just very good at long-winded texts to defend a corporation lmao
 
Joined
Jan 14, 2019
Messages
9,902 (5.13/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
The title poses a bit of a trick question, because answering it requires understanding why silicon is designed the way it is - both from a technical perspective and from a marketing and manufacturing one.

When a company like NVIDIA approaches silicon design, things like RT cores are not considered 'extra' or 'bonus' or 'optional'. These things are part of the very basic building blocks of a unified and scalable architecture. The outlier is the GTX 1660 and the transition period between the 10 series and the 20 series.

Typically, when making smaller dies, the question isn't whether RT core real estate can be replaced with more traditional CUDA cores, but what size of silicon and what performance output you want out of a product - since the building blocks already exist and include those RT cores. Not using them, or designing special small cores that exclude them, might in fact be very cost inefficient: you would have to reroute and redesign the architecture to not leave any wasted space. The only decision left for the silicon design team is when to press the go-ahead and implement these kinds of cores as part of your building blocks - a decision made by NVIDIA earlier, and by AMD a bit later with RDNA 2.

As a result, the entire stack gets RT cores, regardless of how viable the performance is when we talk about more entry-level products such as the RTX 3050 or even the RTX 4060. The idea behind persisting with all the non-traditional compute transistors in the silicon is that, as the technology matures and as using the same building blocks helps shape products, what didn't use to be viable will start being somewhat viable. There's quite a difference between how an RTX 2060 deals with ray tracing and how an RTX 4060 does in terms of end result.

So yes, it sucks that my 200 mm² GPU can't really handle ray tracing to a level that satisfies me, and that instead of more CUDA cores I get silicon space dedicated to RT. But that's how a master design works with scalable silicon, and that's NVIDIA's (or AMD's, if we were to talk about RDNA/2/3) decision, made in order to properly utilize their design teams and their rented fab nodes.

Eventually, more and more non-gaming applications are also starting to use this silicon real estate: modeling, architecture (houses and apartments, in this context) and many other types of CAD, simulation and engineering tasks. By now, everybody is implementing RT-intended hardware in their silicon, including ARM-based mobile SoCs. It's the current natural progression of GPUs.
It's all about costs and benefits. If it cost Nvidia or AMD 50% more to redesign an architecture to cram in 20% more shader cores instead of RT or AI hardware, they wouldn't do it. The GTX 16 series is an outlier. Somehow I doubt that Nvidia made truckloads of money on it (although personally, I think the 1660 Ti was quite good, and even the 1650 was a decent upgrade over a 1050 Ti).
 
Joined
Feb 24, 2023
Messages
2,240 (5.21/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / RX 480 8 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / Viewsonic VX3276-MHD-2
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / FSP Epsilon 700 W / Corsair CX650M [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
Whether those look better at 4K, and whether lowering them makes sense, is subjective.
In my case, these settings at High mostly look better at 4K than Ultra does at 1080p (I've never owned a 1440p display, though), and it saves about 10 to 30 percent of GPU punishment depending on the game and location. Considering I can't quite tell the difference between native and FSR Quality at 4K (if we don't count buggy implementations like the one in CP2077, for example), that's another 30 to 40 percent performance boost. This way, 30 FPS becomes 50. And at Balanced, it's still far ahead of native 1080p Ultra in terms of picture quality, and most importantly, I'm getting a 60 FPS experience. Going ultrawide 1440p would be better, but that 4K display was $150 and, honestly, I never thought it possible for such a cheap device to be so good. Not a single regret.
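The math checks out if you treat the two savings as stacking multiplicatively - a rough assumption, since real games rarely scale that cleanly:

```python
# Back-of-the-envelope FPS estimate; the gain factors are ballpark
# guesses from the post above, not measured numbers.
base_fps = 30.0
settings_gain = 1.25     # Ultra -> High frees roughly 10-30% of GPU time
fsr_quality_gain = 1.35  # FSR Quality at 4K: roughly 30-40% more frames
estimated = base_fps * settings_gain * fsr_quality_gain
print(f"{estimated:.0f} FPS")  # ~51 FPS, in line with the 30 -> 50 claim
```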
Although, this won't stop the masses from spending a month's salary on the latest Ultra Super RT GPU just because its model number is one higher than the one they currently have.
I just know the GPU I want to upgrade to doesn't exist yet, so I'm waiting for future generations. Yet I'm about to buy an RX 6900 XT, not because I want to upgrade my main rig, but because I want a fast backup GPU (RX 480 → RX 6700 XT). And I also wanna butcher said 6900 XT just for the memes.
 
Joined
Jan 14, 2019
Messages
9,902 (5.13/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
And even cutting out both RT and Tensor cores would not bring the performance boost to 30%. Price is a whole different discussion, though, especially in that market segment.
We wouldn't save on price, either, as the money saved on manufacturing a smaller die would first have to be spent on designing it.
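A quick sanity check on that 30% figure - the area fraction below is an illustrative guess, not a measured die number:

```python
# If RT + Tensor blocks took ~12% of the die (assumed, for illustration)
# and that area were refilled with shaders, how much speed would appear?
rt_tensor_area = 0.12
extra_shader_area = rt_tensor_area / (1 - rt_tensor_area)  # ~13.6% more shaders
scaling = 0.8  # performance rarely scales 1:1 with shader count
print(f"{extra_shader_area * scaling:.1%}")  # ~10.9% - nowhere near 30%
```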
 
Joined
Dec 25, 2020
Messages
4,650 (3.81/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Razer DeathAdder Essential Mercury White
Keyboard Redragon Shiva Lunar White
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
Sure, it's the future. But define 'future'. It might easily take another 10 years.

It took similarly long for DirectX 11 to be widely adopted. Budget concerns are what slow down progress at this point: making high-end video games with advanced graphics has become as expensive as, if not more expensive than, making movies, and few studios can afford these multimillion budgets. At the same time, having a great game with an alternative art direction or simpler graphics is quite feasible, so there you have it. :)
 
Joined
Sep 17, 2014
Messages
20,953 (5.96/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
It took similarly long for DirectX 11 to be widely adopted. Budget concerns are what slow down progress at this point: making high-end video games with advanced graphics has become as expensive as, if not more expensive than, making movies, and few studios can afford these multimillion budgets. At the same time, having a great game with an alternative art direction or simpler graphics is quite feasible, so there you have it. :)
It's not just budget.

Look at the die size of a 4090. This is what Nvidia can do at this point. We rely almost exclusively on meagre shrinks now to move forward. That stagnation isn't a budgetary issue, nor a game issue. It's even doubtful the hardware can deliver. RT adoption isn't business 'as usual'.

Also, it's definitely not a given that because a tech gets industry support and 'momentum', it succeeds and is here to stay. Look at AR and VR: they're in that space too, but they're not going places. We've had quite a few of these brainfarts in days gone by. And most of the time, the conclusion wrt these failures is 'it was too early / the tech wasn't ready / the market wasn't ready'. It's all more of the same though: some shit just doesn't stick.
 
Joined
Nov 27, 2023
Messages
1,075 (6.98/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original)
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (23H2)
Thanks for proving my point. It's just clubist resentment after all. Mind you, I'm the same guy who openly points out that the 4090 itself is a low-quality die, but nah, I'm just very good at long-winded texts to defend a corporation lmao
I am still finding it hilarious that AD102 has never been implemented as a full-fat die in anything. Even insanely expensive professional cards don't max it out. I assume the yields are gutter trash.
 
Joined
Feb 24, 2023
Messages
2,240 (5.21/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / RX 480 8 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / Viewsonic VX3276-MHD-2
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / FSP Epsilon 700 W / Corsair CX650M [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
I know that feeling far too well. I have a shelf full of backup hardware. Some of them are actual backup stuff (like my 6500 XT), but others are mainly of sentimental value, like my half-height, half-length, LPP GT 710.
You're way more extreme than me in this department. I only have one full backup system at a time, usually even less. Like today, for example: a Z490 Vision D, a couple of decent PSUs, an RX 480, a Full HD monitor, a couple of damaged (but still acceptable for basic tasks) keyboards, and a cheeky little 250 GB HDD is all I have for backup as of now. Gotta buy an SSD, a CPU, a mouse, and RAM to be 100% backed up.

I wonder whether it would make any sense to create dedicated RT devices. Like, you have your CPU running its CPU things, your GPU doing its raster and a little ray tracing here and there, and another PCIe slot inhabitant that strictly enhances RT performance. Like ASICs, but made for RT instead of mining, and mounted inside a gaming PC.

Or... might injecting RT cores instead of E-cores make LGA1700-like CPUs more interesting..?
 
Joined
Dec 25, 2020
Messages
4,650 (3.81/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Razer DeathAdder Essential Mercury White
Keyboard Redragon Shiva Lunar White
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
It's not just budget.

Look at the die size of a 4090. This is what Nvidia can do at this point. We rely almost exclusively on meagre shrinks now to move forward. That stagnation isn't a budgetary issue, nor a game issue. It's even doubtful the hardware can deliver. RT adoption isn't business 'as usual'.

The 4090 being severely cut down (almost as much as a 6800 relative to the 6900 XT), with power budgets thrown sky high to compensate, should tell you a thing or two about yields at this scale, particularly considering not even enterprise has a maxed-out AD102 yet.
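For context, the enabled-unit ratios from the public spec sheets (SM/CU counts only - a crude proxy that ignores clocks and memory config):

```python
# Fraction of each die actually enabled, per published unit counts.
ad102_sms, rtx4090_sms = 144, 128  # full AD102 vs. shipping RTX 4090
navi21_cus, rx6800_cus = 80, 60    # full Navi 21 vs. RX 6800
print(f"RTX 4090: {rtx4090_sms / ad102_sms:.0%} of AD102 enabled")   # 89%
print(f"RX 6800:  {rx6800_cus / navi21_cus:.0%} of Navi 21 enabled") # 75%
```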

Budget concerns are quite real; RT adoption is actually supposed to simplify development and reduce budgets over time as well. There are even market concerns: not everyone has the latest hardware, and limiting your customer base is very unwise in a business sense.

Ada is the third wave of RT-ready hardware from Nvidia, but Turing is similar to Evergreen in that it pioneered the "generational leap tech" (with the ATI HD 5000 cards, it was DirectX 11), so normalizing for AMD's lag, let's say that the RT era truly started with RDNA 2 and Ampere, roughly 3 years ago.

By the time the HD 5870 turned 3 in 2012, the vast majority of games still targeted DX9 on Windows XP, including major AAAs like Borderlands 2, while we were still in awe of quad-SLI builds pushing Metro 2033 with DX11 tessellation.

If anything, we are doing quite fine so far.
 
Joined
Nov 27, 2023
Messages
1,075 (6.98/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original)
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (23H2)
I wonder whether it would make any sense to create dedicated RT devices. Like, you have your CPU running its CPU things, your GPU doing its raster and a little ray tracing here and there, and another PCIe slot inhabitant that strictly enhances RT performance. Like ASICs, but made for RT instead of mining, and mounted inside a gaming PC.
This is the same idea that was already tried with dedicated physics acceleration by Ageia before it was bought out by NVidia. It's not practical. The trend in computing for years has been heterogeneous hardware that can do all or most things by itself. Dedicated anything has gone the way of the dodo, unless we're talking about acceleration for certain tasks on the SoC, like Apple and now Intel and AMD do, or (surprise) RT and AI acceleration as part of GPGPU.
 
Joined
Jan 14, 2019
Messages
9,902 (5.13/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
You're way more extreme than me in this department. I only have one full backup system at a time, usually even less. Like today, for example: a Z490 Vision D, a couple of decent PSUs, an RX 480, a Full HD monitor, a couple of damaged (but still acceptable for basic tasks) keyboards, and a cheeky little 250 GB HDD is all I have for backup as of now. Gotta buy an SSD, a CPU, a mouse, and RAM to be 100% backed up.
I'm way too extreme to recommend the same to others. :ohwell: I've got two HTPCs (one of them has an 11700 and now a 2070 in it), about 2-3 backup CPUs and GPUs from each vendor, and several backup SSDs.

I wonder whether it would make any sense to create dedicated RT devices. Like, you have your CPU running its CPU things, your GPU doing its raster and a little ray tracing here and there, and another PCIe slot inhabitant that strictly enhances RT performance. Like ASICs, but made for RT instead of mining, and mounted inside a gaming PC.
I think the main issue would be PCIe bandwidth and latency... although it might finally be a solid use case for gen 5 speeds (unlike those ridiculous SSDs with slapped-on CPU coolers).
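A back-of-the-envelope on the bandwidth side, assuming such an add-in card would need the G-buffer shipped across the bus every frame (the bytes-per-pixel figure is a rough guess):

```python
# One-way transfer time for a 4K G-buffer over PCIe 4.0 x16,
# against a 60 FPS frame budget.
pixels = 3840 * 2160
bytes_per_pixel = 16     # assumed packed normals/depth/albedo/material data
pcie4_x16 = 32e9         # ~32 GB/s practical one-way bandwidth
transfer_ms = pixels * bytes_per_pixel / pcie4_x16 * 1e3
print(f"{transfer_ms:.1f} ms of a {1000 / 60:.1f} ms frame, each direction")
# ~4.1 ms of 16.7 ms - before the add-in card traces a single ray
```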
 
Joined
Oct 8, 2017
Messages
135 (0.06/day)
RT is the first thing I disable (even when I had my 4080), as it's way too subtle.
Maybe it's my 43-year-old eyes, but I can BARELY notice RT visually... the performance hit, I can notice, though.
I'm in the same boat as you.

All the games I have played have been fine without RT. I would rather have a game that plays smoothly with good visuals than one that plays like manure but lets me see a billboard "perfectly" in a window.
 
Joined
Sep 17, 2014
Messages
20,953 (5.96/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
The 4090 being severely cut down (almost as much as a 6800 relative to the 6900 XT), with power budgets thrown sky high to compensate, should tell you a thing or two about yields at this scale, particularly considering not even enterprise has a maxed-out AD102 yet.

Budget concerns are quite real; RT adoption is actually supposed to simplify development and reduce budgets over time as well. There are even market concerns: not everyone has the latest hardware, and limiting your customer base is very unwise in a business sense.

Ada is the third wave of RT-ready hardware from Nvidia, but Turing is similar to Evergreen in that it pioneered the "generational leap tech" (with the ATI HD 5000 cards, it was DirectX 11), so normalizing for AMD's lag, let's say that the RT era truly started with RDNA 2 and Ampere, roughly 3 years ago.

By the time the HD 5870 turned 3 in 2012, the vast majority of games still targeted DX9 on Windows XP, including major AAAs like Borderlands 2, while we were still in awe of quad-SLI builds pushing Metro 2033 with DX11 tessellation.

If anything, we are doing quite fine so far.
I totally respect your view on it; I'm just a lot less certain of progress myself, but then again, I'm a pessimist by nature ;) A big part of that view is also based on the state of the world and the pressure on chips for the last... well, many years, really. Things are getting worse - the economic conditions, too. Budget concerns won't get lower.

I am still finding it hilarious that AD102 has never been implemented as a full-fat die in anything. Even insanely expensive professional cards don't max it out. I assume the yields are gutter trash.
Yep, Nvidia is definitely going to have to adopt chiplets in the gaming segment sooner or later...
 

Deleted member 57642

Guest
RTX is pure marketing for the 2xxx series and the low-to-mid-range 3xxx and 4xxx. It's part of the card, and you paid/pay for that feature - but can't actually use it (most games having this feature are unplayable if ray tracing is activated on these cards).

And yes - the people who design these cards "knew what they were doing". They knew these cards suck big time (again, practically useless) - but didn't even take the moral approach into account (to keep this feature for high-end cards, or at least cards where this feature actually makes sense and can be used). Instead, they use this feature as "a paid demo" for high-end or future models. As in, even with a low-end RTX card, you might be able to activate ray tracing at the lowest resolution and get a 7 FPS ray tracing demo... where only the full card (cause low-to-mid range are more like sample cards) can use this feature at playable rates.

Last but not least - how many products have useless features which make no sense, but are still there? Clever design? You could put it like that - if you admire Machiavellianism.
 
Joined
Apr 12, 2013
Messages
6,750 (1.67/day)
making high-end video games with advanced graphics has become as expensive as, if not more expensive than, making movies, and few studios can afford these multimillion budgets
Is it really? Why do end users still do beta testing when the game's released?

How many Marvel movies with crappy CGI do well?
 
Joined
Jun 3, 2008
Messages
393 (0.07/day)
Location
Pacific Coast
System Name Z77 Rev. 1
Processor Intel Core i7 3770K
Motherboard ASRock Z77 Extreme4
Cooling Water Cooling
Memory 2x G.Skill F3-2400C10D-16GTX
Video Card(s) EVGA GTX 1080
Storage Samsung 850 Pro
Display(s) Samsung 28" UE590 UHD
Case Silverstone TJ07
Audio Device(s) Onboard
Power Supply Seasonic PRIME 600W Titanium
Mouse EVGA TORQ X10
Keyboard Leopold Tenkeyless
Software Windows 10 Pro 64-bit
Benchmark Scores 3DMark Time Spy: 7695
Where is the option "Yes, I think RT is a gimmick."?
 
Joined
Jan 4, 2013
Messages
1,164 (0.28/day)
Location
Denmark
System Name R9 5950x/Skylake 6400
Processor R9 5950x/i5 6400
Motherboard Gigabyte Aorus Master X570/Asus Z170 Pro Gaming
Cooling Arctic Liquid Freezer II 360/Stock
Memory 4x8GB Patriot PVS416G4440 CL14/G.S Ripjaws 32 GB F4-3200C16D-32GV
Video Card(s) 7900XTX/6900XT
Storage RIP Seagate 530 4TB (died after 7 months), WD SN850 2TB, Aorus 2TB, Corsair MP600 1TB / 960 Evo 1TB
Display(s) 3x LG 27gl850 1440p
Case Custom builds
Audio Device(s) -
Power Supply Silverstone 1000watt modular Gold/1000Watt Antec
Software Win11pro/win10pro / Win10 Home / win7 / wista 64 bit and XPpro
The pursuit of incorporating Ray Tracing (RT) into games, started by Nvidia, comes at the cost of allocating space on the die exclusively for RT. This valuable space could otherwise be utilized for additional shaders, thereby enhancing overall performance. This trade-off is becoming increasingly apparent in the escalating prices of GPUs, driven by the soaring costs of chip production and development and a divergence from advancements in density.

Achieving higher performance in Ray Tracing entails augmenting the dedicated hardware on the die for this purpose. There's no magic involved, and as a consequence, GPUs are likely to become so expensive that what is currently considered low-end will carry the equivalent of today's mid-range pricing. Nvidia isn't dismayed by this situation, because its profit is directly tied to a percentage of the price: the greater the price, the larger its share.

It is evident that low and mid-tier GPUs lack the necessary power to handle Ray Tracing effectively, except when it is employed in an extremely limited manner. I won't even delve into Path Tracing (PT), given its resource-intensive nature. So, taking all this into account, answer the question below.

- Would you be open to sacrificing the capability to run Ray Tracing on mid and low-end (under US$500) GPUs in exchange for either a 30% boost in performance or a 30% reduction in price, while maintaining the same level of performance as the current lineup?
The very premise is a misunderstanding of market mechanisms - competition gives lower prices, not restrictions like that. Just see how AMD's Ryzen has changed the CPU price/performance ratio - no one wants 4-core CPUs today (even though they seem to be reappearing). So if you want more performance at a lower price - skip the upgrade cycle, buy used, or buy an Intel GPU.
 
Joined
Oct 6, 2021
Messages
1,440 (1.54/day)
Thanks for proving my point. It's just clubist resentment after all. Mind you, I'm the same guy who openly points out that the 4090 itself is a low-quality die, but nah, I'm just very good at long-winded texts to defend a corporation lmao
Yeah, you complain about Nvidia and still throw money into Jensen's pocket - great critic. It has everything to do with the topic at hand...

In essence, your argument boils down to the assertion that personal preferences shape opinions, with statements like "You're Y, so you dislike RT, and I'm X, so I favor RT." However, the focus remains on individuals rather than the underlying ideas. I've previously explained... and the reason why I find RT unfeasible and useless in the context presented is also in the initial post. It's clear that most seem to agree that more performance or cheaper GPUs are preferable to the ability to run RT on GPUs in this segment.

The very premise is a misunderstanding of market mechanisms - competition gives lower prices, not restrictions like that. Just see how AMD's Ryzen has changed the CPU price/performance ratio - no one wants 4-core CPUs today (even though they seem to be reappearing). So if you want more performance at a lower price - skip the upgrade cycle, buy used, or buy an Intel GPU.

I agree that we need competition, but on the manufacturing process side, TSMC has been swimming alone for half a decade. When production and development costs escalate exponentially with each generation, and the density provided does not align proportionally with the price, companies operate on the anticipation of requiring ever more funds.
 
Joined
Feb 24, 2023
Messages
2,240 (5.21/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / RX 480 8 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / Viewsonic VX3276-MHD-2
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / FSP Epsilon 700 W / Corsair CX650M [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
you complain about Nvidia and still throw money into Jensen's pocket
Maybe, just maybe, because AMD GPUs are way worse?
 
Joined
Sep 17, 2014
Messages
20,953 (5.96/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Yeah, you complain about Nvidia and still throw money into Jensen's pocket - great critic. It has everything to do with the topic at hand...

In essence, your argument boils down to the assertion that personal preferences shape opinions, with statements like "You're Y, so you dislike RT, and I'm X, so I favor RT." However, the focus remains on individuals rather than the underlying ideas. I've previously explained... and the reason why I find RT unfeasible and useless in the context presented is also in the initial post. It's clear that most seem to agree that more performance or cheaper GPUs are preferable to the ability to run RT on GPUs in this segment.
That is true, but until consumers vote with their wallets, that shit ain't happening, or not by a lot. Polls do not sell GPUs and people are hypocrites - or, put differently, commerce is stronk.

Maybe, just maybe, because AMD GPUs are way worse?
Except they ain't ;) The only drawback is lower RT perf. FSR is catching up. You get more raster perf per $ and more VRAM. I'm still problem-free with RDNA3. No regrets after over 10 years on team green. Nvidia served me well, but I'll be damned before I start serving Nvidia, which is inherent to purchasing their product carrying the RTX sticker. They can stick their planned obsolescence right back in Jensen's shit oven. I vote with my wallet.

Chiplets are the future of GPU progress and cheaper silicon. Not monolithic chips loaded with software to make them competitive. Nvidia wants to create a performance black box with proprietary bullshit. F.U.C.K. That.
 
Joined
Jul 13, 2016
Messages
2,845 (1.00/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
It's 100% appropriate to compare each card's performance relative to the games people were playing when it came out, because that's what people are going to do: buy a current-gen GPU and play current games. Someone who buys a 7600 or 4060 will have a much better gaming experience, even with the most demanding games today, than someone with a 950 in 2015, because these cards are a tier up. Unfortunately, they cost it too.

That's not the problem; the problem is that you are trying to compare two entirely different test suites. I stated the reasons why in my last comment; I will not restate them.

That's what I responded to and continue to disagree with, as their performance at 1080p Ultra is well above 60 fps. By this measure, the 950 was one of the most useless releases ever with its 45 fps, so I fail to understand how it can be used as a good example when the others so easily outperformed it in their respective times.

You are completely ignoring the fact that perceptions of how much FPS you should get out of a GPU have completely changed over the past 9 years. A 950 hitting 45 FPS at 1080p then was not nearly as bad as a modern graphics card hitting 45 FPS at 1080p now. This is why relative performance to the flagship is simply better: it provides context for the value you are actually getting.
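Put as a formula, the comparison being argued for is just a normalization - the FPS figures below are placeholders for illustration, not benchmark data:

```python
# Judge a card by its share of the contemporary flagship, not by raw FPS.
def relative_perf(card_fps: float, flagship_fps: float) -> float:
    return card_fps / flagship_fps

print(f"{relative_perf(45, 130):.0%}")  # hypothetical GTX 950 vs. 980 Ti tier
print(f"{relative_perf(75, 160):.0%}")  # hypothetical RTX 4060 vs. 4090 tier
```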
 