
Calling all low and mid GPU owners - shall we swap RT for more performance or lower prices?

Would you be open to sacrificing the capability to run Ray Tracing?

  • Yes, for 30% lower price. - Votes: 31 (48.4%)
  • Yes, for 30% more performance. - Votes: 21 (32.8%)
  • No, I love RT even with low performance. - Votes: 12 (18.8%)
  • Total voters: 64
Joined
Feb 24, 2023
Messages
2,240 (5.21/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / RX 480 8 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / Viewsonic VX3276-MHD-2
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / FSP Epsilon 700 W / Corsair CX650M [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
the only drawback is lower RT perf.
Errr... nope?

• Multi-monitor power issues
• Borderline complete absence of CUDA equivalent
• Absence of DLSS
• XeSS works worse on AMD GPUs than it does on Ada and Arc (Ada gets as much as Arc, just for the record)
• FSR is miles behind and it's implemented outright badly in several games
• 20+ % worse power efficiency
• Worse power spikes
• No RTX 4090 competition whatsoever, even no plans on doing so
• Absurd reliance on Infinity Cache

And yeah, RT performance is not just lower. It's much lower. Yes, leather jacket guy is trying his hardest to make it "a performance blackbox with proprietary bollocks," no denial on that. Yet AMD are still playing it Bogart like they're not 3 generations behind in RT and supersampling but rather are a real PITA for nVidia, which is untrue. We have a lot of reasons to hate nVidia but AMD deserve all the hatred even more: they do not try to be better than nVidia. Just like in the meme, "Democratic party is worse than Republican only because it's not better."
 
Joined
Jul 20, 2020
Messages
829 (0.60/day)
System Name Gamey #1 / #2
Processor Ryzen 7 5800X3D / Core i7-9700F
Motherboard Asrock B450M P4 / Asrock B360M P4
Cooling IDCool SE-226-XT / CM Hyper 212
Memory 32GB 3200 CL16 / 32GB 2666 CL14
Video Card(s) PC 6800 XT / Soyo RTX 2060 Super
Storage 4TB Team MP34 / 512G Tosh RD400+2TB WD3Dblu
Display(s) LG 32GK650F 1440p 144Hz VA
Case Corsair 4000Air / CM N200
Audio Device(s) Dragonfly Black
Power Supply EVGA 650 G3 / Corsair CX550M
Mouse JSCO JNL-101k Noiseless
Keyboard Steelseries Apex 3 TKL
Software Win 10, Throttlestop
That's not the problem, the problem is you are trying to compare two entirely different test suites. I stated the reasons why in my last comment, I will not restate them.

You are completely ignoring the fact that perceptions of how much FPS you should get out of a GPU have completely changed since 9 years ago. A 950 hitting 45 FPS at 1080p would not be nearly as bad as a modern graphics card hitting 45 FPS at 1080p. This is why relative performance to the flagship is simply better; it provides context for the value you are actually getting.

It's cool, we can agree to disagree.

I'm looking at it from an average consumer's viewpoint. Few people even on this site focus on relative performance, so the average person considering a 4060 or 7600 isn't comparing it to a $1600+ GPU. That's why I write about real performance for money, not relative. Consumers who bought a 950 8 years ago got notably lower performance in their games than 2023 consumers get with their 4060 or 7600, which is why describing the 4060 and 7600 as "absolute trash" is completely wrong.
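To illustrate the two metrics being argued over, here's a minimal sketch; the numbers are hypothetical placeholders, not benchmark results:

```python
def relative_perf(card_fps, flagship_fps):
    """Performance as a fraction of the current flagship."""
    return card_fps / flagship_fps

def fps_per_dollar(card_fps, price_usd):
    """'Real performance for money': frames per second per dollar."""
    return card_fps / price_usd

# Hypothetical numbers purely for illustration:
print(relative_perf(60, 160))   # 0.375 -> card sits at 37.5% of the flagship
print(fps_per_dollar(60, 300))  # 0.2 FPS per dollar spent
```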
 
Joined
Jun 27, 2019
Messages
1,863 (1.05/day)
Location
Hungary
System Name I don't name my systems.
Processor i5-12600KF 'stock power limits/-120mV undervolt+contact frame'
Motherboard Asus Prime B660-PLUS D4
Cooling ID-Cooling SE 224 XT ARGB V3 'CPU', 4x Be Quiet! Light Wings + 2x Arctic P12 black case fans.
Memory 4x8GB G.SKILL Ripjaws V DDR4 3200MHz
Video Card(s) Asus TuF V2 RTX 3060 Ti @1920 MHz Core/950mV Undervolt
Storage 4 TB WD Red, 1 TB Silicon Power A55 Sata, 1 TB Kingston A2000 NVMe, 256 GB Adata Spectrix s40g NVMe
Display(s) 29" 2560x1080 75 Hz / LG 29WK600-W
Case Be Quiet! Pure Base 500 FX Black
Audio Device(s) Onboard + Hama uRage SoundZ 900+USB DAC
Power Supply Seasonic CORE GM 500W 80+ Gold
Mouse Canyon Puncher GM-20
Keyboard SPC Gear GK630K Tournament 'Kailh Brown'
Software Windows 10 Pro
That's not the problem, the problem is you are trying to compare two entirely different test suites. I stated the reasons why in my last comment, I will not restate them.



You are completely ignoring the fact that perceptions of how much FPS you should get out of a GPU have completely changed since 9 years ago. A 950 hitting 45 FPS at 1080p would not be nearly as bad as a modern graphics card hitting 45 FPS at 1080p. This is why relative performance to the flagship is simply better; it provides context for the value you are actually getting.

I actually happened to be one of those ppl who bought a GTX 950 in 2015 - the Gigabyte Xtreme version, which was closer to a 960 - and I used it with a 1080p 60 Hz monitor for almost 3 years until I upgraded to an RX 570 and later a GTX 1070.
For me that 950 was enough, and it was all I could afford at the time. Fast forward to September 2022, when I bought a second-hand RTX 3060 Ti.
It's hard to compare since a lot has changed since then, and the 3060 Ti cost me 3 times more than that 950 did back in 2015, but ye, the 950 never felt useless back then, nor does my 3060 Ti now.

Btw it's kinda funny in a way that the topic was supposed to target low-mid range users and then we have ppl throwing 4090s around here. :oops: 'this is not aimed at you, just in general'
 
Joined
Feb 24, 2023
Messages
2,240 (5.21/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / RX 480 8 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / Viewsonic VX3276-MHD-2
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / FSP Epsilon 700 W / Corsair CX650M [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
I actually happened to be one of those ppl who bought a GTX 950 in 2015
I did the same thing in 2016. It felt much worse than the RX 6700 XT does now - there were a number of games I wanted to play at higher settings because they looked a bit ugly. With the 6700 XT, that's no longer the case. Of course I paid 2 times more, but I reckon I got 3+ times more value outta it. My only regret is that I hurried and got a 6700 XT instead of a 6800 non-XT, which is about 40% better.
target low-mid range users and then we have ppl throwing around 4090s around here.
But hey, RTX 4090 is an entry-level professional GPU! Higher tier professional GPUs cost five and sometimes even six figures.
 
Joined
Jun 27, 2019
Messages
1,863 (1.05/day)
Location
Hungary
System Name I don't name my systems.
Processor i5-12600KF 'stock power limits/-120mV undervolt+contact frame'
Motherboard Asus Prime B660-PLUS D4
Cooling ID-Cooling SE 224 XT ARGB V3 'CPU', 4x Be Quiet! Light Wings + 2x Arctic P12 black case fans.
Memory 4x8GB G.SKILL Ripjaws V DDR4 3200MHz
Video Card(s) Asus TuF V2 RTX 3060 Ti @1920 MHz Core/950mV Undervolt
Storage 4 TB WD Red, 1 TB Silicon Power A55 Sata, 1 TB Kingston A2000 NVMe, 256 GB Adata Spectrix s40g NVMe
Display(s) 29" 2560x1080 75 Hz / LG 29WK600-W
Case Be Quiet! Pure Base 500 FX Black
Audio Device(s) Onboard + Hama uRage SoundZ 900+USB DAC
Power Supply Seasonic CORE GM 500W 80+ Gold
Mouse Canyon Puncher GM-20
Keyboard SPC Gear GK630K Tournament 'Kailh Brown'
Software Windows 10 Pro
I did the same thing in 2016. It felt much worse than the RX 6700 XT does now - there were a number of games I wanted to play at higher settings because they looked a bit ugly. With the 6700 XT, that's no longer the case. Of course I paid 2 times more, but I reckon I got 3+ times more value outta it. My only regret is that I hurried and got a 6700 XT instead of a 6800 non-XT, which is about 40% better.
I think the first game where I felt my 950 was lacking was the 2016 DOOM, but I did finish the game with it regardless. 'on something like medium settings I think'
Wolfenstein II: The New Colossus was a bit problematic too, and that's when I started considering an upgrade, but overall it served me long enough + I don't always play the newest or most demanding games anyway. 'MMOs/ARPGs weren't that hard to run at the time'

I do agree that my 3060 Ti feels like the better value card even though, ye, it did cost me a lot more, even on the second-hand market. 'matter of fact that 950 was my last brand-new card, everything else was and is second hand'
I still can't compare them properly + back then we did not have upscaling either, and I do use DLSS with my 3060 Ti whenever possible.
O ye, the non-XT 6800 was also on my radar in 2022 but it was out of my budget.
 
Joined
Jan 4, 2013
Messages
1,164 (0.28/day)
Location
Denmark
System Name R9 5950x/Skylake 6400
Processor R9 5950x/i5 6400
Motherboard Gigabyte Aorus Master X570/Asus Z170 Pro Gaming
Cooling Arctic Liquid Freezer II 360/Stock
Memory 4x8GB Patriot PVS416G4440 CL14/G.S Ripjaws 32 GB F4-3200C16D-32GV
Video Card(s) 7900XTX/6900XT
Storage RIP Seagate 530 4TB (died after 7 months), WD SN850 2TB, Aorus 2TB, Corsair MP600 1TB / 960 Evo 1TB
Display(s) 3x LG 27gl850 1440p
Case Custom builds
Audio Device(s) -
Power Supply Silverstone 1000watt modular Gold/1000Watt Antec
Software Win11 Pro / Win10 Pro / Win10 Home / Win7 / Vista 64-bit and XP Pro
I agree that we need competition, but on the manufacturing process side, TSMC has been swimming alone for half a decade. When production and development costs escalate exponentially with each generation, and the density provided does not proportionally align with the price, companies operate based on the anticipation of requiring increasingly more funds.
Well, I respect your viewpoint, but I still think the only solution to high GPU prices is competition, and us consumers holding back and trying alternatives other than the market leader. But I also think that RT is overrated - my latest GPU purchase was based on the simple requirements that it fit in my Fractal Meshify case and support a 3-monitor setup.
[Attachment: JankMod.JPG]


I had to move my AIO cooler's three fans to the front of the mounting bracket since the card didn't quite make it - but with some gaffa tape and spacers between them and the dust filter, it worked, and I got my 3-monitor setup up and running.

The non-reference XTX and 4090 were not an option unless I rebuilt in another case. Perhaps I'll go green next time, or perhaps blue :D

But on topic - in the current market, used cards or last gen are the budget option, I guess.
 
Joined
Feb 18, 2005
Messages
5,238 (0.75/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
Yeah, you complain about Nvidia and still throw money into Jensen's pocket, great critic.
Blaming a consumer for buying NVIDIA because AMD isn't capable of making a product that's competitive at the high end. Your true colours are showing again, and they're ugly.

Except they aint ;) the only drawback is lower RT perf.
"They aren't slower except for this specific workload in which they are" great argument there brother, I'm sure the Nobel committee are gonna be contacting you any day now.
 
Joined
Sep 17, 2014
Messages
20,953 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
"They aren't slower except for this specific workload in which they are" great argument there brother, I'm sure the Nobel committee are gonna be contacting you any day now.
They already did, but I refused because you disagree.

Errr... nope?

• Multi-monitor power issues
• Borderline complete absence of CUDA equivalent
• Absence of DLSS
• XeSS works worse on AMD GPUs than it does on Ada and Arc (Ada gets as much as Arc, just for the record)
• FSR is miles behind and it's implemented outright badly in several games
• 20+ % worse power efficiency
• Worse power spikes
• No RTX 4090 competition whatsoever, even no plans on doing so
• Absurd reliance on Infinity Cache

And yeah, RT performance is not just lower. It's much lower. Yes, leather jacket guy is trying his hardest to make it "a performance blackbox with proprietary bollocks," no denial on that. Yet AMD are still playing it Bogart like they're not 3 generations behind in RT and supersampling but rather are a real PITA for nVidia, which is untrue. We have a lot of reasons to hate nVidia but AMD deserve all the hatred even more: they do not try to be better than nVidia. Just like in the meme, "Democratic party is worse than Republican only because it's not better."
Fair enough.
 
Joined
Apr 30, 2020
Messages
855 (0.59/day)
System Name S.L.I + RTX research rig
Processor Ryzen 7 5800X 3D.
Motherboard MSI MEG ACE X570
Cooling Corsair H150i Cappellx
Memory Corsair Vengeance pro RGB 3200mhz 16Gbs
Video Card(s) 2x Dell RTX 2080 Ti in S.L.I
Storage Western digital Sata 6.0 SDD 500gb + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) HP X24i
Case Corsair 7000D Airflow
Power Supply EVGA G+1600watts
Mouse Corsair Scimitar
Keyboard Cosair K55 Pro RGB
Ok, you all need to stop and understand that ray tracing itself has been around much longer than any graphics card has.
Ray tracing is older than rasterized rendering too, since it was done on CPUs first.
I don't think it's an either-or situation. Even production-grade ray/path tracing has a lot of tricks (and some fundamental pipeline stages) that would require the same hardware traditional game renderers use.
Resource allocation ratios in the hardware would change for an RT-centric world, granted, but I don't think we're ever going to "remove the rasterization."


There is no need for any rasterization output for ray tracing; it's all calculation-based variations of a point.

From Bing Chat (GPT-4):
Ray tracing is a rendering technique that has been around since the 1960s, while rasterization is a more recent technique that has been used in computer graphics since the 1980s. Ray tracing is a technique that simulates the behavior of light in a scene, while rasterization is a process of projecting triangles onto a grid of pixels and keeping the closest ones.
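To make that contrast concrete, here's a tiny self-contained toy sketch (ASCII "framebuffer", orthographic rays, invented scene; purely illustrative, not how any real renderer is written): rasterization asks "which pixels does this triangle cover?", ray tracing asks "what does this pixel's ray hit?".

```python
WIDTH, HEIGHT = 16, 8  # a tiny "screen" so the output stays readable

def rasterize_triangle(v0, v1, v2):
    """Rasterization: test each pixel against the triangle's edge functions.
    (With many triangles you'd also keep a depth buffer for the closest hit.)"""
    def edge(a, b, p):  # signed area: which side of edge a->b is p on?
        return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])
    img = [['.'] * WIDTH for _ in range(HEIGHT)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            p = (x + 0.5, y + 0.5)
            e = (edge(v0, v1, p), edge(v1, v2, p), edge(v2, v0, p))
            if all(v >= 0 for v in e) or all(v <= 0 for v in e):  # inside test
                img[y][x] = '#'
    return img

def ray_trace_sphere(cx, cy, radius):
    """Ray tracing: one orthographic ray per pixel, analytic sphere hit test."""
    img = [['.'] * WIDTH for _ in range(HEIGHT)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            dx, dy = x + 0.5 - cx, y + 0.5 - cy
            if dx * dx + dy * dy <= radius * radius:  # does this ray hit?
                img[y][x] = '#'
    return img

for row in rasterize_triangle((1, 1), (14, 2), (4, 7)):
    print(''.join(row))
print()
for row in ray_trace_sphere(8, 4, 3.2):
    print(''.join(row))
```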

The cards we're getting are trying to be the jack of all trades for everything.

Secondly, I made this point before, but the resolution problem is that most people think only in the 2D dimensions of the screen area while rendering. In reality, going from a 1080p scene to a 2160p scene, the graphics card has to render more than 8x what you'd think, because it renders height, width, and depth.
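For reference, the raw screen-space arithmetic: 1920 × 1080 = 2,073,600 pixels, while 3840 × 2160 = 8,294,400, i.e. exactly 4x as many pixels per frame; any factor beyond that, on this argument, has to come from the extra per-pixel depth and shading work rather than from the 2D pixel count alone.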
 

freeagent

Moderator
Staff member
Joined
Sep 16, 2018
Messages
7,558 (3.68/day)
Location
Winnipeg, Canada
Processor AMD R9 5900X
Motherboard Asus Crosshair VIII Dark Hero
Cooling Thermalright Aqua Elite 360 V3 1x TL-B12, 2x TL-C12 Pro, 2x TL K12
Memory 2x8 G.Skill Trident Z Royal 3200C14, 2x8GB G.Skill Trident Z Black and White 3200 C14
Video Card(s) Zotac 4070 Ti Trinity OC
Storage WD SN850 1TB, SN850X 2TB, Asus Hyper M.2, 2x SN770 1TB
Display(s) LG 50UP7100
Case Fractal Torrent Compact RGB
Audio Device(s) JBL 2.1 Deep Bass
Power Supply EVGA SuperNova 750w G+, Monster HDP1800
Mouse Logitech G502 Hero
Keyboard Logitech G213
VR HMD Oculus 3
Software Yes
Benchmark Scores Yes
I really don’t mind RT, probably one of the few guys who will admit it.

Look, it’s really simple. If AMD was actually interested in getting their cards into people’s systems, they would price accordingly. But the fact that they lag behind while still charging a premium tells me that they are like any other company. More profit. So until they can pull their balls together, they will always be number 2.. or maybe even number 3.
 
Joined
Feb 24, 2023
Messages
2,240 (5.21/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / RX 480 8 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / Viewsonic VX3276-MHD-2
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / FSP Epsilon 700 W / Corsair CX650M [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
maybe even number 3.
That's possible, yet not within the nearest couple of generations. Intel's drivers need too much love to be optimised enough to overtake AMD within a couple of years. By 2030, given AMD stay AMD and Intel actively improve their wares, it's a real possibility.
I really don’t mind RT, probably one of the few guys who will admit it.
I love it but can't afford it unfortunately. 4080 is kinda three times more expensive than I can spend. And I also despise nVidia's new power connector.
 
Joined
Oct 6, 2021
Messages
1,440 (1.54/day)
Blaming a consumer for buying NVIDIA because AMD isn't capable of making a product that's competitive at the high end. Your true colours are showing again, and they're ugly.


"They aren't slower except for this specific workload in which they are" great argument there brother, I'm sure the Nobel committee are gonna be contacting you any day now.

The lack of substantial argument is wearisome, and here comes another one that points fingers without presenting any useful info.

Don't tell anyone, but there is a secret option to choose when you are not happy with the options currently available, and also want to strengthen your supposed criticism of a certain company: "Don't buy anything." Unless it is essential, this option is always available. Considering the cost of the hardware and the subpar quality of 90% of recent games, an increasing number of "gamers" are likely to opt for this alternative.

Ok, you all need to stop and understand that ray tracing itself has been around much longer than any graphics card has.
Ray tracing is older than rasterized rendering too, since it was done on CPUs first.

There is no need for any rasterization output for ray tracing; it's all calculation-based variations of a point.

However, how does this contribute to the topic regarding the impracticality of hardware for ray tracing on mid and low-end GPUs?
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
Look at the die size of a 4090. This is what Nvidia can do at this point. We rely almost exclusively on meagre shrinks now to move forward. That stagnation isn't a budgetary issue nor a game issue. It's even doubtful the hardware can provide.
This is what TSMC can do at this point. Or perhaps more appropriately, the foundry business at large. The 4090 is not quite up there at the reticle limit, but dies this size are pretty definitely getting into yield-issue territory. For ASML EUV the reticle limit should be ~850 mm², and GA100/GH100 are pretty much on the edge there.
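For context on those figures (rough numbers from memory, treat as approximate): the EUV scanner field is 26 mm × 33 mm, i.e. 26 × 33 = 858 mm², while the 4090's AD102 die is around 608 mm²; GA100/GH100, at roughly 815-830 mm², are the ones actually brushing that limit.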
 

freeagent

Moderator
Staff member
Joined
Sep 16, 2018
Messages
7,558 (3.68/day)
Location
Winnipeg, Canada
Processor AMD R9 5900X
Motherboard Asus Crosshair VIII Dark Hero
Cooling Thermalright Aqua Elite 360 V3 1x TL-B12, 2x TL-C12 Pro, 2x TL K12
Memory 2x8 G.Skill Trident Z Royal 3200C14, 2x8GB G.Skill Trident Z Black and White 3200 C14
Video Card(s) Zotac 4070 Ti Trinity OC
Storage WD SN850 1TB, SN850X 2TB, Asus Hyper M.2, 2x SN770 1TB
Display(s) LG 50UP7100
Case Fractal Torrent Compact RGB
Audio Device(s) JBL 2.1 Deep Bass
Power Supply EVGA SuperNova 750w G+, Monster HDP1800
Mouse Logitech G502 Hero
Keyboard Logitech G213
VR HMD Oculus 3
Software Yes
Benchmark Scores Yes
Joined
Feb 24, 2023
Messages
2,240 (5.21/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / RX 480 8 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / Viewsonic VX3276-MHD-2
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / FSP Epsilon 700 W / Corsair CX650M [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
the impracticality of hardware for ray tracing on mid and low-end GPUs?
Impracticality? There is none. These cards aren't pointless; it's just not comfortable to play with RT fully enabled on such GPUs.

The RTX 2060 is a total joke at RT - an AMD-level joke.
The RTX 3060 is substantially better and can run some titles even without DLSS and other cheats while still delivering playable (30+) frames per second at 1080p.
The RTX 4060 is both cheaper and similar in speed. Yes, 8 GB on a 128-bit bus severely handicaps this GPU, but all things considered...

Unless AMD start actively disrupting nVidia's sales by actually competing, we'll see an RTX 5060 that's very close to the 4060 - better in everything by a margin of error and about 5 dollars cheaper. So this bad RT performance is a product of marketing and the effortlessness of nVidia's competition. The RX 7600 is good, but it's not good enough.

Just push until you hear the click, super simple
I couldn't misconnect power plugs even if extremely drunk, so it's a non-issue and a non-solution. The very fact nVidia reinvented the wheel, and their wheel 2.0 is worse in everything, makes me vomit.
 

freeagent

Moderator
Staff member
Joined
Sep 16, 2018
Messages
7,558 (3.68/day)
Location
Winnipeg, Canada
Processor AMD R9 5900X
Motherboard Asus Crosshair VIII Dark Hero
Cooling Thermalright Aqua Elite 360 V3 1x TL-B12, 2x TL-C12 Pro, 2x TL K12
Memory 2x8 G.Skill Trident Z Royal 3200C14, 2x8GB G.Skill Trident Z Black and White 3200 C14
Video Card(s) Zotac 4070 Ti Trinity OC
Storage WD SN850 1TB, SN850X 2TB, Asus Hyper M.2, 2x SN770 1TB
Display(s) LG 50UP7100
Case Fractal Torrent Compact RGB
Audio Device(s) JBL 2.1 Deep Bass
Power Supply EVGA SuperNova 750w G+, Monster HDP1800
Mouse Logitech G502 Hero
Keyboard Logitech G213
VR HMD Oculus 3
Software Yes
Benchmark Scores Yes
The very fact nVidia reinvented the wheel
I kinda like it :D

Things change every few years; this one ties into the upcoming PSU changes. Just be glad we were alive long enough to witness it :D

A lot of these arguments I see going back and forth really do nothing for me. If NVIDIA wanted to do better, they could. And the same with AMD.

In my eyes, both companies are failing the people who made them who they are today. Us.

There is always room for improvement. Being a beta tester does suck though; these are R&D things that should have been ironed out before release.
 
Joined
Feb 18, 2005
Messages
5,238 (0.75/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
But the fact that they lag behind while still charging a premium tells me that they are like any other company. More profit.
I have no problem with AMD charging what they think the market will bear - they're just following NVIDIA's lead. What I don't understand is why AMD continually charge near-NVIDIA prices while their market share is dropping. That's not how a business succeeds, especially in the face of a new third competitor that is playing in the low- and mid-range areas that have traditionally been AMD's strong point.

But there is more to it than that - AMD has continually demonstrated an inability to get those vital mid-range GPUs to market in good time. The 7700 XT and 7800 XT launched arguably more than 6 months late compared to the 4060 and 4070, and while NVIDIA is about to drop a refresh of its high end, information on lower-end Radeon options has only just leaked, more than a year later. Then there's the fact that W1zz's reviews of the 7700 XT and 7800 XT showed the "reference" design to be massively overbuilt and horribly expensive. That eats into AIBs' and AMD's profit margins, makes AIBs less likely to produce AMD cards when they can make more money off NVIDIA ones, and further biases the field against AMD.

The lack of substantial argument is wearisome, and here comes another one that points fingers without presenting any useful info.

Don't tell anyone, but there is a secret option to choose when you are not happy with the options currently available, and also want to strengthen your supposed criticism of a certain company: "Don't buy anything." Unless it is essential, this option is always available. Considering the cost of the hardware and the subpar quality of 90% of recent games, an increasing number of "gamers" are likely to opt for this alternative.
Here's another secret: very few people boycott a bad market, they just buy the least worst option. And for a lot of people that's RTX 4090. You may not like it, but that doesn't make it wrong.
 
Joined
Jan 10, 2011
Messages
1,328 (0.27/day)
Location
[Formerly] Khartoum, Sudan.
System Name 192.168.1.1~192.168.1.100
Processor AMD Ryzen5 5600G.
Motherboard Gigabyte B550m DS3H.
Cooling AMD Wraith Stealth.
Memory 16GB Crucial DDR4.
Video Card(s) Gigabyte GTX 1080 OC (Underclocked, underpowered).
Storage Samsung 980 NVME 500GB && Assortment of SSDs.
Display(s) LG 24MK430 primary && Samsung S24D590 secondary
Case Corsair Graphite 780T.
Audio Device(s) On-Board.
Power Supply SeaSonic CORE GM-650.
Mouse Coolermaster MM530.
Keyboard Kingston HyperX Alloy FPS.
VR HMD A pair of OP spectacles.
Software Ubuntu 22.04 LTS.
Benchmark Scores Me no know English. What bench mean? Bench like one sit on?
There is no need for any rasterization output for ray tracing; it's all calculation-based variations of a point.
Post-processing effects aren't typically ray traced. Even those that can be RT'ed are sometimes done the old-fashioned way. I've seen pipelines that use depth maps from an RT scene to do DoF effects more quickly in Photoshop.
And let's not forget basic processes such as camera transformations and related matrix ops. Also: UI elements and menus.
Edit: those are just examples. But as you said, it's all just maths. Both paradigms solve some rendering equation per unit (vertices/pixels or samples). From what little I know of the hardware side, RT hardware deals primarily with ray traversal and collision (denoising is delegated to ML hardware, iirc). I wouldn't be surprised if shader processing at collisions/misses had similarities with the traditional approach.
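For a flavour of what "ray traversal and collision" units actually grind through, here's the generic textbook ray-vs-AABB slab test, the operation BVH traversal repeats millions of times per frame (a sketch of the standard algorithm, not any vendor's implementation):

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does a ray hit an axis-aligned bounding box?
    origin, box_min, box_max: (x, y, z); inv_dir: per-axis 1/direction."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))  # latest entry across the three slabs
        t_far = min(t_far, max(t1, t2))    # earliest exit across the three slabs
    return t_near <= t_far                 # intervals overlap -> intersection

# A ray along +X from the origin vs a small box sitting at x = 2..3:
print(ray_hits_aabb((0, 0, 0), (1.0, float("inf"), float("inf")),
                    (2, -0.5, -0.5), (3, 0.5, 0.5)))  # True
```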

Secondly, I made this point before, but the resolution problem is that most people think only in the 2D dimensions of the screen area while rendering. In reality, going from a 1080p scene to a 2160p scene, the graphics card has to render more than 8x what you'd think, because it renders height, width, and depth.
I'm not getting what you're going for here, tbh.
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
All in all, I still think RT is a niche market and nVidia is really trying to change that. I can't say it's succeeding though.
I do not think this is quite accurate. Some (mainly lighting and reflection) methods have been inching towards RT for a while now. A bunch of research pieces have been done on trying to use voxels and SDFs. An easy example is VXAO, which incidentally looked quite a bit nicer than, say, HBAO+ but carried a very significant performance penalty. The same applies even more to RT - doing it on shaders is computationally expensive, and some help from ASIC units brings very significant performance increases in the area without significant die-area cost. Hybrid RT as it is used in games today muddies the performance picture quite a lot, as there are penalties for RT effects outside what RT Cores can do.

There is also a forward-looking aspect of "what happens when we want to add more light sources?" There is a cutoff point for things like shadowing and parts of lighting where RT stops being inefficient compared to rasterization methods.
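That "more light sources" point can be made concrete with a toy cost model (the constants are entirely invented, purely to show where a crossover appears):

```python
# Toy cost model: shadow maps need roughly one extra pass per light,
# while ray-traced shadows cast a (mostly) fixed ray budget per pixel.
# Constants are invented for illustration only.
raster_ms_per_light = 0.9   # one shadow-map pass per light
rt_fixed_ms = 6.0           # flat cost of tracing shadow rays

for lights in (1, 2, 4, 8, 16):
    raster_ms = raster_ms_per_light * lights
    better = "raster" if raster_ms < rt_fixed_ms else "RT"
    print(f"{lights:2d} lights: raster {raster_ms:4.1f} ms "
          f"vs RT {rt_fixed_ms} ms -> {better}")
# The crossover appears once raster_ms exceeds rt_fixed_ms (here at ~7 lights).
```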

Something that was mentioned here in the thread is how Nvidia, early on, was trying to sell developers on the idea that good use of RT would noticeably reduce the time artists and level designers spend - not necessarily overall development time, but once RT is the dominant or only method - because lighting just works better. Not perfect, but better. Fewer things to check, move around and fix if they don't look right on the first try. That, as time goes by, is likely to come true.

It took similarly long for DirectX 11 to be widely adopted. Budget concerns are what slows down progress at this point; making high-end video games with advanced graphics has become as expensive as, if not more expensive than, making movies, and few studios can afford these multimillion budgets. At the same time, a great game with an alternative art direction or simpler graphics is quite feasible, so there you have it. :)
DX10/11 were adopted quite quickly in terms of graphics APIs. Microsoft made their usual business decision and limited them to Vista and newer, which is what took the wind out of their sails.

DX12 is a more interesting case though. It was introduced and lingered, with some developers trying to make things happen (with help primarily from AMD in the beginning). I would argue that what really made DX12 (and Vulkan) the choice is the current set of technologies, including RT. Nvidia was not too keen on DX12 - or passive-aggressively hampered its progress - but has now thrown as much weight as it can behind making games use DX12. RTX needed RT, which needed DXR, which is DX12-only. Remember the first batch of RTX/RT games, where the DX11 vs DX12 discussion in terms of both performance and stability tended to end on the DX11 side?
 
Joined
Feb 24, 2023
Messages
2,240 (5.21/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / RX 480 8 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / Viewsonic VX3276-MHD-2
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / FSP Epsilon 700 W / Corsair CX650M [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
Just be glad we were alive long enough to witness it
I am glad when things that don't work very efficiently get subjected to overhauls. Chiplet design, for example: a proper solution to the increasing challenge of scaling CPU and, nowadays, GPU performance for reasonable money.
I am glad when things that just work and raise no questions stay the way they are. Like using solder under the IHS - never goes wrong.

But when something is only prone to cause issues if handled by a complete cabrón, it doesn't require design reworks. Especially not reworks involving AI or anything along those lines in the device directly responsible for high power output. I know this sounds like whataboutism and exaggeration, but when the device is allowed to think, it can make mistakes, or even be hacked for that matter. And things can go wildly dire when that happens. I fully disagree with this "improvement" and wish we would eventually go back to a "stupid" connector. I don't mind changing the connector so that you don't need two of them for a **90-level GPU. Fine, that's fine. I only mind it being clogged with unnecessary "smart" controllers, and minor issues like the increased possibility of user error due to the flawed geometry of such plugs.

And knowing nVidia, I understand that my wish for things to revert to being reasonable is kinda mission: impossible.
 

freeagent

Moderator
Staff member
Joined
Sep 16, 2018
Messages
7,558 (3.68/day)
Location
Winnipeg, Canada
Processor AMD R9 5900X
Motherboard Asus Crosshair VIII Dark Hero
Cooling Thermalright Aqua Elite 360 V3 1x TL-B12, 2x TL-C12 Pro, 2x TL K12
Memory 2x8 G.Skill Trident Z Royal 3200C14, 2x8GB G.Skill Trident Z Black and White 3200 C14
Video Card(s) Zotac 4070 Ti Trinity OC
Storage WD SN850 1TB, SN850X 2TB, Asus Hyper M.2, 2x SN770 1TB
Display(s) LG 50UP7100
Case Fractal Torrent Compact RGB
Audio Device(s) JBL 2.1 Deep Bass
Power Supply EVGA SuperNova 750w G+, Monster HDP1800
Mouse Logitech G502 Hero
Keyboard Logitech G213
VR HMD Oculus 3
Software Yes
Benchmark Scores Yes
Lol and I thought I was stubborn :D

I try to keep an open mind until I get bit for it :)
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
I wonder if it would make any sense to create dedicated RT devices. Like, you have your CPU running its CPU things, your GPU doing its raster and a little ray tracing here and there, and another PCI-e slot inhabitant that strictly enhances RT performance. Like ASICs, but made for RT instead of mining, and mounted inside a gaming PC.
How would you transfer data between GPU and RT accelerator? Context being games and real-time RT.
When talking about things like movies where a lot of time per frame can be spent the considerations are completely different.

A lot of data is, and needs to be, common to shaders and RT, and there are precious few milliseconds to spend on moving it. PCIe latency on the pure transfer layer was IIRC 0.5 ms; that will increase with contention and depends on the data - how much of it there is and how many transfers will be needed (say, for a single frame that we ideally need to fit into 8 ms).

To reduce reliance on PCIe we could move the RT accelerator to a dedicated bus - Infinity Fabric or NVLink being nice candidates, for example. But this leads to a bunch of questions like: why a separate card? Let's put the RT accelerator chip on the graphics card. Once we put it on the graphics card, there are power and latency penalties for moving data between chips (or off-die as a whole). So why don't we put it into the GPU itself? What then gets weighed is the amount of units/performance needed for the current introductory stage of the technology against the die-area cost of whatever the designer comes up with. Seems like the needed level came out to a couple percent of die area, and that was an acceptable tradeoff :)
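Putting rough numbers on that budget argument (taking the ~0.5 ms PCIe figure above at face value; the transfer count is an invented assumption):

```python
# Back-of-envelope: how much of a frame an external RT accelerator's
# PCIe round trips would eat. All inputs are assumptions, not measurements.
frame_budget_ms = 1000 / 120     # ~8.33 ms per frame at 120 FPS
pcie_round_trip_ms = 0.5         # the IIRC figure quoted above
transfers_per_frame = 4          # e.g. scene updates, rays out, hits back

overhead_ms = pcie_round_trip_ms * transfers_per_frame
print(f"{overhead_ms:.1f} ms of {frame_budget_ms:.2f} ms "
      f"({overhead_ms / frame_budget_ms:.0%}) lost to transfers alone")
# -> 2.0 ms of 8.33 ms (24%) lost to transfers alone
```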
 
Joined
Jan 14, 2019
Messages
9,898 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
• Multi-monitor power issues
It's constantly being fixed with new drivers.
• Borderline complete absence of CUDA equivalent
Who cares if you only game?
• Absence of DLSS
Who cares?
• XeSS works worse on AMD GPUs than it does on Ada and Arc (Ada gets as much as Arc, just for the record)
Again, who cares?
• FSR is miles behind and it's implemented outright badly in several games
That's an opinion not everyone agrees with (I don't).
• 20+ % worse power efficiency
True, but not really an issue.
• Worse power spikes
That's true.
• No RTX 4090 competition whatsoever, even no plans on doing so
Who cares when every card in every tier is £50-100 cheaper than the equivalent Nvidia one? Not everyone wants a 4090.
• Absurd reliance on Infinity Cache
Absurd?
And yeah, RT performance is not just lower. It's much lower. Yes, leather jacket guy is trying his hardest to make it "a performance blackbox with proprietary bollocks," no denial on that. Yet AMD are still playing it Bogart like they're not 3 generations behind in RT and supersampling but rather are a real PITA for nVidia, which is untrue. We have a lot of reasons to hate nVidia but AMD deserve all the hatred even more: they do not try to be better than nVidia. Just like in the meme, "Democratic party is worse than Republican only because it's not better."
No, they're trying to be cheaper, which is something. Not everyone cares about RT and DLSS.

Edit: This is not a for-AMD post. All I'm saying is, everything is down to perspective. There are no good guys and bad guys in the PC hardware industry. Buy whatever product suits your needs, not the philosophy behind it.
 

freeagent

Moderator
Staff member
Joined
Sep 16, 2018
Messages
7,558 (3.68/day)
Location
Winnipeg, Canada
Processor AMD R9 5900X
Motherboard Asus Crosshair VIII Dark Hero
Cooling Thermalright Aqua Elite 360 V3 1x TL-B12, 2x TL-C12 Pro, 2x TL K12
Memory 2x8 G.Skill Trident Z Royal 3200C14, 2x8GB G.Skill Trident Z Black and White 3200 C14
Video Card(s) Zotac 4070 Ti Trinity OC
Storage WD SN850 1TB, SN850X 2TB, Asus Hyper M.2, 2x SN770 1TB
Display(s) LG 50UP7100
Case Fractal Torrent Compact RGB
Audio Device(s) JBL 2.1 Deep Bass
Power Supply EVGA SuperNova 750w G+, Monster HDP1800
Mouse Logitech G502 Hero
Keyboard Logitech G213
VR HMD Oculus 3
Software Yes
Benchmark Scores Yes
Joined
Dec 25, 2020
Messages
4,650 (3.81/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Razer DeathAdder Essential Mercury White
Keyboard Redragon Shiva Lunar White
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
Yeah, you complain about Nvidia and still throw money into Jensen's pocket, great critic. It has everything to do with the topic at hand...

Since 2008, these were the main GPUs in my PCs, in no particular chronological order - in fact, most of them were within the last 5 years:

- HD 4670 (Sapphire 1 GB)
- GTX 275 (XFX)
- HD 5970 (Sapphire OC)
- Triple GTX 480
- GTX 690 (Zotac)
- GTX Titan (EVGA Signature)
- R9 280X (Sapphire Toxic)
- R9 290X (MSI Lightning)
- Dual GTX 980 (EVGA Kingpin, Galax HOF)
- Dual R9 Fury X
- Vega Frontier Edition
- Radeon VII
- RTX 3090 (ASUS TUF OC)
- RTX 4080 (ASUS Strix OC White)

I've had both at a roughly equal ratio, and I was far more involved with AMD due to joining their beta tester program early on. If you see the clear pattern: I moved from AMD to Nvidia because I've had a first-class, VIP seat to what I deem to be AMD's decline. My standards rose; their products didn't meet my expectations. I'm much better off than I used to be in my youth and can afford the nicer things nowadays, so why on Earth would I subject myself to gaming on a Radeon? I've only had grief and poor experiences every time I insisted on going with AMD. The most recent was my laptop's Ryzen 5600H integrated Vega GPU and the AMD HDMI audio driver causing a boot loop. Why would I willingly subject myself to a second-class experience, giving up all the cool things a GeForce card can do, dealing with the BSODs and the endless bugs? What's in it for me? I don't even get the satisfaction of "wow, I reported this rather interesting bug I reproduced and documented, and they fixed it!" anymore. The last severe bug I reported and documented went EOL alongside the GPU family 2 years after the fact. :banghead:

That is true, but until consumers vote with wallets that shit ain't happening, or not a lot. Polls do not sell GPUs and people are hypocrites - or put differently, commerce is stronk.

You're absolutely correct about that. I voted with my wallet, and I voted for Nvidia.

Lol it totally is bro.. but that's ok :)

Honestly, I apologize in advance for probably being the culprit in starting the brand-war argument, but I definitely saw a correlation between the pushback against RT and AMD's severe deficiency at it. If you put it through a Venn diagram, I am sure the overlap between people who "absolutely hate RT" and people who own, or are loyal to, Radeon cards is quite high. Especially considering that, apart from the GTX 16 series (whose production is officially being discontinued next month), every single Nvidia GPU at every price point released in the past 5 years supports hardware-accelerated RT.

I couldn't misconnect power plugs even if extremely drunk, so it's a non-issue and a non-solution. The very fact nVidia reinvented the wheel, and their wheel 2.0 is worse in everything, makes me vomit.

The new connector is not an Nvidia thing, they just adopted it first. The PCI-SIG is to blame here, in my opinion, and I must remind you that both AMD and Intel, as well as most major AIB, OEM and ODM partners such as ASUS are also members of the PCI-SIG.
 