
What's your GPU upgrade threshold?

What is the minimum boost of your choice?


I mean, how much more performance should your not-yet-purchased GPU provide over your current one to justify the effort?
 
When I get bored... sometimes it's even a sidegrade, like my previous one (4080 to 7900XTX)
 
Used to be I'd upgrade to a single card that matched my current SLI setup's performance in the games I played, then add a second card for SLI. A couple of generations later I'd get a new card that performed the same as the SLI setup, then pick up a second card for SLI again.

Generally I need a good 50%+ performance gain. I tend to sit on a GPU for 4-6 years before I upgrade to something in my price range with decent performance gains over what I'm using.
 
Double at roughly the same-ish price. I am willing to flex 50 or so bucks up (or down, obviously). Don't need tons of performance, so I always shop in the 400-ish dollar range. So if I was buying today I'd go for a 7700 XT or a (sigh) 4060 Ti to replace my old and battered 1070. Neither is a great option for me. Could grab a 4070S since it's probably the best overall deal NV has right now, but I can't really justify spending that kind of money on a card I'd barely use. Don't really play AAA anymore.
But yeah, the current market is dogshit and the "good value mid-range" now costs near-flagship money, while said flagships cost "I am not paying this much just to play games" money. Obviously, if one needs a GPU for work the calculus changes. But as a toy? Nah. Hence me not upgrading my personal rig.
 
(4080 to 7900XTX)
One must really get completely bored to perform a swap like that. Or, use AMD-biased software mostly.
I tend to sit on a GPU for 4-6 years before I upgrade
If you were to upgrade a roughly 500 to 600 dollar 2070 to a roughly 600 dollar 4070S, it would've been quite a bit hotter than +50%. It's a little more than 2 times faster.
But as a toy? Nah. Hence me not upgrading my personal rig.
Yeah, that's why I also postponed it till the... "OH NO EVERYTHING RUNS LIKE SHIT" phase. I'll buy some NV RTX GPU, likely a 3080 Ti, just to have an NV option on hand. It'll only coincidentally be an upgrade over my current GPU. I want my new GPU to run native 4K smoother than my current one runs 1080p+upscaling (basically, I'd jump to a 4080S now if I had the money and had nothing faster than a GTX 980 / RX 580). That's how long I'm prepared to wait.
 
60-80%, but really it was to try to get good 1440p in Ark: SE.

1060 → 1080 → 6600 XT → 6800 XT

Yeah 6600 XT was a weak upgrade but that GPU's been pretty great and I have hand-me-down users.
 
One must really get completely bored to perform a swap like that. Or, use AMD-biased software mostly.

I am a hardware geek and I like new shiny things... Yes, I have a problem :)
 
Needs to be at least twice as good. Otherwise I don't really notice and feel like it was a waste. My biggest disappointment was GTX 970 to GTX 1070... somewhat better, but didn't feel like a "generational" improvement imo. But man, that GTX 1070 to RTX 3080 was noticeable.
 
My best upgrade so far: GT 1030 (classic GDDR5 variant) to GTX 1080 Ti.
 
I usually upgrade every other generation so I get a really big boost in performance for around the same money, but I once went from a 970 to a 980 Ti because I moved from 1080p to 1440p.
 
every other generation so I get a really big boost in performance
Does not compute. Is ~40 percent... big?
 
Does not compute. Is ~40 percent... big?

Usually I get far more, but I chose the category I did because it was my threshold in one case. It's a relative thing. Like, for instance, you once told me that anything less than a 5x boost in performance isn't a big upgrade; I would call that an extremely massive upgrade. Just different perspectives, I guess.
 
The only thing that would get me to upgrade from my 7900XTX factory OC edition is less power consumption when maxed out, combined with at least a "small" boost in overall performance at the same time. I don't see that happening with RDNA 4, but perhaps with RDNA 5? Don't care if that's a couple of years off either. As for the Nvidia 5000 series, I'll wait and see what benchmarks reveal for retail samples and think about it.
So basically I didn't vote in the poll, as my choice isn't reflected there.
 
When SLI and CFX were still widely supported, my rule:

- Get mid-range GPU, $150-250
- A year or two later, get a cheap asf second hand matching GPU for SLI/CFX
- When new midrange GPU is twice as fast as CFX/SLI config, get new midrange GPU $150-250.
- Repeat.

Sucks that I can't do that anymore, so I'm pretty much forced to buy expensive high-tier cards. I have fond as well as painful memories of getting cheap CFX/SLI configs working and occasionally surpassing the performance of high-tier cards.

So I voted for 200%, and obviously my 6900XT is already obsolete by that standard (RTX 4090), but these things are so damn expensive now I will pretty much only upgrade if VRAM really becomes a problem.
 
When SLI and CFX were still widely supported, my rule:
So I voted for 200%, and obviously my 6900XT is already obsolete by that standard (RTX 4090), but these things are so damn expensive now I will pretty much only upgrade if VRAM really becomes a problem.
Technically SLI is in the 51-100% range, and 200% means that if I have a 2080 Ti I'd have to wait forever to get anything 3 times faster (not counting the DLSS-FG gimmicks); that would be ridiculous.
"Look, we gave you a 400% increase over 20 FPS" - with the small print saying it just runs at a lower resolution internally and inserts like 2 fake frames for every real one, with added latency perceived as mouse lag...
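To spell out the arithmetic behind those poll categories, here's a minimal sketch (the helper name is mine; the "+N% = (1 + N/100)x" convention is the one being argued about above):

```python
# "+N% faster" means (1 + N/100) times the old card's performance.
def multiplier(percent_gain: float) -> float:
    return 1 + percent_gain / 100

for gain in (50, 100, 200, 400):
    print(f"+{gain}% -> {multiplier(gain):.1f}x the old card")
# +50% -> 1.5x, +100% -> 2.0x, +200% -> 3.0x, +400% -> 5.0x
```

So a "200%" threshold on a 2080 Ti really does mean waiting for something 3x as fast.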
 
15-30% or so, factoring in efficiency gains and new functionality. But I always buy at or very near the flagship tier, so it's usually more than that delivered every gen.
 
400% minimum. That's right.
Coming from AGP and then IGP, the HD6570 was my first PCI-E main card, serving from 2013 to 2018, and I knew it was on the way out in late 2017.
It was out after I discovered GPU-bound audio stutters, even after doubling to 8GB of system memory.
The 2GB of VRAM probably wasn't the culprit, but the low core clock and half-speed, last-class memory didn't help.
Moved to an RX 580 Red Devil 8GB and all the problems disappeared. I was convinced it could hold out until 2025.
It outlived its purpose but I'm waaaaay behind on features and stuff that I want to do outside of VR, soooo I guess that's still a win?

HD6570:
Core clock: 650MHz
Memory: 2GB 500MHz DDR3 128-bit 16GB/s
Shaders: 480
TDP: 60W
Feature Level: DirectX 11.2
Pixel Rate: 5.2GP/s
TexRate: 15.6GT/s
FP32: 624G

RX580: (prev +912%)
Core clock: 1380MHz
Memory: 8GB 2000MHz GDDR5 256-bit 256GB/s
Shaders: 2304
TDP: 185W
Feature Level: DirectX 12.0
Pixel Rate: 44.16GP/s
TexRate: 198.7GT/s
FP32: 6.359T
FP64: 397.4G

RX7900XT: (prev +459%)
Core clock: 2499MHz
Memory: 20GB 2500MHz GDDR6 320-bit 800GB/s
Shaders: 5376
TDP: 300W
Feature Level: DirectX 12.2
Pixel Rate: 479.8GP/s
TexRate: 839.7GT/s
FP32: 53.74T
FP64: 1.679T

These are some very big jumps and I'm still in a situation where my HD6570 is gone, so no remote GPU jobs and I can't get delivery of a functional 7900XT. There's a MASSIVE surplus of these cards and nobody wants to jump on Navi 31 right before a new Navi launch either. It's a shame too. Would have solved everything. So I guess I wait for this:

RX8800XT: (prev ???)
Core clock: 2430MHz
Memory: 16GB 2438MHz GDDR6 256-bit 624.1GB/s
Shaders: 3584
TDP: 220W
Feature Level: DirectX 12.2
Pixel Rate: 233.3GP/s
TexRate: 544.3GT/s
FP32: 17.42T
FP64: 544.3G

If those speculated stats are real, outside the Pixel+Texture rates this thing is another RX580 or worse that only exists to keep the lights on.
I definitely need the DirectX 12 and encoder improvements but it's not impressive enough to be serious.
What would you do? Give Navi31 another chance? I'm burned out just trying.
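Side note on those "(prev +X%)" deltas: the post doesn't say which metric they're based on (they read like overall relative-performance figures rather than any single spec), but the arithmetic is just a ratio. A rough sketch applying the same "+X%" convention to the FP32 numbers listed above; these spec-only ratios won't reproduce the quoted percentages exactly:

```python
# "+X%" uplift between cards, computed from the FP32 figures listed above.
# Note: the "+912%" / "+459%" in the post likely come from an overall
# performance metric, so these spec-only ratios are illustrative, not a match.
def pct(old: float, new: float) -> float:
    return (new / old - 1) * 100

fp32_tflops = {"HD6570": 0.624, "RX580": 6.359, "RX7900XT": 53.74, "RX8800XT": 17.42}

print(f"HD6570 -> RX580:    +{pct(fp32_tflops['HD6570'], fp32_tflops['RX580']):.0f}%")
print(f"RX580  -> RX7900XT: +{pct(fp32_tflops['RX580'], fp32_tflops['RX7900XT']):.0f}%")
print(f"RX580  -> RX8800XT: +{pct(fp32_tflops['RX580'], fp32_tflops['RX8800XT']):.0f}%")  # the open "???"
```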
 
Can't do it like that anymore as I have a budget threshold; I can't justify paying 2/3 of my salary on a GPU even if it gives double the performance.
 
I went from a 1070 Ti to a 3090; the performance gain was more than double… so I'm doing the same for the next GPU.
 
Let's see. The previous one was a GT 640M to an RTX 3070 mobile, then to the current desktop. That's ~20x across 5 generations and 3 segments, and ~2.3x across one generation plus a change of form factor and power envelope.

I somehow doubt there will even be another one with a 300%-plus improvement at a cost I can justify.
 
Around 50%, however my last one was about 76%.
 
These days I wait out until it's roughly double the raster perf, a higher VRAM size (even if it's only by 2GB), and well under $500. That's my sweet spot. With the 3060 12GB finally showing some age, that's becoming a real possibility for me, so it means that it's nearly time to upgrade. Huzzah.
 
I can't get delivery of a functional 7900XT. There's a MASSIVE surplus of these cards and nobody wants to jump on Navi 31 right before a new Navi launch either.
Sorry, my tiny foreigner brain can't comprehend this. Read it a dozen times and can't make sense of it. May I ask for English to Simple English translation?
my last one was about 76%.
Something along the lines of 2070S to 4070?
 
I'm mostly VRAM bound (currently on 2x 3090), but I do have access to some nice remote GPUs (A100s 40GB) and can rent an A100/H100 80GB in a couple minutes for $2/hour.

So for an upgrade to make sense to me, it'd need to provide me both more and faster VRAM. The 4090 did not provide this whatsoever; the performance benefit was around 0-30% for my use cases, so it was kinda moot to spend money on that, and better to just jump into the cloud when needed, for cheaper.

The 5090 will provide a hefty bandwidth bump, and maybe also more VRAM (64GB vs my current 48GB across 2 GPUs), but then I'll need to see how pricing goes and whether it'll make sense to buy two of those instead of renting GPUs for a year or two.
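That buy-vs-rent call boils down to a break-even calculation. A minimal sketch of the arithmetic (the $2/hour figure is from above; the purchase price and hours per day are made-up placeholders, not real 5090 pricing):

```python
# Break-even between buying GPUs outright and renting cloud GPUs by the hour.
# Hypothetical inputs for illustration; ignores electricity, resale value, etc.
purchase_price = 2 * 2000.0   # two cards at an assumed $2000 each
rental_rate = 2.0             # $/hour for a rented A100/H100 80GB, as quoted above
hours_per_day = 6.0           # assumed average daily usage

break_even_hours = purchase_price / rental_rate
years = break_even_hours / hours_per_day / 365
print(f"Break-even after {break_even_hours:.0f} rented hours (~{years:.1f} years at {hours_per_day:.0f} h/day)")
```

With those placeholder numbers the crossover lands around a year of moderate use, which is roughly the "renting for a year or two" horizon mentioned above.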
 
Sorry, my tiny foreigner brain can't comprehend this. Read it a dozen times and can't make sense of it. May I ask for English to Simple English translation?
Can't reach the POST screen with any 7900XT.
Recent price drops look like a buy signal but could also be a reason to wait.
Outside of that we just don't know.
On the 580 I'm clock and feature bound.
Memory is okay and noise is tolerable but I'm already locked out of some jobs because of DX12_2 and Vulkan.
 