
Will GPUs continue to have crazy TDPs?

Joined
Jan 14, 2019
Messages
10,296 (5.20/day)
Location
Midlands, UK
System Name Holiday Season Budget Computer (HSBC)
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 6500 XT 4 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Well, there were also times when GPUs were passively cooled.

It's simple physics why we got where we are.
I still have a passive 1050 Ti on my shelf that was a proper entry-to-mid-range gaming card 10 years ago. It's not simple physics that graphics card power consumption has grown out of proportion since then.
 
Deleted member 237813 (Guest)
I was talking about far, far earlier than Pascal.

And yes, it is, lol.
 
Deleted member 237813 (Guest)
Decades earlier. I guess you don't get it. Also, that class is covered by APUs today.

Why? Because more FPS will require more power at some point, it's that simple. We are not on 28 nm or larger anymore; we are already hitting a wall with 5 nm and the 4090. Why? Because of physics.
 
Decades earlier. I guess you don't get it. Also, that class is covered by APUs today.

Why? Because more FPS will require more power at some point, it's that simple. We are not on 28 nm or larger anymore; we are already hitting a wall with 5 nm and the 4090. Why? Because of physics.
Pascal is not decades earlier! That's the point!

I'm not so sure we're hitting a wall. Look at the advancements in CPUs... how much performance we get for just under 100 watts (either with a 7800X3D or a finely tuned Intel chip).
 
Deleted member 237813 (Guest)
Pascal is Maxwell on steroids; Ada is Ampere on steroids. Look at the core clocks and the speed differences if you set them to the same clock speed. ;)

My point here is that Pascal was exclusively a gaming architecture, optimised for DX11 to the fullest extent. An architecture that is purely made for gaming will always be much more efficient. RDNA 2 was AMD's Pascal.

That is out of the window right now thanks to the AI garbage from both manufacturers, okay, let's say all three.

Also, GPUs now are much, much more complex than they used to be. You want more FPS? At some point it will only work with more power. We say here "cubic capacity": there is no replacement for displacement, except even more displacement. (I hope that's the right phrase for what I mean, since I'm not a native English speaker.)

I can prove it: look at the hardware spec difference between the 4080 and the 4090. Yet the 4090 is only 20-35% faster (in some RT titles up to 50%, but overall it's 30-35%). Now look at the huge difference in the specs; if we weren't hitting a wall, the 4090 should be at least 70% faster than a 4080. Monolithic design is hitting a wall. Intel has been proving this with their CPUs for years now; they need to pump in the power to even get a few per cent more out of them.

Ryzen is not monolithic, it's chiplet-based, like RDNA 3. And Nvidia's future GPUs will also be chiplets, for a good reason. ;)

Ryzen is running circles around Intel right now; it's actually really embarrassing.
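The 4080-vs-4090 scaling claim above can be sanity-checked with a quick back-of-envelope calculation. The CUDA core counts are the published figures for the two cards; the ~1.3x overall speedup is the rough average the post cites, not a measurement of mine:

```python
# Back-of-envelope check of the 4080-vs-4090 scaling argument.
cores_4080 = 9728                 # published RTX 4080 CUDA core count
cores_4090 = 16384                # published RTX 4090 CUDA core count
observed_speedup = 1.32           # "~30-35% faster overall", per the post

spec_ratio = cores_4090 / cores_4080          # ~1.68x more shaders
scaling_efficiency = observed_speedup / spec_ratio

print(f"spec ratio:         {spec_ratio:.2f}x")
print(f"observed speedup:   {observed_speedup:.2f}x")
print(f"scaling efficiency: {scaling_efficiency:.0%}")
```

So the bigger chip delivers only about three quarters of the speedup its shader count alone would suggest, which is the "wall" the post is pointing at (clock, memory bandwidth, and power limits all play into it too).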
 
Joined
Jun 13, 2012
Messages
1,338 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual Geforce RTX 4070 Super ( 2800mhz @ 1.0volt, ~60mhz overlock -.1volts)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
The stock 4070 Super I have was running 1.1 V and 220 W, which clocked in at around 2740 MHz. I undervolted it to 0.95 V at 2700 MHz, and it plays Watch Dogs: Legion with medium RT on ultra settings at 150-175 W.
 
My point here is that Pascal was exclusively a gaming architecture, optimised for DX11 to the fullest extent. An architecture that is purely made for gaming will always be much more efficient. RDNA 2 was AMD's Pascal.

That is out of the window right now thanks to the AI garbage from both manufacturers, okay, let's say all three.
So it's not "just physics", then? ;)

I agree with this notion, by the way - the AI plague should leave gaming alone, imo.
 
Deleted member 237813 (Guest)
It still is physics; in the end, it all comes down to that. So yes, it really is. You won't change natural laws, and no one ever will.

Without AI we would have no Nvidia GPUs for gamers right now, lol. Ada GPUs are 100% AI; gaming GPUs are just the chips that couldn't make the cut as AI chips selling for $10k plus.

But yes, dedicated gaming GPUs would be best. Wishful thinking, though.

So yes, unless they find a completely new way of making GPUs, we will never come down on the watts again. Unless you'd be happy with the same performance as now; then they could bring the watts down, but they will never release a next gen with the same performance as before (which, actually, they already do: 3060 Ti to 4060 Ti, and RX 6800 XT to 7800 XT). That's how close we are to the wall.

But expect nice stuff from MCM-design GPUs once they've figured it out, just not on the TDP front. FPS will explode when they put two 140-SM chips on one card, but damn, that 1000 W consumption with today's tech, lol. That thing would run Cyberpunk 2077 path tracing in native 4K with no issues.

The issue remains the x86 platform; it's extremely inefficient. Look at the M chips from Apple, they are years and years ahead. But since Windows dominates the world with x86, I see no change there in the near future.

If Apple/ARM made dedicated gaming GPUs with their tech, we would have a 4090 at 250 W. But everything for devs is still on x86, so you see the issue. Very, very simplified.
 
Joined
Feb 1, 2019
Messages
2,775 (1.42/day)
Location
UK, Leicester
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 3080 RTX FE 10G
Storage 1TB 980 PRO (OS, games), 2TB SN850X (games), 2TB DC P4600 (work), 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Asus Xonar D2X
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
It seems out of hand. Let's not focus on the monstrosity sizes they have become, or the sag, but why aren't they doing anything about the TDP? 300-450 W for a GPU is out of hand. CPUs have stayed pretty stable with theirs; maybe the higher Intel chips have gotten high, but they are just a few. If CPUs can stay pretty low, why are GPUs getting higher and higher? The jumps are quite large for the higher-end ones. Will this trend continue? At the pace we're at now, we'll be well over 600 W for a card in the next few years. And with electricity prices jumping, that's paying for the card twice over its lifetime, or a good chunk of it. It seems like they put no effort into their efficiency. Hell, I remember a time when I thought 180 W was a lot for a GPU.
Eventually a breakthrough will be made and a gen will come out that's much more efficient. When that happens, no idea.

The problem is that performance sells better than efficiency, so for the TDP to come down, they need to be able to get the extra generational performance whilst having enough of an efficiency boost to also reduce power consumption at the same time.
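The electricity-cost claim in the quoted post ("paying for the card twice") can be roughed out with a few numbers. All the inputs below are illustrative assumptions for the sake of the example, not figures from the thread:

```python
# Rough lifetime electricity cost of a power-hungry GPU.
# Every input here is a hypothetical assumption, chosen for illustration.
card_price_eur = 800        # upper-midrange card (assumed)
draw_kw = 0.35              # 350 W while gaming (assumed)
hours_per_day = 3           # gaming time (assumed)
years = 4                   # ownership period (assumed)
price_per_kwh_eur = 0.40    # a high European tariff (assumed)

energy_kwh = draw_kw * hours_per_day * 365 * years
cost_eur = energy_kwh * price_per_kwh_eur
print(f"{energy_kwh:.0f} kWh over {years} years -> ~{cost_eur:.0f} EUR "
      f"({cost_eur / card_price_eur:.0%} of the card price)")
```

With these assumptions the electricity comes to roughly three quarters of the card price, so "a good chunk" holds up; "paying twice" would need heavier use or higher tariffs.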
 
Joined
Apr 18, 2019
Messages
2,179 (1.16/day)
Location
Olympia, WA
System Name Sleepy Painter
Processor AMD Ryzen 5 3600
Motherboard Asus TuF Gaming X570-PLUS/WIFI
Cooling FSP Windale 6 - Passive
Memory 2x16GB F4-3600C16-16GVKC @ 16-19-21-36-58-1T
Video Card(s) MSI RX580 8GB
Storage 2x Samsung PM963 960GB nVME RAID0, Crucial BX500 1TB SATA, WD Blue 3D 2TB SATA
Display(s) Microboard 32" Curved 1080P 144hz VA w/ Freesync
Case NZXT Gamma Classic Black
Audio Device(s) Asus Xonar D1
Power Supply Rosewill 1KW on 240V@60hz
Mouse Logitech MX518 Legend
Keyboard Red Dragon K552
Software Windows 10 Enterprise 2019 LTSC 1809 17763.1757
To address the thread title: Yes, and no.

We're currently hitting a "thermal density" barrier.
Exemplified in the R&D and patenting of integral nano-peltier devices, to (inefficiently) pump heat from active/dense parts to inactive/low-logic-density portions of the die.
This is also why we've seen Industry Alliances in developing MCM designs.

Absolutely, we will see higher TDPs. However, things will get 'strange' rather than 'linear'.
(In terms of bigger coolers and more capable power-carrying interfaces)

Performance/W efficiency continues to improve, but that gained efficiency is 'eaten' by pushing Net Performance higher. The Thermal Density Issue is pushing us towards active thermal management at the lithographic level, with MCM spreading that heat over more surface/mass.

As far as what I mean by 'strange':
Looking at CPUs, Dynamic Clocking and Active Power Management for subcomponents of the die(s) has allowed overall performance and peak clocks that were previously impossible.
Yet, when pushed to their 'fullest', those devices need extraordinary power and cooling.

Another (personal favorite) is something like Vega 10.
Full-Fat Vega 10 (Vega 64/FE/WX9100/MI25) was known as a 'hot and hungry' GPU.
But... That's only when clocked towards the limits of the architecture. Yet Vega in Zen APUs 'sips power', with the last revisions managing to perform faster overall with less actual hardware.
Speaking from experience, the 'biggest' 'fattest' 'hottest' GPUs will "Sip Juice, and Chill Out" when drastically underclocked.

Considering the ever-growing (and highly profitable) computing power needs of Big Data, I can only assume that we will continue to see TDPs increase, w/in the 'scope' of what's practicable and profitable.
(Don't forget, Submersion Cooling isn't marketed to Enthusiasts, because of extremely broad IPs, held by those that near-exclusively serve Big Data)
 
Joined
Sep 3, 2019
Messages
3,038 (1.74/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 160W PPT limit, 75C temp limit, CO -9~14
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F37h, AGESA V2 1.2.0.B
Cooling Arctic Liquid Freezer II 420mm Rev7 with off center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3600MHz 1.42V CL16-16-16-16-32-48 1T, tRFC:288, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~465W (390W current) PowerLimit, 1060mV, Adrenalin v24.5.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v23H2, OSB 22631.3155)
Well, there were also times when GPUs were passively cooled.

It's simple physics why we got where we are.
Because back then games were simple too... a bunch of squares and cubes fooling around.

You can't have this with a passively cooled GPU today. You need at least 200-250 W.

[Far Cry 6 screenshots]

And every next-gen game will always "demand" more computing power.
 
triangles...
I meant the final shapes of objects, but yeah, all of them are triangles and vertices, from thousands to many, many millions.

Let alone the many effects applied on top of the geometry.
 
Joined
Jan 28, 2024
Messages
13 (0.09/day)
System Name A COMPUTER
Processor Ryzen 9 5900X
Motherboard Gigabyte X470 Aorus "Ultra Gaming"
Cooling Arctic LF II 280
Memory 32GB Corsair LPX DDR4-3600 CL18
Video Card(s) RTX 3060 Ti LHR
Storage Samsung 970 Evo Plus 2TB, Seagate 5TB HDD
Display(s) Lenovo P24Q-20
Case be quiet! Silent Base 802
Power Supply EVGA SuperNOVA 1000 GT
Software Ubuntu 22.04 LTS Server / i3wm
Pretty sure with any contemporary RDNA3 or Ada card (and a few gens before them) you can likely shave 5-30% power use off with little-to-no impact on framerates
To provide some numbers for this statement: with the stock 200 W power limit, my RTX 3060 Ti LHR averaged 26.2 FPS in Unigine Superposition (1440p ultra). After raising the power limit to 220 W via a VBIOS flash, it averaged 27.2 FPS. The sample size is roughly 18 test passes at each power target. In both cases, the card was throttling against its power limit.

If you bought a fancier 3060 Ti, it would have come out of the box with the higher 220 W power limit. If you have one of those cards, then based on my data you could shave 10% off power consumption with only a ~4% performance loss - and that's without touching the V/F curve.
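Working those two runs into perf-per-watt terms (using only the figures quoted above) makes the diminishing returns explicit:

```python
# Perf/W from the Superposition runs quoted above
# (RTX 3060 Ti LHR, 1440p ultra, power-limit throttled in both runs).
runs = {200: 26.2, 220: 27.2}  # power limit (W) -> average FPS

for watts, fps in runs.items():
    print(f"{watts} W: {fps:.1f} FPS, {fps / watts:.3f} FPS/W")

extra_power = (220 - 200) / 200      # +10% power
extra_perf = (27.2 - 26.2) / 26.2    # ~+3.8% performance
print(f"+{extra_power:.0%} power bought +{extra_perf:.1%} FPS")
```

The last 10% of board power buys well under half that in frames, which is exactly why power-limiting near the stock target costs so little.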
 
It's true that almost every CPU/GPU today is pushed to the edge by default, beyond its sweet spot on the power/efficiency curve, for the sake of competition.
 
Joined
Jun 27, 2011
Messages
6,714 (1.42/day)
Processor 7800x3d
Motherboard Gigabyte B650 Auros Elite AX
Cooling Custom Water
Memory GSKILL 2x16gb 6000mhz Cas 30 with custom timings
Video Card(s) MSI RX 6750 XT MECH 2X 12G OC
Storage Adata SX8200 1tb with Windows, Samsung 990 Pro 2tb with games
Display(s) HP Omen 27q QHD 165hz
Case ThermalTake P3
Power Supply SuperFlower Leadex Titanium
Software Windows 11 64 Bit
Benchmark Scores CB23: 1811 / 19424 CB24: 1136 / 7687
Sure, that's a fair point and that's why aftermarket coolers exist. My point was that in the past, single slot coolers were sufficient, not required, for high end cards, and these days we're having to use dual slot coolers even on midrange parts.
Aftermarket coolers were always really overpriced. I got a very cheap 6750 XT with MSI's lowest-end cooler. The cooling is sufficient but not overkill. I looked at the prices for modern aftermarket coolers and they were ~$150. If I wanted to spend that kind of money, I could have bought a 6900 XT with a much better cooler.

Way back in the day with my 8800 GT, I preferred bigger coolers then, too. The single-slot cooler was sufficient, sure, but it was not quiet.
Remember that more and more people want higher frame rates too. In some games I am using less power than my old 390X, and that was at 1080p, not 4K.

Is that a backpack or a 1 bedroom apartment on your back? :roll:
 
Joined
Sep 17, 2014
Messages
21,341 (5.99/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Because back then games were simple too... a bunch of squares and cubes fooling around.

You can't have this with a passively cooled GPU today. You need at least 200-250 W.

[Far Cry 6 screenshots]

And every next-gen game will always "demand" more computing power.
What?! No. You can run this fine on half the TDP. Much like GPU performance/power, high and ultra settings are heavily in diminishing-returns territory. If you just want baseline performance, you can shave half or even more off your power usage with ease. After 15 minutes you'll probably forget what settings you are actually using.

And then there is FPS - another area of diminishing returns. We just default into 'needing' this because we can, but it's complete bullshit. You don't need any of this, and it frankly doesn't make a lot of sense. We just want it.

Take note of those Steam Deck numbers. Less than 10 W running a game.
 
What?! No. You can run this fine on half the TDP. Much like GPU performance/power, high and ultra settings are heavily in diminishing-returns territory. If you just want baseline performance, you can shave half or even more off your power usage with ease. After 15 minutes you'll probably forget what settings you are actually using.

And then there is FPS - another area of diminishing returns. We just default into 'needing' this because we can, but it's complete bullshit. You don't need any of this, and it frankly doesn't make a lot of sense. We just want it.
Yeah, I do not disagree at all with what you're saying.
I was specifically talking about max settings and 60-65 FPS.

That image was at 3440x1440, rendered at 1.5x.
 
Joined
Nov 26, 2021
Messages
1,401 (1.50/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
It is also about yields. You're always going to have imperfections in silicon that render dies either DOA or defective.

Imagine the yields if you had to make monolithic versions of the current EPYC dies that are 100% working. It also means it's far easier to bin individual parts, as dies that don't hit, say, 7950X turbo speeds may be fine for EPYC due to the lower intended clocks.

That flexibility is a great boon when you are a foundry customer, as the number of effectively harvested dies per wafer will be so much higher vs monolithic dies. Has anyone noticed how they haven't had to pair up two quad-core CCDs to make 8-core parts for Ryzen, etc.? I suspect this is because they have been able to get decent enough yields with this approach not to require it, as well as enough demand in the EPYC lineup to be able to use the dies there.

You can also see this movement with Intel with their tile-based approach and Foveros technology. I look forward to the possibility of Intel putting the I/O die below the higher-heat-output cores, hopefully meaning we don't see socket sizes following the same size increases we have seen with GPUs over the years. (Look at Threadripper Pro boards and the sizes we are looking at already.)
The dies are so small for Ryzen that even 6-core SKUs are only there to satisfy demand. With the reported TSMC defect rates, a Ryzen CCD should have worst-case yields around 94%.
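That ~94% figure is consistent with the simplest first-order yield estimate, the Poisson model Y = exp(-D*A). The die area and defect density below are illustrative assumptions (roughly a Zen 3 CCD and a publicly reported N7-class defect rate), not official figures:

```python
import math

# First-order Poisson yield model: Y = exp(-D * A).
die_area_cm2 = 0.81      # ~81 mm^2 CCD (assumed, roughly Zen 3)
defect_density = 0.08    # defects per cm^2 (assumed, N7-class)

yield_frac = math.exp(-defect_density * die_area_cm2)
print(f"estimated yield: {yield_frac:.1%}")
```

A small die keeps the exponent tiny, which is the whole chiplet argument: a hypothetical monolithic die several times the area would see its yield fall off exponentially under the same defect density.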
 
Yeah, I do not disagree at all with what you're saying.
I was specifically talking about max settings and 60-65 FPS.
People also speak of 8K. It won't stop and it's never enough, but it only gets more ridiculous going forward. The current idiot on the block is RT. Burning watts, halving FPS, and inflating prices for extremely minor changes. The vast majority can't even tell the difference, and you might as well say a raster image has RT lighting.
 
People also speak of 8K. It won't stop and it's never enough, but it only gets more ridiculous going forward. The current idiot on the block is RT. Burning watts, halving FPS, and inflating prices for extremely minor changes.
A fact (IMO).
Changes will be minor going forward, along with great computational demand for anyone who wants the highest visuals.

Personally, I like having them, and I don't really have any problem with DLSS/FSR upscaling tech either, as long as it's not messing too much with the visuals.
 
Joined
Dec 25, 2020
Messages
5,053 (3.99/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Ocean Plastic Mouse
Keyboard Galax Stealth
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
No, I think there are physical limitations. Around 250 W is the upper limit for sane high-end users, while extreme overclockers can go far beyond that, taking their own responsibility for PC case choice, cooling solutions, noise, electricity bills, etc.

Look at the history: the Rage 128 Pro was an 8 W card.
Today, the Radeon RX 7900 XTX is a whopping 355 W.

Radeon RX 7900 XTX 355W TBP 2022
Radeon RX 6900 XT 300W
Radeon RX 5700 XT 225W
Radeon VII 300W
Radeon RX Vega 64 295W
Radeon RX 580 185W
Radeon RX 480 150W
Radeon Pro Duo 350W
Radeon R9 Fury X 275W
Radeon R9 390X 275W
Radeon R9 295X2 500W
Radeon R9 290X 250W
Radeon HD 7990 375W
Radeon HD 7970 250W
Radeon HD 6990 375W
Radeon HD 6970 250W
Radeon HD 5970 294W
Radeon HD 5870 188W 2010
Radeon HD 4870 X2 286W
Radeon HD 4870 150W
Radeon HD 3870 X2 165W
Radeon HD 3870 106W
Radeon HD 2900 XT 215W
Radeon X1950 XTX 125W
Radeon X850 XT 69W
Radeon X800 XT 54W
Radeon 9800 XT 60W
Radeon 7500 23W
Rage Fury MAXX 13W
Rage 128 Ultra 8W
Rage 128 Pro 8W 1999

This is a good reference, but I can't help but wonder how many tens of thousands of times the performance of the hardware has improved since the Rage 128. Even if you disregard the functionality, that's quite fascinating, really.
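Using just the endpoints of the list above, the long-run TDP growth rate works out as follows (a crude compound rate between the two dated entries; it ignores everything in between):

```python
# Growth rate implied by the endpoints of the TDP list:
# Rage 128 Pro (8 W, 1999) -> RX 7900 XTX (355 W, 2022).
start_w, end_w, years = 8, 355, 2022 - 1999

growth = (end_w / start_w) ** (1 / years) - 1
print(f"{end_w / start_w:.0f}x over {years} years "
      f"= ~{growth:.0%} per year")
```

About 44x in 23 years, i.e. a compound rate near 18% per year, although most of the growth sits in a few big jumps (2007-2010 and post-2020) rather than a steady climb.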
 
A fact (IMO).
Changes will be minor going forward, along with great computational demand for anyone who wants the highest visuals.

Personally, I like having them, and I don't really have any problem with DLSS/FSR upscaling tech either, as long as it's not messing too much with the visuals.
But they do mess with the visuals; every game today is a blurry mess, slow-as-molasses gameplay included. Every single game today that pushes hard on graphics is a high-latency, 30-FPS-optimised POS. You mentioned FC6... it's a good example. The game is slow AF. "Cinematic experience", they call it. Lol... all I see is a race to the bottom of good gameplay.

We have FG... it's not palatable without another tech to heavily reduce the latency hit. Do you need more writing on the wall?

I play a lot of games, new and old, and every time I compare I'm struck by the infinite amount of nonsense in new games, and often dumbfounded by the abysmal performance for what's on screen. This hits even harder if you take a long look at older games: how little they differ, yet they still run 3-4x faster.

Imho, we aren't progressing much anymore.
 
Find the difference.
It's small, but the performance hit is about equally small.
It's an addition nevertheless.

[comparison screenshots]

But they do mess with the visuals; every game today is a blurry mess, slow-as-molasses gameplay included. Every single game today that pushes hard on graphics is a high-latency, 30-FPS-optimised POS. You mentioned FC6... it's a good example. The game is slow AF. "Cinematic experience", they call it. Lol... all I see is a race to the bottom of good gameplay.

We have FG... it's not palatable without another tech to heavily reduce the latency hit. Do you need more writing on the wall?

I play a lot of games, new and old, and every time I compare I'm struck by the infinite amount of nonsense in new games, and often dumbfounded by the abysmal performance for what's on screen. This hits even harder if you take a long look at older games: how little they differ, yet they still run 3-4x faster.

Imho, we aren't progressing much anymore.
FC6 is not very demanding anyway...
Cinematic? You can't have that when your in-game AI behaves like people from a mental institution... lol (especially driving cars).

I've run the CP2077 benchmark (3440x1440) a few times trying different settings, and I find it much more than OK with non-ultra settings, medium RT (non-path) and FSR 2.1 quality.
 