
Amazon's New World is bricking RTX 3090 graphics cards

Joined
Feb 23, 2019
Messages
3,234 (3.18/day)
Location
Poland
Processor Ryzen 7 3700X
Motherboard Gigabyte X570 Aorus Elite
Cooling BeQuiet Dark Rock 4
Memory 2x16 GB Crucial Ballistix 3600 CL16 Rev E
Video Card(s) EVGA 1060 6GB SSC
Storage SX8200 Pro 1 TB, Plextor M6Pro 256 GB, WD Blue 2TB
Display(s) Acer XB273GP
Case SilverStone Primera PM01 RGB
Audio Device(s) SoundBlaster G6 | Fidelio X2 | Sennheiser 6XX
Power Supply SeaSonic Focus Plus Gold 750W
Mouse SteelSeries Rival 300
Keyboard MK Typist (Kailh Box White)
Just wondering how it is that it's the devs' fault?
If devs let their game reach insane FPS values, then it's at least partially their fault. Sure, you might get away with it in 99% of cases, but here we have 3090s from EVGA that went kaboom due to a combination of insane FPS and bad hardware design.
 
Joined
Jul 24, 2009
Messages
953 (0.21/day)
Looks like whatever the "lobby" is in this case is ignoring vsync, and that spike in FPS is burning something out. That sounds like it falls on Amazon more than EVGA/NVIDIA, IMO; something in the engine they're using is ignoring FPS limiters. It's definitely something that should have been easily noticeable in QA, but what game company does real QA anymore? Forget microtransactions; being able to easily push out hotfixes via launchers did more to kill quality.

Also, it sounds like a bug that would affect any card, green or red. 3090s might just be running super hot to begin with and not leave as much wiggle room.
Ah, I saw that some time ago. StarCraft 2 had the same issue: FPS went up to the thousands and worked the card as hard as FurMark. It cost some GPUs too.
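The lobby bug described above comes down to a missing frame cap: with nothing limiting the render loop, a trivial menu scene runs at thousands of FPS and pins the GPU at full load. A frame limiter is, at its simplest, a sleep that pads each frame out to a target time. This Python sketch is purely illustrative (the function and names are mine, not anything from the game's engine):

```python
import time

def run_frames(render, fps_cap=60.0, frames=10):
    """Render a fixed number of frames, sleeping so we never exceed fps_cap.

    Without the sleep, a cheap scene (like a menu) renders as fast as the
    GPU allows, which is exactly the runaway-FPS situation described above.
    """
    target = 1.0 / fps_cap          # minimum wall time per frame
    for _ in range(frames):
        start = time.perf_counter()
        render()                    # stand-in for the real draw call
        elapsed = time.perf_counter() - start
        if elapsed < target:
            time.sleep(target - elapsed)

# A do-nothing "menu" capped at 60 fps takes roughly 0.5 s for 30 frames
run_frames(lambda: None, fps_cap=60.0, frames=30)
```

Skipping that sleep for "cheap" scenes like menus is how games end up running uncapped exactly where the GPU has the least useful work to do.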
 
Joined
Mar 21, 2021
Messages
1,291 (4.97/day)
Location
Colorado, U.S.A.
System Name HP Compaq 8000 Elite CMT
Processor Intel Core 2 Quad Q9550
Motherboard Hewlett-Packard 3647h
Memory 16GB DDR3
Video Card(s) NVIDIA GeForce GT 1030 GDDR5 (fan-less)
Storage 2TB Seagate Firecuda 3.5"
Display(s) Dell P2416D (2560 x 1440)
Power Supply 12V HP proprietary
Software Windows 10 Pro 64-bit
If devs let their game reach insane FPS values, then it's at least partially their fault.

So FurMark is at fault for showing what a card is capable of?
 
Joined
May 8, 2021
Messages
1,216 (5.74/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB 100 watt 1100 MHz core
Storage 512GB WD Blue + 256GB WD Green
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
If devs let their game reach insane FPS values, then it's at least partially their fault. Sure, you might get away with it in 99% of cases, but here we have 3090s from EVGA that went kaboom due to a combination of insane FPS and bad hardware design.
FPS never kills cards. Poor card design does.
 
Joined
Feb 23, 2019
Messages
3,234 (3.18/day)
Location
Poland
So FurMark is at fault for showing what a card is capable of?
There's a reason why FurMark is called a power virus.
FPS never kills cards. Poor card design does.
Oh yeah, so those EVGA cards died just like that; it's not like they died after running a specific game under specific conditions, right? Right?
 
Joined
May 8, 2021
Messages
1,216 (5.74/day)
Location
Lithuania
Oh yeah, so those EVGA cards died just like that; it's not like they died after running a specific game under specific conditions, right? Right?
The game is not at fault if EVGA has hands growing out of their asses and can't engineer proper power delivery for the card. Other vendors had somewhat fewer failures, but here's the thing: Founders Edition cards weren't reported to fail in that game. Coincidence?
 
Joined
Mar 10, 2010
Messages
9,479 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R7 3800X@4.350/525/ Intel 8750H
Motherboard Crosshair hero7 @bios 2703/?
Cooling 360EK extreme rad+ 360$EK slim all push, cpu Monoblock Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 32Gb in four sticks./16Gb
Video Card(s) Sapphire refference Rx vega 64 EK waterblocked/Rtx 2060
Storage Silicon power qlc nvmex3 in raid 0/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd
Display(s) Samsung UAE28"850R 4k freesync.
Case Lianli p0-11 dynamic
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
There's a reason why FurMark is called a power virus.

Oh yeah, so those EVGA cards died just like that; it's not like they died after running a specific game under specific conditions, right? Right?
I don't see it your way. So a game kills cards?

Why isn't it also killing the PSU or the rest of the PC? Because they're made with protection circuitry to prevent overload situations. I genuinely thought all GPUs included some such protection; clearly not.

I blame poor board design. There are reports of cards trying to do 20,000 RPM on their fans to keep heat at bay; it's totally f#@£@#g ridiculous that someone made a BIOS and hardware that allows this possibility.

And as an engineer, I would be ashamed of being any part of this.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
51,886 (8.28/day)
Location
Oystralia
System Name Rainbow Sparkles
Processor Ryzen R7 5800X (PBO tweaked, 4.4-5.05GHz)
Motherboard Asus x570 Gaming-F
Cooling EK Quantum Velocity AM4 + EK Quantum ARGB 3090 w/ active backplate. Dual rad.
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3800 C18 TRFC704 (1.4V, SoC 1.15V Hynix MJR)
Video Card(s) Galax RTX 3090 SG 24GB: Often underclocked to 1500Mhz 0.737v
Storage 2TB WD SN850 NVME + 1TB Sasmsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Gigabyte G32QC (4k80Hz, 1440p 165Hz) + Phillips 328m6fjrmb (4K 60Hz, 1440p 144Hz)
Case Fractal Design R6
Audio Device(s) Logitech G560 |Razer Leviathan | Corsair Void pro RGB |Blue Yeti mic
Power Supply Corsair HX 750i (Platinum, fan off til 300W)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE (custom white and steel keycaps)
VR HMD Oculus Rift S
Software Windows 11 pro x64 (Yes, it's genuinely a good OS)
Benchmark Scores I don't quite know how i managed to get such a top tier PC, I am not rich.
Ah, I saw that some time ago. StarCraft 2 had the same issue: FPS went up to the thousands and worked the card as hard as FurMark. It cost some GPUs too.
I literally just remembered SC2 having this drama and was gonna bring it up. It was overheating laptops in the menus while waiting for MP matches to load, and it got a heap of complaints.

(Then, once in game, GPU load dropped because it was CPU limited, and power consumption would go down.)
 
Joined
Jul 19, 2006
Messages
43,245 (7.70/day)
Processor Ryzen 5900x
Motherboard MSI x570s Carbon Max WiFi
Cooling Deepcool 360 AIO
Memory 32GB G.Skill 3600Mhz CL14
Video Card(s) Zotac RTX 3070 Ti Trinity OC Watercooled
Storage SSD's
Display(s) MSI MAG322CQR
Case Lian Li PC 011 Dynamic
Audio Device(s) Schiit Modius DAC, SMSL SP200 amp, Fosi Tube Pre-Amp.
Power Supply Corsair H1000i
Mouse Logitech Pro Wireless
Keyboard GMMK
Software Windows 10 Enterprise
If devs let their game reach insane FPS values, then it's at least partially their fault. Sure, you might get away with it in 99% of cases, but here we have 3090s from EVGA that went kaboom due to a combination of insane FPS and bad hardware design.
It doesn't reach insane FPS values, though. The issue has nothing to do with FPS limiting.
 
Joined
Jan 11, 2005
Messages
178 (0.03/day)
lol, that does nothing. You don't seem to understand how power limits work. They aren't exe-name-based driver profiles. Heck, they aren't driver profiles at all.


Those are all reasons to care.


EVGA's warranty sucks less in my experience. They do tend to put effort into that image, at least.

Of course, at times it's needed, because their hardware can be hit and miss. Case in point? This thread.


Did you really just say I have no idea how power limits work?
I have a TDP-modded GTX 1070, flashed with a hardware programmer.
My 3090 FE is shunt modded.

Yikes, man. Just yikes.
I know full well how power limits work. Hell, I'm on Elmor's Discord talking about this stuff with the LN2 boys quite a bit.
Maybe I'll educate you on how power limits work.

There is a TDP limit, which is total board power. When board power gets close to this limit (about 20 W or so away, maybe?), the card reduces its clocks gracefully by moving down the V/F curve (the GPU VID, basically) to a lower voltage and the frequency step that corresponds to that voltage. Usually each frequency tier has at most three voltage points linked to it.
If the card still gets too close to the max TDP, it will keep dropping clocks and VID until it gracefully stays below TDP.
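The graceful step-down described above can be sketched as a loop over the V/F curve. This is a rough illustration only: the curve points, limits, and the simple V²·f power model are my own assumptions, not real 3090 firmware behavior or values.

```python
# Sketch of stepwise power-limit throttling: as board power approaches the
# TDP limit, step down the V/F curve to a lower voltage and its matching
# clock until power stays safely under TDP. All numbers are illustrative.
VF_CURVE = [  # (voltage in V, core clock in MHz), highest point first
    (1.081, 1950), (1.000, 1830), (0.900, 1700),
    (0.800, 1550), (0.725, 1395),
]
TDP_LIMIT = 400.0   # total board power limit, watts
GUARD_BAND = 20.0   # begin throttling this far below the limit

def throttle(board_power):
    """Step down the V/F curve until board power clears the guard band."""
    step = 0
    while board_power > TDP_LIMIT - GUARD_BAND and step < len(VF_CURVE) - 1:
        prev_v, prev_f = VF_CURVE[step]
        step += 1
        volts, clock = VF_CURVE[step]
        # crude dynamic-power model: power scales with V^2 * f
        board_power *= (volts ** 2 * clock) / (prev_v ** 2 * prev_f)
    return VF_CURVE[step], board_power
```

With these made-up numbers, a 450 W demand steps down one V/F point to get under the guard band, while a card already below the limit stays at the top of the curve.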

TDP is the sum of the 8-pin power limit values in the vBIOS and the PCIe slot power limit values. However, the 8-pin limit values can be exceeded, as these values don't actually limit the 8-pins themselves.
There are also sub power rails. Shunt resistors are in place to measure each of the sub power rails.
The sub rails are not directly linked to TDP, but the TDP slider can affect some values in undocumented ways. The sub rails are GPU chip power, MVDDC (memory) power, and SRC (power plane chip) power.
Each of these rails has its own power limit, with a 'default' and a 'max' value. The threshold that triggers a power limit can't go below the default value, but the max value is normalized with respect to the TDP slider itself if the slider is past 100%.

It's also worth noting that the power rails are sometimes *sums* of auxiliary rails, which are controlled by the SRC chip and regulated by other shunts. If a shunt reports values that are out of whack, this can cause a different rail to report a far too high power value, or even, in some cases, 0 watts (massive under-reporting) compensated by massive over-reporting on another rail (usually because of improper shunt mods). For example, GPU chip power on 2x8-pin cards is a *sum* of Misc0 input power, Misc2 input power, and NVVDD1 input power. Yes, NVVDD is itself a sum of other rails; it does not seem to be exposed in HWiNFO64, much like the main NVVDD and MSVDD power rails (linked to the internal, not VID, MSVDD and NVVDD voltages) are not exposed.

The SRC power rail limit is what controls the max power draw of the individual 8-pins. While the SRC chip has its own master power limit, it is broken up into SRC1 and SRC2 (or SRC3 for 3x8-pin cards), which each have their own power limit and control what each 8-pin can draw. This is usually 150 W default and 175 W maximum on most cards without an XOC BIOS.

TDP Normalized % is the single highest power rail (not TDP % itself), reported as its current value versus its maximum allowed value, relative to all other rails.
For example, if your default memory power limit were 100 W, your max MVDDC power limit 125 W, and your memory drawing 150 W, that would give a Normalized TDP% of 150% **if and only if no other power rail or sub power rail, including the AUX rails, exceeded 150% of its max value**, and would trigger a power limit throttle via TDP Normalized even if total board power were far below its TDP limit. How far a normalized power limit can exceed 100% without setting a throttle flag depends on the vBIOS limits and how far right the TDP slider can go.
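The worked example above reduces to a one-liner: TDP Normalized % is the worst per-rail ratio of draw to limit. Following the post's arithmetic (the 150 W draw measured against the 100 W default memory limit), and with rail names and wattages that are purely illustrative rather than from any real vBIOS:

```python
def tdp_normalized_pct(draws, limits):
    """Return the single worst rail's draw as a percentage of its limit."""
    return max(draws[rail] / limits[rail] * 100.0 for rail in draws)

# Illustrative limits; MVDDC uses the 100 W default limit from the example.
limits = {"GPU Chip Power": 220.0, "MVDDC": 100.0, "SRC": 90.0}
draws  = {"GPU Chip Power": 180.0, "MVDDC": 150.0, "SRC": 60.0}
print(tdp_normalized_pct(draws, limits))  # → 150.0, even though total board
                                          #   power is well under a 400 W TDP
```

The point the post makes falls straight out: one over-limit rail can report 150% normalized and trigger a throttle while the summed board power is nowhere near TDP.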

XOC BIOSes often come with massively increased rail limits.

NVVDD, MSVDD, and PLL have their own internal power limits that also report to TDP Normalized but are not exposed in HWiNFO64. MSVDD drawing more power than is allowed for the current MSVDD voltage causes "effective" core clocks to drop slowly relative to requested clocks, without triggering an actual power limit. NVVDD drawing more power than is allowed for the current NVVDD voltage causes an instant power limit throttle, without effective clocks dropping first.

The Asus Strix has higher internal MSVDD and NVVDD power limits, similar to a Kingpin card running with the MSVDD and NVVDD dip switches set to "on".

So yes, I know quite a bit about power limits.

How long have you been around?
Go check the 10-15 year old Rage3D archives (I didn't read the Nvidia forums back then; after I switched from a Ti 4600, I was on AMD for years).

There was plenty of discussion about FurMark back in the day, back when FurMark destroyed cards with any sort of substandard VRM cooling or out-of-spec amp limits on the phases. AMD (ATI) and Nvidia started adding app detection to limit the power draw and massively throttle the GPU core clocks *IN THE DRIVERS*. That was back when you could do things like set a "prerender limit" (flip queue size) of 0 in the registry, rather than a value of "0" turning into the default like it does now.

People were able to find this out by renaming the FurMark exe to Quake3.exe to restore the original power draw and clocks (on cards beefy enough, with good enough cooling, to handle it).

This was in the Windows XP days, so somewhere around 15 years ago. Back then, "app detection" was done by checking the name of the executable file. People constantly renamed exes to get huge performance boosts or to remove graphical glitches in games.
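The renaming trick worked because that era's app detection keyed purely off the executable's file name. In Python terms it was roughly the following; the blocklist contents and profile names are my own illustration, not an actual driver's:

```python
# Sketch of exe-name-based app detection as described above: the driver
# matched the running executable's name against a list of known stress
# tests and applied a throttled profile on a hit.
THROTTLED_EXES = {"furmark.exe"}

def profile_for(exe_name):
    """Pick a power/clock profile purely from the executable's file name."""
    if exe_name.lower() in THROTTLED_EXES:
        return "throttled"   # clamp power draw and core clocks
    return "default"

print(profile_for("FurMark.exe"))  # → throttled
print(profile_for("Quake3.exe"))   # → default: renaming restores full power
```

Since the check never looked at what the program actually did, renaming the binary to anything not on the list restored full power draw and clocks.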

I had absolutely NO idea whether that still worked on Windows 10 or not. Clearly it doesn't. I was wrong to assume stuff that worked in XP would still work, considering how locked down cards are these days. Thank you for checking.

BTW, just to let you know: FurMark at a 400 W TDP throttles my 3090 to *BELOW* its base clock, at 0.725 V VID. The GPU runs at 1,185-1,200 MHz at 400 W.
That's low-level throttling; it's below the actual base clock (1,395 MHz), never mind boost clocks. Normal power limit throttling will never disable boost clocks like that.

At a 450 W TDP, I got 1,550 MHz core clocks (clock offset was +150 MHz in both cases).
 