
Possibly bricked RX 5700?

Joined
Mar 21, 2021
Messages
4,329 (3.92/day)
Location
Colorado, U.S.A.
System Name HP Compaq 8000 Elite CMT
Processor Intel Core 2 Quad Q9550
Motherboard Hewlett-Packard 3647h
Memory 16GB DDR3
Video Card(s) Asus NVIDIA GeForce GT 1030 2GB GDDR5 (fan-less)
Storage 2TB Micron SATA SSD; 2TB Seagate Firecuda 3.5" HDD
Display(s) Dell P2416D (2560 x 1440)
Power Supply 12V HP proprietary
Software Windows 10 Pro 64-bit
You might also want to let it cool down in between attempts; thermal expansion might be holding a solder crack closed.
 
Joined
Jan 10, 2022
Messages
62 (0.08/day)
Location
Noord-Holland, Netherlands
Processor Intel i7 11700F
Motherboard Asrock B560 Pro4
Cooling Noctua NH-D14
Memory Corsair Vengeance LPX 2x8GB
Video Card(s) EVGA RTX 3060 Ti XC
Storage Samsung 980 Pro 500 GB NVMe
Display(s) BenQ XL2411P
Case MSI MAG Forge 100R
Power Supply XFX Pro Series 750W
You might also want to let it cool down in between attempts; thermal expansion might be holding a solder crack closed.
This is something I theorized: what brought the card back to life is that, for some reason (e.g. security features missing due to age), my wife's motherboard accepts the card and lets it draw power, and it starts heating up. It sounds a bit crazy, but otherwise I have a hard time explaining the 1:30h mining session that's up and running right now.

Edit: just as I wrote that it crashed. If I manage to make it run longer I'll update :p

Also, the temps problem is a closed topic: whoever applied the thermal pads just did a poor job. The mems don't even reach 80C now.
 
Last edited:
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Your temps are about what mine are.

I'm running most cards at 1266MHz core, 875mv, and 1860MHz on the memory with tREF at around 7250.

They run about 58.5MH/s each, and if a card can't survive until the next bi-monthly update/restart/dust-clean cycle, it gets its memory clock dropped by 20MHz. Even at the stock 1750MHz memory clock you can get 57MH/s out of them, and continuous uptime is more valuable than something 3% faster that crashes all the time.
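
If you want to automate that rule, it's only a few lines; a sketch below, where nothing is a real API - apply the clock however your mining OS actually exposes it:

```python
# Sketch of the "drop the memory clock 20MHz after an unstable cycle" rule above.
# How you apply the clock is up to your mining OS; nothing here is a real API.
STOCK_MEM_MHZ = 1750   # stock memory clock - still good for ~57MH/s
STEP_MHZ = 20          # backoff per unstable maintenance cycle

def next_mem_clock(current_mhz: int, crashed_last_cycle: bool) -> int:
    """Memory clock for the next cycle: back off if unstable, never below stock."""
    if crashed_last_cycle:
        return max(current_mhz - STEP_MHZ, STOCK_MEM_MHZ)
    return current_mhz  # stable - leave it alone

# Example: starting at 1860MHz with two unstable cycles, then a stable one.
clk = 1860
for crashed in (True, True, False):
    clk = next_mem_clock(clk, crashed)
    print(clk)   # 1840, 1820, 1820
```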
 
Joined
Jan 10, 2022
Messages
62 (0.08/day)
Location
Noord-Holland, Netherlands
Processor Intel i7 11700F
Motherboard Asrock B560 Pro4
Cooling Noctua NH-D14
Memory Corsair Vengeance LPX 2x8GB
Video Card(s) EVGA RTX 3060 Ti XC
Storage Samsung 980 Pro 500 GB NVMe
Display(s) BenQ XL2411P
Case MSI MAG Forge 100R
Power Supply XFX Pro Series 750W
Your temps are about what mine are.

I'm running most cards at 1266MHz core, 875mv, and 1860MHz on the memory with tREF at around 7250.

They run about 58.5MH/s each, and if a card can't survive until the next bi-monthly update/restart/dust-clean cycle, it gets its memory clock dropped by 20MHz. Even at the stock 1750MHz memory clock you can get 57MH/s out of them, and continuous uptime is more valuable than something 3% faster that crashes all the time.
Yeah, happy with the temps now. On HiveOS the GPU doesn't crash the whole system; the drivers crash with a "no temps" error, but 30 seconds later they go back to normal and the watchdog does its job without even needing a restart.

Ran close to 7h so far... it's something. Do you want to share a modded-timings BIOS of those reference cards (mine is Micron) so I can compare and see what I missed? At 1750 I should indeed be getting a bit more hashrate.
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
40,435 (6.61/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
Yeah, happy with the temps now. On HiveOS the GPU doesn't crash the whole system; the drivers crash with a "no temps" error, but 30 seconds later they go back to normal and the watchdog does its job without even needing a restart.

Ran close to 7h so far... it's something. Do you want to share a modded-timings BIOS of those reference cards (mine is Micron) so I can compare and see what I missed? At 1750 I should indeed be getting a bit more hashrate.


GeForce GTX 1000+ and Radeon RX 5000+ series do not allow BIOS modifications, only crossflashing.
 
Joined
Jan 10, 2022
Messages
62 (0.08/day)
Location
Noord-Holland, Netherlands
Processor Intel i7 11700F
Motherboard Asrock B560 Pro4
Cooling Noctua NH-D14
Memory Corsair Vengeance LPX 2x8GB
Video Card(s) EVGA RTX 3060 Ti XC
Storage Samsung 980 Pro 500 GB NVMe
Display(s) BenQ XL2411P
Case MSI MAG Forge 100R
Power Supply XFX Pro Series 750W
GeForce GTX 1000+ and Radeon RX 5000+ series do not allow BIOS modifications, only crossflashing.

You can edit a saved ROM (dumped from the original card BIOS) and reflash it, which is what I did yesterday, but these cards need a little extra: editing the tREF timing specifically.

I won't OC the mems @Chrispy_, not in the state they're in. Unless all they needed was some heat and love :p The card has been up for 16h. I'll let it cool down to room temp and see if it still works afterwards, to test @Andy Shiekh's theory that thermal expansion is what's keeping the card alive. It seems possible: the drivers crashed once after 3 hours, recovered without a reboot, and haven't crashed since.

Update, just noticed this:
20220116_140556.jpg

Next to the empty screw hole where the tension plate should be there is some visible damage; it looks like a trace is cut. I hope it's visible in the image.

Could this be the cause all along? Is it repairable?
 
Last edited:

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
40,435 (6.61/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
You can edit a saved ROM (dumped from the original card BIOS) and reflash it, which is what I did yesterday, but these cards need a little extra: editing the tREF timing specifically.

I won't OC the mems @Chrispy_, not in the state they're in. Unless all they needed was some heat and love :p The card has been up for 16h. I'll let it cool down to room temp and see if it still works afterwards, to test @Andy Shiekh's theory that thermal expansion is what's keeping the card alive. It seems possible: the drivers crashed once after 3 hours, recovered without a reboot, and haven't crashed since.

Update, just noticed this:
View attachment 232596
Next to the empty screw hole where the tension plate should be there is some visible damage; it looks like a trace is cut. I hope it's visible in the image.

Could this be the cause all along? Is it repairable?
Circle all the areas where you think there might be board damage.
 
Joined
Jan 10, 2022
Messages
62 (0.08/day)
Location
Noord-Holland, Netherlands
Processor Intel i7 11700F
Motherboard Asrock B560 Pro4
Cooling Noctua NH-D14
Memory Corsair Vengeance LPX 2x8GB
Video Card(s) EVGA RTX 3060 Ti XC
Storage Samsung 980 Pro 500 GB NVMe
Display(s) BenQ XL2411P
Case MSI MAG Forge 100R
Power Supply XFX Pro Series 750W
Circle all the areas where you think there might be board damage.
Attaching 2 slightly better (still shit, I'm sorry, I shake a little..) pics with different lighting, with the area circled.

You can see exposed metal, copper or rust colored.

PS: it's back to full dead mode


Edit: @eidairaman1 I found some unconventional ways to get you a (crappy) close-up shot of the problem area. @Andy Shiekh, want to give your opinion? I'm really not experienced with electronics, but the trace looks cut on the right side there.
20220116_180644.jpg
 

Attachments

  • 20220116_170910.jpg
    20220116_170910.jpg
    3.5 MB · Views: 63
  • 20220116_170836.jpg
    20220116_170836.jpg
    2.6 MB · Views: 62
Last edited:
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Ran close to 7h so far... it's something. Do you want to share a modded-timings BIOS of those reference cards (mine is Micron) so I can compare and see what I missed? At 1750 I should indeed be getting a bit more hashrate.
My BIOS edits are very minimal, following that Red Fox Mining video I linked earlier.

I take the memory strap for 1550MHz and copy and paste it for all higher clocks.

Since that video was made, the community has found better values for tREF - he just uses 5945 for all entries, but you actually want to use these for optimum mining:

Memory Clock - tREF
1000 MHz - 3900 (Samsung & Micron)
1250 MHz - 4875 (Samsung & Micron)
1550 MHz - 6045 (Samsung & Micron)
1750 MHz - 6825 (Micron only)
1800 MHz - 7020 (Samsung & Micron)
1875 MHz - 7315 (Micron only)
2000 MHz - 7800 (Samsung & Micron)
2250 MHz - 8775 (Samsung only)

If you follow his video exactly you'll end up with about 57MH/s at 1860MHz on the VRAM.
If you use the table of tREF values above you'll end up with a little over 58MH/s. For what it's worth, the cards are less stable at higher clocks with the tREF timings in the table above; for at least a few cards I have to drop clockspeeds, but the hashrate per Watt is still better with the looser tREF values and lower clocks.
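
Incidentally, those tREF values are just ~3.9x the memory clock in MHz, so you can sanity-check or extend the table with a couple of lines of Python:

```python
# The community tREF table above is ~3.9x the memory clock in MHz. Only the
# 1875MHz entry deviates slightly (3.9 * 1875 = 7312.5 vs 7315 in the table).
def tref_for(mem_clock_mhz: float) -> int:
    return round(3.9 * mem_clock_mhz)

for clk in (1000, 1250, 1550, 1750, 1800, 2000, 2250):
    print(clk, tref_for(clk))   # matches the table: 3900, 4875, 6045, 6825, 7020, 7800, 8775
```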

Anyway, is your GPU completely dead again? It certainly sounds like physical damage that is sensitive to mechanical flex, whether that's mounting it vertically and heating it up, or repadding it, which removes pressure from the cooler and then applies it again on reassembly.
 
Joined
Oct 22, 2014
Messages
13,210 (3.83/day)
Location
Sunshine Coast
System Name Black Box
Processor Intel Xeon E3-1260L v5
Motherboard MSI E3 KRAIT Gaming v5
Cooling Tt tower + 120mm Tt fan
Memory G.Skill 16GB 3600 C18
Video Card(s) Asus GTX 970 Mini
Storage Kingston A2000 512Gb NVME
Display(s) AOC 24" Freesync 1m.s. 75Hz
Case Corsair 450D High Air Flow.
Audio Device(s) No need.
Power Supply FSP Aurum 650W
Mouse Yes
Keyboard Of course
Software W10 Pro 64 bit
You might be able to fix the broken trace with something like this:
Permatex.jpg

Rear window demister repair kit.
 
Joined
Mar 21, 2021
Messages
4,329 (3.92/day)
Location
Colorado, U.S.A.
System Name HP Compaq 8000 Elite CMT
Processor Intel Core 2 Quad Q9550
Motherboard Hewlett-Packard 3647h
Memory 16GB DDR3
Video Card(s) Asus NVIDIA GeForce GT 1030 2GB GDDR5 (fan-less)
Storage 2TB Micron SATA SSD; 2TB Seagate Firecuda 3.5" HDD
Display(s) Dell P2416D (2560 x 1440)
Power Supply 12V HP proprietary
Software Windows 10 Pro 64-bit
You would need to scratch off some of the conformal coating first
 
Joined
Jan 10, 2022
Messages
62 (0.08/day)
Location
Noord-Holland, Netherlands
Processor Intel i7 11700F
Motherboard Asrock B560 Pro4
Cooling Noctua NH-D14
Memory Corsair Vengeance LPX 2x8GB
Video Card(s) EVGA RTX 3060 Ti XC
Storage Samsung 980 Pro 500 GB NVMe
Display(s) BenQ XL2411P
Case MSI MAG Forge 100R
Power Supply XFX Pro Series 750W
My BIOS edits are very minimal, following that Red Fox Mining video I linked earlier.

I take the memory strap for 1550MHz and copy and paste it for all higher clocks.

Since that video was made, the community has found better values for tREF - he just uses 5945 for all entries, but you actually want to use these for optimum mining:

Memory Clock - tREF
1000 MHz - 3900 (Samsung & Micron)
1250 MHz - 4875 (Samsung & Micron)
1550 MHz - 6045 (Samsung & Micron)
1750 MHz - 6825 (Micron only)
1800 MHz - 7020 (Samsung & Micron)
1875 MHz - 7315 (Micron only)
2000 MHz - 7800 (Samsung & Micron)
2250 MHz - 8775 (Samsung only)

If you follow his video exactly you'll end up with about 57MH/s at 1860MHz on the VRAM.
If you use the table of tREF values above you'll end up with a little over 58MH/s. For what it's worth, the cards are less stable at higher clocks with the tREF timings in the table above; for at least a few cards I have to drop clockspeeds, but the hashrate per Watt is still better with the looser tREF values and lower clocks.

Anyway, is your GPU completely dead again? It certainly sounds like physical damage that is sensitive to mechanical flex, whether that's mounting it vertically and heating it up, or repadding it, which removes pressure from the cooler and then applies it again on reassembly.
Yeah, got it. The card came back to life... I think it is indeed mechanical flex that brings it back to life and kills it again. I can't confirm yet, I need to do more tests later after work, but it seems that if the backplate screws are tight the card won't POST, which would certainly fit. It would actually explain a lot: the tension plate that should be on the back is missing, and the springed screws are just normal screws. Perhaps the guy had too much pressure on the GPU, together with 100+ hotspot temps - enough to create some solder cracks in the long run (i.e. 3 months in those conditions)?

Another reflow and then proper heatsink assembly?

You might be able to fix the broken trace with something like this:
View attachment 232753
Rear window demister repair kit.

That would be true MacGyver style. I don't mind paying to have a professional solder it back; it was more about finding out whether this trace is even something that would (partially) kill the GPU, to see if a repair is worth attempting.

You would need to scratch off some of the conformal coating first
Clear. Just to test if it does anything, is it safe to bridge it with copper wire and non-conductive tape? The backplate wouldn't heat up to more than 30C; it's just to test POST consistency with and without that trace bridged, no benchmarking or anything that would produce real heat.

My BIOS edits are very minimal, following that Red Fox Mining video I linked earlier.

I take the memory strap for 1550MHz and copy and paste it for all higher clocks.

Since that video was made, the community has found better values for tREF - he just uses 5945 for all entries, but you actually want to use these for optimum mining:

Memory Clock - tREF
1000 MHz - 3900 (Samsung & Micron)
1250 MHz - 4875 (Samsung & Micron)
1550 MHz - 6045 (Samsung & Micron)
1750 MHz - 6825 (Micron only)
1800 MHz - 7020 (Samsung & Micron)
1875 MHz - 7315 (Micron only)
2000 MHz - 7800 (Samsung & Micron)
2250 MHz - 8775 (Samsung only)

If you follow his video exactly you'll end up with about 57MH/s at 1860MHz on the VRAM.
If you use the table of tREF values above you'll end up with a little over 58MH/s. For what it's worth, the cards are less stable at higher clocks with the tREF timings in the table above; for at least a few cards I have to drop clockspeeds, but the hashrate per Watt is still better with the looser tREF values and lower clocks.

Anyway, is your GPU completely dead again? It certainly sounds like physical damage that is sensitive to mechanical flex, whether that's mounting it vertically and heating it up, or repadding it, which removes pressure from the cooler and then applies it again on reassembly.
Are any of your cards Micron? With copied timings + tREF adjustments using your numbers, I can't seem to get more than 53-54MH/s @ 1800 mem clock. Also, not sure if this is normal with AMD cards, but to get the mem clock to 1800 I need to set it to 1804 in Afterburner. The core clock is also offset by 20MHz or so; it's set to 1266 but runs at 1234-1250. Did you experience the same with your cards? Just to check if this is a symptom too :laugh:
Nonetheless, it's still one of the most efficient cards out there at 54MH/s @ 116-118W. If this card can't game but can come back to life every other day and mine for 16h straight, it might not be as useless as it seemed :)
1642423400079.png
 
Last edited:
Joined
Jan 10, 2022
Messages
62 (0.08/day)
Location
Noord-Holland, Netherlands
Processor Intel i7 11700F
Motherboard Asrock B560 Pro4
Cooling Noctua NH-D14
Memory Corsair Vengeance LPX 2x8GB
Video Card(s) EVGA RTX 3060 Ti XC
Storage Samsung 980 Pro 500 GB NVMe
Display(s) BenQ XL2411P
Case MSI MAG Forge 100R
Power Supply XFX Pro Series 750W
UPDATE: Ran through the night again with no issues. This time not even driver crashes, so it actually looks stable. Close to 16h now; another hour or so and it will beat its own record :laugh:


Now, if the issue is indeed mechanical flex and not the broken trace on the back, it's probably unwise to test it out :) If there's a solder crack that expands when the heatsink is screwed down tight, wouldn't I be risking cracking it open completely?

What are my options here? Try to make it last as long as possible in its current state? Or try a second reflow (since the first one seems to have improved the situation but not fixed it completely), assuming the first one only partially melted the solder?

PS: worth noting that with the heatsink not fully tight the contact area seems to be more or less the same; if the temps went up after "loosening" it, it was by 1-2 degrees at most across all sensors, nothing noticeable.
 
Joined
Jun 11, 2020
Messages
560 (0.40/day)
Location
Florida
Processor 5800x3d
Motherboard MSI Tomahawk x570
Cooling Thermalright
Memory 32 gb 3200mhz E die
Video Card(s) 3080
Storage 2tb nvme
Display(s) 165hz 1440p
Case Fractal Define R5
Power Supply Toughpower 850 platium
Mouse HyperX Hyperfire Pulse
Keyboard EVGA Z15
@xorstl the XT models have that funny-looking dent in the air cooler shroud as a dead giveaway; on standard 5700 models like yours, the cooler looks normal instead. So you really could have identified what you were purchasing by that alone... but the story is getting weirder by the minute with the "water block but didn't have a custom loop" comments (why would the seller have a water block on it if they didn't have a custom loop to put it to use in?! Did they buy it just to resell it?).

Yup, this right here is the dead giveaway that it's a 5700, not an XT.

No one deserves to get ripped off, but with secondhand deals you have to do more research.

My BIOS edits are very minimal, following that Red Fox Mining video I linked earlier.

I take the memory strap for 1550MHz and copy and paste it for all higher clocks.

Since that video was made, the community has found better values for tREF - he just uses 5945 for all entries, but you actually want to use these for optimum mining:

Memory Clock - tREF
1000 MHz - 3900 (Samsung & Micron)
1250 MHz - 4875 (Samsung & Micron)
1550 MHz - 6045 (Samsung & Micron)
1750 MHz - 6825 (Micron only)
1800 MHz - 7020 (Samsung & Micron)
1875 MHz - 7315 (Micron only)
2000 MHz - 7800 (Samsung & Micron)
2250 MHz - 8775 (Samsung only)

If you follow his video exactly you'll end up with about 57MH/s at 1860MHz on the VRAM.
If you use the table of tREF values above you'll end up with a little over 58MH/s. For what it's worth, the cards are less stable at higher clocks with the tREF timings in the table above; for at least a few cards I have to drop clockspeeds, but the hashrate per Watt is still better with the looser tREF values and lower clocks.

Anyway, is your GPU completely dead again? It certainly sounds like physical damage that is sensitive to mechanical flex, whether that's mounting it vertically and heating it up, or repadding it, which removes pressure from the cooler and then applies it again on reassembly.

Thank you, this is great info. Just wondering, do you still game with those modded timings?

I've thought about modding my 5700's memory timings, but I game on it as well as mine, so I don't want to cause instability when gaming.
 
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
I think all of mine use GDDR6 from Micron.

Also yes, the core clocks tend to run ~25MHz slower than set by the driver, and the memory is about 2-4MHz out. These are my cores at "1266", and I set the memory to 1864 or 1784 to get the actual clocks I want.

1642513350030.png


Thank you, this is great info. Just wondering, do you still game with those modded timings?

I've thought about modding my 5700's memory timings, but I game on it as well as mine, so I don't want to cause instability when gaming.

No, these are dedicated ETH mining rigs running PCIe 1.0 x1 to each card, with the display handled by the CPU's IGP. I guess I could game on the IGP, since that's not being used for mining and the CPU is basically idle! ;)

1642515888684.png
1642516115461.png


I wouldn't be surprised if a BIOS-modded card with double or triple the tREF value just straight-up crashes out of games constantly. People who have bought used ex-mining cards often complain of crashes, texture corruption and artifacts in games. That's not specific to the 5700 series, just ex-mining cards in general. I presume all mining BIOSes increase tREF, which is presumably set very short by default for stability in mixed workloads (rather than the 100% 24/7 VRAM workout that ETH mining is).

Also, you can't mine and game at the same time, and you'd have to constantly jump between mining clocks and normal gaming clocks. If you want to game, that's fine - a 5700/5700XT will mine at 50-51MH/s just fine with a stock BIOS. You're leaving 10% of the mining performance on the table but it's better than nothing!
 
Last edited:
Joined
Jun 11, 2020
Messages
560 (0.40/day)
Location
Florida
Processor 5800x3d
Motherboard MSI Tomahawk x570
Cooling Thermalright
Memory 32 gb 3200mhz E die
Video Card(s) 3080
Storage 2tb nvme
Display(s) 165hz 1440p
Case Fractal Define R5
Power Supply Toughpower 850 platium
Mouse HyperX Hyperfire Pulse
Keyboard EVGA Z15
I wouldn't be surprised if a BIOS-modded card with double or triple the tREF value just straight-up crashes out of games constantly. People who have bought used ex-mining cards often complain of crashes, texture corruption and artifacts in games. That's not specific to the 5700 series, just ex-mining cards in general. I presume all mining BIOSes increase tREF, which is presumably set very short by default for stability in mixed workloads (rather than the 100% 24/7 VRAM workout that ETH mining is).

Also, you can't mine and game at the same time, and you'd have to constantly jump between mining clocks and normal gaming clocks. If you want to game, that's fine - a 5700/5700XT will mine at 50-51MH/s just fine with a stock BIOS. You're leaving 10% of the mining performance on the table but it's better than nothing!

I can get up to 51 with an overclock/undervolt. Usually I just end up running it undervolted at 48MH/s to lower temps and power, since my 5700 has a bad cooler.
 
Last edited:
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
I can get up to 51 with an overclock/undervolt. Usually I just end up running it undervolted at 48MH/s to lower temps and power, since my 5700 has a bad cooler.
Ah, you're mining on Windows, of course.

That'll slow you down a bit too; hashrates are usually quoted for Linux, which is just faster for whatever reason. I guess you could dual-boot into HiveOS, or even run it off a memory stick if you fancy dabbling with a dedicated Linux mining distro.
 
Joined
Jan 10, 2022
Messages
62 (0.08/day)
Location
Noord-Holland, Netherlands
Processor Intel i7 11700F
Motherboard Asrock B560 Pro4
Cooling Noctua NH-D14
Memory Corsair Vengeance LPX 2x8GB
Video Card(s) EVGA RTX 3060 Ti XC
Storage Samsung 980 Pro 500 GB NVMe
Display(s) BenQ XL2411P
Case MSI MAG Forge 100R
Power Supply XFX Pro Series 750W
No one deserves to get ripped off, but with secondhand deals you have to do more research.
There's only so much you can test when buying second hand; people have lives and can't dedicate hours to selling something. I was able to run the card, reboot the system and see it POST. The XT part was not part of the scam (maybe it was intended to be); I figured that out quickly and renegotiated the price. It's just backstory to give enough context, since the guy appeared not to be aware it was a non-XT.


Also yes, the core clocks tend to run ~25MHz slower than set by the driver, and the memory is about 2-4MHz out. These are my cores at "1266", and I set the memory to 1864 or 1784 to get the actual clocks I want.
That's great news; then the card has been acting exactly as it should for the past 18:30h. Yep, new stability record broken. Now let's see if it can handle a cool-off reboot and still POST; I'll try after work - for now, leave it mining! It has already paid back 6 euro of itself, only 664 to go, so it wasn't that useless and dead after all :D I will still test it gaming too; didn't have the time yet, but next weekend for sure.

I wouldn't be surprised if a BIOS-modded card with double or triple the tREF value just straight-up crashes out of games constantly. People who have bought used ex-mining cards often complain of crashes, texture corruption and artifacts in games. That's not specific to the 5700 series, just ex-mining cards in general. I presume all mining BIOSes increase tREF, which is presumably set very short by default for stability in mixed workloads (rather than the 100% 24/7 VRAM workout that ETH mining is).
That might not be entirely the case. The looser, old-school way for Polaris cards wouldn't increase the tREF at higher clocks; simply copying the exact timing string from 1500 and pasting it into the higher clocks' timings was enough (maybe the tREF can be further increased, but back then no one had figured that out at least). Isn't the timing string just a hash or hex encoding of the full timings table? If so, then modding the 500 series would actually decrease the tREF, since it seems to always be higher in the higher clocks' timings; if you were copying the 1500 strap for all the others, you'd be using the tREF from 1500, which is certainly lower than the one for 1750 and so on. I recently sold my old RX 570s that mined back in 2018, since they were in such good shape. The same hardware shop that reflowed my GPU assembles gaming PCs, so I gave them the cards for a low price with the guarantee they will only be used in gaming PCs.. trying to give back, since I was part of the beginnings of this shortage back then :laugh:


I can get up to 51 with an overclock/undervolt. Usually I just end up running it undervolted at 48MH/s to lower temps and power, since my 5700 has a bad cooler.
Sounds about right; the same as I was getting before I modded the BIOS. Just be careful and save your original BIOS with ATIFlash; don't use GPU-Z, as it will give you a 512KB version that will result in your card not POSTing. If that happens, feel free to message me if you don't find enough info in this thread.. I was always able to flash the BIOS, even with the card seemingly dead and not POSTing, by running the IGP as primary and force-flashing with the 2.93+ CLI (the plus is crucial here, the normal version won't work).
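
Incidentally, you can catch that truncated-dump trap before it bites with a quick size check on the saved file; a full dump is bigger than 512KB, so anything at or under that size should never be flashed (filename here is hypothetical):

```python
# Sanity-check a BIOS dump before flashing: the 512KB file GPU-Z produces here
# is truncated and will leave the card unable to POST. Filename is hypothetical.
import os

rom = "rx5700_original.rom"
size = os.path.getsize(rom)
if size <= 512 * 1024:
    raise SystemExit(f"{rom} is only {size} bytes - truncated dump, do NOT flash this!")
print(f"{rom}: {size} bytes - looks like a full dump")
```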

Also, increasing the timings and/or specifically the tREF will not increase temperatures. Temps only go up if voltage goes up, and the 5700 mems already run at max voltage (I don't recommend undervolting the mems on a non-XT); faster timings might cause more instability, but not more heat. The core can be underclocked and undervolted for ETH with or without a modded BIOS - again, you're not touching anything core-related in the BIOS. The stock BIOS already allows pretty low voltages, lower than the card can even run stable, so you're well within the limits with stock core settings. Mine is running stable so far at 850mV on the core; @Chrispy_ seems to run his at 875, which is still pretty low - stock clock voltage can go up to 1.25V or so!
If you are comfortable with it and haven't done it yet, I recommend replacing the thermal pads. Doing it correctly (the right pad layout is not easy to find for the 5700, but it's out there) decreased my mem temps from 100-105 (possibly more, but I'd stop it at 104) to 80-82. Here's a pic to demonstrate that:
1642520775210.png

1642520791619.png


If it didn't pass 80 for 18h, it won't pass it now. Of course these are winter temps; my house is permanently at 21.5 Celsius. We don't have hot summers here, but I doubt it will hold the 80 if room temps go past 25-26C; it will probably go up to the 90s and stay there even with proper airflow. I could run the fan faster and achieve a lower core temp (65 is core, 80 is mems), but this fan makes a ridiculous noise already at 76% and is unbearable at 90%+. It's also so bloody fast (almost 5k RPM) that I didn't want the card vibrating that much until I identified where the issue is (still unsure if I have, but it seems so).

Ah, you're mining on Windows, of course.

That'll slow you down a bit too; hashrates are usually quoted for Linux, which is just faster for whatever reason. I guess you could dual-boot into HiveOS, or even run it off a memory stick if you fancy dabbling with a dedicated Linux mining distro.
Yeah, this is a long-running debate.. Linux seems more stable (and I can confirm with this card it IS), but all benchmarks show higher MH/s on Windows. The question is whether those benchmarks ensured stability or just ran a 30min/1h test. I've had cards before that looked stable with certain configs but would crash 10h later (on Windows), and we all know that's not stable mining :) So far, from my previous experience in 2018 and the past few nightmarish days with the 5700: Linux might show 0.5-1.5MH/s less in benchmarks, but you will be able to mine N days straight until whatever automated cool-off period you prefer (my 570s used to run 5 days, shut down and cool off for 30 sec, and boot again).

Edit: I also did some tests a while ago on my 3060 Ti LHR; it was waaay more stable in Linux too. It could be the specific driver versions HiveOS uses; I didn't have the patience to try all of that in Windows, as this is my gaming card and I was just curious how the latest LHR unlock was performing when T-Rex launched their latest improvement to it. The card wouldn't even mine 1h on Windows; with the exact same settings on Linux I was seeing fewer LHR locks triggered while achieving the same or more hashrate and the same power consumption.
 
Last edited:
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Just FYI, there's a guy who is a mod for the flypool Discord server (flypool is the world's largest ETH pool), and he runs a massive 5TH/s warehouse (~100,000 GPUs), so he knows what he's talking about when it comes to ETH mining; he is also good friends with a Micron engineer. Despite not being very publicly documented, the long-term safe temperature for GDDR6X is 105C, and the key info from this Micron engineer is that that is 10C higher than the long-term safe temperature for GDDR6.

This means that 95C should be seen as the absolute hard limit for mining temperatures for the GDDR6 in a 5700-series card and you have to account for warmer days when setting your fan speeds and deciding whether to re-pad the memory chips.

I aim to keep my memory under 80C at all times: the VRAM temperature sensor is not necessarily reading the hottest part of the BGA package, and secondly, I don't have air-conditioning, so it can easily get 10C hotter here on a hot day.
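
Spelled out as arithmetic (the GDDR6X figure is second-hand as noted, and the 5C sensor allowance below is an assumption of mine, not a Micron number):

```python
# Backing out the "under 80C" rule of thumb from the numbers in this post.
GDDR6X_LONGTERM_C = 105                    # second-hand figure via the Micron contact
GDDR6_LONGTERM_C = GDDR6X_LONGTERM_C - 10  # = 95C, the hard limit for a 5700's GDDR6
HOT_DAY_C = 10                             # no air-con: summer ambient can add ~10C
SENSOR_OFFSET_C = 5                        # assumed: sensor may miss the hottest spot

target = GDDR6_LONGTERM_C - HOT_DAY_C - SENSOR_OFFSET_C
print(target)   # 80 -> hence "under 80C at all times"
```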

Another thing that can impact your hashrate is using the GPU you're mining on to drive a display. The OS will use some GPU acceleration, and this eats into the memory bandwidth that could be used for mining. This fact alone likely causes a lot of the confusion over which OS is faster: Windows definitely seems better at multi-tasking a single GPU (so it will appear faster than Linux if you're juggling the OS GUI and mining at the same time). I use dummy HDMI dongles to ensure that the IGP always detects a display, so there is never any doubt about which display device the OS should be using for its own GDI/draw calls.

The only way to reach optimum mining speeds is a mining-specific BIOS in an OS that doesn't use the card for anything else. The reason Windows is worse than Linux in that scenario is that Windows can't seem to leave a GPU alone once detected. In Task Manager on a multi-GPU rig you can occasionally see little blips of 1-2% GPU use on inactive GPUs.
 
Last edited:
Joined
Jan 10, 2022
Messages
62 (0.08/day)
Location
Noord-Holland, Netherlands
Processor Intel i7 11700F
Motherboard Asrock B560 Pro4
Cooling Noctua NH-D14
Memory Corsair Vengeance LPX 2x8GB
Video Card(s) EVGA RTX 3060 Ti XC
Storage Samsung 980 Pro 500 GB NVMe
Display(s) BenQ XL2411P
Case MSI MAG Forge 100R
Power Supply XFX Pro Series 750W
Just FYI, there's a guy who is a mod for the flypool Discord server (flypool is the world's largest ETH pool), and he runs a massive 5TH/s warehouse (~100,000 GPUs), so he knows what he's talking about when it comes to ETH mining; he is also good friends with a Micron engineer. Despite not being very publicly documented, the long-term safe temperature for GDDR6X is 105C, and the key info from this Micron engineer is that that is 10C higher than the long-term safe temperature for GDDR6.

This means that 95C should be seen as the absolute hard limit for mining temperatures for the GDDR6 in a 5700-series card and you have to account for warmer days when setting your fan speeds and deciding whether to re-pad the memory chips.

I aim to keep my memory under 80C at all times: the VRAM temperature sensor is not necessarily reading the hottest part of the BGA package, and secondly, I don't have air-conditioning, so it can easily get 10C hotter here on a hot day.

Another thing that can impact your hashrate is using the GPU you're mining on to drive a display. The OS will use some GPU acceleration, and this eats into the memory bandwidth that could be used for mining. This fact alone likely causes a lot of the confusion over which OS is faster: Windows definitely seems better at multi-tasking a single GPU (so it will appear faster than Linux if you're juggling the OS GUI and mining at the same time). I use dummy HDMI dongles to ensure that the IGP always detects a display, so there is never any doubt about which display device the OS should be using for its own GDI/draw calls.

The only way to reach optimum mining speeds is a mining-specific BIOS in an OS that doesn't use the card for anything else. The reason Windows is worse than Linux in that scenario is that Windows can't seem to leave a GPU alone once detected. In Task Manager on a multi-GPU rig you can occasionally see little blips of 1-2% GPU use on inactive GPUs.
True; again, I didn't run those benchmarks, so I can't speak for their validity. Just sharing what's out there :D

Interesting about the 95 degrees, as the cards are configured stock to allow 105 on the mems. Will keep that in mind. Btw, the card survived a cool-off and fresh boot and it's mining again. Next weekend, if I have the time, I'll do some long-term gaming tests with its original BIOS; there seem to be no voltage spikes with the card, so I might actually follow the initial plan and use it as my wife's gaming GPU. She'll be happy to ditch that good old 280X she has :laugh: I just didn't want to do it without being sure it wouldn't damage other parts, but LGTM so far..

Also @Chrispy_, I might be missing something, but with the clock at 1266 it loses performance. I don't want to raise it too much, because that will force me to raise the voltage too or it won't hold stable, but I've gone up to a 1410 core clock and it keeps increasing the hashrate.. from 44ish to a solid 45+ (1266 -> 1410 core). Mems at 900; according to your stats I should be getting some 57+, right? I modded the BIOS exactly as you described :)

Edit: pic to demonstrate the hashrate with the clock increase (compare to the previous reply). The voltage is lower just to test, for now, whether it holds stable at 1410 @ 825mV.
1642524646177.png

(ignore different temps, it fluctuates between 65-66 on core and 78-82 on mems)

In Task Manager on a multi-GPU rig you can occasionally see little blips of 1-2% GPU use on inactive GPUs.
Even if you set the IGP as primary and disable multi-display in the BIOS? Interesting if so. Not all boards have this option; I suppose the ones that don't just have multi-display on by default..

PS edit: Yeah, I think I understand how heat dissipation works with air. It should actually be worse than that: as room temperature goes up you lose dissipation capacity, so the difference in card temps between 20C and 30C room temperature should be more than those 10C. Nonetheless, there's plenty of room for improvement - as I explained, I'm not actively mining, so nothing is set up properly. I probably have some old fans lying around, so I'll do some airflow tests to see how I can improve the temps before the warm months come (if the card even lasts that long :laugh: ).
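
As a rough first-order check (a toy lumped model that treats the memory-to-air thermal resistance as constant for a fixed fan speed), the delta above ambient stays about the same, so the memory temps should track room temperature at least one-for-one; any lost dissipation capacity would come on top of that:

```python
# Toy steady-state model: T_mem = T_ambient + R_theta * P. R_theta is backed out
# of this thread's numbers (80C mems at 21.5C ambient, ~118W card power), not a
# datasheet value, and it lumps everything into one constant.
R_THETA = (80 - 21.5) / 118        # ~0.5 C/W effective memory-to-air resistance

def mem_temp(ambient_c: float, power_w: float = 118) -> float:
    return ambient_c + R_THETA * power_w

print(round(mem_temp(21.5), 1))   # 80.0 - the winter reading above
print(round(mem_temp(26.0), 1))   # 84.5 - a warm day, all else being equal
```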
 

Attachments

  • 1642525126670.png
    1642525126670.png
    14.2 KB · Views: 28
Last edited:
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
yeah, the 105C stock VRAM limit on 5700-series is a peak, temporary limit.
You absolutely don't want to be anywhere near that limit for 24/7 operation.

1266 is not the absolute fastest mining speed. You probably want 1325 on the core for that, but you'll need to bump the voltage up to like 825mv (not the 925mv I originally typed) or something, which raises power consumption and hurts the MH/Watt that is key to profitable mining.

Mining performance starts to nosedive below 1200MHz on the core, so my 1266 is there to keep it comfortably above that. If you bench different clockspeeds you'll see that there's maybe 1-2% more performance to be had at 1300+ MHz, but you'll gain 10-15 Watts, which is a ~10% power-use increase for almost nothing. If you have dirt-cheap electricity then 1325 is probably the best mining clock.

Edited my screwup: my screenshot of voltages is accurate, my fat-fingered typing is not.
 
Last edited:
Joined
Jan 10, 2022
Messages
62 (0.08/day)
Location
Noord-Holland, Netherlands
Processor Intel i7 11700F
Motherboard Asrock B560 Pro4
Cooling Noctua NH-D14
Memory Corsair Vengeance LPX 2x8GB
Video Card(s) EVGA RTX 3060 Ti XC
Storage Samsung 980 Pro 500 GB NVMe
Display(s) BenQ XL2411P
Case MSI MAG Forge 100R
Power Supply XFX Pro Series 750W
yeah, the 105C stock VRAM limit on 5700-series is a peak, temporary limit.
You absolutely don't want to be anywhere near that limit for 24/7 operation.

1266 is not the absolute fastest mining speed. You probably want 1325 on the core for that, but you'll need to bump the voltage up to like 925mv or something, which raises power consumption and hurts the MH/Watt that is key to profitable mining.

Mining performance starts to nosedive below 1200MHz on the core, so my 1266 is there to keep it comfortably above that. If you bench different clockspeeds you'll see that there's maybe 1-2% more performance to be had at 1300+ MHz, but you'll gain 10-15 Watts, which is a ~10% power-use increase for almost nothing. If you have dirt-cheap electricity then 1325 is probably the best mining clock.
I understand the theory behind it, but this card seems to be "special". I actually went lower on the voltage and it's still holding; I can report later whether it's stable, but so far it even looks more stable (fewer stale/rejected shares, although that could be completely unrelated if the cause of those shares is on the pool side).

I went up on the clock in small steps until 1410; the MH/s kept going up and consumption stayed about the same.. Sure, it peaks higher (127-128W instead of 124-125W), but the average numbers from the wall over a 10min period seemed to be the same. Either that or my wall plug meter is broken, always possible :laugh: I could keep going, but I'm 99.9% sure it can't hold any higher clocks at this voltage, even 850 or 875. Anyway, I'll do lots of fine tuning if the card proves stable for mining but not for gaming (i.e. a high core clock could amplify whatever issue the card has) and it ends up mining full-time.

PS: nonetheless, I don't get your suggested numbers with either a 1266 or a 1410 core clock. Weird :( Not that I care about 1-2MH/s, just trying to figure this out :)

Edit: HOLD ON @Chrispy_, you mentioned 900mV VDDC before, but the last pic you shared shows 775... That's way more interesting: if I can run the exact same settings @ 1266 but at 775 core voltage, that will help with the temps a lot and reduce power usage.
 
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Sorry, I was out by 100mv going from memory.

1200 core will usually run at 750mv
1300+ core will usually need north of 800mv - typically 825 is stable but there's some silicon lottery.

When you modded the BIOS, did you lower the voltage limit? Because otherwise you can set whatever you want and it'll report whatever you set, but it'll really still be running at 750mv.

Thinking about it more carefully, the voltage needed ramps up dramatically as you gain clockspeed. By 1500MHz you're probably needing over a volt. You basically want to run as fast as you can at the lowest possible voltage, and for many cards (but not all) the voltage needs start to climb from the 750mv baseline somewhere between 1250 and 1350MHz.
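
To put rough numbers on that, with the usual CMOS dynamic-power scaling P ~ f x V^2 (constants cancel in the ratios; the voltage pairings are the ballpark figures from this post):

```python
# Relative core dynamic power for the V/f points discussed above, P ~ f * V^2.
points = [(1200, 0.750), (1325, 0.825), (1500, 1.000)]   # MHz, volts
base_mhz, base_v = points[0]
base = base_mhz * base_v ** 2
for mhz, v in points:
    print(f"{mhz}MHz @ {v:.3f}V -> {mhz * v**2 / base:.2f}x power")
# 1200MHz -> 1.00x, 1325MHz -> 1.34x, 1500MHz -> 2.22x
# ...for maybe 1-2% more hashrate past ~1266MHz: a bad MH/W trade.
```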
 
Joined
Jan 10, 2022
Messages
62 (0.08/day)
Location
Noord-Holland, Netherlands
Processor Intel i7 11700F
Motherboard Asrock B560 Pro4
Cooling Noctua NH-D14
Memory Corsair Vengeance LPX 2x8GB
Video Card(s) EVGA RTX 3060 Ti XC
Storage Samsung 980 Pro 500 GB NVMe
Display(s) BenQ XL2411P
Case MSI MAG Forge 100R
Power Supply XFX Pro Series 750W
@Chrispy_ OK, so 775mV on the core does reduce the temps a little, but it's still nothing compared to this image (core 60, hotspot 68). I need to apply new thermal paste anyway - I've already opened and closed the card 3 times on the same paste - so I can probably get this even lower. But only 35-40% fan speeds?? I need to run mine at 75% (no temp difference above that) to achieve decent core temps.. What is the ambient temperature around the rig in that pic?

When you modded the BIOS, did you lower the voltage limit? Because otherwise you can set whatever you want and it'll report whatever you set, but it'll really still be running at 850mv.
Are you 100% sure about this? I see an immediate difference in power consumption and temps by simply changing the VDDC. The AMD drivers also report the correct voltage (with a 775mV limit it reports running at 768mV). I have no way to explain the temp and wattage drop if what you say is correct... I'm specifically talking about VDDC, not MVDD.
 
Joined
Feb 20, 2019
Messages
7,194 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Are you 100% sure about this? I see an immediate difference in power consumption and temps by simply changing the VDDC. The AMD drivers also report the correct voltage (with a 775mV limit it reports running at 768mV). I have no way to explain the temp and wattage drop if what you say is correct... I'm specifically talking about VDDC, not MVDD.
Not 100%, no.
It was ~10 months ago that I started looking at mining BIOSes, and the one thing I remember compared to Polaris BIOS edits was that I couldn't reduce the minimum vcore below a certain amount. I can't remember if it was 800mv or 750mv, but there was an artificially imposed floor in the BIOS that could be tweaked with MPT.

I don't have any cards left running stock BIOSes to check, but it looks like I picked 725mv as the new lowest limit just to give myself wiggle room during tweaking. I'm sure one or two of my cards are way better than average on the silicon-lottery bell curve, but I don't have time to run the optimum clock/voltage search and stability-test iterations for 24 cards - that's a big project!

I just know from when I was running a 5700 XT for gaming that 750mv is usually the lowest that's stable at game clocks (i.e. not the 300MHz 2D mode), and whilst 1500MHz+ likely needs ~1000mv, you should be able to get 1250-1350 without really raising the voltage at all. I typically set 775mv because, although some cards will run at 750 just fine, I need to keep things simple when dealing with 24 cards at once. I can only imagine the database of tweaks large mining-farm operators have to keep up with - that's probably someone's full-time job!

Anyway, getting the best voltage/clocks is good, but remember that you're never going to make a massive difference to your overall power consumption - and the power numbers reported are just what the GPU core and VRAM consume, so you still need to add ~10-20W on top for VRM inefficiency and the cooling fan, depending on how fast it's running.

And when chasing the last couple of Watts of efficiency, remember that your CPU, RAM, case fans and SSD all use something like 50-100W, and the whole system runs off a power supply that is only ~90% efficient, so that's another 20W+ wasted.

Basically, power use is proportional to the square of the voltage. Get the card clocked as high as you can whilst keeping the voltage as close to 750mv as you can get away with, and remember that stability is more important than the last 1-2% :)
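
For example, dropping from 875mv to 775mv at the same clock cuts core dynamic power by about 22%, since (775/875)^2 = 0.78. And sketching the whole-system picture with the overheads above (the 15W and 60W figures are mid-range guesses from those ranges, not measurements):

```python
# Wall-power and efficiency sketch: software power readings miss VRM/fan losses,
# the rest of the system, and PSU inefficiency. Overhead values are mid-range
# guesses from the ranges above, not measurements.
def wall_watts(gpu_reported_w, vrm_fan_w=15, system_w=60, psu_eff=0.90):
    return (gpu_reported_w + vrm_fan_w + system_w) / psu_eff

# Single card, this thread's numbers (54MH/s at ~117W reported):
w1 = wall_watts(117)
print(round(w1), round(54 / w1, 2))    # ~213W at the wall, ~0.25 MH/W

# Six cards sharing one system: the fixed overhead amortises, efficiency improves.
w6 = wall_watts(117 * 6, vrm_fan_w=15 * 6) / 6
print(round(w6), round(54 / w6, 2))    # ~158W per card, ~0.34 MH/W
```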
 