
"Radeon RX 6600 XT is most efficient mining card yet"

Well, kinda, but not really - a 3060 Ti or 3070 (non-LHR) can get 60MH/s at 100W (0.6 MH/s per watt) with a bit of undervolting, but let the miners take the 6600 XT :D
 
Debatable:

Looking at a single card, then yes, 32MH/s at 55W is impressive.
Looking at a rig of 6 cards, it's 192MH/s for 330W, plus ~75W of overhead for the motherboard, CPU (~30W), RAM, fans, SSD, etc., for a net of 405W for 192MH/s.

A more powerful card like a vanilla 5700 8GB at 55MH/s for 110W can't quite match that hashrate-per-watt on a per-card basis, but you need fewer rigs to produce the same hashrate, so the per-rig overhead is almost halved.

A rig of 6x 6600XT is 192MH/s for 405W, which is 0.47 MH/s per watt.
A rig of 6x 5700 is 330MH/s for 735W, which is 0.45 MH/s per watt.

So it's good, but close enough that halving the number of rigs taking up space, power sockets, network points, and per-rig maintenance/management/effort is way more beneficial than squeaking out a ~4% efficiency gain. It isn't going to convince miners to swap to these unless they're vastly cheaper than the alternatives to purchase and very widely available.
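A quick sketch (not from the original post) for rerunning these rig numbers with other cards, using the same ~75W per-rig overhead assumed above:

```python
# Hypothetical helper to compare rig-level mining efficiency (MH/s per watt).
def rig_efficiency(cards, mh_per_card, watts_per_card, overhead_watts=75):
    """Return (total MH/s, total W, MH/s per W) for a rig of identical cards."""
    total_mh = cards * mh_per_card
    total_w = cards * watts_per_card + overhead_watts
    return total_mh, total_w, total_mh / total_w

for name, mh, w in [("6x RX 6600 XT", 32, 55), ("6x RX 5700", 55, 110)]:
    total_mh, total_w, ppw = rig_efficiency(6, mh, w)
    print(f"{name}: {total_mh} MH/s @ {total_w} W -> {ppw:.2f} MH/s/W")
# 6x RX 6600 XT: 192 MH/s @ 405 W -> 0.47 MH/s/W
# 6x RX 5700:    330 MH/s @ 735 W -> 0.45 MH/s/W
```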
 
Let them have it.
 
This. Board + CPU overhead is a huge factor people are forgetting...
 
I don't think anybody ever said it's bad, but compute is clearly not the main focus either.
Polaris/Vega/RDNA1/RDNA2 compute performance is kind of irrelevant - it's AMD's VRAM controllers that make them desirable for ETH, and I'm running all of the core clocks as low as possible because core clock has almost no benefit to ETH mining.

(attached screenshot: 1628710972221.png)

The only reason I don't run the cores lower than that is that there are no power/temperature reductions from dropping them further while the memory controllers are running an overclock at full load.
 
The other thing is that these cards have on the order of twice the ROI time of other cards you can buy...
 
As usual, there's the capital cost allowance for the initial purchase, along with the inevitable failures.

Chia enthusiasts are spinning their wheels, trashing hard disks by the thousands in the process.
 
55W is misleading, because this is ASIC-only power. The total power is 20-30W higher, which means it slides down the list to somewhere around the RX 6800.
Assuming you mean GPU-only power, are you sure about that?

Nvidia's driver outputs board power and GPU-only power separately in things like GPU-Z and HWiNFO.
AMD's driver outputs only board power for current-gen products. A product with a 180W total board TDP like the vanilla 5700 will, in something like Furmark, report 180W of usage, not the 150W you'd see if it were reporting GPU power only.

I wasn't sure myself until I hooked up a kill-a-watt meter to my (now three) rigs:

18 GPUs @ 107W
15 fans @ 3W
3 CPUs @ ~30W
3 motherboards/SSDs/WiFi @ ~20W
multiply all by 1/0.92 for 80+ Gold PSU efficiency

Total 'from-the-wall' use (theoretically calculated from GPU-Z and TDP estimates of other parts) = 2305W
Actual kill-a-watt meter reading: 2275-2375W, fluctuates quite wildly but never saw north of 2400W.

If I had to add 20-30W per GPU I would expect the meter to read more like 2800W....
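For reference, here is the same estimate as a minimal sketch; the part wattages and the flat 92% PSU efficiency are the figures quoted above, not measured values:

```python
# Rough reconstruction of the from-the-wall estimate above.
gpu_w   = 18 * 107   # 18 GPUs at ~107W reported power each
fan_w   = 15 * 3     # rig fans
cpu_w   = 3 * 30     # ~30W per rig CPU
board_w = 3 * 20     # motherboard + SSD + WiFi per rig

dc_total = gpu_w + fan_w + cpu_w + board_w   # 2121 W on the DC side
wall_total = dc_total / 0.92                 # 80+ Gold PSU efficiency
print(f"Estimated from-the-wall draw: {wall_total:.0f} W")   # ~2305 W

# If each GPU were really drawing 20-30W more than reported,
# the meter should read closer to 2800W:
wall_extra = (dc_total + 18 * 25) / 0.92
print(f"With +25W per GPU: {wall_extra:.0f} W")               # ~2795 W
```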
 
I see lots of uninformed people keep saying RDNA2 draws 100W less than Ampere because of this. AMD's driver power measurement doesn't account for VRM power loss, which could be 30-40W at the higher end (6900 XT) and maybe around 20W with the 6600 XT.
 
To me that doesn't make sense. Google "VRM efficiency" and you see pages upon pages of the same graph, showing that VRMs have an efficiency of 94-95%.

For a 30-40W loss on a 6900 XT, that would mean the card is using 600-800W of power, which we all know isn't happening, because the PCIe power connectors would melt way before that.
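A quick sketch of that arithmetic, assuming the flat 94-95% conversion efficiency the post cites:

```python
# If loss = (1 - efficiency) * input_power, then input_power = loss / (1 - efficiency).
for loss_w in (30, 40):
    for eff in (0.94, 0.95):
        input_w = loss_w / (1 - eff)
        print(f"{loss_w} W loss at {eff:.0%} efficiency -> ~{input_w:.0f} W input power")
# Spans roughly 500-800 W of card power, far beyond what the connectors could survive.
```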
 

The higher the current handled per phase, the lower the VRM efficiency. For example, here is the spec sheet of the IR3553 power stage:

(attached screenshot: IR3553 spec sheet, 1579689183747.png)


Let's say the 6900 XT uses 240W for the chip. With an 8-phase VRM, each power stage handles 25A (assuming a Vcore of 1.2V) and dissipates about 3.5W, so 8 x 3.5W = 28W of power loss for the GPU core VRM alone. Higher-end SKUs will use better power stages for higher efficiency.

Go to YouTube and search for Buildzoid's channel; he does a lot of PCB analysis and explains VRM power loss.
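A minimal sketch of that per-phase estimate; the 240W chip power, 1.2V Vcore, and ~3.5W per stage at 25A are the assumed figures from the post (read off the datasheet curve), not measured values:

```python
# Per-phase current and total conversion loss for an assumed 8-phase core VRM.
chip_power_w   = 240    # assumed GPU core power for a 6900 XT under load
vcore_v        = 1.2    # assumed core voltage
phases         = 8
loss_per_stage = 3.5    # W dissipated per IR3553-class stage at ~25A (datasheet curve)

current_per_phase = chip_power_w / vcore_v / phases   # 240W / 1.2V / 8 = 25 A
vrm_loss_w = phases * loss_per_stage                  # 8 * 3.5W = 28 W

print(f"Current per phase: {current_per_phase:.0f} A")
print(f"Total core VRM loss: {vrm_loss_w:.0f} W")
```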
 