Monday, September 4th 2017

RX Vega Achieves 43 MH/s @ 130 W in Ethereum Mining

AMD's RX Vega is more along the lines of an original compute card that was brought over to the consumer segment for gaming workloads than the other way around. Raja Koduri himself has said as much (extrapolating a little beyond what he can actually say), and that much can be gleaned with at least a modicum of confidence from AMD's market positioning and overall compute push. In the argument between gamers and miners, Raja Koduri didn't have all that much to say, but for AMD, a sale is a sale, and it would seem that after some tweaking, RX Vega graphics cards can achieve much higher mining efficiency than their Polaris counterparts, further showing how much better - and more efficiently - Vega handles compute workloads than traditional gaming ones.
Now granted, Vega's strength in mining tasks - Ethereum in particular - stems mainly from the card's use of HBM2 memory, as well as a wide architecture with its 4096 stream processors. By setting the core clock to 1000 MHz, the HBM2 memory clock to 1100 MHz, and the power target at -24%, Reddit user S1L3N7_D3A7H was able to leverage Vega's strengths in Ethereum's PoW (Proof of Work) algorithm, achieving 43 MH/s with just 130 W of power (104 W of which go to the core alone). For comparison, tweaked RX 580 graphics cards usually deliver around 30 MH/s at 75 W core power, which amounts to around 115 W power draw per card. So Vega achieves 43% more hash rate with a meager 13% increase in power consumption - a worthy trade-off if miners have ever seen one. This means that Vega 64 beats RX 580 cards in single-node hash rate density: miners can pack more performance into a single system than a similarly specced RX 580-based mining station would offer. This was achieved without even using AMD's special-purpose Beta mining driver, which has seen reports of graphical corruption and instability - the scenario could improve for miners even more with a stable release.
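For those keeping score, the trade-off can be sanity-checked with a quick back-of-the-envelope calculation (a minimal sketch using only the figures reported above):

```python
# Reported figures: tweaked RX Vega 64 vs. tweaked RX 580 (Ethereum mining)
vega = {"hashrate_mhs": 43.0, "power_w": 130.0}
rx580 = {"hashrate_mhs": 30.0, "power_w": 115.0}

def efficiency(card):
    """Mining efficiency in MH/s per watt."""
    return card["hashrate_mhs"] / card["power_w"]

hash_gain = vega["hashrate_mhs"] / rx580["hashrate_mhs"] - 1  # +43% hash rate
power_gain = vega["power_w"] / rx580["power_w"] - 1           # +13% power draw

print(f"hash rate gain: {hash_gain:.0%}")          # 43%
print(f"power increase: {power_gain:.0%}")         # 13%
print(f"Vega 64: {efficiency(vega):.3f} MH/s/W")   # 0.331
print(f"RX 580:  {efficiency(rx580):.3f} MH/s/W")  # 0.261
```

Per watt, Vega 64 comes out roughly 27% ahead, which is what drives the density argument above.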
Moreover, S1L3N7_D3A7H said he could probably achieve the same mining efficiency on a Vega 56, which isn't all that unbelievable - memory throughput is king in Ethereum mining, so HBM2 could still be leveraged in that graphics card. It seems that at least some of that initial Vega 64 stock went into miners' hands, as expected. And with this news, I think we'd be forgiven for holding on to our hats in the expectation of increased Vega stock (at the original $499 Vega 64 and $399 Vega 56 MSRPs) come October. Should the user's claims about RX Vega 56 efficiency be verified, and ceteris paribus in the mining algorithm landscape for the foreseeable future, we may well be waiting for respectable inventory until Navi enters the scene. Source: Reddit user @ S1L3N7_D3A7H
Add your own comment

102 Comments on RX Vega Achieves 43 MH/s @ 130 W in Ethereum Mining

#27
bug
"FordGT90Concept said:
Radeon versus GeForce has really become more like RISC versus ASIC. Both designs have their pros and cons. Vega is as good at compute as Pascal is at gaming; the reverse is untrue.
Depends on which Pascal you're looking at ;)
Posted on Reply
#28
FordGT90Concept
"I go fast!1!11!1!"
GP100 is slower than Vega 64 in compute. The only way GP100 comes out on top is in memory-intensive tasks, thanks to its 4096-bit bus.
Posted on Reply
#29
dimitrix
Fake news ... power consumption is almost double ... near to 250W

So you can make 50 MH/s using 2XRX570 with less power ...
Posted on Reply
#30
_Flare
https://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIA_Fermi_Compute_Architecture_Whitepaper.pdf

Efforts to exploit the GPU for non-graphical applications have been underway since 2003. By using high-level shading languages such as DirectX, OpenGL and Cg, various data parallel algorithms have been ported to the GPU. Problems such as protein folding, stock options pricing, SQL queries, and MRI reconstruction achieved remarkable performance speedups on the GPU. These early efforts that used graphics APIs for general purpose computing were known as GPGPU programs.
While the GPGPU model demonstrated great speedups, it faced several drawbacks. First, it required the programmer to possess intimate knowledge of graphics APIs and GPU architecture. Second, problems had to be expressed in terms of vertex coordinates, textures and shader programs, greatly increasing program complexity. Third, basic programming features such as random reads and writes to memory were not supported, greatly restricting the programming model. Lastly, the lack of double precision support (until recently) meant some scientific applications could not be run on the GPU.

To address these problems, NVIDIA introduced two key technologies—the G80 unified graphics and compute architecture (first introduced in GeForce 8800®, Quadro FX 5600®, and Tesla C870® GPUs), and CUDA, a software and hardware architecture that enabled the GPU to be programmed with a variety of high level programming languages. Together, these two technologies represented a new way of using the GPU. Instead of programming dedicated graphics units with graphics APIs, the programmer could now write C programs with CUDA extensions and target a general purpose, massively parallel processor. We called this new way of GPU programming "GPU Computing"—it signified broader application support, wider programming language support, and a clear separation from the early "GPGPU" model of programming.

So GPGPU is dead, long live GPU-Computing ... (addition for the archives)
Posted on Reply
#31
T4C Fantasy
CPU & GPU DB Maintainer
"silentbogo said:
Lol. Some random guy from reddit claims 130W usage based on HWInfo screenshot and behold - it's all over the net!
I've been reading about it yesterday, and even WCFTech [!!!]... just think about it, WCFTech did a follow-up/fact checking on those claims:

In context: they are measuring RX Vega64 with 980mV undervolt at 1130/1100 MHz (vs 1000/1100 @1000mV?).

All things considered, 43.5MH/s is still an impressive result, but in this context it is irrelevant. That power consumption number is total fiction of a delusional kid from reddit, and until Vega finally hits the shelves at promised prices - no one in their right mind is going to buy it. It is still 6+ months for a complete return on investment at MSRP, and "f^&k that" at today's fictional retail price. In terms of perf/W - a pair of undervolted GTX1060 6G's makes more sense and is abundant in stores worldwide.

So, once again, there is no reason for miners to hunt for Vega until the price drops to MSRP, the shelves are stocked, and/or AMD optimizes the crap out of it to run ~60+MH/s.
It's not hard to play with voltages to get a Vega 64 to mine at 130w....

That's my only problem with the post... 130w is so easy to get if you know how to undervolt and clock
Posted on Reply
#32
DeathtoGnomes
"dimitrix said:
Fake news ... power consumption is almost double ... near to 250W

So you can make 50 MH/s using 2XRX570 with less power ...
don't be silly, ever hear of undervolting? Get out and visit more tech and review sites! There have been some claims of getting wattage down to under 120 watts.

beep-beep, cluebus leaving...:laugh:
Posted on Reply
#33
_Flare
"idx said:
A GPU should be more like a general purpose chip for parallel compute tasks that can be used in any way a user needs it to be. It is really bad what Nvidia is doing with their stripped off GPUs.

I really hope AMD will never ever follow that path. Nvidia went so far with crippling GPUs to the level that they swap shader files at the driver level and custom patch drivers for specific games ( isn't that supposed to be the developers problem to optimize their applications...).

I would love to see GPUs treated more like GP Processors not some specific game renderer.

EDIT:

I mean if I buy GPU I would like it to do the tasks that I throw at. It is not of anyone's business to interfere with what people do with their hardware ( edit, render, mine , game, or even experiment with other stuff).
1. GCN will never be able to come close to GeForce at the top tier.
2. A totally compute-focused GPU which can game too will never come close to a pure pixel monster which can compute too, in games, with an efficient approach.
3. GCN just isn't as modular as GeForce is.
some pictures of what Nvidia can mix and match:
PDF from "Tesla G80", to Volta GeForce
https://www.file-upload.net/download-12692808/nvidia_SMX_GM104_GP100_GV100.pdf.html

and now remember what happened from HD7970 to Tonga to Fiji to Vega, sad huh?
Posted on Reply
#34
ZoneDymo
are we just going to ignore that the Vega 56 is just as good as a GTX1070, if not better, for the same money?

I mean, sure, AMD is much better in another area, but just because it does only as well or even a bit worse in some area does not mean it is suddenly worthless for that area....

Yet I get that sort of vibe from TPU as of late, and I don't know where it's coming from.
Or have fanboys successfully overhyped Vega so that the outcome could only be disappointment, and we can now systematically set AMD aside for no real reason?
Posted on Reply
#35
_Flare
well, like with Fiji's Nano vs. the GTX980... the Nano Vega will show IF a sub-180W GPU can beat the GTX1080, too.

And Navi will only scale with MHz like the architectures before it; we will NOT see IPC gains in games.
And IF Navi has MCMs, with then doubled or quadrupled front ends, the driver alone will have to do the sync, and could easily take down 4 CPU cores.
Posted on Reply
#36
okidna
"Recus said:
No Vega definitely isn't for gamers. AMD used Geforce cards in their Gamescom booth.


Damn mining craze, even AMD can't get stock of their own cards! :roll:
Posted on Reply
#37
silentbogo
"T4C Fantasy said:
Its not hard to play with voltages to get a vega 64 to mine at 130w....

Thats my only problem with the post... 130w is so easy to get if you know how to undervolt and clock
My problem is not the 130W figure, my problem is that the guy shows a screen of HWInfo displaying 103W GPU core and 43W VRAM, and screams 130W mining.
And this is wrong at least on several levels:
1) 103+43 ≠ 130
2) Software readings are inaccurate, especially when it comes to Vega
3) Card's power consumption is a lot more than just adding vCore and vRAM power

So, while it may somehow be possible to run Vega64 at 130W for mining, it is not possible in its current state to run 43.5MH/s at 130W....
...because science.

The most realistic approximation I can get off the top of my head is ~180W at 1000mV vCore with 1GHz GPU downclock, given that you don't attempt to overclock HBM.
We do have some Vega owners on TPU, who can probably verify my guesstimate.
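To put rough numbers on points 1) and 3): summing just the two software readings already overshoots the claimed total, and a wall-side estimate only grows from there (the VRM efficiency below is an assumed typical value, not a measurement):

```python
gpu_core_w = 103.0  # HWInfo "GPU core" reading from the screenshot
hbm2_w = 43.0       # HWInfo memory reading from the screenshot
claimed_w = 130.0   # total power claimed in the reddit post

component_sum = gpu_core_w + hbm2_w
print(component_sum)              # 146.0, already above the claimed 130 W
print(component_sum > claimed_w)  # True

# The card's real draw is higher still: VRM conversion losses, fans and
# board overhead all sit on top of the chip-level readings.
vrm_efficiency = 0.88             # assumed typical figure
print(round(component_sum / vrm_efficiency))  # ~166 W before fan/board overhead
```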
Posted on Reply
#38
Sempron Guy
our local newspaper tabloid could come up with a better title than this. Why stoop so low?
Posted on Reply
#39
Camm
People whinge about AMD abandoning gamers.

Gamers abandoned AMD a long time ago, even when its cards were faster and cheaper.

AMD's strategy will make sense in the long term - full precision shaders are unnecessary for 90% of game tasks, so the shift to 16 bit shaders will be a boon both to compute centric tasks, and gaming on those tasks.

Until then (if you're one of the 30% that buys a card faster than a 580), either you have other shit to do than just game, or you really like AMD tech no matter what.
Posted on Reply
#40
qubit
Overclocked quantum bit
So Vega is a big old disappointment when it comes to gaming, but I have to hand it to AMD, they're very clever here. We must remember that their ultimate aim, like with any company, is to make a profit - to make money - and as much of it as possible, not to make customers happy, especially not the mainstream ones.

They've figured out that they can make a killing serving the mining market and not bother too much with satisfying gamers, so they've concentrated on creating great mining cards. You can even see this from Raja's comment, where Raevenlord says "in the argument between gamers and miners, Raja Koduri didn't have all that much to say, but for AMD, a sale is a sale". Quite. Sucks for us gamers, but I can't fault them for doing this as profit is the bottom line. I'd do exactly the same in their shoes. This strategy explains the use of that fast, expensive HBM on their cards, since it clearly helps with mining.

What we don't want is for NVIDIA to start doing the same, but I can see it happening...
Posted on Reply
#41
EarthDog
I thought I was on TPU, is this TMZ?
Posted on Reply
#42
FordGT90Concept
"I go fast!1!11!1!"
"Camm said:
People whinge about AMD abandoning gamers.

Gamers abandoned AMD a long time ago, even when its cards were faster and cheaper.
AMD did not abandon gamers and never has. Gamers have always been the core of their audience and still are. Miners are extremely fickle and completely disloyal. Enterprise customers buy in large volumes in spurts. Gamers are the only purchasers that AMD can build a GPU company from, because they're reliable and relatively consistent (they respond to broad market forces rather than emotion).

What you speak of...AMD still commanded some 40% of the GPU market share among gamers. Technology products ebb and flow based on their technology so AMD naturally fell behind when they were blindsided by Maxwell. Even so, AMD can price their products to still be competitive against NVIDIA, and they do, which is why they still command a healthy market share. AMD was blindsided by GP102 again. What's AMD's response? Vega56: faster than GTX1070 by a large margin for about the same price. Their technology still is behind NVIDIA which shows in power consumption but, honestly, who cares? Most people make their purchase based on performance per dollar, not performance per watt.

AMD got knocked down again because NVIDIA outmaneuvered them again. They aren't out of the fight, not by a long shot.

Oh, and that RX Vega56 bitch slaps GP102 at compute. It's icing on the cake even if not particularly useful to gamers right now. There are still people happily playing new games on HD 7970 GHz cards. Radeon cards have untapped potential that is released with age. The same can't be said for GeForce. That raw compute power and architectural flexibility is the reason for that.
Posted on Reply
#43
iO
"qubit said:
So Vega is a big old disappointment when it comes to gaming, but I have to hand it to AMD, they're very clever here. We must remember that their ultimate aim, like with any company, is to make a profit - to make money - and as much of it as possible, not to make customers happy, especially not the mainstream ones.

They've figured out that they can make a killing serving the mining market and not bother too much with satisfying gamers, so they've concentrated on creating great mining cards. You can even see this from Raja's comment, where Raevenlord says "in the argument between gamers and miners, Raja Koduri didn't have all that much to say, but for AMD, a sale is a sale". Quite. Sucks for us gamers, but I can't fault them for doing this as profit is the bottom line. I'd do exactly the same in their shoes. This strategy explains the use of that fast, expensive HBM on their cards, since it clearly helps with mining.

What we don't want is for NVIDIA to start doing the same, but I can see it happening...
Sure, because they were able to predict an ASIC resistant currency that only scales with memory bandwidth 3-5 years ago when the development of Vega began.
Posted on Reply
#44
qubit
Overclocked quantum bit
"iO said:
Sure, because they were able to predict an ASIC resistant currency that only scales with memory bandwidth 3-5 years ago when the development of Vega began.
You think you've cockily nailed my argument with a one liner don't you? :rolleyes: You have not, smartypants.
Posted on Reply
#45
vega22
another flame/click bait title dude :|

so because someone took the time to develop and optimise a mining client, which shows the real power these cards have, it makes them not for gaming....
Posted on Reply
#46
Vayra86
"qubit said:
You think you've cockily nailed my argument with a one liner don't you? :rolleyes: You have not, smartypants.
I'll take a stab at it then - to consider AMD (or Raja) smart enough to market this card for mining instead of gaming is way overestimating their capacity for good business. On top of that, it ISN'T good business when you release sub-top GPU performance over a year later than the competitor at the exact same price. It's stagnation and nobody likes it. The only reason Vega is now in the picture for mining is because ROI on the Nvidia stock (which mines more efficiently these days...) has become a lot less favorable, but mining is still profitable.

Don't attribute a mining craze to anything AMD is doing right now, really, it is complete idiocy. The only thing mining is, is Vega's saving grace, because AMD gets to sell these at a decent price and not go the way the Fury did, price drop after price drop because nobody wants a loud AIO when the competitor has silent air. All AMD does is jump on the bandwagon out of pure necessity. There is no strategy here.
Posted on Reply
#47
Raevenlord
News Editor
"vega22 said:
another flame/click bait title dude :|

so because someone took the time to develop and optimise a mining client, which shows the real power these cards have, it makes them not for gaming....
Would you say that current prices and availability are for gamers? That it's worth it to pay $800 for a Vega 64, or $600+ for a Vega 56, solely for gaming?

If you were able to purchase Vega 56 at MSRP, great. That's a pretty good gaming card, you'll be extremely satisfied - and you have objective reasons for being so.

If you purchased any Vega graphics card above MSRP, you may still feel great about your purchase, but objectively, it's almost certain that the comparable NVIDIA alternative is better in pure gaming/price/performance/power terms.

Meanwhile, if you're mining, you're actually tapping into Vega's potential and strengths, which sadly, and I would love to be wrong, isn't reflected in its gaming prowess.

That's why these aren't for gamers right now. Your mileage may vary with personal opinion, your favorite manufacturer, sure. But objectively, in a technical review with price/performance/power consumption graphs like you see here at TPU, that doesn't stand.
Posted on Reply
#48
silentbogo
"Vayra86 said:
Don't attribute a mining craze to anything AMD is doing right now, really, it is complete idiocy. The only thing mining is, is Vega's saving grace, because AMD gets to sell these at a decent price and not go the way the Fury did, price drop after price drop because nobody wants a loud AIO when the competitor has silent air. All AMD does is jump on the bandwagon out of pure necessity. There is no strategy here.
Agreed. Over the course of the Vega story, everyone from AMD tried to distance themselves from mining as far as possible, from Lisa Su stating that "mining is not taken in consideration" to Raja Koduri taking the NVidia approach with gaming and deep learning. Heck, even all the promotional events were centered around games, from the DOOM demo to dumping a ton of money into promoting Quake Champions and co-sponsoring QWC with both cash and hardware.
RX Vega is about as gaming as a gaming card can get; the only problem is not even performance, but the supply.

So far out of the entire lineup of AMD's new cards we have:
- depleted Polaris supply
- almost no Vega56/64 supply
- very, very few Frontier Edition cards, in even fewer countries
- a complete mystery about Radeon Instinct... (even the Fiji-based MI8 is not out yet, and the Vega-based MI25 was probably the first Vega to be announced)
Posted on Reply
#49
EarthDog
"Raevenlord said:
Would you say that current prices and availability are for gamers? That it's worth it to pay $800 for a Vega 64, or $600+ for a Vega 56, solely for gaming?
But an inflated price also doesn't make them NOT for gaming.

A valid argument can be made that it isn't for mining either, as the ROI at its current pricing doesn't balance out against some other cards either.

See how that works... for both sides?

"qubit said:
You think you've cockily nailed my argument with a one liner don't you? :rolleyes: You have not, smartypants.
Instead of taking your ball and going home with a snarky comment, how about you respond with why you feel that isn't true... it's how forums should work.
Posted on Reply
#50
renz496
"FordGT90Concept said:
GP100 is slower than Vega64 in compute. Only way GP100 comes out on top is in memory intensive tasks thanks to the 4096-bit bus.
GP100 is only slightly slower than Vega in FP32 (10 TFLOPS vs. 13 TFLOPS), but as a pure compute chip GP100 is still better than Vega, since GP100's FP64 is rated at 1/2 of its FP32 while Vega's is rated at 1/16 (5 TFLOPS vs. 0.8 TFLOPS). And we still have to account for how efficiently the hardware can extract its raw performance.
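The cited ratios work out like this (a quick sketch using only the peak numbers quoted above):

```python
gp100_fp32 = 10.0  # TFLOPS, as cited
vega_fp32 = 13.0   # TFLOPS, as cited

gp100_fp64 = gp100_fp32 / 2   # 1:2 FP64 rate
vega_fp64 = vega_fp32 / 16    # 1:16 FP64 rate

print(gp100_fp64)                     # 5.0 TFLOPS
print(round(vega_fp64, 2))            # 0.81 TFLOPS
print(round(gp100_fp64 / vega_fp64))  # ~6x FP64 advantage for GP100
```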
Posted on Reply
Add your own comment