Sunday, August 26th 2018

AMD Announces Dual-Vega Radeon Pro V340 for High-Density Computing

AMD today at VMworld in Las Vegas announced its new dual-GPU Radeon Pro V340 accelerator for high-density computing. This graphics card (or rather, accelerator) is based on the same Vega silicon that powers AMD's consumer graphics card lineup, and crams its two GPUs into a single card with a dual-slot design. 32 GB of second-generation, Error-Correcting-Code (ECC) high-bandwidth memory (HBM2) greases the wheels for the gargantuan amounts of data these accelerators are meant to crunch and power through, even as media processing requirements go through the roof.
"As the flagship of our new Radeon Pro V-series product line, the Radeon Pro V340 graphics card employs advanced security features and helps to cost effectively deliver and accelerate modern visualization workloads from the datacenter," said Ogi Brkic, general manager of Radeon Pro at AMD.

"The AMD Radeon Pro V340 graphics card will enable our customers to securely leverage desktop and application virtualization for the most graphically demanding applications," said Sheldon D'Paiva, director of Product Marketing at VMware. "With Radeon Pro for VMware, admins can easily set up a VDI environment, rapidly deploy virtual GPUs to existing virtual machines and enable hundreds of professionals with just a few mouse clicks."
"With increased density, faster frame buffer and enhanced security, the AMD Radeon Pro V340 graphics card delivers a powerful new choice for our customers to power their Citrix Workspace, even for the most demanding applications," said Calvin Hsu, VP of Product Marketing at Citrix.
Sources: AMD, via Tom's Hardware

54 Comments on AMD Announces Dual-Vega Radeon Pro V340 for High-Density Computing

#26
londiste
Vya Domus said:
Quite the contrary, synchronization is brought down to a minimum in graphics processing. Data parallelism is king, and that does not require a lot of synchronization; as a matter of fact, the only type of synchronization you can do on a GPU is a barrier on all the threads within an SM/CU, which is fairly primitive. The only synchronization that is critical is between the stages themselves, not within them, and that does not inherently represent a big problem for a potential multi-GPU implementation.
Well, how many stages are there, for example when rendering a single frame?
Posted on Reply
#27
Vya Domus
londiste said:
Well, how many stages are there, for example when rendering a single frame?
It varies; some are required, like input assemblers and the vertex and fragment shaders, while others are optional, like tessellation.
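A toy Python sketch of the point being made in this exchange (hypothetical stage names and data, not a real graphics API): each stage is internally data-parallel with no communication between its work items, and the only hard synchronization is the hand-off between stages, where one stage must fully finish before the next begins.

```python
# Each stage processes its elements independently (data parallelism);
# the return of each function is the only synchronization point,
# mirroring the "sync between stages, not within them" argument above.

def input_assembler(vertices):
    # Group raw vertex data into triangles (3 vertices each).
    return [vertices[i:i + 3] for i in range(0, len(vertices), 3)]

def vertex_shader(triangles, scale=2.0):
    # Runs independently per vertex: no inter-thread communication needed.
    return [[(x * scale, y * scale) for (x, y) in tri] for tri in triangles]

def fragment_shader(triangles):
    # Runs independently per triangle; here it just counts vertex coverage.
    return [len(tri) for tri in triangles]

def render_frame(vertices):
    # Stages execute strictly in order; each stage boundary acts as the
    # barrier between pipeline stages.
    triangles = input_assembler(vertices)    # stage 1 (required)
    transformed = vertex_shader(triangles)   # stage 2 (required)
    # an optional stage such as tessellation would slot in here
    return fragment_shader(transformed)      # stage 3 (required)

frame = render_frame([(0, 0), (1, 0), (0, 1), (2, 2), (3, 2), (2, 3)])
print(frame)  # [3, 3]: two triangles, each with 3 vertices of coverage
```

In this picture, splitting the per-element loops across two GPUs only requires agreeing on where the stage boundaries fall, which is why intra-stage parallelism is not the hard part of multi-GPU rendering.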
Posted on Reply
#28
Caring1
While it was designed as a server card, it appears to have a USB-C (?) port on the PCI slot bracket, so perhaps this can be used for gaming after all.
Posted on Reply
#29
Vya Domus
Caring1 said:
While it was designed as a server card, it appears to have a USB-C (?) port on the PCI slot bracket, so perhaps this can be used for gaming after all.
Doesn't look like a Type-C connector to me.
Posted on Reply
#31
Caring1
Vya Domus said:
Doesn't look like a Type-C connector to me.
I wasn't sure; any idea what it is?
Posted on Reply
#32
Vayra86
Fluffmeister said:
This thing is supposed to sit in a server rack packed full of cooling goodness.
Gotta love the MSDT perspective regardless of what hardware people look at. I'm surprised no one asked if it ran Crysis, hehe.
Posted on Reply
#33
hat
Enthusiast
Vayra86 said:
Gotta love the MSDT perspective regardless of what hardware people look at. I'm surprised no one asked if it ran Crysis, hehe.
No, it will never be enough to run Crysis. Nothing will ever run Crysis. These guys tried, and failed, brave souls though they were:

Posted on Reply
#34
RejZoR
"A mining facility"

More like: "A main reason for shortage of graphic cards and biggest global warming contributor."
Posted on Reply
#35
hat
Enthusiast
RejZoR said:
"A mining facility"

More like: "A main reason for shortage of graphic cards
True

RejZoR said:
and biggest global warming contributor."
I can understand the mining hate... but biggest global warming contributor? lolno
Posted on Reply
#36
TheinsanegamerN
lexluthermiester said:
That's an interesting point. In a 1U or 2U rack the ventilation would come from the system itself. They can't just be focusing on that cooling scenario though.
Why not? That cooling scenario is how ALL blade GPU servers are set up. Go look at the Tesla GPUs, same thing: no fan, just a full card-length heatsink designed to funnel cold air through the GPU and out of the case. There are servers that can cool 4-8 of these at once.
Posted on Reply
#37
Valantar
TheinsanegamerN said:
Why not? That cooling scenario is how ALL blade GPU servers are set up. Go look at the Tesla GPUs, same thing: no fan, just a full card-length heatsink designed to funnel cold air through the GPU and out of the case. There are servers that can cool 4-8 of these at once.
Yep. Have these people never, ever seen a server/datacenter GPU launch post before? None of these have built-in fans. If they had, they'd be blown to pieces by the server's own fans, and they'd take up valuable volume that could house a bigger heatsink. I'm kind of baffled by all these "OMG where's the fan?!?!?!?!?!" comments.

Joss said:
Do you guys think this could prelude a consumer/gaming dual Vega?
No. Multi-GPU for consumers might not be entirely dead, but as a flagship product, it certainly is. AMD launching a card like this for consumers (where it makes zero sense) would make them a laughing stock. They know what a PR faceplant that would be, and have sensibly avoided that since the 295X.
Posted on Reply
#38
jabbadap
hat said:
Well, the Quadro P6000 can beat the Titan XP by a decent margin even in gaming. Though it would be ridiculous to buy such a card for gaming, it at least shows that something's going on there...
You mean the Titan X (Pascal), aka Titan "XP", not the Nvidia Titan Xp...

Despite the Radeon Pro name, these are meant to compete with Nvidia's Teslas at GPU virtualization. Someone should tell their marketing department that the Tesla V100 32GB exists, though.
Posted on Reply
#39
Valantar
jabbadap said:
Despite the Radeon Pro name, these are meant to compete with Nvidia's Teslas at GPU virtualization. Someone should tell their marketing department that the Tesla V100 32GB exists, though.
That card is >$11,000. If I were to make a guess, I'd say this card will be around 1/4 of that, perhaps a bit more. But still: you'd get 3-4 of these for the price of a Tesla V100. Which, again, would let you deploy 3-4x as many VMs.
Posted on Reply
#40
londiste
jabbadap said:
Someone should tell their marketing department that the Tesla V100 32GB exists, though.
They are technically right, though.
The V100 is a hell of a lot more expensive, so "not in the same class", and the V100 is also sold primarily for compute, not virtualization :)
Posted on Reply
#41
jabbadap
londiste said:
They are technically right, though.
The V100 is a hell of a lot more expensive, so "not in the same class", and the V100 is also sold primarily for compute, not virtualization :)
Was there a price somewhere for this? The Tesla V100 32GB PCIe costs ~8,000€.
Posted on Reply
#42
Vya Domus
jabbadap said:
Was there a price somewhere for this? The Tesla V100 32GB PCIe costs ~8,000€.
Yeah, because we all expect it to be more expensive.
Posted on Reply
#43
Mescalamba
Might be fun with some watercooling.

Wonder what its mining efficiency would be, pretty good I guess?
Posted on Reply
#44
RejZoR
hat said:
True

I can understand the mining hate... but biggest global warming contributor? lolno
Where do you think power comes from? Magic pixies? It's still mostly coal power plants, especially in countries where they run this shit the most (Asia). So they grind some meaningless numbers to make cash. Yeah, it's literally that.
Posted on Reply
#45
jabbadap
Vya Domus said:
Yeah, because we all expect it to be more expensive.
More expensive than what? The P40 maybe, the V100 probably not. Price points with these are quite meaningless anyway. Take some HPE, Dell, Lenovo, Fujitsu et al. pricing and there's an obvious premium/support margin in those prices (like Dell sells the P40 at 12k€ and the V100 at 20k€).
Posted on Reply
#46
Vya Domus
jabbadap said:
More expensive than what?
I was being sarcastic; there isn't a single card AMD has, server grade or consumer, that is more expensive than its Nvidia counterpart. The lower prices sit pretty much at the core of their business.

jabbadap said:
Price points with these are quite meaningless anyway.
Prices aren't meaningless at all. I don't know why everyone thinks OEMs like to throw money around right and left as if they don't care when they buy high-volume server-grade stuff like this; I can guarantee you they care.
Posted on Reply
#47
hyp36rmax
lexluthermiester said:
I'm looking at this design and thinking: "Where's the fan?"
This is designed for server racks with huge fans pushing air behind it. Similar setup as you can see here:

Posted on Reply
#48
HD64G
Another slide, not posted here, shows that the two GPU cores in it are Vega 56. So, if set up properly for efficiency, this pair can consume 250-270 W at full load.
Posted on Reply
#49
Casecutter
What this tells me is that AMD is super confident in the production of the chips, memory, and interposer, and in the quality of everything now packaged onto the interposer. Then the conviction to place two of them onto one PCB! HPC and data center customers aren't about to stand for these being buggy or failing, given the money and dependence they place on them. By this I see the technology and processes have truly become mainstream for all markets.
Posted on Reply
#50
jabbadap
HD64G said:
Another slide, not posted here, shows that the two GPU cores in it are Vega 56. So, if set up properly for efficiency, this pair can consume 250-270 W at full load.

Well, it has an official TDP of 300 W and requires two 8-pin power connectors. It has its own page on AMD's site, too. Interestingly, that memory bandwidth translates to 2 Gbps HBM2, so it's the same speed, but higher density, as what the Radeon Pro WX8200 uses.
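As a back-of-the-envelope check on that data-rate claim (a sketch under assumed figures: a 2048-bit HBM2 interface per Vega GPU, as on the consumer Vega cards, and the 2.0 Gbps per-pin rate mentioned above):

```python
# Peak HBM2 bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 bits-per-byte.
# Assumed figures: 2048-bit bus per GPU (two 1024-bit HBM2 stacks), 2.0 Gbps pins.

def hbm2_bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    """Peak memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * pin_rate_gbps / 8

per_gpu = hbm2_bandwidth_gbs(2048, 2.0)
print(per_gpu)       # 512.0 GB/s per GPU at 2.0 Gbps
print(per_gpu * 2)   # 1024.0 GB/s combined across the card's two GPUs
```

At the Vega 56 reference rate of 1.6 Gbps the same formula gives 410 GB/s per GPU, which is why the 2.0 Gbps figure reads as WX8200-class memory rather than consumer Vega 56 memory.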
Posted on Reply