
AMD Announces the Radeon Pro Vega II and Pro Vega II Duo Graphics Cards

btarunr

Editor & Senior Moderator
AMD today announced the Radeon Pro Vega II and Pro Vega II Duo graphics cards, making their debut with the new Apple Mac Pro workstation. Based on an enhanced 32 GB variant of the 7 nm "Vega 20" MCM, the Radeon Pro Vega II maxes out its GPU silicon, with 4,096 stream processors, a 1.70 GHz peak engine clock, 32 GB of 4096-bit HBM2 memory, and 1 TB/s of memory bandwidth. The card features both PCI-Express 3.0 x16 and Infinity Fabric interfaces. As its name suggests, the Pro Vega II is designed for professional workloads, and comes with certifications for nearly all professional content creation applications.
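As a quick sanity check on those numbers, the 1 TB/s figure follows directly from the 4096-bit bus once you assume a per-pin data rate near 2 Gbps; a short sketch (the per-pin rate is inferred from the other two figures, not taken from AMD's announcement):

```swift
// Back-of-the-envelope check: bandwidth = bus width × per-pin rate / 8.
// The ~1.95 Gbps per-pin figure is an assumption inferred from the
// quoted bus width and bandwidth, not from AMD's announcement.
let busWidthBits = 4096.0          // HBM2 bus width
let perPinGbps = 1.95              // assumed per-pin data rate
let bandwidthGBs = busWidthBits * perPinGbps / 8.0
print("\(bandwidthGBs) GB/s")      // 998.4 GB/s, i.e. ~1 TB/s
```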

The Radeon Pro Vega II Duo is the first dual-GPU graphics card from AMD in ages. Purpose-built for the Mac Pro (and available only on the Apple workstation), this card puts two fully unlocked "Vega 20" MCMs with 32 GB of HBM2 memory each on a single PCB. The card uses a bridge chip to connect the two GPUs to the system bus, but also has an 84.5 GB/s Infinity Fabric link running between the two GPUs, for rapid memory access, GPU and memory virtualization, and interoperability between the two GPUs, bypassing the host system bus. In addition to certifications for every conceivable content creation suite for the macOS platform, AMD dropped in heavy optimization for the Metal 3D graphics API. For now the two graphics cards are only available as options for the Apple Mac Pro. The single-GPU Pro Vega II may see standalone product availability later this year, but the Pro Vega II Duo will remain a Mac Pro exclusive.
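For those wondering how software actually sees the Duo's two GPUs: Metal on macOS 10.15 exposes Infinity Fabric-linked devices through a peer-group API on MTLDevice. A minimal Swift sketch, assuming a Mac Pro with the card installed:

```swift
import Metal

// Enumerate all GPUs and report their peer-group membership. Devices
// joined by an Infinity Fabric link share a non-zero peerGroupID.
let devices = MTLCopyAllDevices()
for device in devices {
    print("\(device.name): peerGroupID=\(device.peerGroupID), " +
          "peer \(device.peerIndex + 1) of \(device.peerCount)")
}

// Group linked devices; on a Pro Vega II Duo the two GPUs would land
// in the same peer group, signalling they can transfer data directly.
let groups = Dictionary(grouping: devices.filter { $0.peerGroupID != 0 },
                        by: { $0.peerGroupID })
for (id, peers) in groups {
    print("Peer group \(id): \(peers.map { $0.name })")
}
```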



View at TechPowerUp Main Site
 
Dual GPU. Now that is a squeaker. I thought AMD would never go dual GPU. Infinity Fabric between chips... hmm, wonder how that is going to pan out. And it uses HBM2, so I wonder what the latency will be, since Ryzen had some latency over Infinity Fabric, especially the 1st gen. Hope the GPU can avoid such delays.
I assume this is only for Mac and we are not talking about a PC alternative of the same GPU?
 
I'm curious as to what the connectors on the top are. I thought AMD had done away with CrossFire bridges...

[Image: apple_gpu.jpg]
 
this sounds like the successor of the beastly R9 295X2 and HD 7990. Kinda sad this is a card specific to the Apple "PC"...
 
I'm curious as to what the connectors on the top are. I thought AMD had done away with CrossFire bridges...

Here you go.

The graphics card pulls its power entirely from a standard PCIe x16 slot, which is capable of 75W, and Apple's new proprietary PCIe connector that can supply up to 475W.

The graphics cards communicate with each other through AMD's Infinity Fabric Link connection for an aggregate bandwidth up to 84 GB/s per direction.
 
Infinity Fabric, it says so. They already used IF bridges for quad MI60 setups.

Uhm, no, that's the blue outline between the chips...
But it might well be between cards as well.
 
Power connectors. I don't see any conventional 8- or 6-pin PCIe power connectors. Although, maybe they really are drawing all the power from the top left one in the photo, like it says.

I guess you didn't read the spec. See the front part of the rear PCIe connector, that does 475W of power, hence why it looks quite different.
 
I guess you didn't read the spec. See the front part of the rear PCIe connector, that does 475W of power, hence why it looks quite different.

I saw that, but I almost found it hard to believe considering that two 8-pin PCIe connectors, plus PCIe power are limited (in spec) to 375 watts. However, it looks like a fairly substantial connector, so I tried editing my post as quickly as I could which is why you see the part where I said "Although, maybe they really are drawing all the power from the top left one in the photo, like it says." After which, I just deleted the entire reply.
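For reference, the spec arithmetic being compared in this exchange (the 475W figure is from Apple's published specs; the rest are standard PCIe numbers):

```swift
// Conventional PCIe power ceiling versus the Mac Pro arrangement.
let slotPower = 75                     // W from a standard PCIe x16 slot
let eightPin = 150                     // W per 8-pin PCIe connector (spec)
let conventionalLimit = slotPower + 2 * eightPin   // two 8-pins + slot
let macProBudget = slotPower + 475                 // slot + Apple's connector

print("Slot + two 8-pin: \(conventionalLimit) W")                // 375 W
print("Mac Pro slot + proprietary connector: \(macProBudget) W") // 550 W
```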
 
I'm curious as to what the connectors on the top are. I thought AMD had done away with CrossFire bridges...

[Image: attachment 124268]

No idea of the specifications of this "extended PCIe" they've invented, but considering it's roughly the same physical size as regular PCIe yet can supply over 6x the power, it's possible this has no data pins. Thus in a multi-card configuration (if that's even possible), the cards would need to talk to each other over the PCIe 3.0 bus, which would be severely limiting (in terms of both latency and bandwidth) compared to the Infinity Fabric link between the on-card GPUs. In that case the only moderately feasible solution would be a direct card-to-card link a la SLI or CrossFire.

As for AMD doing away with CF, it seems to me that this card throws all of the industry norms out the window, so I wouldn't read too much into its design in regards to more consumer-oriented products.
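To put rough numbers on that bandwidth gap (the PCIe 3.0 figures are from the spec; the 84.5 GB/s is the article's number for the on-card link):

```swift
import Foundation

// PCIe 3.0 x16 usable bandwidth per direction, after 128b/130b encoding.
let laneRateGTs = 8.0                  // GT/s per lane
let lanes = 16.0
let encoding = 128.0 / 130.0           // line-coding efficiency
let pcie3x16 = laneRateGTs * lanes * encoding / 8.0   // ~15.75 GB/s

let infinityFabric = 84.5              // GB/s per direction (article figure)

print(String(format: "PCIe 3.0 x16: %.2f GB/s per direction", pcie3x16))
print(String(format: "Infinity Fabric link: %.1fx that",
             infinityFabric / pcie3x16))                // ~5.4x
```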
 
I saw that, but I almost found it hard to believe considering that two 8-pin PCIe connectors, plus PCIe power are limited (in spec) to 375 watts. However, it looks like a fairly substantial connector, so I tried editing my post as quickly as I could which is why you see the part where I said "Although, maybe they really are drawing all the power from the top left one in the photo, like it says." After which, I just deleted the entire reply.

It does indeed look a bit too good to be true, but if you look closely at the renders, it seems the slot has a dozen contacts and that part of the card is wider than the short section of the PCIe connector, so it seems like it should be possible. I like the design and would like to see it in PCs, but that's highly unlikely.

Note the weird little "plastic" blocks up front too, labelled with an exclamation mark and 1-2, 3-4 and 5-8. They look suspiciously like something to do with power as well.

[Image: macpro_expansion.jpg]
 
I saw that, but I almost found it hard to believe considering that two 8-pin PCIe connectors, plus PCIe power are limited (in spec) to 375 watts. However, it looks like a fairly substantial connector, so I tried editing my post as quickly as I could which is why you see the part where I said "Although, maybe they really are drawing all the power from the top left one in the photo, like it says." After which, I just deleted the entire reply.
Looks like they have gone with the same solution that server PSUs use:

[Images: attachments 124278, 124279]
 
I don't want to know the noise levels.
 
I don't want to know the noise levels.


That's just the icing compared to those two nuclear reactors on it... Well, if you look on the bright side, you won't have to pay for heating. It's not like Macs run cool these days anyway, so why not add more...
 
Dual GPU. Now that is a squeaker. I thought AMD would never go dual GPU.
For gaming they said no. But for compute-heavy workloads, dual GPU is great. They even said so when discussing the MCM approach to GPUs. There was an interview with David Wang a while ago on the subject.
 
For gaming they said no. But for compute-heavy workloads, dual GPU is great. They even said so when discussing the MCM approach to GPUs. There was an interview with David Wang a while ago on the subject.
It is Infinity Fabric. Maybe for gaming, sooner or later, it will be OK. I recall something different: AMD stated that improving gaming performance with a monolithic chip will get harder every year, so to keep gaming advancing we will need dual GPUs.
BTW, you can still game on that GPU no problem.
 
I like the design and would like to see it in PCs, but that's highly unlikely.

Please no, we don't need a repeat of EISA slots that were as long as the motherboard is wide. The issue is with 12V: what is needed is for the industry to migrate to higher multiples of it (24V, 36V) to bring down the high amperages and the thick traces and cables necessitated by such a low voltage. High amperage is also far more dangerous than high voltage.
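The amperage argument in numbers, using the card's 475W budget and I = P / V:

```swift
import Foundation

// Current needed to deliver the same 475 W at different rail voltages.
// Halving the current roughly halves the required conductor cross-section.
let watts = 475.0
for volts in [12.0, 24.0, 36.0] {
    print(String(format: "%2.0f V rail: %4.1f A", volts, watts / volts))
}
// 12 V → 39.6 A, 24 V → 19.8 A, 36 V → 13.2 A
```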
 
It is Infinity Fabric. Maybe for gaming, sooner or later, it will be OK. I recall something different: AMD stated that improving gaming performance with a monolithic chip will get harder every year, so to keep gaming advancing we will need dual GPUs.
BTW, you can still game on that GPU no problem.
Yeah, of course you can; it's just that more often than not the game engine will recognize only one GPU and not both. That was the biggest issue they talked about. Until they can make multiple chips appear as one to the API and engines, they will not pursue it, as they can't hope that devs will optimize their games for that setup. To quote him, devs see it as a burden.
They are definitely looking at it for gaming though, so these kinds of cards could give them some insight. Besides, we all know that when SLI/CrossFire works, it becomes an amazing thing.

“To some extent you’re talking about doing CrossFire on a single package,” says Wang. “The challenge is that unless we make it invisible to the ISVs [independent software vendors] you’re going to see the same sort of reluctance.”

Does that mean we might end up seeing diverging GPU architectures for the professional and consumer spaces to enable MCM on one side and not the other?

“Yeah, I can definitely see that,” says Wang, “because of one reason we just talked about, one workload is a lot more scalable, and has different sensitivity on multi-GPU or multi-die communication. Versus the other workload or applications that are much less scalable on that standpoint. So yes, I can definitely see the possibility that architectures will start diverging.”
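A rough illustration of the "burden" Wang is describing: with explicit multi-GPU under an API like Metal, the application itself must pick the devices, keep a command queue per GPU, and split the work; nothing is automatic. A hypothetical sketch (the work-splitting comment stands in for real encoding):

```swift
import Metal

// Explicit multi-GPU: the app, not the driver, owns the split.
let devices = MTLCopyAllDevices()
guard devices.count >= 2 else {
    fatalError("This sketch assumes a machine with at least two GPUs")
}

// One command queue per GPU; the engine would also have to duplicate
// resources on each device and merge the results itself.
let queues = devices.prefix(2).compactMap { $0.makeCommandQueue() }

for (index, queue) in queues.enumerated() {
    guard let commandBuffer = queue.makeCommandBuffer() else { continue }
    // ... encode this GPU's share of the frame here, e.g. alternate
    // halves of the screen or alternate frames ...
    commandBuffer.commit()
    print("Submitted chunk \(index) to \(queue.device.name)")
}
```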
 
I don't want to know the noise levels.
I believe these custom cards are four slots tall and run the full length of the case, so they have massive heatsinks, and the large front system fans handle the airflow. They may not get that loud, depending on the thermal setup. The classic Mac Pro let chips run warmer before ramping up fan speeds.
 
Yeah, of course you can; it's just that more often than not the game engine will recognize only one GPU and not both. That was the biggest issue they talked about. Until they can make multiple chips appear as one to the API and engines, they will not pursue it, as they can't hope that devs will optimize their games for that setup. To quote him, devs see it as a burden.
They are definitely looking at it for gaming though, so these kinds of cards could give them some insight. Besides, we all know that when SLI/CrossFire works, it becomes an amazing thing.

To be honest, they are all monolithic due to the fact they can work as a single unit. I remember that AMD was inventing a connection type that would allow games to see the two chips as one unit (Infinity Fabric, maybe with an I/O die of some sort?). This might be the first approach to that solution. Or maybe I'm just thinking way too far into the future.
SLI and CrossFire, sure. But you know they don't scale to double the speed of a single card, yet you can still see an improvement.
 