
AMD Announces the Radeon Pro Vega II and Pro Vega II Duo Graphics Cards

Well, goodbye GTX 1650, your PCIe-powered "throne" has just been destroyed. :laugh:
Seriously though, this explains many of AMD's decisions. Very interested to see what future this dual GPU has.
 
Uhm, no, that's the blue outline between the chips...
But it might well be between cards as well.

Infinity Fabric supports more than 2 chips.

The new Mac Pro cheesegrater can take two single- or dual-GPU cards and link them together.
 
8K video rendering. It's here, and Apple knows the sooner they offer it and make it available, the sooner they will have a need for an uber-$$$$$ 8K display.

Sow the seeds, reap the reward; Apple is expanding its roots again into content-creation-focused growth. About time, because like it or not, us PC plebes have benefited from the drive for better monitors.
 
This sounds like the successor to the beastly R9 295X2 and HD 7990. Kinda sad this is a card specific to the Apple "PC"....
Those were gaming cards. The new Pro Vega II Duo is a workstation card.
 
Dual GPU. Now that is a shocker. I thought AMD would never go dual GPU again. Infinity Fabric between the chips... I wonder how that is going to pan out. Since it uses HBM2, I also wonder what the latency will be; Ryzen had some latency issues with Infinity Fabric, especially the first generation. I hope the GPU can avoid such delays.
I assume this is Mac-only, and we are not talking about a PC version of the same GPU?

If you knew AMD's history, you would know they have always made dual-GPU cards.
 
I don't want to know the noise levels.

Just watch a SpaceX video right before liftoff.

Jokes aside, I'm skipping this too; even if I were an Apple user, I still wouldn't buy it.
 
8K video rendering. It's here, and Apple knows the sooner they offer it and make it available, the sooner they will have a need for an uber-$$$$$ 8K display.

Sow the seeds, reap the reward; Apple is expanding its roots again into content-creation-focused growth. About time, because like it or not, us PC plebes have benefited from the drive for better monitors.
For at least 5 years, the “pros” on the Apple side have felt pretty neglected, maybe longer, since many didn’t care for the trash-can design. This is the first real move of confidence for those users. Users can actually configure this Mac Pro, though I’m curious to see what sort of expansion there truly is. Will it be possible to add a third-party GPU? Not just have it detected by macOS, but also have the necessary power cables to drive it.

I do love the cleanliness of Apple’s internal design. This thing could just as easily have a glass shell.
 
Please no, we don't need a repeat of EISA slots that were as long as the motherboard is wide. The issue is with 12V, what is needed is for the industry to migrate to higher multiples of that (24V, 36V) to bring down the high amperages and thick traces and cables necessitated by such a low voltage. High amperage is also far more dangerous than high voltage.
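The voltage argument above comes down to quick arithmetic; here is a minimal sketch, where the 300 W GPU draw is an assumed example figure, not something from the thread:

```python
# For a fixed power draw, current scales as I = P / V, and resistive
# loss in a cable scales as I^2 * R, so a higher rail voltage means
# lower current and thinner wires for the same wattage.
def amps(power_w: float, volts: float) -> float:
    """Current needed to deliver power_w at the given rail voltage."""
    return power_w / volts

for v in (12, 24, 36):
    print(f"{v} V rail: {amps(300, v):.1f} A")  # 25.0, 12.5, 8.3 A
```

Moving from 12 V to 36 V cuts the current to a third for the same power, which is the whole point of the proposal.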

Dude, that would be a change to the whole PC ecosystem we know today, affecting businesses, enterprises, everything on this planet lol. They are not going to do that.

A proper PSU can easily put 12 A through each yellow wire. So with 2x 8-pin connectors you're looking at 6 x 12 A = 72 A, or roughly 860 W at 12 V, not counting the PCIe slot. There are cards with 3x 8-pin, which pushes this even further, to 108 A.
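The wire math above, as a minimal sketch (assuming, as the post does, 12 A per yellow wire and three 12 V wires per 8-pin PCIe connector):

```python
# Back-of-the-envelope 12 V delivery through PCIe power connectors,
# ignoring the slot's own contribution. Assumes 12 A per yellow wire
# and three 12 V wires per 8-pin connector.
AMPS_PER_WIRE = 12
WIRES_PER_8PIN = 3
VOLTS = 12

def connector_watts(num_8pin: int) -> int:
    """Total 12 V power available through num_8pin 8-pin connectors."""
    amps = num_8pin * WIRES_PER_8PIN * AMPS_PER_WIRE
    return amps * VOLTS

print(connector_watts(2))  # 72 A -> 864 W
print(connector_watts(3))  # 108 A -> 1296 W
```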

If cables were your concern, then load balancing is your friend. Many overclockers simply run a bunch of PSUs together, and there are even 2000 W PSUs out there that can run a heavily overclocked system without coming close to maxing out their power delivery.

If you buy proper equipment, you never face any of the issues you mention. It's pretty much designed to be used safely. You're asking for problems if you're going to cut up and hack the wires and get electrocuted lol.
 
If you knew AMD's history, you would know they have always made dual-GPU cards.
I know AMD's history, and they only ever made a handful of dual-GPU cards. Now they are releasing quite a number. Anyway, that doesn't change the fact that they had stopped going dual GPU for gaming. They are using Infinity Fabric for the communication, and that's interesting. It's like AMD is trying to squeeze more juice from the GPU, and I think that's a good thing: a first step toward dual chips in gaming. That's my assumption, but I think they can pull it off.
 
Wow, 2 pages and no Crysis comment?

I hope this means AMD is ready to give NVIDIA some meaningful competition. It must perform better than NVIDIA's tensor cores if Apple picked it for their HEDT, granted it's using twice as many GPUs to do so.
 
Wow, 2 pages and no Crysis comment?

I hope this means AMD is ready to give NVIDIA some meaningful competition. It must perform better than NVIDIA's tensor cores if Apple picked it for their HEDT, granted it's using twice as many GPUs to do so.
Or it might perform slightly worse but at half the price while being very good across the workstation workload suite :)
Guess we will have to wait and see :)
 
I hope this means AMD is ready to give NVIDIA some meaningful competition. It must perform better than NVIDIA's tensor cores if Apple picked it for their HEDT, granted it's using twice as many GPUs to do so.
Because tensor cores are the only reasonable metric one has when weighing performance of 28 TFlop GPUs.

A pity you can't rant about the lack of the heavily denoised RT gimmick.
 
Because tensor cores are the only reasonable metric one has when weighing performance of 28 TFlop GPUs.

A pity you can't rant about the lack of the heavily denoised RT gimmick.

If you want to do deep learning on Vega/Vega II chips, they can operate at double, single, half, and INT8 precision.
So 28 TFLOPS single precision, 56 TFLOPS half precision, and 112 TOPS INT8.
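The precision scaling above follows the usual packed-math pattern (throughput roughly doubles each time precision is halved); a sketch using the thread's own figures:

```python
# Rate doubling as precision halves: FP32 -> FP16 gets 2x via packed
# half-precision math, and INT8 gets 4x via dot-product instructions.
# The 28 TFLOPS FP32 base rate is the figure quoted in the thread.
FP32_TFLOPS = 28.0

rates = {
    "FP32 (TFLOPS)": FP32_TFLOPS,
    "FP16 (TFLOPS)": FP32_TFLOPS * 2,
    "INT8 (TOPS)": FP32_TFLOPS * 4,
}
for fmt, rate in rates.items():
    print(f"{fmt}: {rate:.0f}")
```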
 
Because tensor cores are the only reasonable metric one has when weighing performance of 28 TFlop GPUs.

A pity you can't rant about the lack of the heavily denoised RT gimmick.

14 TFLOPS according to AMD's slides, and it isn't exactly 28 TFLOPS combined because of bandwidth limitations between the two GPUs. But even assuming 100% efficiency, you're drag racing against NVIDIA's 16.3 TFLOPS on a single-GPU card, which is significantly higher per GPU.

Believe me, I'm not a fanboy. I want AMD to succeed so I can get excited about product launches from both sides of the aisle instead of the one-sided progression we've had for years. But I'm also a realist, and tech marketing doesn't affect me any more than a late-night infomercial for pillows, so without real-world testing it's hard to say.

BUT

Apple DOES have product access, has presumably tested it thoroughly, and didn't decide to base their platform around it on a whim. That gives hope.
 
14 TFLOPS according to AMD's slides, and it isn't exactly 28 TFLOPS combined because of bandwidth limitations between the two GPUs. But even assuming 100% efficiency, you're drag racing against NVIDIA's 16.3 TFLOPS on a single-GPU card, which is significantly higher per GPU.

Believe me, I'm not a fanboy. I want AMD to succeed so I can get excited about product launches from both sides of the aisle instead of the one-sided progression we've had for years. But I'm also a realist, and tech marketing doesn't affect me any more than a late-night infomercial for pillows, so without real-world testing it's hard to say.

BUT

Apple DOES have product access, has presumably tested it thoroughly, and didn't decide to base their platform around it on a whim. That gives hope.
Actually, it is 28.4 TFLOPS single precision, and double that for half precision, for the Radeon Pro Vega II Duo. That's what it says on the Apple page. The only thing worth mentioning is that this card is dual-chip. What's also worth mentioning: you can double single-precision throughput by adding another Vega II Duo. So if you ask me, it's pretty impressive :)
 