
Maxsun Arc Pro B60 Dual 48GB Graphics Card Hands-on

btarunr

Editor & Senior Moderator
Here are some of the first pictures of the Maxsun Intel Arc Pro B60 Dual, which, as its name suggests, is a dual-GPU graphics card. The card carries a pair of Arc Pro B60 chips, each with 24 GB of memory, for a total of 48 GB. It is a 2-slot, full-height beast over 30 cm long, with a lateral blower-based cooling solution, and draws power from a 600 W-rated 12V-2x6 power connector. Each GPU gets its own display outputs on the rear I/O: one DisplayPort 2.1 and one HDMI 2.1b.

Internally, the Maxsun Arc Pro B60 Dual lacks a PCIe bridge chip. Since the "BMG-G21" silicon has a PCI-Express 5.0 x8 host interface, both GPUs are wired directly to the x16 gold fingers and rely on x8/x8 lane bifurcation at the host level. This is precisely how M.2 NVMe riser AICs work, splitting the x16 connection among four x4 M.2 SSDs. The primary use-case of the Arc Pro B60 Dual 48 GB is AI inferencing, and the card is designed so you can stack up to four of them in a workstation, giving AI models 192 GB of video memory to span across. PCIe Gen 5 also enables the cache-coherency features Intel introduced with CXL 1.0, which runs over the PCIe 5.0 physical layer. Tying it all together is Intel's Project Battlematrix inference workstation platform.
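For a sense of how software sees the card: with no bridge chip on board, the host enumerates two independent GPUs, and the inference stack has to split the model across them explicitly. Below is a minimal sketch of that idea, assuming a PyTorch build with Intel's XPU backend (torch.xpu) and that each half of the card shows up as its own device; the layer sizes and device indices are purely illustrative, not anything measured on this card.

```python
# Minimal sketch: naive pipeline split of a model across the two GPUs of a
# dual-GPU card. Assumes a PyTorch build with Intel's XPU backend
# (torch.xpu); device indices and layer sizes are illustrative only.
import torch
import torch.nn as nn

if not torch.xpu.is_available():
    raise SystemExit("No XPU devices found -- this sketch assumes the Intel GPU backend.")

# Each GPU of the B60 Dual should enumerate as its own device (xpu:0, xpu:1),
# since the host sees two independent x8 endpoints after bifurcation.
for i in range(torch.xpu.device_count()):
    print(f"xpu:{i} -> {torch.xpu.get_device_name(i)}")

# Toy two-stage model: stage 1 lives in the first 24 GB pool, stage 2 in the second.
stage1 = nn.Sequential(nn.Linear(4096, 4096), nn.GELU()).to("xpu:0")
stage2 = nn.Sequential(nn.Linear(4096, 4096), nn.GELU()).to("xpu:1")

x = torch.randn(8, 4096, device="xpu:0")
h = stage1(x)
# Activations cross the host PCIe link here -- the x8/x8 split, not a bridge chip.
y = stage2(h.to("xpu:1"))
print(y.shape)
```

Production stacks do the same thing more cleverly via tensor or pipeline parallelism, which is the kind of plumbing Project Battlematrix is meant to handle.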



 
If they manage to sell it for less than $1k (given that a single B60 is rumored to be $500), then that's going to be a great product.
 
600 W with a blower fan and this much memory screams overheating, but it's a must if you want four of them side by side.
Still, water cooling is the way to go with such setups. 2.4 kW for the GPUs alone is hot stuff.
 
If they manage to sell it for less than $1k (given that a single B60 is rumored to be $500), then that's going to be a great product.
Nah, that's a low price. Raise the price and make sure they go into the hands of the people who can use them properly. High prices are good and keep GPUs out of the hands of the riff-raff.
 
600 W with a blower fan and this much memory screams overheating, but it's a must if you want four of them side by side.
Still, water cooling is the way to go with such setups. 2.4 kW for the GPUs alone is hot stuff.
How power hungry must a card be to need a 600 W power connector? NVIDIA uses it for its mid-range gaming cards.
 
[Attached image: card1.jpg]

Mostly the same design as this?
 
I thought: what if someone did this but with AMD 9070 XT chips? That would be a monster.
AMD should make this: an 8,192 GPU-core monster with 32 GB or 64 GB of VRAM.
 
I thought: what if someone did this but with AMD 9070 XT chips? That would be a monster.
AMD should make this: an 8,192 GPU-core monster with 32 GB or 64 GB of VRAM.
For real work, yes; for gaming, nobody is going to support it. PC gaming sucks and is going to the cloud. Get with it, gamer.
 
I believe PCIe 5.0 x8 (~32 GB/s) is the same bandwidth as the LPDDR5-8000 used by the Ryzen AI MAX+ 395. So this card is fine as long as you're loading one large model that spans the two 24 GB pools, but as soon as you start using it for more random tasks you'll hit the inter-GPU bottleneck, and it may be no faster than a MAX+ 395 (which in 128 GB form can allocate up to 96 GB as VRAM).
 
I believe PCIe 5.0 x8 (~32 GB/s) is the same bandwidth as the LPDDR5-8000 used by the Ryzen AI MAX+ 395. So this card is fine as long as you're loading one large model that spans the two 24 GB pools, but as soon as you start using it for more random tasks you'll hit the inter-GPU bottleneck, and it may be no faster than a MAX+ 395 (which in 128 GB form can allocate up to 96 GB as VRAM).
For inference, the inter-GPU comms won't be that bad, especially for such a "low-end" GPU. You may start to notice bottlenecks if you end up with four of those (so eight GPUs in total), but the impact for inference should be quite acceptable given the price. I don't think most people will be using these for training, but even then they should know what they're paying for.

FWIW, Strix Halo's performance is really disappointing for LLMs so far, mostly due to limitations within the software stack (to the surprise of no one).
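For context on the bandwidth comparison, here is the back-of-the-envelope math using raw peak figures (ignoring encoding and protocol overhead; the 256-bit memory bus for the MAX+ 395 is the commonly reported configuration, and the "-8000" speed grade implies LPDDR5X):

```python
# Back-of-the-envelope peak bandwidth figures (no encoding/protocol overhead).
pcie5_gt_per_lane = 32                      # PCIe 5.0: 32 GT/s per lane
pcie5_x8_gbs = pcie5_gt_per_lane * 8 / 8    # 8 lanes, ~1 bit per transfer per lane: ~32 GB/s per direction

lpddr5x_mts = 8000                          # LPDDR5X-8000
bus_width_bits = 256                        # reported bus width of the Ryzen AI MAX+ 395
lpddr5x_gbs = lpddr5x_mts * (bus_width_bits / 8) / 1000   # ~256 GB/s aggregate
per_channel_gbs = lpddr5x_mts * 4 / 1000    # one 32-bit channel: ~32 GB/s

print(f"PCIe 5.0 x8 link                : ~{pcie5_x8_gbs:.0f} GB/s per direction")
print(f"LPDDR5X-8000, 256-bit bus       : ~{lpddr5x_gbs:.0f} GB/s aggregate")
print(f"LPDDR5X-8000, one 32-bit channel: ~{per_channel_gbs:.0f} GB/s")
```

So the x8 link is roughly on par with a single 32-bit LPDDR5X channel rather than the full 256-bit bus; either way, for inference it only matters when activations or KV-cache traffic actually has to cross between the two memory pools.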
 
Let's be honest, this is not a gaming card. The only question is whether the drivers will be good enough to support llama.cpp inferencing, or whether Intel will drop its own solution.
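On the llama.cpp question: llama.cpp already has a SYCL backend that targets Intel GPUs, so the open question is really driver quality rather than basic support. As a hedged illustration of what that workflow looks like today, here is a minimal sketch using the llama-cpp-python bindings; the model path is a placeholder, and it assumes the bindings were compiled against the SYCL backend.

```python
# Minimal sketch: llama.cpp inference through the llama-cpp-python bindings.
# Assumes the bindings were built against llama.cpp's SYCL backend for Intel GPUs
# (e.g. installed with CMAKE_ARGS="-DGGML_SYCL=ON"); the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/your-model.Q4_K_M.gguf",  # placeholder GGUF file
    n_gpu_layers=-1,   # offload every layer to the GPU(s)
    n_ctx=4096,
)

out = llm("Explain PCIe bifurcation in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

The bindings also expose a tensor_split parameter for dividing layers across multiple devices, which, assuming the SYCL backend enumerates both halves of the card, is how both 24 GB pools would be put to work.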
 
600 W with a blower fan and this much memory screams overheating, but it's a must if you want four of them side by side.
Still, water cooling is the way to go with such setups. 2.4 kW for the GPUs alone is hot stuff.
I don't see it saying anywhere that its TDP is 600 W… Does it use a 600 W 12V-2x6? Yes, but that's about it…

How power hungry must a card be to need a 600 W power connector? NVIDIA uses it for its mid-range gaming cards.
More than likely, 400 W max. Intel is moving towards the new standard, yet AMD still uses 2x 8-pins… like an NVIDIA Turing card…
 
I don't see it saying anywhere that its TDP is 600 W… Does it use a 600 W 12V-2x6? Yes, but that's about it…


More than likely, 400 W max. Intel is moving towards the new standard, yet AMD still uses 2x 8-pins… like an NVIDIA Turing card…
Maxsun has reported a 400 W TDP for the dual-GPU model. Most other B60s top out at 200 W.
 
Pleasantly surprising from Intel that this wasn't just a rumor. It's a beast of a GPU, and seeing the CXL detail is great. I think it's a solid design overall: the x8/x8 bifurcation and CXL, combined with massive VRAM and dual GPUs, is pretty impressive. Intel came out swinging on this one.
 
I thought: what if someone did this but with AMD 9070 XT chips? That would be a monster.
AMD should make this: an 8,192 GPU-core monster with 32 GB or 64 GB of VRAM.

No, it would be unusable for gaming: there is no CrossFire support in the drivers anymore, the graphics APIs themselves no longer support this type of hardware configuration, and the GPU was not designed for scalable operation. It would still be slower than an RTX 5090, even if it theoretically worked and scaling exceeded 90% (which was never achieved even in the glory days of SLI and CrossFire).

Compute-wise, you'd be better off even with the Navi 31 GPU, and AMD offers much better compute-centric products under the Instinct brand.
 