
Intel Xe HP "Arctic Sound" 1T and 2T Cards Pictured

btarunr

Editor & Senior Moderator
Intel has been extensively teasing its Xe HP scalable compute architecture for some time now, and Igor's Lab has an exclusive look at GPU compute cards based on the Xe HP silicon. We know from older reports that Intel's Xe HP compute accelerator packages come in three essential variants: 1 tile, 2 tiles, and 4 tiles. A "tile" here is an independent GPU accelerator die. Each of these tiles has 512 execution units (EUs), which work out to 4,096 programmable shaders. The single-tile card is a compact, half-height card that fits 1U and 2U chassis. According to Igor's Lab, it comes with 16 GB of HBM2E memory offering 716 GB/s of bandwidth, and its single tile has 384 of 512 EUs enabled (3,072 shaders). The card also has a typical board power of just 150 W.

The Arctic Sound 2T card is an interesting contraption: a much larger 2-slot card, easily above 28 cm in length, with a workstation spacer. It uses a 2-tile variant of the Xe HP package, but each of the two tiles has only 480 of 512 EUs enabled, which works out to 7,680 shaders. The dual-chiplet MCM carries 32 GB of HBM2E memory (16 GB per tile) and has a typical board power of 300 W. A single 4+4 pin EPS connector, rated for up to 225 W, powers the card.
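For a quick sanity check, here is the arithmetic behind the figures above as a small Python sketch; the 8-shaders-per-EU ratio is simply the conversion implied by the article's numbers (512 EUs = 4,096 shaders), not an official Intel figure.

```python
# Back-of-envelope check of the Arctic Sound figures quoted above.
# 8 shaders per EU is the ratio implied by 512 EUs -> 4,096 shaders.
SHADERS_PER_EU = 8

def card_summary(name, tiles, eus_per_tile, hbm_gb_per_tile, tbp_w):
    shaders = tiles * eus_per_tile * SHADERS_PER_EU
    memory_gb = tiles * hbm_gb_per_tile
    return f"{name}: {shaders:,} shaders, {memory_gb} GB HBM2E, {tbp_w} W TBP"

print(card_summary("Arctic Sound 1T", tiles=1, eus_per_tile=384, hbm_gb_per_tile=16, tbp_w=150))
print(card_summary("Arctic Sound 2T", tiles=2, eus_per_tile=480, hbm_gb_per_tile=16, tbp_w=300))
# -> Arctic Sound 1T: 3,072 shaders, 16 GB HBM2E, 150 W TBP
# -> Arctic Sound 2T: 7,680 shaders, 32 GB HBM2E, 300 W TBP
```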



View at TechPowerUp Main Site
 
The memory bandwidth for Intel's HBM2e memory isn't very impressive considering what AMD did with their Radeon VII.
 
Very nice looking GPU. Half height, low profile, passively cooled. Great for small form factor computers. If this product reaches retail, Intel will have a sale from me.
 
The impressiveness level depends on the number of HBM stacks.
 
Retail will be HPG, if I'm not mistaken.
 
Why 4+4 pin EPS?

Why not use a PCI-E 8-pin? It's more prevalent on PSUs.
 
Is it me, or does 'Xe' sound like a gendervoid demiqueer foxkin pronoun?
I'm not sure where you're planning to go with that comment, so let's file it under "probably should leave it at that" okay?
 
The memory bandwidth for Intel's HBM2e memory isn't very impressive considering what AMD did with their Radeon VII.
The bandwidth makes it most likely that this is two 8GB stacks per tile, at ~2.8Gbps/pin. That just tells us that they aren't using pushed-to-the-limit HBM2e, likely for thermal reasons (HBM is efficient, but dense).
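Roughly, the math behind that guess looks like this (the stack count and per-pin rate are inferences from the quoted 716 GB/s, not confirmed specs):

```python
# Two 1024-bit HBM2E stacks per tile at ~2.8 Gbps/pin lands close to
# the 716 GB/s the article quotes; these inputs are assumptions.
stacks_per_tile = 2      # assumed
bus_width_bits = 1024    # per HBM2E stack
pin_rate_gbps = 2.8      # inferred from the quoted bandwidth

bandwidth_gb_s = stacks_per_tile * bus_width_bits * pin_rate_gbps / 8
print(f"~{bandwidth_gb_s:.0f} GB/s per tile")  # ~717 GB/s
```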
Very nice looking GPU. Half height, low profile, passively cooled. Great for small form factor computers. If this product reaches retail, Intel will have a sale from me.
There's no way it's passively cooled.
It is. In a server chassis, with a bank of 15,000 rpm screamers pointing at every passive heatsink in there. Not quite what 'passive' means in consumer PCs ;)

For consumer applications ... well, the HHHL card is 150W. Most GPU makers struggle to cool 75W cards silently with dual-slot HHHL coolers. There have been higher rated ones (up to 125W IIRC) in that form factor, but that's really pushing things. But this isn't coming to the consumer market. Period.
 
But!

Does it run Crysis?

I hope so, because since no one can get their shit together, I will buy one if I have to.
 
No benchmarks? Come on Intel, I know you're good at numbers: put up a 50% higher frame rate than the MI100 and 50% higher clocks than the A100.
 
I'm not sure where you're planning to go with that comment, so let's file it under "probably should leave it at that" okay?
Yeah, no insult towards the transgender community was intended. For the sake of not invoking a huge s**tstorm, better to leave it at that.
 
Why 4+4 pin EPS?

Why not use a PCI-E 8-pin? It's more prevalent on PSUs.
I think that is a typo as the picture shows a single 8 pin connector.
 
A normal EPS 8-pin is 336 W, not 225 W. So this card's power consumption is at least 250 W+.
 
A normal EPS 8-pin is 336 W, not 225 W. So this card's power consumption is at least 250 W+.
The post says a TBP of 300 W ;) You're right about that rating, though.
I think that is a typo as the picture shows a single 8 pin connector.
Or rather that the EPS spec is 4+4 at its base, regardless of whether the connector is split or not? But more to Mussels' point, as was mentioned above, server GPUs/accelerators/AICs typically use EPS rather than PCIe power cables.
 
If you can point out where the blades of a fan are on those pics of the Intel GPU, I will edit my post.

The fact that it lacks a fan does not make it fit for SFF... The fan for it is simply located at the front of the server chassis.
Server accelerators in general do not have fans and rely on high-static-pressure forced airflow through the chassis.
Those accelerators are generally 150-450 W...
While it is "passive" in the sense that it doesn't have a dedicated fan, you aren't cooling that in an SFF build without a waterblock.
When determining whether something needs a fan: if it's >15 W, it needs a fan. The baby GPU uses 150 W, and the big one uses 300 W.
 
If you can point out where the blades of a fan are on those pics of the Intel GPU, I will edit my post.
It's been pointed out several times in the thread already that this is a server accelerator reliant on extreme levels of forced airflow from fans in the server chassis. So yes, it's passive by itself, but if you put this into any regular PC case with normal airflow it would overheat at the first sign of a load. You directly connected this being passive to SFF and you wanting to buy one (presumably for SFF, and presumably not an SFF server), which is why people are pointing out to you that this would never, ever work, and that you seem to have fundamentally misunderstood the product.
 
Or rather that the EPS spec is 4+4 at its base, regardless of whether the connector is split or not? But more to Mussels' point, as was mentioned above, server GPUs/accelerators/AICs typically use EPS rather than PCIe power cables.
What?
They are wired differently.
Care to explain?
 
What?
They are wired differently.
Care to explain?
8-pin EPS is the same as your mobo's 8-pin CPU power. And yes, it's wired differently: 4x 12 V and 4x ground, the opposite of PCIe, which has 3 power and 5 ground.
 
What?
They are wired differently.
Care to explain?
What is wired differently? PCIe and EPS? Yes. That is precisely the point. Server PSUs typically output EPS wiring due to it being four 12V pairs rather than three + two extra grounds like 8-pin PCIe, allowing it to handle higher currents. I was just pointing out that "4+4 EPS" can still be a single 8-pin connector - the base spec is a 4-pin connector, then an option for another 4 pins was added, and that combined 8-pin connector is used (in non-splittable form) in servers for powering AICs.
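A minimal sketch of the arithmetic behind those connector figures; the per-pin currents below are illustrative assumptions (actual ratings depend on the terminals used), chosen to reproduce the wattages mentioned in this thread:

```python
# Rough connector capacity: number of 12 V supply pins x current per pin x 12 V.
# EPS 8-pin has four 12 V pins; PCIe 8-pin has three (the rest are grounds/sense).
def connector_watts(power_pins, amps_per_pin, volts=12.0):
    return power_pins * amps_per_pin * volts

print(connector_watts(4, 7.0))   # EPS 8-pin with ~7 A terminals -> 336 W
print(connector_watts(4, 4.7))   # EPS at ~4.7 A per pin -> ~225 W, the article's figure
print(connector_watts(3, 4.2))   # PCIe 8-pin at ~4.2 A -> ~150 W spec limit
```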
 
ANY idea about cost? *cough*
 