
Intel Arc "Battlemage" BMG-G31 B770 GPU Support Lands in Mesa Driver

AleksandarK

News Editor
Staff member
Intel has quietly submitted patches adding its BMG-G31 GPU SKUs to the Mesa open-source graphics driver. With device IDs e220, e221, e222, and e223, Intel is gearing up for the launch of its higher-end "Battlemage" B770. In the weeks leading up to Computex 2025, Intel dropped hints and unofficial leaks about new Arc Xe2 "Battlemage" desktop graphics cards, including rumors of a high-end B770 model and placeholder mentions of a B750 on Intel Japan's website. Fans were excited, but at the Taipei Nangang show, neither card appeared. Then Tweakers.net reported, based on unnamed insiders, that the Battlemage-based Arc B770 is real and expected to launch in November 2025, though plans could still change.
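For context on what "support landing in Mesa" involves at this stage: it is largely a matter of teaching the user-space driver to recognize the new PCI device IDs and map them to an existing hardware generation so the Xe2 code paths light up. The snippet below is a purely illustrative sketch of that idea, not Mesa's actual source; the struct, table layout, and SKU labels are assumptions, and only the four device IDs come from the reported patches.

```c
/* Illustrative only -- NOT Mesa code. Shows the general idea of mapping
 * newly added PCI device IDs to a known GPU family. The struct, table, and
 * SKU names are hypothetical; the four IDs are the ones in the patches. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct gpu_entry {
    uint16_t    device_id;
    const char *family;
    const char *name;
};

static const struct gpu_entry intel_dgpu_table[] = {
    { 0xe220, "Xe2 (Battlemage, BMG-G31)", "BMG-G31 SKU" },
    { 0xe221, "Xe2 (Battlemage, BMG-G31)", "BMG-G31 SKU" },
    { 0xe222, "Xe2 (Battlemage, BMG-G31)", "BMG-G31 SKU" },
    { 0xe223, "Xe2 (Battlemage, BMG-G31)", "BMG-G31 SKU" },
};

static const struct gpu_entry *lookup_gpu(uint16_t pci_device_id)
{
    for (size_t i = 0; i < sizeof(intel_dgpu_table) / sizeof(intel_dgpu_table[0]); i++)
        if (intel_dgpu_table[i].device_id == pci_device_id)
            return &intel_dgpu_table[i];
    return NULL;   /* unknown device: the driver would simply not bind */
}

int main(void)
{
    const struct gpu_entry *e = lookup_gpu(0xe220);
    if (e)
        printf("0x%04x -> %s\n", e->device_id, e->family);
    return 0;
}
```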

With 32 Xe2 cores for the B770, Intel plans to pair 16 GB of GDDR6 memory on a 256-bit bus. What is interesting is that Intel will use a PCIe 5.0 ×16 host interface, whereas the lower-end Arc B580 and Arc B570 use a PCIe 4.0 ×8 link. The faster, wider interface makes sense, as the higher-end Arc B770 should push considerably more data across the host bus, but we will have to wait and see what Intel has prepared. If the rumored Q4 launch materializes, it will give gamers an interesting choice right around the holidays.
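For a sense of what the interface bump is worth on paper, here is a quick back-of-the-envelope comparison of theoretical host-link bandwidth (per direction, using the nominal post-encoding per-lane rates of PCIe 4.0 and 5.0; real-world throughput is lower):

```c
/* Theoretical PCIe link bandwidth per direction.
 * Nominal per-lane rates after 128b/130b encoding:
 *   PCIe 4.0: 16 GT/s -> ~1.97 GB/s per lane
 *   PCIe 5.0: 32 GT/s -> ~3.94 GB/s per lane */
#include <stdio.h>

static double link_bw_gbs(double per_lane_gbs, int lanes)
{
    return per_lane_gbs * lanes;
}

int main(void)
{
    double b580 = link_bw_gbs(1.969, 8);   /* Arc B580/B570: PCIe 4.0 x8        */
    double b770 = link_bw_gbs(3.938, 16);  /* Arc B770 (reported): PCIe 5.0 x16 */

    printf("B580, PCIe 4.0 x8 : ~%.1f GB/s\n", b580);     /* ~15.8 GB/s */
    printf("B770, PCIe 5.0 x16: ~%.1f GB/s\n", b770);     /* ~63.0 GB/s */
    printf("Ratio             : ~%.1fx\n", b770 / b580);  /* ~4.0x      */
    return 0;
}
```

Roughly four times the host bandwidth, which is what you would expect from doubling both the per-lane rate and the lane count.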



View at TechPowerUp Main Site | Source
 
Intel has been rather late with B770. Ideally, it should have arrived shortly after the B580. It has been six months since that launched and we still don't have a confirmed release date for its larger sibling.
 
They just stopped selling the A750/A770. Odds are getting better that this is a shift to start producing the B770.

Really interested in seeing how they perform if Intel can get them out the door this year.
 
Intel has been rather late with B770. Ideally, it should have arrived shortly after the B580. It has been six months since that launched and we still don't have a confirmed release date for its larger sibling.

I don't think Intel wants to sell a lot of them. A 256-bit bus card with 16 GB of memory and a big GPU, competing at (a guess) somewhere between 5060 Ti and 5070 level for ~$400, with a large number of software issues and support needs relative to Nvidia/AMD, is not gonna make a lot of money for them. It's likely to be 5080-class hardware, hobbled by drivers and Intel's relative inexperience.

That said, I'd be interested. Arc got a lot of significant performance upgrades from drivers as they improved their software; I bet Battlemage will, too.
 
I don't think Intel wants to sell a lot of them. A 256-bit bus card with 16 GB of memory and a big GPU, competing at (a guess) somewhere between 5060 Ti and 5070 level for ~$400, with a large number of software issues and support needs relative to Nvidia/AMD, is not gonna make a lot of money for them. It's likely to be 5080-class hardware, hobbled by drivers and Intel's relative inexperience.

That said, I'd be interested. Arc got a lot of significant performance upgrades from drivers as they improved their software; I bet Battlemage will, too.
Battlemage, unlike first-generation Arc (Alchemist), has performed well out of the gate, so I don't expect it to show the same improvement over time. Rumours suggest 32 Xe cores, up from 20 in the B580. If that turns out to be the case, it's unlikely to be more than 60% faster than the B580, which, going by recent reviews, would put it around the 4070. Given Battlemage's large die sizes, I suspect you're right; they don't want to sell too many of these, as they won't be able to charge as much for it as the die size would warrant.
 
Intel has been rather late with B770. Ideally, it should have arrived shortly after the B580. It has been six months since that launched and we still don't have a confirmed release date for its larger sibling.
It definitely feels very late, but it seems like none of these companies have any incentive to rush consumer GPUs right now, so I'm really not sure when any next gen parts are going to be available...and even when they're "available", will they actually be available? The low to mid-range parts are mostly widely available from AMD and Nvidia, they're just at super inflated prices still. Things have just started dropping on a few GPUs because they were so far above MSRP and not being purchased for those prices that they'll want to bring the price down to keep them moving. So from Intel's perspective, the only timing that will definitely be too late is if AMD and Nvidia have already dropped all their competing parts' prices and now the Intel card can't compete at all. If whenever it actually launches, it hits good performance/price (and is available for that price), it doesn't really matter when it launches.
 
It definitely feels very late, but it seems like none of these companies have any incentive to rush consumer GPUs right now, so I'm really not sure when any next gen parts are going to be available...and even when they're "available", will they actually be available? The low to mid-range parts are mostly widely available from AMD and Nvidia, they're just at super inflated prices still. Things have just started dropping on a few GPUs because they were so far above MSRP and not being purchased for those prices that they'll want to bring the price down to keep them moving. So from Intel's perspective, the only timing that will definitely be too late is if AMD and Nvidia have already dropped all their competing parts' prices and now the Intel card can't compete at all. If whenever it actually launches, it hits good performance/price (and is available for that price), it doesn't really matter when it launches.
At least here, we have a good supply of the lower end 9060 XT at brick and mortar stores. One version of the 8 GB SKU is even going for a slight discount right now.

[screenshot: local 9060 XT retail listings]
 
Battlemage, unlike first-generation Arc (Alchemist), has performed well out of the gate, so I don't expect it to show the same improvement over time. Rumours suggest 32 Xe cores, up from 20 in the B580. If that turns out to be the case, it's unlikely to be more than 60% faster than the B580, which, going by recent reviews, would put it around the 4070. Given Battlemage's large die sizes, I suspect you're right; they don't want to sell too many of these, as they won't be able to charge as much for it as the die size would warrant.

Nah, it will. Tom's did a review of Xe2. The hardware appears faster than AMD's RDNA 4; it shows up in synthetics and in older, already-optimized games like Tomb Raider. But throw it into a newer game like Black Myth: Wukong and Xe2 barely keeps up with Xe. This shows their optimization efforts are far from complete.

It definitely feels very late, but it seems like none of these companies have any incentive to rush consumer GPUs right now, so I'm really not sure when any next gen parts are going to be available...and even when they're "available", will they actually be available? The low to mid-range parts are mostly widely available from AMD and Nvidia, they're just at super inflated prices still. Things have just started dropping on a few GPUs because they were so far above MSRP and not being purchased for those prices that they'll want to bring the price down to keep them moving. So from Intel's perspective, the only timing that will definitely be too late is if AMD and Nvidia have already dropped all their competing parts' prices and now the Intel card can't compete at all. If whenever it actually launches, it hits good performance/price (and is available for that price), it doesn't really matter when it launches.

Agree with that. We get too caught up on launch dates, and fail to remember 'launch' doesn't mean 'available at a reasonable price'.

Even the cheapest of the 5070s and 9070s are still running ~10-20% above 'MSRP', and most of them are like >+35%. I'd guess it'll be sometime 2026 before we might, maybe, see them at MSRP outside of a few flash sales.
 
Nah, it will. Tom's did a review of Xe2. The hardware appears faster than AMD's RDNA 4; it shows up in synthetics and in older, already-optimized games like Tomb Raider. But throw it into a newer game like Black Myth: Wukong and Xe2 barely keeps up with Xe. This shows their optimization efforts are far from complete.
Let's see. I wish Intel well, but I'm rather skeptical about Battlemage's prospects of being faster than RDNA 4 which requires much smaller dies to achieve greater performance. Accounting for clock speeds, the 9060 XT has 85% to 90% of the theoretical compute potential of the B580. Despite that deficit, the former is about 20% to 30% faster than the latter.
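For anyone who wants to sanity-check that figure, theoretical FP32 throughput is just FP32 lanes × 2 (FMA) × clock. Here is a minimal sketch, with approximate shader counts and boost clocks treated as assumptions, and ignoring RDNA 4's dual-issue FP32 (which would roughly double the AMD number on paper):

```c
/* Back-of-the-envelope theoretical FP32 throughput: lanes * 2 ops (FMA) * clock.
 * Shader counts and boost clocks are approximate public figures -- treat them
 * as assumptions, not measurements. Dual-issue FP32 on RDNA 4 is ignored. */
#include <stdio.h>

static double tflops(int fp32_lanes, double clock_ghz)
{
    return fp32_lanes * 2.0 * clock_ghz / 1000.0;   /* GFLOPS -> TFLOPS */
}

int main(void)
{
    double b580   = tflops(2560, 2.85);  /* B580: 20 Xe2 cores x 128 lanes, ~2.85 GHz */
    double rx9060 = tflops(2048, 3.13);  /* 9060 XT: 32 CUs x 64 SPs, ~3.13 GHz boost */

    printf("Arc B580   : ~%.1f TFLOPS\n", b580);
    printf("RX 9060 XT : ~%.1f TFLOPS\n", rx9060);
    printf("Ratio      : ~%.0f%% of the B580\n", 100.0 * rx9060 / b580);  /* ~88% */
    return 0;
}
```

With those assumed clocks the ratio lands at roughly 88%, i.e. inside the 85% to 90% range quoted above.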
 
Let's see. I wish Intel well, but I'm rather skeptical about Battlemage's prospects of being faster than RDNA 4 which requires much smaller dies to achieve greater performance. Accounting for clock speeds, the 9060 XT has 85% to 90% of the theoretical compute potential of the B580. Despite that deficit, the former is about 20% to 30% faster than the latter.

That 20-30% number is likely due to software optimizations. No doubt AMD's and Nvidia's software optimization is better, and they're quicker at getting it out. They should be; they both have ~25 years of experience at this vs Intel.

The hardware isn't at fault. Like I said, it shows up in synthetics, for one:

View attachment 403629 (synthetic benchmark results)


And in older optimized games:
View attachment 403630 (older, optimized game)


But not in newer games:

View attachment 403631 (newer game)
 
That 20-30% number is likely due to software optimizations. No doubt AMD's and Nvidia's software optimization is better, and they're quicker at getting it out. They should be; they both have ~25 years of experience at this vs Intel.

The hardware isn't at fault. Like I said, it shows up in synthetics, for one:

View attachment 403629

And in older optimized games:
View attachment 403630

But not in newer games:

View attachment 403631
Ah, I see. This review is using iGPUs, where Intel has the advantage due to larger last-level caches (8 MB vs 2 MB). I may be wrong, but the B580, despite being bigger than the 9060, doesn't show this discrepancy.
 
It's quite the gap between this and the B500 series, but then again the B500s might have been a pilot run for this big kahuna chip. Produce the smaller, cheaper, more viable chip and send it out, see how it does; people liked it, so they'll roll out the riskier product now that they know what they're in for.

As has been the case with me since Alchemist, I'd be taking a nice good look at these cards once I'm sure that all of the visual bugs for VR applications have been squashed. These GPUs have always been impressive value, it's the edge cases that discourage me.
 
That 20-30% number is likely due to software optimizations. No doubt AMD's and Nvidia's software optimization is better, and they're quicker at getting it out. They should be; they both have ~25 years of experience at this vs Intel.

The hardware isn't at fault. Like I said, it shows up in synthetics, for one:

View attachment 403629

And in older optimized games:
View attachment 403630

But not in newer games:

View attachment 403631
Aside from the fact that you are showing Intel iGPU performance in a discussion about their dGPUs, you're comparing apples to pitchforks here. Intel's iGPUs use the low-power version of the Xe cores, Xe2-LPG, not the Xe2-HPG used in the B580. AMD, similarly, uses RDNA 3.5 in its APUs, not RDNA 4.

Also, anyone who has been in this space for more than a single launch should know that synthetics != real-world performance. AMD used to rock synthetics and 3DMark while falling apart in actual games. The Vega 64 was a beast in some synthetic tests... in games, not so much. All the hypothetical synthetic performance in the world doesn't help if you can't get it out in actual gaming titles.
 
Also, anyone who has been in this space for more than a single launch should know that synthetics != real-world performance. AMD used to rock synthetics and 3DMark while falling apart in actual games. The Vega 64 was a beast in some synthetic tests... in games, not so much. All the hypothetical synthetic performance in the world doesn't help if you can't get it out in actual gaming titles.

Which if you read carefully, is why I said :

That 20-30% number is likely due to software optimizations. No doubt AMD's and Nvidia's software optimization is better, and they're quicker at getting it out. They should be; they both have ~25 years of experience at this vs Intel.

The hardware isn't at fault. Like I said, it shows up in synthetics, for one:

The hardware capability shows up in synthetics; the software optimization to use that capability shows up in real-world games.

Which goes back to my point earlier in the thread, which you clearly failed to read, that Intel is forced to sell XX70 class hardware at XX60 class prices until their software optimization is on par.

But the synthetics show that they have the hardware.
 
Which if you read carefully, is why I said :



The hardware capability shows up in synthetics; the software optimization to use that capability shows up in real-world games.

Which goes back to my point earlier in the thread, which you clearly failed to read, that Intel is forced to sell XX70 class hardware at XX60 class prices until their software optimization is on par.

But the synthetics show that they have the hardware.
I think the point you failed to understand is that "optimization" isn't a zero-sum game. Just because 3DMark can fully exploit a piece of hardware does not mean that actual game engines can. There's a lot more to production software than "optimize for X and go", which is why those synthetic tests are to be taken with a massive grain of salt; just because a GPU outperforms in a synthetic does not mean it can do so with production software.

This is just... basic knowledge. Like how some game engines perform better on AMD GPUs (see CoD), to the point they are statistical outliers. No amount of Intel working on their drivers is going to change that: if their architecture has more compute hardware than they can sufficiently load, whether because of ROP limitations, TMU limitations, driver optimization, etc., that synthetic performance will NEVER show up in production.

That is why it's called "synthetic load" and why, you may notice, many GPU reviews have fully dropped or heavily minimized such results, as they have little bearing on the real world.
 
I think the point you failed to understand is that "optimization" isn't a zero-sum game. Just because 3DMark can fully exploit a piece of hardware does not mean that actual game engines can. There's a lot more to production software than "optimize for X and go", which is why those synthetic tests are to be taken with a massive grain of salt; just because a GPU outperforms in a synthetic does not mean it can do so with production software.

This is just... basic knowledge. Like how some game engines perform better on AMD GPUs (see CoD), to the point they are statistical outliers. No amount of Intel working on their drivers is going to change that: if their architecture has more compute hardware than they can sufficiently load, whether because of ROP limitations, TMU limitations, driver optimization, etc., that synthetic performance will NEVER show up in production.

That is why it's called "synthetic load" and why, you may notice, many GPU reviews have fully dropped or heavily minimized such results, as they have little bearing on the real world.

Facts. I thought this was common knowledge though. Synthetic tests have never been a reliable way to predict relative real-world performance. If I had a dollar for every time a product looked exciting in synthetics and then failed to deliver in real-world workloads...
 
I think the point you failed to understand is that "optimization" isn't a zero-sum game. Just because 3DMark can fully exploit a piece of hardware does not mean that actual game engines can. There's a lot more to production software than "optimize for X and go", which is why those synthetic tests are to be taken with a massive grain of salt; just because a GPU outperforms in a synthetic does not mean it can do so with production software.

This is just... basic knowledge. Like how some game engines perform better on AMD GPUs (see CoD), to the point they are statistical outliers. No amount of Intel working on their drivers is going to change that: if their architecture has more compute hardware than they can sufficiently load, whether because of ROP limitations, TMU limitations, driver optimization, etc., that synthetic performance will NEVER show up in production.

That is why it's called "synthetic load" and why, you may notice, many GPU reviews have fully dropped or heavily minimized such results, as they have little bearing on the real world.
Synthetic tests have never squared well with actual game results.

Nonetheless, Intel has a real driver problem which, were it fixed, could raise performance considerably. Even AMD and Nvidia had excessive driver overhead in the past; Intel has more. Chips and Cheese did an analysis of Battlemage in January and found that something strikingly different is happening between AMD's and Intel's drivers: too much work stacking up in the CPU-side queue, along with time wasted waiting for locks (a pattern sketched below).

Having bought a B580, I'm affected by this, so I posted a long issue on their community tracker bringing attention to the article, but it's uncertain whether they're interested. Generally, when someone mentions the overhead on their tracker, they are not transparent about it. So it could be Arc's hardware at the end of the day; however, C&C's analysis suggests otherwise, as does the fact that faster CPUs mitigate the problem.
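To make the C&C finding a bit more concrete, here is a minimal sketch (purely illustrative, not Intel driver code) of the pattern they describe: many application threads funnelling work through one CPU-side submission queue guarded by a single lock, so time goes into waiting on the lock rather than into useful work. The thread count and the "queue" are placeholders.

```c
/* Illustrative only -- NOT Intel driver code. Mimics the pattern described:
 * many CPU threads append "draw commands" to one shared, lock-protected
 * submission queue, so threads serialize on the lock instead of working. */
#include <pthread.h>
#include <stdio.h>

#define THREADS          8
#define CMDS_PER_THREAD  1000000

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static long queued_commands = 0;    /* stand-in for the CPU-side queue depth */

static void *record_commands(void *arg)
{
    (void)arg;
    for (int i = 0; i < CMDS_PER_THREAD; i++) {
        /* Every "draw call" takes the same global lock before it can be
         * appended to the shared queue; contention grows with thread count. */
        pthread_mutex_lock(&queue_lock);
        queued_commands++;          /* pretend this is "append to submit queue" */
        pthread_mutex_unlock(&queue_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t workers[THREADS];

    for (int i = 0; i < THREADS; i++)
        pthread_create(&workers[i], NULL, record_commands, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(workers[i], NULL);

    printf("queued %ld commands through one contended lock\n", queued_commands);
    return 0;
}
```

Build with something like `cc -O2 -pthread` and profile it: as the thread count rises, an increasing share of time sits under the mutex, which is the kind of signature C&C point to.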
 
It would be nice to finally have a card as powerful as the B580 at a good price. Unfortunately, the 5060 Ti and 9060 XT are a bad joke.
 
They don't do much to compete, but at least they try; we should support this effort even if they are not selling much.
Good luck, Intel, and maybe one day we will be proud to have an Intel GPU in our rigs.
 
I was considering whether I should wait for the B770. I didn't really need a graphics card yet and would've been fine with waiting. But the RX 9060 XT was relatively cheap and offered a full x16 PCIe interface, and the B770 was nothing more than a rumor, so I bought what was available now. (Edit: I just checked, and Microcenter's 9060 XT 16 GB stock is reduced and the one I bought is now $20 more, so I think it was wise that I didn't wait.)

Compared to AMD, Intel has a superior video encoder, especially for newer formats, and where it works, XeSS is superior to FSR. But FSR 2 is available in any game through the Radeon software, which is nice even though it's an old upscaler.
 
I was considering whether I should wait for the B770. I didn't really need a graphics card yet and would've been fine with waiting. But the RX 9060 XT was relatively cheap and offered a full x16 PCIe interface, and the B770 was nothing more than a rumor, so I bought what was available now. (Edit: I just checked, and Microcenter's 9060 XT 16 GB stock is reduced and the one I bought is now $20 more, so I think it was wise that I didn't wait.)

Compared to AMD, Intel has a superior video encoder, especially for newer formats, and where it works, XeSS is superior to FSR. But FSR 2 is available in any game through the Radeon software, which is nice even though it's an old upscaler.

The 16 GB 9060 XT is probably the best (well, only) deal going right now if you need more than 1080p at sub-$400. I just would have expected it to beat a 7700 XT, which is really just a tier above it, and it doesn't really do that.

It's just not enough of an upgrade for me personally vs my 6700 XT. I need to go up one more tier to make any expenditure worthwhile, but that jumps to $600+.

Sad thing is, coulda bought a 7800 XT a year ago for $400.
 