
Intel Arc B580 Card Pricing Leak Suggests Competitive Pricing

The limited edition card is stupid. No one's going to see it inside the case and I don't need fancy additions which have nothing to do with performance.

If anything, I think Intel's LEs are less fancy than literally every ODM's. I generally prefer them because they aren't fancy. In addition, Intel's LEs are in fact no different from AMD's or Nvidia's FE/in-house designs. They all do it.
 
If anything, I think Intel's LEs are less fancy than literally every ODM's. I generally prefer them because they aren't fancy. In addition, Intel's LEs are in fact no different from AMD's or Nvidia's FE/in-house designs. They all do it.
Yeah, the LE almost looks like a workstation card compared to 99% of designs out there, especially the A750 variant without RGB.
 
If anything, I think Intel's LEs are less fancy than literally every ODM's. I generally prefer them because they aren't fancy. In addition, Intel's LEs are in fact no different from AMD's or Nvidia's FE/in-house designs. They all do it.
Which is also pretty much what I wrote in the news post, but we also don't know what the new cards from Intel will look like as yet.
 
I have to disagree with that, considering all of today's windowed cases and RGB-laden hardware. :D
Hehehe, I forgot about that. I had to turn that off on my motherboard, because it was disturbing my sleep even from inside the case.
 
Maybe the 2x 8 pin is to power the ultimate RGB experience. I’m talking “can see it from space” level stuff. :D
 
If the rumoured performance is true, this thing is DOA. $250 is too expensive. Make it 8 GB and $195 and now we're talking.
 
This is Intel vs Intel, so what is the problem? The 140V, based on the same Battlemage architecture, is showing very similar, poor results in gaming.
The problem is that the benchmark is OpenCL, which in no way can be used as a proxy for gaming performance. But if you really wanted to use OpenCL, realize that the A770 has 32 Xe cores while the B580 has 20. That means the B580 has 37.5% fewer Xe cores than the A770 but essentially matches it in OpenCL performance, which is nothing to sneer at. I also have to imagine that power consumption is significantly improved on the B580 compared to the A770.

The 140V is an integrated graphics chip in a notebook, and you can't exactly do a fair comparison because you can't control for the CPU. But if you really want to, see the benchmark results in:

Intel Lunar Lake iGPU analysis - Arc Graphics 140V is faster and more efficient than Radeon 890M - NotebookCheck.net Reviews

 
The problem is that the benchmark is OpenCL, which in no way can be used as a proxy for gaming performance. But if you really wanted to use OpenCL, realize that the A770 has 32 Xe cores while the B580 has 20. That means the B580 has 37.5% fewer Xe cores than the A770 but essentially matches it in OpenCL performance, which is nothing to sneer at. I also have to imagine that power consumption is significantly improved on the B580 compared to the A770.

The 140V is an integrated graphics chip in a notebook, and you can't exactly do a fair comparison because you can't control for the CPU. But if you really want to, see the benchmark results in:

Intel Lunar Lake iGPU analysis - Arc Graphics 140V is faster and more efficient than Radeon 890M - NotebookCheck.net Reviews

Now that you mention it, OpenCL really isn't a good representation.
The A770 has 60% more cores than the B580, but this leads to only 10% more performance in OpenCL.
On mobile, with the same core count, Battlemage is 23% faster, but it is using 14% faster memory plus lower latency, thanks to the memory integrated on the SoC.
So Battlemage has almost zero per-core performance improvement, and the B580, with 17% fewer cores than the A580, should be slower than the A580.
Do you like this math, or would you prefer to stick with OpenCL so as not to make Battlemage look even worse than it is?
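For what it's worth, the percentages above can be reproduced in a few lines. This is a rough sketch: the Xe-core counts (32/20/24 for A770/B580/A580) are Intel's spec-sheet figures, while the 10% OpenCL delta is the leak's claim, taken at face value here rather than as a measured fact.

```python
# Per-core arithmetic behind the figures quoted above.
# Core counts are Intel's spec-sheet Xe-core counts; the 10% OpenCL
# delta is the leak's claim, assumed true for the sake of argument.
a770_cores, b580_cores, a580_cores = 32, 20, 24

core_advantage = a770_cores / b580_cores - 1
print(f"A770 core advantage over B580: {core_advantage:.0%}")    # 60%

opencl_ratio = 1.10  # A770 ~10% ahead of B580 in the leaked OpenCL score
per_core_gain = (1 / b580_cores) / (opencl_ratio / a770_cores) - 1
print(f"Implied B580 per-core OpenCL gain: {per_core_gain:.0%}")  # ~45%

core_deficit = 1 - b580_cores / a580_cores
print(f"B580 core deficit vs A580: {core_deficit:.0%}")           # ~17%
```

Whether that ~45% per-core OpenCL gain translates to games is exactly the point being argued.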
 
Damn, the B580 is supposed to be the smallest/cheapest of the Battlemage lineup?
I was hoping for another $100 card that's worth buying just for its media codecs, like the A310. I would have loved that one for transcoding video...
 
I wouldn't call the B580 a fail even if it only delivers A580-level performance at half the power. Battlemage in the Core Ultra 200 series CPUs has improved efficiency enormously. The B580 might be an ideal card for a small home media PC. Still, even at half the wattage, that $250-260 price tag is a bit high for A580-level performance, since most people don't care about power draw. The GPU segment is much more important to me than CPUs, so I hope for the best for Intel with Battlemage. Let them even release a 48-Xe version with a 300 W TDP for under $500.

 
Pricing: not great, not terrible.

Extrapolating from the leaked VRAM and the iGPUs already released, $250 is about the upper limit for what they can "reasonably" charge for the B580, which itself is a pleasant surprise when Nvidia and AMD regularly push 25-50% over that limit. Keen to see a B770 at $400.
 
How is $250 considered a competitive price when the same class of card from the older generation, the Arc A580, was $179? That's a ~$70 jump, roughly 40%.

So "Nvidia-ing" prices is the new normal now: launch the same supposedly-classed GPUs at higher prices, then blame inflation, even though the price increases exceed what inflation did?

Both Intel and AMD are killing off entry-level GPUs, which is actually a good thing in that it pushes iGPUs to get better, but it should also make the remaining entry-level GPUs more competitive. The current situation is just bad: Nvidia is holding back a desktop RTX 4050 while still making the RTX 3050 because it's simply cheaper for them, and it keeps making new 3000-series SKUs that are downgrades compared to the earlier 3000-series cards. AMD doesn't have a proper low-end GPU either, because they push their APU performance instead.
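For the record, the jump works out as follows. The MSRPs are the ones quoted in this thread; the inflation figure in the sketch is a hypothetical placeholder, not real CPI data.

```python
# Price-jump arithmetic for the A580 -> B580 MSRPs quoted above.
a580_msrp, b580_msrp = 179, 250

jump = b580_msrp - a580_msrp
print(f"Jump: ${jump} ({jump / a580_msrp:.0%})")  # $71 (~40%)

# Hypothetical inflation adjustment; 10% cumulative is an assumed
# placeholder rate, purely to illustrate the comparison being made.
assumed_inflation = 0.10
adjusted = a580_msrp * (1 + assumed_inflation)
print(f"Inflation-adjusted A580 price at 10%: ${adjusted:.0f}")  # $197
```

Even a generous inflation assumption leaves a sizable gap below the $250 ask, which is the complaint being made here.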
 
Now that you mention it, OpenCL really isn't a good representation.
The A770 has 60% more cores than the B580, but this leads to only 10% more performance in OpenCL.
On mobile, with the same core count, Battlemage is 23% faster, but it is using 14% faster memory plus lower latency, thanks to the memory integrated on the SoC.
So Battlemage has almost zero per-core performance improvement, and the B580, with 17% fewer cores than the A580, should be slower than the A580.
Do you like this math, or would you prefer to stick with OpenCL so as not to make Battlemage look even worse than it is?
That is like pointing out that every generation of discrete GPU uses faster memory and lower latency due to redesigned integrated memory controllers. Plus, the memory is part of the GPU design. Do better.

There is also not much evidence that past mobile GPUs, and especially integrated GPUs, can be used as a proxy for how well a discrete desktop GPU architecture will do. They are not the same dies as the desktop parts.

What is the power consumption of the B580 compared to an A770? Or the A580? Let's say the B580 is 10% slower than the A580 but at half the A580's power consumption; that would indicate a substantial architectural improvement, just one focused on bringing power consumption down rather than raising performance.

We shall find out soon enough.
 
What is the power consumption of the B580 compared to an A770? Or the A580? Let's say the B580 is 10% slower than the A580 but at half the A580's power consumption; that would indicate a substantial architectural improvement, just one focused on bringing power consumption down rather than raising performance.

We shall find out soon enough.
While I like your point and I stand by efficiency, most users will doom a product if it does not deliver a proper performance increase over its predecessor. That's what happened with the Zen 5 release. It doesn't matter that the 9600X takes 65 W instead of the 7600X's 105 W, which is 40% less; all that matters is the 3-5% performance gain over the predecessor, which is underwhelming, or rather disappointing. (By lowering clocks to bring the 9600X's performance on par with the 7600X's, the efficiency gain might even be 50%.) The funny thing is that with Intel the approach is a bit different: the Ultra 200 series is in many ways beaten by the 14th Gen Core generation, but wait, Intel managed to lower consumption by 20-30%, and that's something! Intel really needs to lower consumption WAY MORE.

As someone already mentioned, Nvidia used to deliver very efficient products, but it's performance that sells the product. Nvidia doesn't care that you have to replace your smaller case and PSU in order to fit and power a 600-700 W RTX 5090; it'll be the new flagship, and that's what matters. Like Jensen once said in an interview: "... And what? We are the fastest. We have the best GPU in the world." It doesn't matter that Nvidia's enterprise AI accelerators are extremely difficult to cool... they are the fastest!
 
While I like your point and I stand by efficiency, most users will doom a product if it does not deliver a proper performance increase over its predecessor. That's what happened with the Zen 5 release. It doesn't matter that the 9600X takes 65 W instead of the 7600X's 105 W, which is 40% less; all that matters is the 3-5% performance gain over the predecessor, which is underwhelming, or rather disappointing. (By lowering clocks to bring the 9600X's performance on par with the 7600X's, the efficiency gain might even be 50%.) The funny thing is that with Intel the approach is a bit different: the Ultra 200 series is in many ways beaten by the 14th Gen Core generation, but wait, Intel managed to lower consumption by 20-30%, and that's something! Intel really needs to lower consumption WAY MORE.

As someone already mentioned, Nvidia used to deliver very efficient products, but it's performance that sells the product. Nvidia doesn't care that you have to replace your smaller case and PSU in order to fit and power a 600-700 W RTX 5090; it'll be the new flagship, and that's what matters. Like Jensen once said in an interview: "... And what? We are the fastest. We have the best GPU in the world." It doesn't matter that Nvidia's enterprise AI accelerators are extremely difficult to cool... they are the fastest!
Let's keep something in mind here: we are comparing the B580 to the A770. If the B580 matches the performance of an A770, that isn't a bad performance uplift gen over gen, especially if they manage to cut power consumption in half.

With that being said, per the official preview details from Intel, the B580 will be around 10% faster than a GeForce 4060, which in theory means around 45% faster than the A580 and around 15% faster than the A770. If they also significantly reduce power consumption, then it isn't a bad product at $250, considering the 4060 is $300+.
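Chaining those percentages from a GeForce 4060 = 1.00 baseline shows where the older cards would land. This sketch uses only the deltas claimed in this thread (Intel's ~10% preview figure and the 45%/15% inferences above), not independent benchmarks:

```python
# Relative performance chained off a GeForce 4060 = 1.00 baseline,
# using only the deltas claimed in this thread.
rtx_4060 = 1.00
b580 = rtx_4060 * 1.10   # Intel's claimed ~10% lead over the 4060

a580 = b580 / 1.45       # if the B580 is ~45% faster than the A580
a770 = b580 / 1.15       # if the B580 is ~15% faster than the A770
print(f"A580 ~ {a580:.2f}x a 4060")   # ~0.76x
print(f"A770 ~ {a770:.2f}x a 4060")   # ~0.96x

# Performance per dollar at the quoted prices ($250 vs $300).
print(f"B580: {b580 / 250:.4f} perf/$")      # 0.0044
print(f"4060: {rtx_4060 / 300:.4f} perf/$")  # 0.0033
```

On those assumed numbers the B580 would offer roughly a third more performance per dollar than the 4060.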
 
It doesn't matter that the 9600X takes 65 W instead of the 7600X's 105 W, which is 40% less
That's TDP, not its actual power usage. The 9600X only uses about 20 W, or ~20%, less than the 7600X in practice, and it uses slightly more power than the 7600 non-X.


So Zen 5 is somewhat more efficient than Zen 4, but not by nearly as much as the TDP implies.
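Put as arithmetic, the gap between what the TDP labels suggest and what gets measured is (the 65 W/105 W TDPs are AMD's official figures; the ~20% measured delta is the figure this post cites):

```python
# TDP delta vs the measured delta discussed above.
tdp_7600x, tdp_9600x = 105, 65

tdp_drop = 1 - tdp_9600x / tdp_7600x
print(f"TDP suggests ~{tdp_drop:.0%} lower power")  # ~38%

measured_drop = 0.20  # ~20 W / ~20% measured gap, per the post above
print(f"Measured gap is only ~{measured_drop:.0%}")
```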
 
Now that you mention it, OpenCL really isn't a good representation.
The A770 has 60% more cores than the B580, but this leads to only 10% more performance in OpenCL.
On mobile, with the same core count, Battlemage is 23% faster, but it is using 14% faster memory plus lower latency, thanks to the memory integrated on the SoC.
So Battlemage has almost zero per-core performance improvement, and the B580, with 17% fewer cores than the A580, should be slower than the A580.
Do you like this math, or would you prefer to stick with OpenCL so as not to make Battlemage look even worse than it is?
Well, the reviews are out; it looks like my position has been vindicated and yours has been refuted.
 