
Intel Arc B580

Wondering if the relative lack of performance in Unreal-based titles is something that will be overcome, or a fundamental architectural issue with how the Unreal engine and Xe work together.
Silent Hill 2 and Black Myth: Wukong, for example, both show how the B580 slips to being noticeably slower than the RTX 4060 and more level with AMD's RX 7600 offerings.

Doesn't bode well for Stalker 2 or the upcoming Witcher 4. It would also impact Hellblade, the Deliver Us Moon/Mars titles, etc. Hoping Intel's driver team can figure that one out.

He's talking about hybrid PhysX...

To be fair, most games now do multicore PhysX on the CPU instead of the GPU, but there are some older ones that will still use the GPU - in that case, to be honest, an older Kepler/Maxwell card would be more than enough to act as the PhysX card. Wouldn't even need to be a powerful one...
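For anyone curious what that split actually looks like in practice, here's a rough sketch using the modern PhysX SDK (4.x/5.x-style API, written from memory, so treat names and signatures as approximate rather than gospel): a CPU scene just gets a multi-threaded dispatcher, while GPU rigid bodies additionally need a CUDA context manager, i.e. an Nvidia card.

```cpp
// Sketch only - modern PhysX-style scene setup, CPU vs GPU simulation.
#include <PxPhysicsAPI.h>
using namespace physx;

PxScene* CreateScene(PxPhysics& physics, PxFoundation& foundation, bool useGpu)
{
    PxSceneDesc desc(physics.getTolerancesScale());
    desc.gravity      = PxVec3(0.0f, -9.81f, 0.0f);
    desc.filterShader = PxDefaultSimulationFilterShader;

    // "Multicore PhysX": simulation jobs spread across 4 CPU worker threads.
    desc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4);

    if (useGpu)
    {
        // GPU rigid bodies need a CUDA context, i.e. an Nvidia card present.
        PxCudaContextManagerDesc cudaDesc;
        desc.cudaContextManager = PxCreateCudaContextManager(foundation, cudaDesc);
        desc.flags            |= PxSceneFlag::eENABLE_GPU_DYNAMICS;
        desc.broadPhaseType    = PxBroadPhaseType::eGPU;
    }
    return physics.createScene(desc);
}
```

The old GPU-PhysX era titles (Arkham and friends) predate this API, but the basic CPU-dispatcher-vs-CUDA split is the same idea.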
 
Wondering if the relative lack of performance in Unreal-based titles is something that will be overcome, or a fundamental architectural issue with how the Unreal engine and Xe work together.
Silent Hill 2 and Black Myth: Wukong, for example, both show how the B580 slips to being noticeably slower than the RTX 4060 and more level with AMD's RX 7600 offerings.

Doesn't bode well for Stalker 2 or the upcoming Witcher 4. It would also impact Hellblade, the Deliver Us Moon/Mars titles, etc. Hoping Intel's driver team can figure that one out.
UE5 at that. Seen some reviews and it runs UE4 games just fine.
 
UE5 at that. Seen some reviews and it runs UE4 games just fine.
Fair enough - that'll mean Hellblade (1) and the Deliver Us titles would be OK then, as would older UE games.
But UE5 games are actually fairly plentiful now - e.g. RoboCop and the new Fortnite update...

Wonder if anyone has seen how well it fares on pre-DX11 titles, i.e. how Intel's compatibility-layer performance compares (is it significantly better than the Xe1 cards, free of major bugs, and what silly FPS numbers can it pull?)...
Half-Life 2 has had a bit of a repackage - some people may be dusting off some DX9-or-older titles...
 
I know most people say that RT is not relevant in this segment, but isn't it possible that more games like the new Indiana Jones will pop up in the years to come? The game simply won't run without RT hardware. At this moment the B580 should be more future-proof than the competition in that regard. Or is this a moot point?
But I must ask, are there any visible improvements brought by this forced RT?

I have seen plenty of examples where someone has to point out if and where the damned RT effect even is, besides the performance hit.

Games demanding RT hardware this "early" sets a bad precedent, unless somehow they have managed to include it without tanking performance.
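For context on the "simply won't run without RT hardware" part: a title can just query the DXR tier at startup and refuse to launch if the GPU doesn't report one. A minimal D3D12 sketch (not Indiana Jones' actual code, just the standard capability check):

```cpp
#include <d3d12.h>

// Returns true if the device exposes hardware DXR support.
bool SupportsHardwareRT(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;

    // TIER_1_0 is baseline DXR; TIER_1_1 adds inline ray queries.
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

The B580, RTX 20-series and newer, and RDNA2 and newer all pass that check; anything older presumably just gets a "GPU not supported" screen.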
To be fair, most games now do multicore PhysX on the CPU instead of the GPU, but there are some older ones that will still use the GPU - in that case, to be honest, an older Kepler/Maxwell card would be more than enough to act as the PhysX card.
Every now and then I like to replay the Arkham games, and those are infected with PhysX to the point that only having a Ngreedia GPU in the system will enable the effects. And yes, some of us (and all consoles) only have AMD hardware and can't see all the eye candy, especially in Arkham Knight.

I tried once to recreate the old way of having a Ngreedia GPU just for PhysX and an AMD one as the main (remember how back in the day Ngreedia went to the extreme of disabling your GPU if their drivers detected an AMD GPU in the same system? Fun times), and it was as bad as it was back then. Couldn't do it, because the Ngreedia GPU would deactivate itself unless it was the primary GPU or had a dummy adapter connected to one of its ports.

Fine, it's only eye candy, which doesn't affect gameplay, but it sucks that it's simply locked behind a hardware paywall.

RT, so far, has proven to be even less significant; the performance hit is simply not worth the minimal visual changes (that I have observed).

Granted, there are exceptions, like the Portal and Quake remakes, but the rest is not all that.
 
The 4060 is the 50-class card of the 40 series, 18 months old, and after June/July it'll be obsolete with the 5060 in the picture. The B580 should have been a ~140 mm² die on N3B; as it is, the die is almost as big as the 4070 Ti's, with density more like 6 nm.
 
Maybe we will see another era of secondary GPUs in computers. Just like some 12-14 years ago, when someone would buy a GPU just to compute PissX while the primary GPU was used for everything else.
See response above.
 
But I must ask, are there any visible improvements brought by this forced RT?

I have seen plenty of examples where someone has to point out if and where the damned RT effect even is, besides the performance hit.

Games demanding RT hardware this "early" sets a bad precedent, unless somehow they have managed to include it without tanking performance.
Maybe it cuts production time a bit if you don't have to generate/include/place shaders, and maybe download size too, because it's rendered in real time (admittedly not sure if that's true).

Also forced RT sounds ass
 
but isn't it possible that more games like the new Indiana Jones will pop up in the years to come
Possible, yes, but looking at the flop sales numbers for Indiana Jones I think a lot of publishers will wonder if RT-only is the right strategy. Why block the vast majority of the userbase from buying your game?
 
I tried once to recreate the old way of having a Ngreedia GPU just for PhysX and an AMD one as the main (remember how back in the day Ngreedia went to the extreme of disabling your GPU if their drivers detected an AMD GPU in the same system? Fun times), and it was as bad as it was back then. Couldn't do it, because the Ngreedia GPU would deactivate itself unless it was the primary GPU or had a dummy adapter connected to one of its ports.
Supposedly you don't have to hack it anymore - it's no longer blocked by Nvidia drivers (you still need a 'display'/dummy connected for the adapter to be active). I suspect that's more to do with the fact that most laptops (even at the higher end) may still use the CPU's iGPU as the primary display controller and then pass through the dGPU for gaming. It would be hypocritical of Nvidia to design for and support such a hardware implementation and then block certain software from functioning because of the same supported configuration. Also, by the mid-2010s most games were not bothering with GPU PhysX, as none of the consoles were using Nvidia for anything and were happy to leverage multi-core CPU implementations.
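That hybrid setup is also why modern engines generally don't care which GPU owns the display anymore - they enumerate the adapters and ask DXGI for the high-performance one. A quick sketch of what that looks like (plain DXGI, nothing Nvidia-specific):

```cpp
// List adapters from most to least powerful; link with dxgi.lib.
#include <windows.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory))))
        return 1;

    // On a hybrid laptop this typically lists the dGPU first,
    // even though the iGPU is the one driving the display.
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapterByGpuPreference(i, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE,
                                             IID_PPV_ARGS(&adapter)) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        wprintf(L"%u: %s (%llu MB dedicated VRAM)\n", i, desc.Description,
                static_cast<unsigned long long>(desc.DedicatedVideoMemory >> 20));
    }
    return 0;
}
```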
 
Possible, yes, but looking at the flop sales numbers for Indiana Jones I think a lot of publishers will wonder if RT-only is the right strategy. Why block the vast majority of the userbase from buying your game?
Welcome to the great Nvidia RT con. By the way, why can't we get RT vs. regular rasterization comparisons for AAA games that come out with RT options, the same way you compare DLSS vs. XeSS vs. FSR? If we are all supposed to care so much about RT, shouldn't we have side-by-side screenshot comparisons for different games as they are released with RT on and off? You are very good at these kinds of comparisons, so it would be really nice to know how much better a particular game looks with and without RT and get an idea of whether our enjoyment of the game will actually increase (thus justifying the extra premium we pay for GPUs nowadays).
 
Very similar to a 6700 XT, which you can get for slightly less money but without the warranty. No OC on this card, though, by the look of it. If this pulls prices down for AMD and Nvidia, then it's a good product.
 
See response above.
I meant it the other way. Nvidia can't restrict RT capabilities, as it's part of DX12. Maybe to see enough improvement in graphics, devs will need to implement RT on a much wider scale - a situation which will be much more demanding on the RT cores. So maybe... primary GPU for rendering, secondary for RT compute? Just like with PissX back then.
 
It's a good card if it's sold for a lower price than its direct competitors, but it's an entry-level card for 1080p only, nothing more, competing with the RTX 4060/4060 Ti and Radeon 7600 XT.
But it's very likely that when Radeon RDNA4 comes out (launching in early 2025), Intel's new cards will lose their cost-benefit advantage once again.


Which compute applications would you like to see tests with?
AMD's track record would suggest otherwise. Their $300-and-below GPUs have consistently been disappointing lately, aside from the 6650 XT.

If RDNA4 is as much of an improvement as RDNA3 was, Intel doesn't have a lot to worry about, unless AMD is willing to take a loss.

Possible, yes, but looking at the flop sales numbers for Indiana Jones I think a lot of publishers will wonder if RT-only is the right strategy. Why block the vast majority of the userbase from buying your game?
I doubt the flop sales are from RT; it's much more likely from Disney's culture war that has utterly obliterated any fanbase for Marvel or Lucasfilm properties. As more of the writing was revealed to be modern slop, enthusiasm for the title quickly dissipated.
 
Very similar to a 6700 XT, which you can get for slightly less money
I checked, and the 6700 XT goes for $320 to $360 on eBay. What pricing do you see?
 
Looks like Intel's bringing the heat to AMD. AMD can't catch up to Nvidia at the high end, and now Intel's hot on their heels at the low end. Good riddance. Maybe Lisa'll finally spare some R&D budget for Radeon. Haven't been this excited in a while.


Since the B580 goes up in the rankings at higher resolutions, does that mean driver overhead or some such at lower resolutions? Too tired to peruse 9 pages of comments to see if anybody else has asked this already.


If so, is it finewinin' time?


 
This begs the question: What did LTT do differently?

Different and fewer games; they also used Vulkan in some of them.

Since the B580 goes up in the rankings at higher resolutions, does that mean driver overhead or some such at lower resolutions? Too tired to peruse 9 pages of comments to see if anybody else has asked this already.

I think it's primarily the hardware (wider memory bus and more VRAM) that makes it go up in the rankings at 1440p/4K.
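Quick back-of-the-envelope on the bus part (going by the listed memory specs - 192-bit at 19 Gbps for the B580 vs. 128-bit at 17 Gbps for the 4060 - so correct me if those are off): 192/8 × 19 = 456 GB/s vs. 128/8 × 17 = 272 GB/s. That's roughly a 70% bandwidth advantage, which is exactly the kind of gap that only starts to show once the resolution climbs.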

I also think 4K and 1440p are more important than people posting here realize. I pretty much run everything at 4K now with a 6700 XT, for example. I never run at 1080p.

So, taken from that vantage point, the B580 suddenly becomes a direct competitor to the 4060 Ti at $150 less.

4K overall results: [chart attached]
 
I really wish people would stop with the asinine name-calling.
If it weren't guaranteed to get me LQ'd to hell and back, I'd be sorely tempted to respond to any instance of "M$" or "Ngreedia", etc. with a clip of the How I Met Your Mother cast making fart noises.
 
Great VFM. If they manage to have reliable drivers and fewer games where performance drops like a rock, it will sell well. Thing is, they cannot make many of those at that price without hurting their profit margins. I hope I am wrong and they make millions of them, helping push AMD and Nvidia to drop their prices too.
 
Great VFM. If they manage to have reliable drivers and fewer games where performance drops like a rock, it will sell well. Thing is, they cannot make many of those at that price without hurting their profit margins. I hope I am wrong and they make millions of them, helping push AMD and Nvidia to drop their prices too.

I'm not convinced profit margins on ARC matter that much to Intel at present. PC graphics looks for all the world like bonus money for all three players, who see HPC as the profit generator for GPU compute. Nvidia may basically own the space, but it's a space that's growing. Where there's growth, there's opportunity to snag share. ARC may simply be a way to salvage some revenue while Intel gets the architecture and ecosystem sorted for the real work: large-scale AI.
 
I meant it the other way. Nvidia can't restrict RT capabilities, as it's part of DX12. Maybe to see enough improvement in graphics, devs will need to implement RT on a much wider scale - a situation which will be much more demanding on the RT cores. So maybe... primary GPU for rendering, secondary for RT compute? Just like with PissX back then.
I misunderstood your original comment.

I recall reading that such a scenario is not possible because of how RT is rendered into the final image, but I am open to it if it ever becomes a possibility.
I really wish people would stop with the asinine name-calling.
Funnily enough, that seems to trigger people who are fans of the brand/corporation in question.

You have to accept that others may have good reasons to hate what you perceive as a favorite/loved/preferred brand.

Perhaps ask why such people hate them - maybe their reasons are valid - instead of getting upset about it.

And don't take this as an attack; personally, I think that you in particular provide lots of good input on this forum.

Simply offering another possible point of view.

I'm not convinced profit margins on ARC matter that much to Intel at present.
What I'm curious about is: how low can a company go, price-wise, before it's considered illegal price dumping?

Don't get me wrong, I am down for cheaper GPUs.
 
I'm not convinced profit margins on ARC matter that much to Intel at present. PC graphics looks for all the world like bonus money for all three players, who see HPC as the profit generator for GPU compute. Nvidia may basically own the space, but it's a space that's growing. Where there's growth, there's opportunity to snag share. ARC may simply be a way to salvage some revenue while Intel gets the architecture and ecosystem sorted for the real work: large-scale AI.

The entire dGPU market was $11.7B in 2023. By contrast, Musk reportedly spent ~$6B on the initial AI rollout for the xAI Colossus data center in Memphis, TN.
 