
Intel Arc B580

TBH, visually games have barely gotten better looking over the past 4 years while being noticeably heavier in computing requirements. We've reached a point where more details and higher poly count are also becoming gimmicky because you only notice the difference when you stop and stare.
Oh yes, agreed.

We reached that point a little while ago, and I lament that many devs keep chasing eye candy but seem to have forgotten that gameplay is just as important.

RT could bring something new in the eye candy department, though not necessarily in gameplay, but so far I haven't seen anything that makes me want to ignore the performance hit.

Funnily enough, going by that, it does help justify GPUs like this one, given its MSRP and performance.
 
The Witcher 3 results still aren't comparable to those from any other reviewer and give the appearance that this card is faster than it actually is.

W1zz tests The Witcher 3 in DX11 where most/all other places test it in the newer DX12 implementation. Both are valid but I prefer DX11 here to represent performance in older titles, many of which I'm still playing.
 
I still can't get my head around this:
  • No support for DLSS (yes I know it's an NV exclusive, still doesn't change the fact that you can have it on one option and not on others)
I thought XeSS was about as good as DLSS. Why would one care about DLSS if the card supports a very similar version of it? Also the last TPU Frontpage poll only found 2.8% of TPU readers care about Upscaling and Frame Generation. I would almost say these technologies are all cons as we are being forced to pay for the development of something the vast majority of us don't want.

As long as NV GPUs being reviewed get the equivalent comment, "Cons: No support for XeSS 2", it all works out in the end. However, rewarding manufacturers who make use of proprietary tech by penalizing those who do not requires a spoon-fed explanation if I am to make any sense of it.
 
I thought XeSS was about as good as DLSS. Why would one care about DLSS if the card supports a very similar version of it? Also the last TPU Frontpage poll only found 2.8% of TPU readers care about Upscaling and Frame Generation. I would almost say these technologies are all cons as we are being forced to pay for the development of something the vast majority of us don't want.
It isn't, but neither is FSR, to be fair (I'd still like to see both reach that point though). Never take a poll at face value, even TPU's. There are pros and cons to upscaling and frame gen; I think the positives outweigh the negatives as long as developers don't rely on them, but we're seeing an increased reliance on upscaling, frame gen, etc., it seems.
 
As long as NV GPUs being reviewed get the equivalent comment, "Cons: No support for XeSS 2", it all works out in the end. However, rewarding manufacturers who make use of proprietary tech by penalizing those who do not requires a spoon-fed explanation if I am to make any sense of it.
When there are swaths of games that support XeSS but not DLSS, that will make sense. As it stands, DLSS is widespread and the other two are spotty to missing, so not having it is a major downside on non-Nvidia GPUs.

Sucks, but the real world is not fair and open. Never has been.
 
When there are swaths of games that support XeSS but not DLSS, that will make sense. As it stands, DLSS is widespread and the other two are spotty to missing, so not having it is a major downside on non-Nvidia GPUs.

XeSS SR is all that's necessary; the other components (hardware-assisted frame generation and low latency) are already available in NV hardware. You can even use NVIDIA Reflex low latency in conjunction with a XeSS 1.3 or FSR 3 context. They're independent of one another.
 
Having a problem with a company does not justify such childish behaviour.
What is and what is not considered childish behaviour is a relative thing.

PhysX left many gamers pissed, which is why it's PissX. One of the reasons has already been mentioned in this thread, but there are others too.

To me, childish is Nvidia letting their kids play only with their own toys, with the kids also forbidden to share them. This is one of the fundamental differences between AMD and Nvidia. Nvidia is a corp that keeps its technologies locked down and wants money for them, while AMD inventions are often released as open source. I get it, it's all for money. However, sharing such things can radically speed up the development of other things and might result in much better things/inventions. To me, sharing seems more adultish than childish. Even Intel does open-sourcing. Sometimes Nvidia product owners get f*cked over when Nvidia says a technology won't be available on previous-gen cards, because ... god only knows why, as all required components (computing units) are present on previous-gen products as well.

Being greedy and not wanting to share toys is also typical of very young kids, mostly between the ages of 2 and 5.

Btw, have you read how Jensen avoided paying $8 billion in taxes? One definition of greed is never being satisfied with the assets one already possesses, always thinking one can have more and more. An honest businessman with lots of money would not give a f* about paying taxes, but someone deliberately abusing a hole in the law is not honest. That said, US tax laws are a joke too.

So, Nvidia does this charity thing called the "GeForce fund" which helps poor souls in San Francisco, but god only knows how much it really helps and how much money these poor souls see in the end, because disclosing how (and how much of) this money is spent is not required by US law. :kookoo:
 
However, sharing such things can radically speed up the development of other things and might result in much better things/inventions
Has it? Have you seen any meaningful returns from AMD's open source initiatives? I only see NVIDIA advancing technology in major ways, and others just follow and are like "o shit, how can we do this with our own tech?" .. the CES press conferences will be interesting
 
Has it? Have you seen any meaningful returns from AMD's open source initiatives? I only see NVIDIA advancing technology in major ways, and others just follow and are like "o shit, how can we do this with our own tech?" .. the CES press conferences will be interesting
Yes. AMD developed Mantle and open sourced it. This paved the way for Vulkan. Thanks to Vulkan we can play games without emulation on Linux. Many games run smoother on Linux, but TPU only tests Windows. It would be good to have at least a few games tested on Linux, so that we know how much performance we are losing to Windows.
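To make the cross-platform point concrete, here is a minimal sketch (my own illustration, assuming the Vulkan SDK headers and loader are installed; link with -lvulkan): it creates a Vulkan instance and lists the GPUs it finds, and the exact same C source builds unchanged on Windows and Linux.

```c
// Minimal sketch: create a Vulkan instance and list the GPUs it exposes.
// Assumes the Vulkan SDK is installed; build with e.g. `cc vkenum.c -lvulkan`.
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "vk-enum-sketch",
        .apiVersion = VK_API_VERSION_1_0,
    };
    VkInstanceCreateInfo ci = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };

    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    // Ask for the device count first, then fetch the handles
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[16];
    if (count > 16) count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("GPU %u: %s\n", i, props.deviceName);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```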

Nvidia advances technologies that aim to alter the original image in order to increase fps. That's not a proper way to justify the ever-rising complexity and prices of their hardware. But that's my humble opinion.
 
Looks like they can still pull more from driver improvements, because it still has one of the performance "flaws" that Alchemist has, which is that it's weak at lower resolutions. I had hoped that the change in architecture and increased utilization would have improved that, but instead of enhancing it at 1080p, it makes the card relatively more competitive at 1440p and 4K. At the higher resolutions the card is rather formidable. It also fixes some issues, such as Alchemist falling really far behind in Assassin's Creed games. Looks like the near 50% improvement makes it a strong contender in those games.

Perhaps a 10% general boost at 1080p is possible. The card is more powerful than Alchemist and thus runs into bottlenecks more. Also, we haven't got the driver that would deliver a general boost to DX11 yet; they are still targeting games individually. It's also weaker in Unreal Engine 5 games, which is a surprise since features like Execute Indirect are supposed to be especially beneficial to engines like UE5.

Despite the fact that it's still behind in performance per area, I estimate it's still 20-30% more performance ISO-process compared to Alchemist. They also managed to improve power efficiency despite clocking 20% or so higher (which is a typical node gain), so the power efficiency improvements are significantly attributable to the architecture, not the node. ISO-process perf/W is probably 35-45% better compared to Alchemist.
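For what it's worth, here is a rough sketch of the arithmetic behind an estimate like that; the numbers are placeholders rather than review data, only the method follows the reasoning above: if the typical node gain is assumed to be spent entirely on the ~20% higher clocks, whatever perf/W improvement is still measured gets credited to the architecture.

```c
// Back-of-the-envelope sketch of an ISO-process perf/W estimate.
// The measured_* values are placeholders, not review data; only the method matters.
#include <stdio.h>

int main(void)
{
    double measured_perf_ratio  = 1.35; // placeholder: B580 vs A580 average performance
    double measured_power_ratio = 1.00; // placeholder: roughly similar board power
    double assumed_node_gain    = 1.00; // assumption: node benefit spent on ~20% higher clocks

    double measured_perf_per_watt = measured_perf_ratio / measured_power_ratio;
    // Strip the assumed node contribution out of the measured efficiency gain;
    // whatever remains is attributed to the architecture.
    double iso_process_gain = measured_perf_per_watt / assumed_node_gain;

    printf("Measured perf/W gain: %.0f%%\n", (measured_perf_per_watt - 1.0) * 100.0);
    printf("Estimated ISO-process (architecture-only) gain: %.0f%%\n",
           (iso_process_gain - 1.0) * 100.0);
    return 0;
}
```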

The 10% general boost in 1080p via drivers may be a nice surprise to counter the RDNA4 and RTX 5000 series competition?

Based on the A580 vs A770 results, a hypothetical B770 "G31" might just be able to reach the 4070 Super at 4K, as it has 60% more shaders and fill rate, and the clock gap is supposed to be greater than A580 vs A770, possibly in the 3.2-3.3 GHz range. The lower resolutions will depend on the driver and will probably land around the regular 4070.
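As a rough illustration of that guess: the 60% shader advantage and the 3.2-3.3 GHz clock target come from the paragraph above, while the B580 reference clock and the sublinear scaling exponent are assumptions I picked for the sketch.

```c
// Rough scaling sketch for the hypothetical "B770 / G31" estimate above.
// Build with e.g. `cc b770.c -lm`.
#include <math.h>
#include <stdio.h>

int main(void)
{
    double shader_ratio = 1.60;  // "60% more shaders" (from the post)
    double clock_b580   = 2.85;  // GHz, assumed B580 reference boost clock
    double clock_b770   = 3.25;  // GHz, midpoint of the 3.2-3.3 GHz guess in the post
    double scaling_exp  = 0.85;  // assumption: wide GPUs rarely scale linearly, even at 4K

    double raw_ratio = shader_ratio * (clock_b770 / clock_b580);
    double est_ratio = pow(raw_ratio, scaling_exp); // apply sublinear scaling

    printf("Raw throughput ratio vs B580: %.2fx\n", raw_ratio);
    printf("Estimated 4K performance ratio (sublinear): %.2fx\n", est_ratio);
    return 0;
}
```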

Solid card, for now.
 
Yes. AMD developed Mantle and open sourced it. This paved the way for Vulkan
Ah yes, good old times when they were competitive. Over 10 years ago now .. Mantle was never open sourced though afaik, just the API spec was basically carried forward. and still Vulkan games make up less than 1% of the market. it's the only reason I kept Doom Eternal in my test suite.

Yes, before anyone asks, I'll be adding Indiana Jones in next rebench

Space Marine 2 has been retested on all cards and readded to the charts
 
Has it? Have you seen any meaningful returns from AMD's open source initiatives? I only see NVIDIA advancing technology in major ways, and others just follow and are like "o shit, how can we do this with our own tech?" .. the CES press conferences will be interesting
FreeSync also wasn't bad... it's on almost any monitor nowadays. BUT it doesn't earn AMD anything, and it's also advertised as "G-Sync compatible", so most users think it's an Nvidia thing.
 
FreeSync also wasn't bad... it's on almost any monitor nowadays. BUT it doesn't earn AMD anything, and it's also advertised as "G-Sync compatible", so most users think it's an Nvidia thing.
That's a nice example indeed. NV invented G-Sync, AMD was "o shit", but luckily they figured out how to run it without $$ dedicated hardware in the monitors and gave it to VESA, who made it a standard
 
There was talk earlier about differing control panels, this video shows off the Intel side of things.
looks pretty clean. Let's see if they can keep it that way ^^
 
Ah yes, good old times when they were competitive. Over 10 years ago now .. Mantle was never open sourced though afaik, just the API spec was basically carried forward. and still Vulkan games make up less than 1% of the market. it's the only reason I kept Doom Eternal in my test suite.
Not everything is about the latest AAA slopfest. People use GPUs for other things. Vulkan is the beating heart of the emulation scene, along with gaming on Linux as a whole. Devices like the Steam Deck simply wouldn't exist without it. Not to mention Android, where Vulkan is the standard graphics API, having replaced the now-deprecated OpenGL ES. The impact of Vulkan on the world has been gigantic, and only an ignorant person would try to downplay it.

As for Mantle being open sourced, AMD donated it to the Khronos Group as a base to develop Vulkan. If you want to see the Mantle source as it existed at the time development on it ceased, just grab Vulkan 1.0 from 2016. Obviously modern Vulkan has been hugely extended and expanded upon since then.
 
The impact of Vulkan on the world has been gigantic, and only an ignorant person would try to downplay it.
No doubt, but in the context of modern commercial AAA game development on PC it barely exists. I wish it was different

If you want to see the Mantle source as it existed at the time development on it ceased, just grab Vulkan 1.0 from 2016
I don't think that is an accurate statement. AFAIK Mantle is not compatible with Vulkan, not even 1.0
 
The impact of Vulkan on the world has been gigantic, and only an ignorant person would try to downplay it.
Wait, was that a slam on W1z? Are you kidding?

Vulkan is not as popular on Windows, which is why it's not focused on very much by anyone reviewing hardware.

I don't think that is an accurate statement. AFAIK Mantle is not compatible with Vulkan, not even 1.0
I would agree. Vulkan is a rework and recompile of Mantle and as such there are differences. It's not a 1 to 1 transition.
 
Ah yes, good old times when they were competitive. Over 10 years ago now .. Mantle was never open sourced though afaik, just the API spec was basically carried forward. and still Vulkan games make up less than 1% of the market. it's the only reason I kept Doom Eternal in my test suite.
Was your previous question somewhat time-limited? Anyway, there are other games that support Vulkan as an alternative API, AAA games too (RDR2, Detroit: Become Human, Rainbow Six Siege). Many games and applications ported to Android leverage Vulkan because it is OS-independent and cross-platform. Many gamers hate Windows 11 and would go to Linux the next day if there were support for their games and hardware. I think you underestimate what an OS-independent API means. Valve has big plans with it, and Valve surely is a company that can give the gaming scene a twist.

Mantle was given to Khronos by AMD knowing Khronos' intent to make a new open API based on it. Surely if AMD did not want to make it open, they would have specified that in the terms they agreed upon with Khronos.

That's a nice example indeed. NV invented G-Sync, AMD was "o shit", but luckily they figured out how to run it without $$ dedicated hardware in the monitors and gave it to VESA, who made it a standard
DisplayPort Adaptive-Sync was an optional part of VESA's DisplayPort 1.2a standard released in January 2013. Nvidia released G-Sync in October 2013, which is a bit later. AMD did not give anything to VESA; Adaptive Sync was there first, and AMD FreeSync was then "made" on top of this already existing part of VESA's DisplayPort VRR technology. VESA later renamed the technology at AMD's request. AMD showed the first prototype of a FreeSync monitor in 2014, a year and a few months after the DP 1.2a specification was released. Nvidia invented something that had already been invented (and invented without requiring an additional device). The foundation for adaptive sync technology had been part of the eDP specification since 2009, as this article on TPU states:
Adaptive-Sync is a proven and widely adopted technology. The technology has been a standard component of VESA's embedded DisplayPort (eDP) specification since its initial rollout in 2009. As a result, Adaptive-Sync technology is already incorporated into many of the building block components for displays that rely on eDP for internal video signaling. Newly introduced to the DisplayPort 1.2a specification for external displays, this technology is now formally known as DisplayPort Adaptive-Sync.
It was already there for eDP; they added it for external DP in 2013 in the 1.2a revision.

While fighting for Nvidia's G-Sync, please don't forget to mention the 14 W permanent power draw of the monitor's G-Sync module, even while the monitor is in standby mode.

Now I clearly see why Nvidia-related news on TPU gets much more attention than news related to anything else.
 
DisplayPort Adaptive-Sync was an optional part of VESA's DisplayPort 1.2a standard released in January 2013. Nvidia released G-Sync in October 2013, which is a bit later. AMD did not give anything to VESA; Adaptive Sync was there first, and AMD FreeSync was then "made" on top of this already existing part of VESA's DisplayPort VRR technology. VESA later renamed the technology at AMD's request. AMD showed the first prototype of a FreeSync monitor in 2014, a year and a few months after the DP 1.2a specification was released. Nvidia invented something that had already been invented (and invented without requiring an additional device). The foundation for adaptive sync technology had been part of the eDP specification since 2009, as this article on TPU states:
I must be an idiot, because I was at the G-SYNC press event in London, and nobody had ever seen something like it. A while later I was at AMD's event, where they showed Adaptive Sync running on their GPUs and everybody was like "ah, so finally they have G-SYNC"
 
I must be an idiot, because I was at the G-SYNC press event in London, and nobody had ever seen something like it. A while later I was at AMD's event, where they showed Adaptive Sync running on their GPUs and everybody was like "ah, so finally they have G-SYNC"
I'd say you reviewers don't give a damn about 500-page specifications. And who does? When there is a generational change, people talk (mostly) only about bandwidth. I was surprised by that 2009 fact too; I thought of 2011 or 2012 as the beginning of VRR, but I was wrong. I felt the need to let you know that AMD did not give anything to VESA to standardize into its own G-Sync-like tech; VRR technology was already specified 9 months before Nvidia came out with G-Sync, and as it turns out, VESA themselves state an even earlier date for the embedded DP standard. AMD made a proposal to VESA, but that was a bit later than when the DP 1.2a spec was released.

Thanks for fixing the Space Marine 2 section; I could not believe my eyes back then when I saw the B580 beating the RX 6800.

Also, while I don't want to sound rude or like I don't appreciate your work, maybe it'd be wise to expand GPU efficiency testing to more than just Cyberpunk to make the results more accurate. The B580 does very well in CB2077, but on average the RTX 4060 is the real efficiency king across the board.
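As an illustration of that suggestion (not TPU's actual methodology), here is a small sketch of how per-game efficiency could be aggregated with a geometric mean; the FPS-per-watt values are made-up placeholders.

```c
// Sketch: aggregate efficiency across several games with a geometric mean
// instead of relying on a single title. Build with e.g. `cc eff.c -lm`.
#include <math.h>
#include <stdio.h>

int main(void)
{
    // Hypothetical per-game efficiency values (FPS per watt) for one card
    double fps_per_watt[] = { 0.42, 0.55, 0.38, 0.61, 0.47 };
    int n = sizeof fps_per_watt / sizeof fps_per_watt[0];

    double log_sum = 0.0;
    for (int i = 0; i < n; i++)
        log_sum += log(fps_per_watt[i]);

    // Geometric mean: a single outlier title influences the result less than
    // it would with a plain arithmetic average
    double geomean = exp(log_sum / n);
    printf("Geometric-mean efficiency across %d games: %.3f FPS/W\n", n, geomean);
    return 0;
}
```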
 
Ah yes, good old times when they were competitive. Over 10 years ago now .. Mantle was never open sourced though afaik, just the API spec was basically carried forward. and still Vulkan games make up less than 1% of the market. it's the only reason I kept Doom Eternal in my test suite.

Yes, before anyone asks, I'll be adding Indiana Jones in next rebench

Space Marine 2 has been retested on all cards and readded to the charts
Mantle 1.0 was open source. Then Khronos picked it up and it became Vulkan.
The beginning of Mantle came from the betrayal of the Microsoft-AMD Temash deal, and that's why we got a better version of DX12 that is a low-level API, same as Mantle.
The Radeon team has been behind the development of DirectX for a long time now.
 
While fighting for Nvidia's G-Sync, please don't forget to mention the 14 W permanent power draw of the monitor's G-Sync module, even while the monitor is in standby mode.
That depends on what you mean by "Standby". If you're talking about a low-power sleep mode during an active OS session, then maybe. If you're talking about when the display is not receiving an active signal because the connected system is powered off, then no. In "Powered Off" mode, the display will draw less than 2.5 W (IIRC) as per US regulations, which became mandatory in the US a number of years ago and were adopted worldwide soon thereafter.
Thus. Read, learn.
Now I clearly see why Nvidia-related news on TPU gets much more attention than news related to anything else.
Good grief! And people call ME condescending?
 
Has it? Have you seen any meaningful returns from AMD's open source initiatives? I only see NVIDIA advancing technology in major ways, and others just follow and are like "o shit, how can we do this with our own tech?" .. the CES press conferences will be interesting
Freesync or if you like VRR
 
DisplayPort Adaptive-Sync was an optional part of VESA's DisplayPort 1.2a standard released in January 2013. Nvidia released G-Sync in October 2013, which is a bit later. AMD did not give anything to VESA; Adaptive Sync was there first, and AMD FreeSync was then "made" on top of this already existing part of VESA's DisplayPort VRR technology. VESA later renamed the technology at AMD's request. AMD showed the first prototype of a FreeSync monitor in 2014, a year and a few months after the DP 1.2a specification was released. Nvidia invented something that had already been invented (and invented without requiring an additional device). The foundation for adaptive sync technology had been part of the eDP specification since 2009, as this article on TPU states:

It was already there for eDP; they added it for external DP in 2013 in the 1.2a revision.

While fighting for Nvidia's G-Sync, please don't forget to mention the 14 W permanent power draw of the monitor's G-Sync module, even while the monitor is in standby mode.

Now I clearly see why Nvidia-related news on TPU gets much more attention than news related to anything else.
I'm a bit confused here; maybe I misunderstood W1zzard's message. But why does it matter who owns the tech/software now or who implemented it as a standard? AMD found a way to introduce something cheap that did almost the same thing. Why argue about something this pointless? It's a win for AMD (and us) in my book, at least.
Freesync or if you like VRR
I already mentioned that in #262 (FreeSync)
 