
Intel Arc B580 Selling Like Hot Cakes, Weekly Restocks Planned

Can you explain what effective bandwidth is?
Already basically did: it's the combination of the cache and VRAM bandwidth. AMD has used this a few times in their slides, giving a huge number in the mid-1 TB/s range or higher.
No, 4090 does not have a ton of cache.
The full chip has 96 MB; regardless, 72 MB IS a lot of L2 cache. You seem to be comparing it to the wrong metric (probably L3 cache). Even AMD reduced its cache to 96 MB max - it is a lot.
Not going to Google it right now, but IIRC AMD's caches have more bandwidth almost across the board.
No, L2 cache is generally faster than L3 cache, no matter what kind of processor. AMD cannot dream of having faster cache, especially since it now sits in chiplets (not even on the same die anymore), which decreases efficiency further.
 
Already basically did: it's the combination of the cache and VRAM bandwidth.
How are these combined to get the effective bandwidth?
 
How are these combined to get the effective bandwidth?
You calculate the speed of the cache against the bandwidth of the VRAM; it's an average. Edit: this is my take; AMD just states the *maximum* bandwidth instead, which is 3.5 TB/s for the XTX and 2.9 TB/s for the 7900 XT. The 6900 XT's max bandwidth is ~2 TB/s.
 
You calculate the speed of the cache against the bandwidth of the VRAM; it's an average. Edit: this is my take; AMD just states the *maximum* bandwidth instead, which is 3.5 TB/s for the XTX and 2.9 TB/s for the 7900 XT. The 6900 XT's max bandwidth is ~2 TB/s.
Great, and then what? Bigger number is always better? The benchmarks show us that is not true. The 4090 has a much lower 'effective' bandwidth but slaughters 4K whereas the XTX plateaus heavily with much higher 'effective bandwidth'. Of course, we will then say 'but the 4090 is just fastuuhhr' due to .... more shaders.... which effectively means the performance is NOT capped by the bandwidth..... so the XTX's effective bandwidth also says nothing of its actual performance and it is clear it does not obtain a further benefit if you haven't got more core performance. If you compare the XTX and the 4090 like that, the only possible conclusion is that AMD balanced that card's bandwidth horribly, there's a huge amount of waste.... and what @londiste said is probably true: there's no benefit to ever more cache or ever higher 'effective' bandwidth.

'Effective' bandwidth tells you next to nothing about a card's performance and screams 'marketing BS number', like Watt RMS does for audio; it's some arcane calculation designed to end up with a high number to wow you. It's not as hard a number as, say, the capacity of the VRAM. In other words, we might do better to just forget about this metric altogether. Yes, cache is good and alleviates some bandwidth constraints, but in a world where all cards have this cache in sufficient amounts (which is the case going forward), it's no longer a real influence on product performance. It's like RAM: downloading more of it won't make your PC faster; enough is what you need, and all you need.
 
You calculate the speed of the cache against the bandwidth of the VRAM; it's an average. Edit: this is my take; AMD just states the *maximum* bandwidth instead, which is 3.5 TB/s for the XTX and 2.9 TB/s for the 7900 XT. The 6900 XT's max bandwidth is ~2 TB/s.
That should not be an average.

Did look up the launch slides now. Looking at 6900XT for example, using TPU's review and slides in there:
[AMD RDNA2 launch slide from TPU's 6900 XT review: Infinity Cache hit rate by resolution]

There are two important numbers for the bandwidth discussion:
1. The VRAM bandwidth, in the case of the 6900 XT: 512 GB/s
2. The cache bandwidth, in the case of the 6900 XT: 2 TB/s

Effective bandwidth is not a specific number but varies across different usage scenarios. For obvious reasons, effective bandwidth is somewhere between these two numbers. The logical way to calculate a single number for effective bandwidth is to account for the cache hit rate. According to AMD's slides, the cache hit rate in the games they tested at 2160p was 58%, so something along the lines of 58% at 2 TB/s plus the rest at 512 GB/s: 0.58 * 2048 + 0.42 * 512 ≈ 1403 GB/s. As you can see from the slide, the cache hit rate is higher at 1440p and 1080p, which will give a noticeably higher effective bandwidth at those resolutions. Of course, while based on actual usage as measured by AMD, this is pretty theoretical and the results will vary depending on what the memory usage actually is.
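To make the weighting explicit, here is a minimal sketch in Python using only the numbers quoted above (2 TB/s cache, 512 GB/s VRAM, 58% hit rate at 2160p); the 100% hit-rate case simply reproduces the kind of 'maximum' figure the launch slides quote:

```python
# Hit-rate-weighted bandwidth estimate for Navi 21 (6900 XT),
# using the figures from the post above.
CACHE_BW_GBPS = 2048   # Infinity Cache, ~2 TB/s
VRAM_BW_GBPS = 512     # 256-bit GDDR6

def effective_bandwidth(hit_rate: float) -> float:
    """Accesses that hit the cache run at cache speed; misses go out to VRAM."""
    return hit_rate * CACHE_BW_GBPS + (1.0 - hit_rate) * VRAM_BW_GBPS

print(effective_bandwidth(0.58))  # ~1403 GB/s, the 2160p estimate above
print(effective_bandwidth(1.00))  # 2048 GB/s, the 100% hit-rate 'marketing' maximum
```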

On the slides, AMD naturally uses the theoretical maximum effective bandwidth at 100% cache hit rate, but that is to be expected from launch slides by a company trying to sell you something.

Edit:
If you look at the AMD slide above, it should give you a hint about why AMD went with smaller caches in the next generation. Cards with a 256-bit memory bus are aimed at 1440p, and 64 MB does seem to be a pretty nice sweet spot for that. Yes, 2160p will suffer somewhat, but the cards aimed at that resolution in the new generation also get a wider memory bus and, accordingly, more cache.

Why would they want to reduce the amount of cache? Transistor budget. That 128 MB of cache is 6-7 billion transistors. Navi 21 for the 6900 XT has 26.8 billion transistors in total, so that is almost a quarter of the transistors for the entire chip. If they can reduce the cache size by half and still have it be almost as good, that is a big win. In the context of Navi 21, halving the cache would cut the transistor count by 12-13%, and that is a lot.
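The same back-of-the-envelope math, spelled out (the 6.5 billion figure is just the midpoint of the rough 6-7 billion estimate above, not an official number):

```python
# Transistor-budget share of the Infinity Cache on Navi 21, per the rough estimates above.
CACHE_TRANSISTORS_B = 6.5   # midpoint of the 6-7 billion estimate for 128 MB
TOTAL_TRANSISTORS_B = 26.8  # Navi 21 total

print(f"Cache share of the chip: ~{CACHE_TRANSISTORS_B / TOTAL_TRANSISTORS_B:.0%}")           # ~24%
print(f"Saved by halving the cache: ~{(CACHE_TRANSISTORS_B / 2) / TOTAL_TRANSISTORS_B:.0%}")  # ~12%
```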

The 4090 has a much lower 'effective' bandwidth but slaughters 4K whereas the XTX plateaus heavily with much higher 'effective bandwidth'. Of course, we will then say 'but the 4090 is just fastuuhhr' due to .... more shaders.... which effectively means the performance is NOT capped by the bandwidth..... so the XTX's effective bandwidth also says nothing of its actual performance and it is clear it does not obtain a further benefit if you haven't got more core performance. If you compare the XTX and the 4090 like that, the only possible conclusion is that AMD balanced that card's bandwidth horribly, there's a huge amount of waste.... and what @londiste said is probably true: there's no benefit to ever more cache or ever higher 'effective' bandwidth.
The 4090 probably has effective bandwidth in a similar range to the 7900 XTX's. The architectural differences are noticeable though - like AcE noted before, the 40 series has a big L2 cache while RDNA3 has an L3 cache. Usually the cache levels on a GPU are muddled enough that it does not matter, but architecturally the distinction does apply. RDNA3 has the L3 as a victim cache, but in the end it sits next to the memory (controller). Nvidia decided to enlarge the L2 cache instead. It's a whole big optimization game - cache hit rate and cache size being the main, but not the only, considerations. All in all, it seems to play out quite evenly in this generation, at least in terms of memory bandwidth, effective or not.

Whether a game (or application, for that matter) ends up being more compute- or bandwidth-limited varies. A lot. Something that relies on memory bandwidth probably has an easier time on the 7900 XTX, both due to the larger cache and due to it being able to utilize its shaders more effectively, while the 4090's bigger shader array is more easily starved (the 4090 simply has about 30% more compute power that needs to be fed with data). Or it may even be limited by something else - there are games that perform better on the 7900 XTX compared to the 4090, which can probably be attributed in large part to the 4090 having fewer ROPs and a lower pixel fill rate.

I wouldn't say the 7900 XTX is balanced horribly. It is simply balanced towards different strengths, partially ending up weaker in compute power, uncharacteristically for AMD. And from what it looks like so far, the reliance on compute in games has finally gone up, unfortunately for AMD.
 
Insightful. Thanks. I learned a thing :)
 
As usual, I won't argue with people who just want to argue and be "right". ;) Waste of time. Everything I've said is correct, I don't need this discussion. Some people are here, not to learn, but just to argue for their ego's sake.

Gonna state the facts again, for the sake of it:

RTX 40 cards have more than enough bandwidth. Go check reviews. No big anomalies in resolution scaling.

RX 7000 cards have a lot of bandwidth, but the scaling is not as perfect as Nvidia's; there are a few shortcomings (no biggies). This can be seen in various reviews; check 4K, and 1440p for the RX 7600.

RX 6000, same, with the added note that no card there is perfect, whereas the RX 7900 XTX + 7800 XT have perfect scaling even at 4K.

Big L2/L3 caches are great: 1) they save energy, 2) they're the only way forward, because you cannot design cards with a 1024-bit bus (even 512-bit isn't good, though it will be needed in the next round of very power-hungry cards), and 3) they work very well if the cache is big enough or fast enough (L2 cache > L3). Opposing big caches makes zero sense; next you could also go and say "Ryzen X3D isn't good". ;) The doubters here are wasting their time and have no data to back up their words.
 
The 6700 series draws nearly 60% more power, for roughly the same performance - and they are still more expensive than a B580.
Still vaporware in Canada...............

edit - using the gaming numbers and the Intel model, I would say the 6700 XT uses 17.3% more power at gaming.
[Attached power-consumption chart referenced above]
 
They are gone from Newegg. In fact, it looks like Intel is re-releasing Arc B-series.
 
They are gone from Newegg. In fact, it looks like Intel is re-releasing Arc B-series.
No, one brand has two models of the B580 available: one with 2 fans and one with 3. The latter comes in black and in white (a little more expensive) variants.
PS: But all prices are over MSRP + tax! :(
 
Right now, this reminds me of the very early Vermeer days, when I couldn't even buy a Ryzen 5 5600X if I wanted to. :(
Reminds me of the early-pandemic great chip shortage. Arrrgh!
(That was when we couldn't buy the Ryzen 5000 series even though it was an entirely new series)

Update:
And I forgot about the GeForce RTX 30 series of '20, too! People couldn't buy those either, even months before the video-card-market Armageddon!

Feels like early fall '20 all over again. :(
 
Right now, this reminds me of the very early Vermeer days, when I couldn't even buy a Ryzen 5 5600X if I wanted to. :(
Reminds me of the early-pandemic great chip shortage. Arrrgh!
(That was when we couldn't buy the Ryzen 5000 series even though it was an entirely new series)

Update:
And I forgot about the GeForce RTX 30 series of '20, too! People couldn't buy those either, even months before the video-card-market Armageddon!

Feels like early fall '20 all over again. :(
Except that even A-series Intel GPUs weren't widely available, either. The problem is with Intel's GPU supply, not with the industry like it was back in '20.
 
Strange. In Germany the Intel cards are readily available.
 
Strange. In Germany the Intel cards are readily available.
Strange indeed. In the UK, there's almost nothing. I see Overclockers has a couple of A310 and A750 cards, but that's it.
 
Strange indeed. In the UK, there's almost nothing. I see Overclockers has a couple of A310 and A750 cards, but that's it.
All of them are buyable here in various shops. Hmm.
 
The last time I checked, in the U.S. you could buy an Alchemist, but with Battlemage, you can forget about it.
 
[Screenshot of a German shop listing with B580 cards in stock]

This is like... one of many shops here ^^ I checked, and in Germany the ASRock Steel Legend variant seems to be available everywhere, while other models are not. So I guess that model should be available everywhere else soon too. No idea why it's here first, though.
 

This is like... one of many shops here ^^ I checked, and in Germany the ASRock Steel Legend variant seems to be available everywhere, while other models are not. So I guess that model should be available everywhere else soon too. No idea why it's here first, though.
That price though, ouch.
 
Yep, can't even find a B580 on Amazon, which is really something...
Yep, it's like the RTX 30 series in September '20 (and October), where IIRC people lined up to get one and there were none on September 15, '20. :(
 