Thursday, August 4th 2022

Intel Arc Board Partners are Reportedly Stopping Production, Encountering Quality Issues

According to sources close to Igor Wallossek of Igor's Lab, Intel's upcoming Arc Alchemist discrete graphics card lineup is in trouble. As the anonymous sources state, certain add-in board (AIB) partners are having difficulty fitting a third GPU manufacturer into their offerings. Firstly, AIBs are reportedly sitting on a pile of NVIDIA and AMD GPUs that loses value daily and needs to be moved quickly. Secondly, Intel is reportedly suggesting AIBs ship cards to OEMs and system integrators to seed the market with the new Arc dGPUs, a business model with inherently lower margins than selling GPUs directly to consumers.

Last but not least, at least one major AIB is reportedly stopping production of custom Arc cards over quality concerns. What exactly this means remains to be seen, as does which AIB (or AIBs) is stepping out of the game. All of this suggests the new GPU lineup is in serious trouble even before it has launched, though we expect the market will adapt and make a case for a third GPU maker. Of course, these reports should be taken with a grain of salt, and we await more information to confirm the issues.
Source: Igor's Lab

133 Comments on Intel Arc Board Partners are Reportedly Stopping Production, Encountering Quality Issues

#126
ZoneDymo
Vayra86
10% is 10%. I didn't attribute any value to it. But it's not 0%, and it's not 'margin of error' 2-3% either.

However, the gap on the 1% and 0.1% lows is absolutely huge, and it's one of the reasons Nvidia maintained the lead it had. It just ran more smoothly. It also shows AMD built cards for high FPS, not consistency in FPS - whether that was natural to GCN at the time or not, I don't know. But it echoes the whole episode of frame pacing/microstutter on both brands. Nvidia clearly started sacrificing high averages for consistency at some point.

Now obviously RX480 wasn't going to equal a 1060, because it literally wasn't marketed to compete with that card. It was competing with the 970, albeit far too late - and even there it didn't quite get to a decisive victory. Pascal had the node advantage.


idk man, they are trading blows and are otherwise 1 to 2 frames apart...they are the same and that has always been the consensus.
But ultimately this is a rather silly back and forth, it does not hurt anything or anyone, it does not help anything or anyone :p
Posted on Reply
#127
chrcoluk
stimpy88
I agree, but from what I understand, the cards are already delayed by more than 7 months due to the driver team not being able to deliver. The strange thing about the drivers is that, apparently, simple things like buttons in the driver control panel don't work, or work in unintended ways. It's like nobody is actually eating the dogfood the team is putting out...

If the drivers are in that kind of state, and basically need another 6-9 months in the oven, then by the time everything is ready and they have spun a new PCB revision, Intel will be competing with next-gen cards from AMD & nVidia, and ARC will be lucky to compete against even the lowest-end cards, so Intel will be forced to slash prices. I think this first iteration of ARC is DOA; it's simply too late, too expensive and too underpowered to have any meaningful impact on the market by the time it's released.

I hope that Intel fixes what is wrong, concentrates on the top two or three SKUs, releases them as loss-leaders to be competitive, and gets their asses working on the next gen as fast as they can. I would assume that the drivers will be based on the same foundation as the first gen, so they should be fairly stable by then.

I feel that Intel really should stick with ARC, and it would be short-sighted to completely cancel the discrete GPU project. I don't trust Intel not to simply align with the other two vendors and price-fix the market, but I at least hope that they actually do intend to be competitive and keep low-to-mid-range prices sane, as I'm sure nVidia wants to move the midrange to $800+ after what happened to the 3070.
Your point is a good one and I do accept it. I think this seems to have happened because whoever is responsible for rolling these cards out of the factories has not been working with the driver team. Intel probably shouldn't have started manufacturing while development continued; instead they should have made a few sample cards to get the drivers working, and only then started mass production.

Ultimately, though, they could still have released these against next-gen AMD/Nvidia and they would still sell OK provided the price is right; the cards would be weaker and less efficient, but if the value is there, those things would be overlooked by the budget crowd. It seems to be a combination of greed and impatience from Intel; writing drivers from scratch is no easy task.
Posted on Reply
#128
efikkan
Vayra86
However, the gap on the 1% and 0.1% lows is absolutely huge, and it's one of the reasons Nvidia maintained the lead it had. It just ran more smoothly. It also shows AMD built cards for high FPS, not consistency in FPS - whether that was natural to GCN at the time or not, I don't know. But it echoes the whole episode of frame pacing/microstutter on both brands. Nvidia clearly started sacrificing high averages for consistency at some point.
The drivers of all three obviously do some degree of frame pacing, otherwise you would see a lot of fluctuation in the frame times. It's easy to handle frames that come too fast - you just delay them a bit - but if a frame takes longer than the previous ones, there is nothing the driver can do.
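
To illustrate that "delay the early frames" idea, here is a minimal sketch (plain Python, not any vendor's actual driver code; the 60 FPS target and the render_frame callback are just placeholders):

```python
import time

TARGET_FRAME_TIME = 1.0 / 60.0  # hypothetical 60 FPS pacing target

def present_paced(render_frame):
    """Render one frame, then hold it back if it finished early."""
    start = time.perf_counter()
    render_frame()                               # the actual frame work
    elapsed = time.perf_counter() - start
    if elapsed < TARGET_FRAME_TIME:
        time.sleep(TARGET_FRAME_TIME - elapsed)  # early frame: delay it a bit
    # if elapsed > TARGET_FRAME_TIME the frame is already late;
    # no amount of pacing can give that time back
```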

The reason why the GTX 1060 outperformed the RX 480 while the latter was much stronger "on paper" (~50%) has nothing to do with a node advantage; it's the result of better resource management. The RX 480's power draw was the result of AMD using "brute force" to throw resources at the problem, while Nvidia used better GPU scheduling to use fewer GPU resources per rendered frame. Better resource control also means fewer chances of stalls, small or large, which is what happens when you see a single frame take too long. AMD have since gotten better, but interestingly enough, Intel is facing basically the same problem with Alchemist: a GPU with a lot of (theoretical) power, which scores decently in synthetic workloads but poorly in games, especially in frame rate consistency at higher frame rates. This is a GPU scheduling issue, and no amount of driver wizardry can compensate for that.
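
For anyone following the averages-versus-lows argument in this thread, here is one common way to derive those metrics from a captured frame-time log (conventions differ between reviewers; the frame times below are made up, purely for illustration):

```python
def summarize(frame_times_ms):
    """Average FPS plus 1% / 0.1% lows (average of the worst frames)."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    worst_first = sorted(frame_times_ms, reverse=True)
    def low(pct):
        k = max(1, int(n * pct))
        return 1000.0 / (sum(worst_first[:k]) / k)
    return avg_fps, low(0.01), low(0.001)

smooth = [16.7] * 1000                    # consistent pacing
spiky  = [15.0] * 990 + [60.0] * 10       # similar average, occasional stalls
print(summarize(smooth))  # ~60 FPS average, ~60 FPS 1% low
print(summarize(spiky))   # ~65 FPS average, but the 1% low collapses to ~17 FPS
```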
Posted on Reply
#129
Bomby569
Vayra86

Suit yourself ;) I pointed out your own review link, userbenchmark, and now here we have another 10% which is not counting the 1% lows.
idk the source for that or when it was done, but in the TechPowerUp GPU database the difference between them is 4%, not 10%, and even that is debatable, I would say.

www.techpowerup.com/gpu-specs/radeon-rx-480.c2848
see the relative performance
Posted on Reply
#130
Vayra86
ZoneDymo


idk man, they are trading blows and are otherwise 1 to 2 frames apart...they are the same and that has always been the consensus.
But ultimately this is a rather silly back and forth, it does not hurt anything or anyone, it does not help anything or anyone :p
You know what, to each their own version of the consensus ;)

Apparently the numbers from 3 written sources are wrong and we're down to YouTube level now to mitigate the damage, after 'Userbenchmark is wrong, use TPU' and TPU showing another 10% gap in the very review of the product. I'm out of this one ;) Be sure to polish those glasses once in a while though.
Posted on Reply
#131
Bomby569
Vayra86
You know what, to each their own version of the consensus ;)

Apparently the numbers from 3 written sources are wrong and we're down to YouTube level now to mitigate the damage, after 'Userbenchmark is wrong, use TPU' and TPU showing another 10% gap in the very review of the product. I'm out of this one ;) Be sure to polish those glasses once in a while though.
Again, you went for the launch review numbers; we already went through this. The TPU number is 4%, not 10% - drivers change over time, and with AMD especially that's common. The 4% is from this site's own database of the cards' performance.
Posted on Reply
#132
Vayra86
Bomby569
Again, you went for the launch review numbers; we already went through this. The TPU number is 4%, not 10% - drivers change over time, and with AMD especially that's common. The 4% is from this site's own database of the cards' performance.
There are no numbers supporting that assumption - you're now crawling back to 4% relative performance in averages across the entire GPU portfolio, not counting the 1% lows. That's including 4K results ;) I think we can agree these are 1080p cards, though. I linked 10% from TPU's own review, which you asked for. Now it's launch drivers causing the 'problem'.
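
Purely to illustrate the averaging point (hypothetical numbers, not TPU's actual per-resolution data): when a relative-performance figure averages several resolutions, a 1080p-only gap can shrink in the aggregate.

```python
# hypothetical per-resolution leads of the 1060 over the 480
gaps = {"1080p": 1.10, "1440p": 1.04, "4K": 0.99}
overall = sum(gaps.values()) / len(gaps)
print(f"{(overall - 1) * 100:.0f}%")  # ~4% overall despite 10% at 1080p
```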

Back to your first link: the averages aren't far apart to begin with (4% is not uncommon!); I specified the 1% lows from the get-go, and it's a common theme with GCN-based GPUs, irrespective of driver versions - the 390X does the same thing, as linked before.

But again - you base your version on whatever numbers suit you; I'll base mine on the facts and specifics until some written test proves otherwise. I haven't seen any, and I'm insensitive to user sentiment.
efikkan
The drivers of all three obviously do some degree of frame pacing, otherwise you would see a lot of fluctuation in the frame times. It's easy to handle frames that come too fast - you just delay them a bit - but if a frame takes longer than the previous ones, there is nothing the driver can do.

The reason why the GTX 1060 outperformed the RX 480 while the latter was much stronger "on paper" (~50%) has nothing to do with a node advantage; it's the result of better resource management. The RX 480's power draw was the result of AMD using "brute force" to throw resources at the problem, while Nvidia used better GPU scheduling to use fewer GPU resources per rendered frame. Better resource control also means fewer chances of stalls, small or large, which is what happens when you see a single frame take too long. AMD have since gotten better, but interestingly enough, Intel is facing basically the same problem with Alchemist: a GPU with a lot of (theoretical) power, which scores decently in synthetic workloads but poorly in games, especially in frame rate consistency at higher frame rates. This is a GPU scheduling issue, and no amount of driver wizardry can compensate for that.
Yeah, I assumed something of the sort. Thanks for clarifying.

And sure, the node advantage applies mostly to efficiency, not so much performance.
Posted on Reply