
Sparkle Arc A770 ROC

W1zzard

Administrator
The Sparkle Arc A770 ROC is a custom design variant of the Intel Arc A770. It comes with a capable cooler design that's still compact enough to fit into all cases. How does Intel's discrete GPU do in late 2024? Is it a good alternative to RTX 4060 and RX 7600? Read on to find out.

 
Still too driver- and game-dependent, inconsistent, and it could still stand to shave 20-30 bucks off the price. But I will give Intel this - they are miles ahead of the shitshow these cards were at launch, so good on them. Fingers crossed that Battlemage actually manages to be a contender, at least for the entry to mid-range segment.

Oh, and:
“While both RTX 4070 and RX 7600 have "only" 8 GB VRAM, our benchmark results take that fact into account and the A770 with 16 GB cannot offer a meaningful advantage (at 1080p Full HD).”

I assume 4060 is what’s meant there in the Conclusion.
 
So, ~double the power consumption of the 4060/7600, inferior performance, and NOISIER on top of that. At the very least, Intel could have lowered the price to make it a more attractive option.

Do we really need 16 GB at this performance level? Perhaps some workloads justify it? For gaming it seemed pointless when AMD/NVIDIA dropped their 16 GB entry-level cards, and now Intel. At 4K the performance is impacted positively compared to 8/12 GB cards, but overall performance at higher resolutions SUCKS!

On the bright side, more entry-level cards are always a welcome addition. Over time, this should help position them where they truly belong... in the ~$200 price range.

Let's hope these less-than-stellar efforts don't discourage Intel from continuing its uphill battle(mage) to smash into and thrive in the GPU market. We need more competition to challenge the top dogs (or Dog)
 
Interesting timing to review this with Battlemage (supposedly) on the horizon
 
The Witcher 3 is still one of the best tells of what a card is capable of.
That test needs to stay in there as long as GPUs are still tested. It scales better than any other game, and it even has near-perfect SLI scaling.
 
That DLSS remark as a con, even with the qualifier, is not fair unless you note in every Nvidia and AMD review their lack of XeSS and/or FSR (which of course work on other cards). DLSS is specifically an Nvidia tech. You can't blame a non-Nvidia card for not having DLSS.
 
The Witcher 3 is still one of the best tells of what a card is capable of.
That test needs to stay in there as long as GPUs are still tested. It scales better than any other game, and it even has near-perfect SLI scaling.

An old DX11 game where this GPU just happens to perform 33% better than its regular low FPS? Sure, it may suggest how a GPU behaves in older games, though maybe just one older game... but it's not indicative of anything current in this case, other than how to optimize a GPU for a single game.
 
So, ~double the power consumption of the 4060/7600, inferior performance, and NOISIER on top of that. At the very least, Intel could have lowered the price to make it a more attractive option.

Do we really need 16 GB at this performance level? Perhaps some workloads justify it? For gaming it seemed pointless when AMD/NVIDIA dropped their 16 GB entry-level cards, and now Intel. At 4K the performance is impacted positively compared to 8/12 GB cards, but overall performance at higher resolutions SUCKS!

On the bright side, more entry-level cards are always a welcome addition. Over time, this should help position them where they truly belong... in the ~$200 price range.

Let's hope these less-than-stellar efforts don't discourage Intel from continuing its uphill battle(mage) to smash into and thrive in the GPU market. We need more competition to challenge the top dogs (or Dog)

- With a 400mm² die, it's obvious that the performance level was supposed to be closer to the 6800/3070, where 16 GB of RAM would have made more sense.

At this point I have to assume the 16 GB is to snag low-info folks who think more RAM at this price point means the card is better.
 
“While both RTX 4070 and RX 7600 have "only" 8 GB VRAM, our benchmark results take that fact into account and the A770 with 16 GB cannot offer a meaningful advantage (at 1080p Full HD).”
Yeah, typo, fixed

That test needs to stay in there as long as GPUs are still tested
Confirmed, it's definitely staying for the 2025.1 Test System :)

That DLSS remark as a con, even with the qualifier, is not fair unless you note in every Nvidia and AMD review their lack of XeSS and/or FSR (which of course work on other cards). DLSS is specifically an Nvidia tech. You can't blame a non-Nvidia card for not having DLSS.
DLSS is the best upscaler, and FG especially is a huge selling point. While I can't blame others for not having "DLSS", I can blame them for not having a better upscaler/framegen than DLSS
 
Yeah, typo, fixed

Confirmed, it's definitely staying for the 2025.1 Test System :)

DLSS is the best upscaler, and FG especially is a huge selling point. While I can't blame others for not having "DLSS", I can blame them for not having a better upscaler/framegen than DLSS
I mean in the end you can write what you like. I don't agree with it but I get it.
 
I really think they should play the long game, stop chasing this-quarter profits and cut the price. It's really hard to get into the GPU market; AMD has been at it for YEARS and has only a few % of the market. I'd love to see more competition, but to get in from basically scratch, Intel needs to accept that it won't be profitable right away and sell at close to cost to build market share, until they're competitive with Nvidia and AMD, which I believe will take a few more R&D cycles. Until then, the only thing they have to compete on is price.
 
Considering the 7600 XT is coming very close in price now, this thing is just... not it, man.
Sparkle also messed up a lot with this card. Slamming the fan speed to 1200 RPM when the card hits 56°C? What year is it? 2015?
And good lord, the aggressive fan profile. I thought Gigabyte had some stinkers in the 7000 series; get a load of Sparkle.
Or not, nobody has this thing, and the ones that actually bought it are too embarrassed to talk about it or rigged their own solution.
This thing sucks. Sparkle sucks and the A770 sucks. Do better.
 
I really think they should play the long game, stop chasing this-quarter profits and cut the price. It's really hard to get into the GPU market; AMD has been at it for YEARS and has only a few % of the market. I'd love to see more competition, but to get in from basically scratch, Intel needs to accept that it won't be profitable right away and sell at close to cost to build market share, until they're competitive with Nvidia and AMD, which I believe will take a few more R&D cycles. Until then, the only thing they have to compete on is price.

They are playing the long game.

Their main issue is the drivers, specifically that AMD and Nvidia drivers pretty much have optimizations for every game ever. Another aspect is that Nvidia and AMD have a crew of driver coders with vast experience, while Intel is building that out mostly from scratch.

The hardware here should be competitive with the 4070 and 6700 XT. There are only a few games where this really shows, but it is a feat that never happens with, for example, a 6600 XT or 3060 Ti.

What this review mostly shows is where they are in that driver and driver team build-out. Looks like at least a couple more years to go.
 
An old DX11 game where this GPU just happens to perform 33% better than its regular low FPS? Sure, it may suggest how a GPU behaves in older games, though maybe just one older game... but it's not indicative of anything current in this case, other than how to optimize a GPU for a single game.

The greatest game ever made, you mean? That just so happens to be one of the best-optimized titles ever? Cyberpunk 2077 scales just as well, it's just not the greatest game ever made.
Absolutely impeccable scaling on modern GPUs from an older title is a great test of the history of GPU advancement as well.
The 33% statement just reiterates the greatness of TW3, btw, and the failure of modern game optimization, and cements its importance as a test. It's an academic choice, not a promotional one.
Look at Cyberpunk RT if you want a modern example.
 
Some cheap low profile card would be nice, but not this.
 
The greatest game ever made, you mean? That just so happens to be one of the best-optimized titles ever? Cyberpunk 2077 scales just as well, it's just not the greatest game ever made.
Absolutely impeccable scaling on modern GPUs from an older title is a great test of the history of GPU advancement as well.
The 33% statement just reiterates the greatness of TW3, btw, and the failure of modern game optimization, and cements its importance as a test. It's an academic choice, not a promotional one.
Look at Cyberpunk RT if you want a modern example.

The A770's performance in Cyberpunk is at least closer to its average performance, only about 8-9% above its overall average. That's a more reasonable game to compare to for a current performance evaluation, as it uses DX12 and Intel is likely to optimize for new games like Nvidia and AMD do.

TW3 being an excellent and long-popular title means it got the gold optimization treatment from Intel, but for that same reason TW3 is a poor guideline for Arc performance in older games, as there are many where Intel will do no optimization at all and some where they'll do... enough to give a decent framerate.
 
Could you please pass along a message for them to continue the Luna ROC line? It looks the best.
 
- With a 400mm² die, it's obvious that the performance level was supposed to be closer to the 6800/3070, where 16 GB of RAM would have made more sense.

At this point I have to assume the 16 GB is to snag low-info folks who think more RAM at this price point means the card is better.
For reference, if anyone is interested: the RTX 4080 is 379mm². The 4060 is 159mm². The 7600 is 204mm².

The A770 is 400mm² and somehow slower than the 4060. Something is clearly wrong with either the Arc design or its drivers, or both.

Still too driver- and game-dependent, inconsistent, and it could still stand to shave 20-30 bucks off the price. But I will give Intel this - they are miles ahead of the shitshow these cards were at launch, so good on them. Fingers crossed that Battlemage actually manages to be a contender, at least for the entry to mid-range segment.

Oh, and:
“While both RTX 4070 and RX 7600 have "only" 8 GB VRAM, our benchmark results take that fact into account and the A770 with 16 GB cannot offer a meaningful advantage (at 1080p Full HD).”

I assume 4060 is what’s meant there in the Conclusion.
I'll give them credit for improvement, but ding them because, frankly, they have the resources to fix this. For Arc to still have this much trouble competing years after release is just pathetic from a company with an R&D budget 3x higher than AMD's.
 
I'll give them credit for improvement, but ding them because, frankly, they have the resources to fix this. For Arc to still have this much trouble competing years after release is just pathetic from a company with an R&D budget 3x higher than AMD's.

I wonder. Is Arc's problem:

massive bugs (shades of AMD's claim with RDNA3)
design that scales horribly
woefully inefficient

All 3? And more? The tiny preview of this for me was the NUC6 line 8 years ago. That had the first Iris Plus iGPU in the Skylake i5 NUC (currently my TV server's front end) with 384 cores + 64MB eDRAM at a 20W TDP, easily beating the older-gen Crystalwell Iris Pro with 320 cores + 128MB eDRAM using 47W. So seemingly a good design. But then there was the upmarket stablemate i7 Skylake NUC with 512 cores + 128MB eDRAM, 25% faster DDR4, and a 45W TDP. Which was only 15-20% faster. That is damn poor scaling.

I thought this apparent scaling issue would have been fixed 2 GPU architectures later but apparently not. I hope Battlemage is considerably better.
 
For reference, if anyone is interested: the RTX 4080 is 379mm². The 4060 is 159mm². The 7600 is 204mm².

The A770 is 400mm² and somehow slower than the 4060. Something is clearly wrong with either the Arc design or its drivers, or both.

I'll give them credit for improvement, but ding them because, frankly, they have the resources to fix this. For Arc to still have this much trouble competing years after release is just pathetic from a company with an R&D budget 3x higher than AMD's.

- TBF, the A770 is on 6nm, so a little more compact than the N7 process the RX 6000 dies went with, but not as dense as the N5/N4 the RX 7000 and Ada dies are on. The 7600 is N6 as well.

The 6700 XT on 7nm is 237mm². The 6900 XT's N21 die was 520mm².

So really a huge performance failure in raster workloads.
 
Let's be fair, this A770 is competing with the 3060 Ti and 4060, both of which are $280 or more. So it's not an unfair price, but yes, the other cards are a much better value at their $230 price. They are THE value cards right now.
 
Luckily I didn't get an A770 2 years ago when I was buying a ~500 EUR GPU; I ended up with an RX 6700 XT, and that seems to beat this even 2 years later.
 
Let's be fair, this A770 is competing with the 3060 Ti and 4060, both of which are $280 or more. So it's not an unfair price, but yes, the other cards are a much better value at their $230 price. They are THE value cards right now.
Why shouldn't this compete against the other A770s, though? What makes this one $50 more special than the other two?
 