
Intel Formally Announces Arc A-series Graphics

As Intel CPU + AMD GPU notebooks are non-existent, it's mainly NV that needs to worry.


They are working hard at making a competitive CPU, and given their success in GPU tech and FPGA-like hardware they might pull it off, but they won't make a dent in the serious laptop/desktop market, as the whole ecosystem is x86-64 based and compatibility is too much of an issue.
 
View attachment 241834

The 5700G has half the shaders, shares the system's DDR4, and runs at half the speed at low settings.

View attachment 241835

The 6500 XT has the exact same number of cores, memory & bus, but runs at almost 2X the core speed at high settings, and it's gimped by its PCIe link.


From what it seems, the A370M is about 30-40% slower than the 6500 XT at half the power budget. So watt for watt they seem to be about 15% behind AMD on TSMC 6 nm, with Intel on 7 nm. If their hardware scales and is priced right, with good drivers they could be here for the fight.



"The Hyper Encode workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps HEVC @ 30Mbps High Quality format with 3 applications: HandBrake, DaVinci Resolve, Cyberlink PowerDirector. The comparison for the claim is using both Alder Lake integrated graphics and Alchemist to encode in a I+I configuration versus the integrated graphics adapter alone."

"The AV1 workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps AV1 @ 30Mbps High Speed format. The comparison for the 50x claim is using the Alder Lake CPU (software) to transcode the clip on a public FFMPEG build versus Alchemist (hardware) on a proof-of-concept Intel build."
40% slower sounds like a good estimate. TPU only tests at Ultra settings so I have to use Gamers Nexus's really old review of the Witcher 3.


If this still holds true, then it's in the 1050 Ti range or around the new 6000 series APUs from AMD. I'm using TPU's review of the 1050 Ti for my estimate.

 
40% slower sounds like a good estimate. TPU only tests at Ultra settings so I have to use Gamers Nexus's really old review of the Witcher 3.

View attachment 241848
If this still holds true, then it's in the 1050 Ti range or around the new 6000 series APUs from AMD. I'm using TPU's review of the 1050 Ti for my estimate.

View attachment 241850
1080p 60 FPS gaming at high settings will be here next year with an iGPU or basic dedicated hardware. My whole (really old) machine could be replaced with a laptop and gain at least 50% more CPU performance, plus on-par GPU performance, if not more with scaling tech.
 
View attachment 241834

The 5700G has half the shaders, shares the system's DDR4, and runs at half the speed at low settings.

View attachment 241835

The 6500 XT has the exact same number of cores, memory & bus, but runs at almost 2X the core speed at high settings, and it's gimped by its PCIe link.


From what it seems, the A370M is about 30-40% slower than the 6500 XT at half the power budget. So watt for watt they seem to be about 15% behind AMD on TSMC 6 nm, with Intel on 7 nm. If their hardware scales and is priced right, with good drivers they could be here for the fight.



"The Hyper Encode workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps HEVC @ 30Mbps High Quality format with 3 applications: HandBrake, DaVinci Resolve, Cyberlink PowerDirector. The comparison for the claim is using both Alder Lake integrated graphics and Alchemist to encode in a I+I configuration versus the integrated graphics adapter alone."

"The AV1 workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps AV1 @ 30Mbps High Speed format. The comparison for the 50x claim is using the Alder Lake CPU (software) to transcode the clip on a public FFMPEG build versus Alchemist (hardware) on a proof-of-concept Intel build."
Must be RX 680M level, depending on the laptop/TDP/RAM speed.





It's the same full 96EU iGPU from Intel's slides.
 
Wait, what? That can't be right, unless I'm stupid or something (or the software is, idk).

I ran a few of my recordings through HandBrake at constant quality 30, and Turing NVENC basically threw a file twice the size of software medium at me.
Curiously, the slower-and-up presets also threw larger files than medium at me, but maybe they have higher fidelity despite the identical quality-30 setting? (idk, that would be the only logical explanation, since I didn't really inspect/watch the results any further)

It's probably your configuration; you can often use more specific frame settings, optimizations and bit rates for a given target medium, so you can maximize the size-to-quality ratio :)
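For example, here's a rough sketch of what I mean (FFmpeg-based rather than HandBrake, and the file names, bitrates and flags are just placeholders): constant-quality values aren't comparable across encoders, so if you want to compare output sizes, cap the bitrate explicitly:

```python
# Rough sketch: encode the same clip with software x264 (constant quality) and
# with NVENC HEVC at an explicit bitrate cap, then compare the resulting sizes.
# Paths, bitrates and the CRF value are placeholders, not a recommendation.
import os
import subprocess

SRC = "recording.mkv"  # hypothetical source recording

jobs = {
    "x264_crf23.mkv":  ["-c:v", "libx264", "-preset", "medium", "-crf", "23"],
    "nvenc_8mbit.mkv": ["-c:v", "hevc_nvenc", "-b:v", "8M", "-maxrate", "10M"],
}

for out_file, video_opts in jobs.items():
    subprocess.run(
        ["ffmpeg", "-y", "-i", SRC, *video_opts, "-c:a", "copy", out_file],
        check=True,
    )
    print(out_file, round(os.path.getsize(out_file) / 1e6, 1), "MB")
```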

"The Hyper Encode workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps HEVC @ 30Mbps High Quality format with 3 applications: HandBrake, DaVinci Resolve, Cyberlink PowerDirector. The comparison for the claim is using both Alder Lake integrated graphics and Alchemist to encode in a I+I configuration versus the integrated graphics adapter alone."

"The AV1 workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps AV1 @ 30Mbps High Speed format. The comparison for the 50x claim is using the Alder Lake CPU (software) to transcode the clip on a public FFMPEG build versus Alchemist (hardware) on a proof-of-concept Intel build."

Personally I am less concerned with the 50x marketing claim and more concerned with its ability to record high-resolution AV1 in real time without bogging down my processor :D

It's not like the Gen 6 NVENC in Turing/Ampere cards is 50 times faster than a modern, competent processor either... those claims do give me some eerie vibes from the earliest days of GPU-accelerated video coding. Remember Badaboom? I feel old. :eek:
 
Real-time AV1 encoding is a huge deal! This format beats the pants off HEVC/H.265 and is not bound by the same restrictive licensing that led that format to be mostly ignored.

NVENC's advantage over AMD's competing Video Core Next is pretty much only in AVC/H.264 encoding. The sole reason this is relevant is that, due to royalties charged by the HEVC patent holders, video streaming services refused to adopt the format and kept using the old AVC format, whose patents have already expired, or adopted the VP9 codec.

At 5 Mbps (Twitch's maximum bandwidth), 1080p30 should encode nearly transparently in AV1; streams will have practically Blu-ray quality with this format.
Not quite as you describe it but hey...

Even if so, I can't wait to see my favorite e-thot's mascara mistakes in glorious 4K at 10Mbps! Gotta preserve bandwidth, hence preserve data plan, hence save more money to donate as simping!
 
I think these mobile GPUs are more intended to at least put up a fight against Apple M1 laptops than to actually compete with AMD and Nvidia in the gaming space.
Somewhere at the beginning of the presentation they show some NLE timeline and then Adobe Photoshop, I think; this indicates Intel could've beefed up the media engine to decode files like 4K 120 fps and 8K 30/60 fps... and added better support for Adobe.
Right now Windows laptops are a joke for photo/video content creation compared to Apple: they consume a lot of power and get beaten by super thin and light Apple laptops. Even full-tower computers with an RTX 3090 and a 5950X can't match Apple M1 timeline playback in video editing programs; it's getting ridiculous.
 
I think these mobile GPUs are more intended to at least put up a fight against Apple M1 laptops than to actually compete with AMD and Nvidia in the gaming space.
Somewhere at the beginning of the presentation they show some NLE timeline and then Adobe Photoshop, I think; this indicates Intel could've beefed up the media engine to decode files like 4K 120 fps and 8K 30/60 fps... and added better support for Adobe.
Right now Windows laptops are a joke for photo/video content creation compared to Apple: they consume a lot of power and get beaten by super thin and light Apple laptops. Even full-tower computers with an RTX 3090 and a 5950X can't match Apple M1 timeline playback in video editing programs; it's getting ridiculous.
None of that changes with this architecture - outside of AV1 support it's pretty standard. It doesn't have anything even moderately resembling Apple's ProRes support, nor their level of software/hardware optimization for these tasks. I would love to see Windows platforms improve in this respect, but this ain't it.
 
None of that changes with this architecture - outside of AV1 support it's pretty standard. It doesn't have anything even moderately resembling Apple's ProRes support, nor their level of software/hardware optimization for these tasks. I would love to see Windows platforms improve in this respect, but this ain't it.
Do you have some inside information? How do you know?
Apple ProRes is not hard at all to decode. The Apple M1 shines with long-GOP codecs and files like 8K 30 fps from the Canon R5, Sony a7S III 4K 120 fps, and GH6 5.7K; they are perfectly smooth on an Apple M1 laptop, while a 5950X and an RTX 3090 are not smooth at all.
If this is not addressed ASAP, then Intel and Nvidia can kiss that crowd goodbye. A 10-20 W laptop beats a 500 W or more full-tower computer; one does it by brute force and the other does it with dedicated hardware.
 
Wow, they shrunk Iris down again and added EUs and features. +30% over their IGP. Surprising, indeed. This is 'Arc'? Appearances can be deceiving...

I think these mobile GPUs are more intended to at least put up a fight against Apple M1 laptops than to actually compete with AMD and Nvidia in the gaming space.
Somewhere at the beginning of the presentation they show some NLE timeline and then Adobe Photoshop, I think; this indicates Intel could've beefed up the media engine to decode files like 4K 120 fps and 8K 30/60 fps... and added better support for Adobe.
Right now Windows laptops are a joke for photo/video content creation compared to Apple: they consume a lot of power and get beaten by super thin and light Apple laptops. Even full-tower computers with an RTX 3090 and a 5950X can't match Apple M1 timeline playback in video editing programs; it's getting ridiculous.

You might be on to something; the gaming performance here is not going to turn heads at all.
 
Do you have some inside information? How do you know?
Apple ProRes is not hard at all to decode. The Apple M1 shines with long-GOP codecs and files like 8K 30 fps from the Canon R5, Sony a7S III 4K 120 fps, and GH6 5.7K; they are perfectly smooth on an Apple M1 laptop, while a 5950X and an RTX 3090 are not smooth at all.
If this is not addressed ASAP, then Intel and Nvidia can kiss that crowd goodbye. A 10-20 W laptop beats a 500 W or more full-tower computer; one does it by brute force and the other does it with dedicated hardware.
... it doesn't require inside information, it just requires noticing that Intel isn't advertising any form of hardware ProRes decoding. If they had it, they would be advertising it. I'm well aware that it's quite easy to decode - it's nowhere near as heavily compressed as H.264, H.265 or AV1, after all - but most likely Apple just won't give licences to build hardware decoders to its major competitors. That's my guess, at least.

As for how Apple handles those other codecs, that's that hardware/software optimization I'm talking about. They've got several decades worth of experience in optimizing for media playback, encoding, and editing. This has been a core focus for Apple since the 1980s. It's hardly surprising that they vastly outperform competitors that have never shown much of a long-term interest in doing this, and that also lack the full stack integration of Apple. I wouldn't be surprised at all to learn that Apple's decode blocks also accelerate a lot of non-prores codecs that aren't advertised - they ultimately don't need to do so specifically, as long as it works well and their target audience knows it. But of course they've also got some serious software/OS chops here - they managed to make Final Cut vastly outperform Premiere and other competitors on Intel Macs after all.

I completely agree that both Intel and AMD need to step up their hardware acceleration game, but they've got a significant hurdle there: their target markets are much more diverse, which makes it all the harder to justify the die area required by large hardware accelerator arrays that only serve a relatively small niche of that market. Apple of course sells to a lot of people beyond media professionals, but those are their core audience, and they really don't care about anyone else when it comes to Macs - which can be seen in a lot of their hardware choices. They would be fine if everyone else just had an iPhone, and left the Mac to the professionals.
Wow, they shrunk Iris down again and added EUs and features. +30% over their IGP. Surprising, indeed. This is 'Arc'? Appearances can be deceiving...



You might be on to something; the gaming performance here is not going to turn heads at all.
I don't disagree - and the first DG1 implementations were marketed only towards media production, after all - but remember that these are low-end comparisons, in a market where iGPUs are much more powerful than 1-2 years ago. Most likely they expected a bigger difference when these designs were first made, but even still, the larger Arc GPUs are likely to go far past this level (unless they've really messed up the design).
 
They are working hard at making a CPU to be competitive, and given they success in GPU tech and FPGA like hardware they might pull it off, but they won't make a dent in the serious laptop/desktop market as the whole system is X86-64 based and the compatibility is too much of an issue.
Tens of millions of bazingas like the MX series are being sold, and if you check the price difference, it's about 100 bucks for that "faux discrete GPU" alone.

At least that part of the market would be wiped out. (AMD APUs have been beating that for quite some time already; now Intel will too.)

The "serious laptop market" if you mean laptops wielding 6800XT/3080 like GPUs, is just a small fraction of the market, most is on crap like MX and 1650. All that is now threatened by Intel. (and, mind you, nice discount if CPU+GPU bundle is used, is a given)
 
Honestly, Intel's performance comparison here is kind of funny. On the one hand you have an i7-1280P, a 28W 6P+8E 96EU Xe CPU, and on the other hand you have a 12700H, a 45W 6P+8E 96EU (disabled/inactive in this testing, I assume) CPU alongside a 128EU GPU with an undisclosed power budget. And the latter outperforms the former by ... 25-33%? Yeah, that's not something I'd be shouting from the rooftops either. Couldn't they at least have compared them using the same CPU?
 
Honestly, Intel's performance comparison here is kind of funny. On the one hand you have an i7-1280P, a 28W 6P+8E 96EU Xe CPU, and on the other hand you have a 12700H, a 45W 6P+8E 96EU (disabled/inactive in this testing, I assume) CPU alongside a 128EU GPU with an undisclosed power budget. And the latter outperforms the former by ... 25-33%? Yeah, that's not something I'd be shouting from the rooftops either. Couldn't they at least have compared them using the same CPU?
I've noticed that too. Maybe it's meant to confuse the people looking at it. It's also a good advertisement for the new Intel CPU, like 'this one runs faster with the new Intel CPU'.
 
So, a lot of numbers: like TW3 at Medium settings is 68 FPS, which is faster than no number at all from Iris, at an unknown TDP, an unknown frequency, with unknown memory, and an unknown cooling system.

A lot of fluff in them there slides. Good thing it's e-fluff and not real; they would have a mess on their hands trying to dispose of that much physical fluff.

"The Elden Ring was CAPTURED on a series 7 GPU," not rendered, merely captured.

Hmm, are you suggesting they used a Thunderbolt eGPU to render the game, or what?

AV1 encoding, but HDMI 2.0b.

XeSS, but PCIe 4.0 in 2H 2022.

Price better be incredible.

Alder Lake mobile (or Ryzen, for that matter) doesn't have PCIe 5.0, and it's also not necessary. I'd think the desktop cards (at least the higher-end 5 and 7 series) will have PCIe 5.0, but it's not like it will make performance any different (desktop Alder Lake including it was pretty much marketing when there are no devices that use it).

HDMI 2.0 is disappointing (at least they were honest instead of slapping on the now-allowed 2.1 sticker), but they can offer 2.1 ports via converters from the DisplayPort connectors. I also question whether this is just for the laptop market and things will again be different on the desktop cards (in laptops they need to deal with the connection to/from the iGPU and/or a mux, which might be the limiting factor).
 
I've noticed that too. Maybe it's meant to confuse the people looking at it. It's also a good advertisement for the new Intel CPU, like 'this one runs faster with the new Intel CPU'.
But the i7-1280P is just as new as the 12700H (actually newer, the U and P series released later than the H series). It just doesn't add up beyond these GPUs just not performing very well.
Alder Lake mobile (or Ryzen, for that matter) doesn't have PCIe 5.0, and it's also not necessary. I'd think the desktop cards (at least the higher-end 5 and 7 series) will have PCIe 5.0, but it's not like it will make performance any different (desktop Alder Lake including it was pretty much marketing when there are no devices that use it).
Yeah, PCIe 5.0 would have been an utter waste. Consumer GPUs don't meaningfully saturate PCIe 3.0 x16 yet, and 4.0 x16 is plenty still. Going to 5.0 just ramps up power consumption (not by much, but still something) and increases board complexity for no good reason.
 
... it doesn't require inside information, it just requires noticing that Intel isn't advertising any form of hardware ProRes decoding. If they had it, they would be advertising it. I'm well aware that it's quite easy to decode - it's nowhere near as heavily compressed as H.264, H.265 or AV1, after all - but most likely Apple just won't give licences to build hardware decoders to its major competitors. That's my guess, at least.
ProRes is easy to decode and important to high-end productions for the flexibility; I don't think those people edit on laptops.
If someone shoots ProRes they are going to color grade, and once you start to heavily color grade your footage, an RTX 3090 makes sense.
As for how Apple handles those other codecs, that's that hardware/software optimization I'm talking about. They've got several decades worth of experience in optimizing for media playback, encoding, and editing. This has been a core focus for Apple since the 1980s. It's hardly surprising that they vastly outperform competitors that have never shown much of a long-term interest in doing this, and that also lack the full stack integration of Apple. I wouldn't be surprised at all to learn that Apple's decode blocks also accelerate a lot of non-prores codecs that aren't advertised - they ultimately don't need to do so specifically, as long as it works well and their target audience knows it. But of course they've also got some serious software/OS chops here - they managed to make Final Cut vastly outperform Premiere and other competitors on Intel Macs after all.
You talk like this is something very complicated that can't be done, but it's very simple: they just need the hardware to decode those files so they play in real time. You can find that hardware even in cheap Android phones, and even in the cameras that actually shoot that footage at 8K; they have dedicated silicon so you can play the file back in-camera in real time.
They didn't do this until now because it takes dedicated silicon space for just that; they'd rather brute-force it at 300 W than reserve silicon space for a dedicated media engine that can decode at 5-10 W.
Also, as an editor, I just need that file to play in real time at full resolution when I edit it; I don't need it to play at 10x.
 
ProRes is easy to decode and important to high-end productions for the flexibility; I don't think those people edit on laptops.
If someone shoots ProRes they are going to color grade, and once you start to heavily color grade your footage, an RTX 3090 makes sense.
You seem unaware that quite a few entry-level cameras and low-end recorders like Atomos's products record in ProRes. It's not the most common format, but it's still really common, and certainly not only among high-end productions.
You talk like this is something very complicated that can't be done, but it's very simple: they just need the hardware to decode those files so they play in real time. You can find that hardware even in cheap Android phones, and even in the cameras that actually shoot that footage at 8K; they have dedicated silicon so you can play the file back in-camera in real time.
They didn't do this until now because it takes dedicated silicon space for just that; they'd rather brute-force it at 300 W than reserve silicon space for a dedicated media engine that can decode at 5-10 W.
Also, as an editor, I just need that file to play in real time at full resolution when I edit it; I don't need it to play at 10x.
I never said it was complicated, I said that Apple has this down pat because they've worked concertedly towards it for decades while their competitors haven't. I'm quite aware of the ubiquitous nature of hardware video encoders and decoders as well - I don't live under a rock. But that doesn't change any of what I've said - and, for the record, Apple are AFAIK the only computer chipmaker with ProRes hardware accelerations (cameras, recorders etc. are another thing entirely). Also, what you're saying isn't quite accurate: the improved timeline smoothness on Apple devices illustrates precisely that achieving real-time playback by itself isn't necessarily enough. You also need the surrounding software to be responsive, you need to be able to fetch the right data quickly, to handle interrupts and jumping around a file smoothly, and you need the OS to handle the IO and threads in a way that's conducive to this being smooth. Heck, Apple laptops consistently outperformed Intel laptops with the exact same CPU and equally fast storage and GPUs (or faster GPUs) in these workloads, and that certainly wasn't limited to ProRes, nor to QuickSync-accelerated codecs.
 
I never said it was complicated, I said that Apple has this down pat because they've worked concertedly towards it for decades while their competitors haven't. I'm quite aware of the ubiquitous nature of hardware video encoders and decoders as well - I don't live under a rock. But that doesn't change any of what I've said - and, for the record, Apple are AFAIK the only computer chipmaker with ProRes hardware accelerations (cameras, recorders etc. are another thing entirely). Also, what you're saying isn't quite accurate: the improved timeline smoothness on Apple devices illustrates precisely that achieving real-time playback by itself isn't necessarily enough. You also need the surrounding software to be responsive, you need to be able to fetch the right data quickly, to handle interrupts and jumping around a file smoothly, and you need the OS to handle the IO and threads in a way that's conducive to this being smooth. Heck, Apple laptops consistently outperformed Intel laptops with the exact same CPU and equally fast storage and GPUs (or faster GPUs) in these workloads, and that certainly wasn't limited to ProRes, nor to QuickSync-accelerated codecs.

That'd be because it was developed by and is maintained by Apple. Their computers and software really are very much designed for each other, at the complete expense of everything else. It certainly is a peculiar, weird format. Apple's documentation claims that the full-quality ProRes (4444 XQ) was designed for a ~500 Mbps bit rate assuming 1080p at NTSC frame rate (29.97 fps). I get that the quality must be downright insane, but... it's such an unwieldy format; it's really intended for use during the mastering and development stage, definitely not for consumption. At 4K IVTC film standard (23.976 fps), 4444 XQ has a data rate of around 1.7 Gbit/s, totaling ~764 GB per hour of footage. That's insane, and imo the biggest bottleneck on managing a format like ProRes is storage performance.
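A quick back-of-the-envelope check on that per-hour figure, just restating the arithmetic:

```python
# ~1.7 Gbit/s of ProRes 4444 XQ, accumulated over an hour of footage
bitrate_gbit_s = 1.7
seconds_per_hour = 3600
gigabytes_per_hour = bitrate_gbit_s * seconds_per_hour / 8  # 8 bits per byte
print(round(gigabytes_per_hour))  # ~765 GB, in line with the ~764 GB figure above
```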

Vegas Pro, which was under Sony Creative Software for a very long time before it was acquired by Magix, also supports(ed? out of the loop here) a whole host of Sony-specific codecs used by their high-end cinema cams and many film industry standards, so I would guess that it's just a thing of the trade.
 

Valantar

Did you ever shoot ProRes? You get huge files for a few seconds of footage; it's insane.
The latest Panasonic GH6 has internal ProRes: 5.7K 25p 422 at 1.6 Gbps.
What I want to say is that most people don't care about ProRes hardware decoding; for the majority of YouTubers, event and corporate videographers, it's all H.264 and H.265, maybe 10-bit for a bit more data.
 
I'm quite happy with a modest, low-cost GPU upgrade available in laptops. The features seem spot on.

No doubt beefier GPUs are coming to the discrete desktop space. Couldn't care less if Intel competes in the very high end, as long as it's competitive in whatever performance brackets those products end up in.
 

Valantar

Did you ever shoot ProRes? You get huge files for a few seconds of footage; it's insane.
Yep, quite familiar with that. The SSDs my partner uses for her Atomos recorder definitely get to stretch their legs. The file size is definitely a negative, but it's not that problematic.
The latest Panasonic GH6 has internal ProRes: 5.7K 25p 422 at 1.6 Gbps.
And yet that's a relatively affordable, compact camera, of a brand, type and class widely used by all kinds of video producers.
What I want to say is that most people don't care about ProRes hardware decoding; for the majority of YouTubers, event and corporate videographers, it's all H.264 and H.265, maybe 10-bit for a bit more data.
But now you're adding all kinds of weird caveats that don't apply to my initial statement that you are arguing against. I never specified that this applied to "most people", nor did I say anything about the preferences of various content producers. None of this changes the fact that even on identical hardware, Apple managed to make their systems and software (but also third party software to some extent) outperform Windows systems, which speaks to the importance of system, OS and software design on top of hardware encode/decode performance.

And, for the record, I think you're really underestimating the ubiquity of ProRes. Is it mainly for professionals? Absolutely. Is it also used by pretty much anyone with access to it who wants to do color grading or other advanced editing, or want to preserve the full dynamic range of their shots? Again, yes. HEVC or other compressed codecs, even when recording in some kind of log color format, lose way too much information for most videographers I've met who have some kind of artistic ambition. They could no doubt mostly do what they do 95-99% as well without ProRes, but they still use ProRes. It being a widely accepted industry standard is also a huge draw in this regard.
 
But the i7-1280P is just as new as the 12700H (actually newer, the U and P series released later than the H series). It just doesn't add up beyond these GPUs just not performing very well.
But the H series is the higher-end model in the mobile stack, so from that perspective it makes some sort of sense, I think.
 