
Intel Arc A730M Tested in Games, Gaming Performance Differs from Synthetic

Hi,
I think it was more that Intel was missing out on all the easy miner money than anything else.
They've already built mining accelerators for that, so that's not it.
 
They've already built mining accelerators for that, so that's not it.
Hi,
Probably not. Intel is really good at dropping chips fast - I've seen it over and over against AMD - so dropping a GPU for mining-hungry people is not out of their playbook.
But those chips are a hell of a lot smaller than a GPU.
 
There's a good chance no one will be able to buy the first generation as an AIB card. These GPUs might only be available in complete OEM systems and released in a limited number of countries.
AFAIK, these are mobile GPUs, so no AIBs, only OEMs.
 
AFAIK, these are mobile GPUs, so no AIBs, only OEMs.
Sorry, I was referring to the upcoming launch of the desktop versions that is supposed to happen this quarter. I thought the original commenter was referring to the price of the desktop boards.
 
Did you guys notice the Nvidia Gameworks technologies are 'ON' in this Metro benchmark? Hairworks and PhysX.

A good sign imho, feature parity is a win.
 
Did you guys notice the Nvidia Gameworks technologies are 'ON' in this Metro benchmark? Hairworks and PhysX.

A good sign imho, feature parity is a win.
Don't both of them work regardless of the GPU vendor? Even in the NVIDIA driver settings you can force CPU-only PhysX.
 
They have also had the i740, which was decent, and Larrabee (which sucked balls).

The i740 was terrible; it mostly existed as a showcase for AGP 1.0 and was not competitive at all. Larrabee never actually released, and the demos showed it being close to competitive while offering some interesting features such as ray tracing and real-time path tracing. You can't claim Larrabee was good or bad, though - nobody outside Intel has ever collected performance numbers.
 
That's not true - the Xe architecture has been running in laptops for a few years now, and Intel absolutely has been working with game developers to optimise drivers, create game profiles, and fix bugs. Forza is one of the engines that Xe drivers initially struggled with at the launch of Tiger Lake about two years ago.
Yes, I did know that, but if these discrete GPUs were already in the hands of gamers, then developers would already be making their titles more optimised, just as they do for the green and red teams.
 
I mean, if they're high-compute but terrible at gaming performance, that sounds ideal for a "GPU-compute" build (assuming the price drops low enough).

EDIT: 12GB GDDR6? Yeah. That sounds great for compute purposes.
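For a compute-focused build, the first sanity check is what the runtime actually reports for the card. A minimal sketch using SYCL (assuming a oneAPI/DPC++ toolchain is installed; any vendor's GPU with a SYCL backend will show up):

```cpp
// Enumerate GPUs and print the two numbers a compute build cares about:
// memory capacity and compute-unit count.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    for (const auto& dev : sycl::device::get_devices(sycl::info::device_type::gpu)) {
        std::cout << dev.get_info<sycl::info::device::name>() << "\n"
                  << "  global memory: "
                  << dev.get_info<sycl::info::device::global_mem_size>() / (1024 * 1024)
                  << " MiB\n"
                  << "  compute units: "
                  << dev.get_info<sycl::info::device::max_compute_units>() << "\n";
    }
}
```

A 12GB card should report roughly 12288 MiB here, minus whatever the driver reserves.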
 
This is positive news. I just wish there was more about desktop cards, including release dates.
 
Intel has to start somewhere, and I think hardware isn't their problem; it's drivers. I'm sure it will be a bit rough initially, which is why I don't buy first-gen products - look at the problems Alder Lake faced due to the little cores - but ultimately more competition is great news. The only real problem I have is that, yet again, Intel failed miserably on their delivery timeline. And if anyone believes their BS timeline for Raptor Lake and beyond, I have a nice bridge I can sell you in Sydney. Raptor Lake will struggle to be out by December from the latest reports, and the idea that Meteor Lake is shipping on desktop in less than 12 months is ludicrous.

If Arc had come out on time, by now they'd probably have gotten some decent sales and developed the drivers a lot more. Now it'll be facing heavily discounted Ampere and RDNA2, and going up against Lovelace and RDNA3 will cause it no end of grief. Intel don't do cheap hardware; they will have to heavily discount Arc to garner interest, and I'll bet they can't bring themselves to do it - it might be 10-15% cheaper than the established players at best.
 
Intel has to start somewhere, and I think hardware isn't their problem; it's drivers. I'm sure it will be a bit rough initially, which is why I don't buy first-gen products - look at the problems Alder Lake faced due to the little cores - but ultimately more competition is great news. The only real problem I have is that, yet again, Intel failed miserably on their delivery timeline. And if anyone believes their BS timeline for Raptor Lake and beyond, I have a nice bridge I can sell you in Sydney. Raptor Lake will struggle to be out by December from the latest reports, and the idea that Meteor Lake is shipping on desktop in less than 12 months is ludicrous.

If Arc had come out on time, by now they'd probably have gotten some decent sales and developed the drivers a lot more. Now it'll be facing heavily discounted Ampere and RDNA2, and going up against Lovelace and RDNA3 will cause it no end of grief. Intel don't do cheap hardware; they will have to heavily discount Arc to garner interest, and I'll bet they can't bring themselves to do it - it might be 10-15% cheaper than the established players at best.
But if they do discount it, it might be a decent offering. It's not as if the masses need 3090-level performance in their rigs anyway.

Drivers aren't going to be a problem, imo. At least I haven't had any issues with Intel drivers for a while now. Unless first-gen Arc is meant to be only a test run, like the 5700 XT was, which wouldn't surprise me.
 
EXACTLY as I called it here yesterday:

I suspect Intel is shaving off the edges in terms of drivers and any popular benchmark.

This generation failed. The delay is purely due to underwhelming performance. Can't be anything else.
 
Actually, after reading the article, I can't really tell if the result is good or bad. In the first place, we are looking at the A730M, which is not the flagship A770M and seems to roughly match the RTX 3060 mobile in terms of specs. The conclusion seems to imply that the A730M performs at RTX 3060 Mobile level, so it does not sound bad. I can't wrap my head around why the comparison is made with the desktop RTX 3050 and 3060 in the first place, since it is clear that desktop components have more power headroom and almost no thermal restrictions compared to a laptop GPU. Also, the faster desktop CPUs will most likely contribute to better performance results at, say, 1080p, and to a minimal extent, 1440p.
 
This is what Raja came up with after all those years of development?? :laugh::laugh::laugh:
I'm telling you, the guy is a genius... Time for another sabbatical.
 
I wonder how good those Intel products are at mining. Maybe Intel will become the new mining kingpin with them graphics :)
It wouldn't be so bad. More availability for us with other vendors :)
 
Don't both of them work regardless of the GPU vendor? Even in the NVIDIA driver settings you can force CPU-only PhysX.
Advanced PhysX is GPU PhysX
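For context on how that split works in practice: in the open-source PhysX SDK, a scene always gets a CPU dispatcher, and GPU rigid-body dynamics only engage when a CUDA context manager is attached at scene creation. A minimal sketch against the PhysX 4.x API (error handling omitted; names may differ slightly between SDK versions):

```cpp
// CPU-vs-GPU PhysX scene setup. Without a valid CUDA context the same
// scene silently runs CPU-only, which is why "advanced"/GPU PhysX is
// tied to NVIDIA hardware.
#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

PxScene* createScene(bool tryGpu) {
    PxFoundation* foundation =
        PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics* physics =
        PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2); // always required
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;

    if (tryGpu) {
        // GPU rigid bodies need a CUDA context, i.e. an NVIDIA GPU.
        PxCudaContextManagerDesc cudaDesc;
        PxCudaContextManager* cuda =
            PxCreateCudaContextManager(*foundation, cudaDesc);
        if (cuda && cuda->contextIsValid()) {
            sceneDesc.cudaContextManager = cuda;
            sceneDesc.flags |= PxSceneFlag::eENABLE_GPU_DYNAMICS;
            sceneDesc.broadPhaseType = PxBroadPhaseType::eGPU;
        }
    }
    return physics->createScene(sceneDesc);
}
```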
 
I find it highly unlikely Intel licensed this from NVIDIA. According to the PhysX SDK documentation, GPU acceleration on Windows requires CUDA, and that is something NVIDIA is never going to license out.

And yet, the bench says it is being run and they produce a number of FPS with it 'ON'.

Raja probably has a way to simulate some sort of CUDA. Makes some sense too. They don't need it at full performance/feature parity.
Besides, have you seen how close to PhysX the stuff in, say, UE5 is? Physics calculations aren't rocket science, and they can emulate things.
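To put that in perspective, the core update in any physics engine is just an integrator; the rest is collision detection and constraint solving layered on top. A toy sketch of the semi-implicit Euler step that game physics is built on:

```cpp
// Semi-implicit Euler: the bread-and-butter integration step of game
// physics engines. (Toy example; real engines add collision detection,
// constraint solvers, and sleeping/stability logic on top.)
struct Body {
    float pos[3], vel[3];
    float invMass;  // 0 for static bodies
};

void integrate(Body& b, const float force[3], float dt) {
    for (int i = 0; i < 3; ++i) {
        b.vel[i] += force[i] * b.invMass * dt;  // update velocity first...
        b.pos[i] += b.vel[i] * dt;              // ...then position with the new velocity
    }
}
```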

Note also how other technologies, notably the ones that say 'I need a tensor/RT core', are still absent.

Another option is that what we're reading below is just utter bullshit. Or maybe that 9 FPS dip is a PhysX effect :D

[attached benchmark screenshot]
 
And yet, the bench says it is being run and they produce a number of FPS with it 'ON'.

Raja probably has a way to simulate some sort of CUDA. Makes some sense too. They don't need it at full performance/feature parity.
Unfortunately it's not that simple to "simulate CUDA"; there have been attempts, but none very successful. It's a huge advantage for NVIDIA and is what built their near monopoly in certain sectors.
Intel has a different vision for those kinds of computational needs - oneAPI & co. However, they are going the other way: you write your program with oneAPI tech, and it then gets compiled to CUDA in order to run on NVIDIA GPUs. Obviously it can target CPUs and AMD and Intel GPUs/accelerators as well.
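To illustrate: with oneAPI you write the kernel once in SYCL and the compiler targets different backends (Level Zero/OpenCL for Intel, CUDA for NVIDIA, HIP for AMD). A minimal sketch, assuming a DPC++ compiler; the backend is picked by build flags and the runtime, not the source:

```cpp
// One kernel source, any supported backend. The runtime picks a device;
// the same code can be built for Intel, NVIDIA, or AMD targets.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q;  // default device selection - could be any vendor's GPU
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f);
    {
        sycl::buffer<float> bufA(a), bufB(b);
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_write);
            sycl::accessor B(bufB, h, sycl::read_only);
            h.parallel_for(sycl::range<1>(1024),
                           [=](sycl::id<1> i) { A[i] += B[i]; });
        });
    }  // buffer destructors sync results back to the host vectors
    std::cout << "a[0] = " << a[0] << "\n";  // expect 3
}
```

Targeting NVIDIA is then a compile-time switch (e.g. `-fsycl-targets=nvptx64-nvidia-cuda` with the CUDA plugin installed) rather than a source change.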
Besides, have you seen how close to PhysX the stuff in, say, UE5 is? Physics calculations aren't rocket science, and they can emulate things.
Oh yes - from what I've read, PhysX in UE5 is deprecated in favor of Unreal Chaos. Unity also supports Havok and Unity Physics alongside PhysX (and Box2D for 2D simulations). I don't think it's an important competitive advantage for NVIDIA any more; its last big move was open-sourcing the SDK. However, NVIDIA is known for very good relations with developers, so maybe Metro is using something special.
Note also how other technologies, notably the ones that say 'I need a tensor/RT core', are still absent.
Intel Arc will have tensor compute capability via its Matrix Engines, and RT acceleration of some sort (it remains to be seen whether they go with more specialized units like NVIDIA or more generic ones like AMD). But again, apart from common APIs like Direct3D/OpenGL/Vulkan, I don't think they will provide support compatible with, for example, NVIDIA OptiX.
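That's the key point for games: ray tracing through the common APIs is vendor-neutral. A sketch of how an engine checks for DXR support through plain Direct3D 12, which works identically on NVIDIA, AMD, and Arc (device creation elided):

```cpp
// Vendor-neutral ray tracing capability check via Direct3D 12.
// No OptiX or vendor extension needed - just a feature-support query.
#include <d3d12.h>

bool SupportsDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```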
Another option is that what we're reading below is just utter bullshit. Or maybe that 9 FPS dip is a PhysX effect :D
That was my thought as well; it might be the game being confused somehow, or the screenshot is faked. We'll have to wait for official reviews ;)
 
Given how wildly unpredictable the Ampere GPU lineup is, I'm shocked someone is using them as a reference.

If the TDP/chip size is right, this could be quite damning for NV in the notebook market.
 
It's good to have a triopoly and see a new competitor, but we all know these are meant to be budget GPUs at high prices, intended so Intel can sell OEM systems and laptops that are 100% Intel hardware:

CPU, GPU, chipset, network, wifi - every last controller chip and doodad made by Intel for max profits.

And soon enough it'll leak out that manufacturers using these get nice big discounts, and are penalised for selling mixed-vendor products (since, y'know, it's happened before).
 
It's good to have a triopoly and see a new competitor, but we all know these are meant to be budget GPUs at high prices, intended so Intel can sell OEM systems and laptops that are 100% Intel hardware:

CPU, GPU, chipset, network, wifi - every last controller chip and doodad made by Intel for max profits.

And soon enough it'll leak out that manufacturers using these get nice big discounts, and are penalised for selling mixed-vendor products (since, y'know, it's happened before).
I can see that happening, though I hope you're wrong. Personally, I'm just waiting for the desktop release to see how they stack up against NVIDIA and AMD. What bothers me is that there's a lot of news about laptops but not much about desktop cards, which leaves room for suspicion that your theory may be right.
 
It's good to have a triopoly and see a new competitor, but we all know these are meant to be budget GPUs at high prices, intended so Intel can sell OEM systems and laptops that are 100% Intel hardware:

CPU, GPU, chipset, network, wifi - every last controller chip and doodad made by Intel for max profits.

And soon enough it'll leak out that manufacturers using these get nice big discounts, and are penalised for selling mixed-vendor products (since, y'know, it's happened before).

Makes sense; this is, and historically has been, Intel's ticket to keep those chips rolling off the shelves, regardless of whatever else they do. Put a sticker on it, brand it, sell it.
 