Thursday, August 11th 2022

Intel Arc A750 Trades Blows with GeForce RTX 3060 in 50 Games

Intel earlier this week released its own performance numbers for as many as 50 benchmarks spanning the DirectX 12 and Vulkan APIs. From our testing, the Arc A380 performs below par against its rivals in games based on the DirectX 11 API. Intel tested the A750 at 1080p and 1440p, and compared the numbers against the NVIDIA GeForce RTX 3060. Broadly, the testing shows the A750 to be about 3% faster than the RTX 3060 in DirectX 12 titles at 1080p, about 5% faster at 1440p, about 4% faster in Vulkan titles at 1080p, and about 5% faster at 1440p.

All testing was done without ray tracing, and performance enhancements such as XeSS or DLSS weren't used. The small set of six Vulkan titles shows a more consistent lead for the A750 over the RTX 3060, whereas the DirectX 12 titles see the two trade blows, with results varying widely among game engines. In "Dolmen," for example, the RTX 3060 scores 347 FPS compared to the Arc's 263. In "Resident Evil VIII," the Arc scores 160 FPS compared to the GeForce's 133 FPS. Such variations among the titles pull the average up in favor of the Intel card. Intel stated that the A750 is on course to launch "later this year," without being any more specific than that. The individual test results can be seen below.
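Intel doesn't say how the per-game results were rolled up into those API-level averages; a common approach for this kind of summary is a geometric mean of per-game FPS ratios. A minimal sketch of that method, assuming it applies here, using the two FPS pairs quoted above (the remaining entries are placeholders, not Intel's figures):

```python
# Sketch: per-API average as the geometric mean of per-game FPS ratios.
# Intel has not confirmed this exact method; the placeholder entries below
# stand in for the other titles, so this toy input won't reproduce Intel's averages.
from math import prod

# (A750 FPS, RTX 3060 FPS) pairs; the first two are the examples from the article.
results = {
    "Dolmen": (263, 347),
    "Resident Evil VIII": (160, 133),
    # "Another title": (a750_fps, rtx3060_fps), ...
}

ratios = [a750 / rtx3060 for a750, rtx3060 in results.values()]
geomean = prod(ratios) ** (1 / len(ratios))
print(f"A750 vs. RTX 3060: {(geomean - 1) * 100:+.1f}%")
```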
The testing notes and configuration follow.

Source: Intel Graphics

85 Comments on Intel Arc A750 Trades Blows with GeForce RTX 3060 in 50 Games

#51
Dristun
john_They were dumb. That's why they ordered a huge amount of wafers from Samsung, produced huge amounts of RTX 3000 cards, and are now discounting them like there is no tomorrow just to manage to sell them. They were dumb enough to order a huge amount of 5 nm wafers from TSMC that they now wish to postpone receiving by six months past the agreed date, dumb enough to have to announce about 1.4 billion less income and to suffer a huge drop in profit margin from about 65% to about 45%. That's even lower than Intel's profit margin in Intel's current situation, and lower than AMD's profit margin, considering AMD is the little guy among the three.

Now, if you think that a 12 GB card on a much more expensive process like TSMC's 5 nm, with performance close to a 3090 Ti, will start selling at $650, I wish with all my heart that you end up correct. But if, in their current position after all those price reductions, they are at a 45% profit margin, selling a card with those characteristics at $650 would be suicidal. Not to mention the price reductions on unsold RTX 3000 cards we would have to witness. Can you imagine an RTX 3090 selling for $550? An RTX 3070 for $300? I doubt we are in a GTX 970 era. Not to mention inflation.
And it doesn't matter how much money they have in their bank accounts. For companies of this size trying to be on top in new markets like AI, with competition from huge companies like Intel, Amazon, Google, Alibaba, etc., the amount of money they have will never be enough.
Well, mainly, I don't think that nvidia can afford to drop their market share at a time when growth is uncertain - it's their leverage for the near future. If we believe that 1) AMD is competitive and they actually care and 2) they are in a better position margin-wise => Jensen's pricing power should be limited by market rivals. Yeah, sure, they dropped all the way back to a number they haven't seen for more than a decade (you can't slow down CapEx immediately over one quarter like you cut prices!) but they're also in new territory where AMD can undercut at the high end (even if it's by $50) and Intel is about to release (admittedly bad) mainstream products that they promised to price aggressively.
And yes, I can imagine 3090s selling for $550. I mean, there are already used ones popping up at $850-900. Besides, even Rolex prices are going back down this year, if we try to draw parallels with premium products, which these GPUs are. So yeah, I rest my case, I guess we'll see in a few months who's going to be right.
Posted on Reply
#52
efikkan
john_Why do people have to add this after posting their opinion?
I posted nothing but facts; you posted incorrect statements. At the end of my correction I concluded that your statement was nonsense, which is not an insult. If you can't defend your statements, then you shouldn't make them. Grow up.
Here are your statements for the record:
... a 10+ core CPU and modern GPUs are huge carpets to hide underneath any performance problem.
Also an unoptimised game will sell more CPUs and GPUs than an optimized one, meaning not only you can market it faster, you can also get nice sponsor money from Nvidia, AMD and Intel, by partially optimizing for their architecture instead for everyones.
john_Today 4 cores, even with Hyper-Threading, are not enough. 12 threads are considered the minimum. That will become 16 threads in a year or two, and we might move to 24 or 32 threads as a minimum in 5-10 years to avoid having performance problems. How do you explain that? That today we play with 12 or 16 threads to get the best performance? If this post were 5 years old, your post would have been the same, with the only difference being me talking about 6+ core CPUs. And your arguments would be the same.
Nice attempt at a straw man argument there. :rolleyes:
The literature on parallelization has been known since the '60s, and the limits of scaling are described by Amdahl's law. This is basic knowledge in CS studies, so don't attempt to approach this subject before understanding it. Assuming you're limited to the scope of gaming here, game simulation (the "game loop") and rendering are both pipelined workloads, which means you have to apply Amdahl's law to each step of the pipeline, and you need to resync all the worker threads before continuing. Combine this with fixed deadlines for each workload (e.g. a 100 Hz tick rate gives 10 ms for game simulation, a 120 Hz framerate gives 8.3 ms for rendering), and there is little wiggle room for using large quantities of threads for small tasks. Each synchronization increases the risk of delays, either from the CPU side (SMT) or from the OS scheduler. Each delay at a synchronization point piles up, and if the accumulated delay is large enough, it causes stutter or potentially game-breaking bugs. So in conclusion, there are limits to how many threads a game engine can make use of.
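To put rough numbers on that, here is a minimal sketch of Amdahl's law applied against a per-frame budget, with a per-barrier synchronization cost that grows with thread count. All figures are illustrative assumptions, not measurements from any engine:

```python
# Sketch: Amdahl's law plus a growing synchronization cost against a frame deadline.
# All numbers are hypothetical, chosen only to illustrate the shape of the curve.

def amdahl_speedup(parallel_fraction: float, threads: int) -> float:
    """Ideal speedup when only a fraction of the work can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / threads)

frame_budget_ms = 8.3       # 120 Hz rendering deadline
single_thread_ms = 20.0     # hypothetical single-threaded cost of one frame
parallel_fraction = 0.85    # hypothetical share of the frame that parallelizes
sync_cost_ms = 0.1          # hypothetical cost per worker-thread sync barrier

for threads in (4, 8, 16, 32):
    ideal_ms = single_thread_ms / amdahl_speedup(parallel_fraction, threads)
    total_ms = ideal_ms + sync_cost_ms * threads  # sync overhead scales with thread count
    verdict = "meets" if total_ms <= frame_budget_ms else "misses"
    print(f"{threads:2d} threads: {total_ms:5.2f} ms ({verdict} the {frame_budget_ms} ms budget)")
```

The serial fraction caps the gains from each doubling of threads while the synchronization term keeps growing, which is why throwing ever more threads at a fixed per-frame deadline eventually stops paying off.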

And if you think what I'm posting is opinions, then you're mistaken. What I'm doing here is citing facts and making logical deductions; these are essential skills for engineers.
AretakConsidering that it's likely to be a disaster in DirectX 9/10/11 and OpenGL titles, they can't price this thing at more than $200. At that price it might be worth taking a risk on, even if I expect Intel to abandon it in terms of driver support pretty quickly, because Linux and Mesa make that somewhat irrelevant. Any more than $200 and there's zero reason to consider it over an AMD or Nvidia alternative.
I think even $200 might be a little too much, considering AMD's and Nvidia's next gen will arrive soon. If the extra features in the control panel, like the various types of sync, OC, etc., are still not stable, I would say much less. I think it might soon be time for a poll on what people would be willing to pay for it in its current state; for me it would probably be $120/$150 for the A750/A770, and I would probably not put it in my primary machine. But I want to see a deeper dive into framerate consistency on the A750/A770.
Posted on Reply
#53
Draxivier
Maybe I'm dumb, but please, can you check the numbers in the comparison tables and the graphs?

On CoD Vanguard at 1440p, the A750 gets 75 fps and the RTX 3060 gets 107 fps.
But in the graph made by Intel it says that they perform EXACTLY the same.
¿¿¿???
Posted on Reply
#54
Bomby569
DraxivierMaybe I'm dumb, but please, can you check the numbers in the comparison tables and the graphs?

On CoD Vanguard at 1440p, the A750 gets 75 fps and the RTX 3060 gets 107 fps.
But in the graph made by Intel it says that they perform EXACTLY the same.
¿¿¿???
That will be corrected later, like everything in this "launch"
Posted on Reply
#55
efikkan
DraxivierMaybe I'm dumb, but please, can you check the numbers in the comparison tables and the graphs?

On CoD Vanguard at 1440p, the A750 gets 75 fps and the RTX 3060 gets 107 fps.
But in the graph made by Intel it says that they perform EXACTLY the same.
¿¿¿???
Good spot.
Has anyone checked some of the rest?
It could be a mistake in either the table or the graph, but chances are the table is the correct one.
Posted on Reply
#56
Bomby569
efikkanGood spot.
Has anyone checked some of the rest?
It could be a mistake in either the table or the graph, but chances are the table is the correct one.
A small family company doesn't have the people to check six graphs for errors. Give them a break, people.
Posted on Reply
#57
64K
RedelZaVednoI'm buying ARC even if it's total crap, just to keep Intel's dGPU division alive. God knows we need a big third player on the dGPU market soooo badly.
Lisa & Huang must be kept in check or their profit margins will keep going up and up and up...
The flip side of that coin is that, if it is a turd, you will be rewarding Intel when they don't deserve it.
Posted on Reply
#58
ThomasK
All I hope is that Intel can pull off the drivers side of the business. We desperately need a third player.
Posted on Reply
#59
ModEl4
Chrispy_Maybe I'm missing something? Every site and streamer I read/follow (TPU, G3D, KG, GN, HUB, hell - even LTT) was massively disappointed by the A380 because it failed to live up to claims. I vaguely remember Intel saying that the A380 was originally supposed to be about par with a GTX 1060. It's so late to market that Pascal was the relevant competition!

It turns out that yes, the A380 does actually match a 1060 (or fall somewhere between a 6400 and 6500 XT), but with the caveats: only with ReBAR enabled, only in a modern motherboard with a PCIe 4.0 slot, only in games that support ReBAR, only in DX12 or modern Vulkan titles, and only if the drivers actually work at all, which is not a given under any circumstances. That's on top of the fact that Intel clearly optimised for misleading synthetic benchmarks, as the 3DMark score is way out of line with the performance it demonstrates in any game titles.

The media being unanimously disappointed with Arc is not because of unrealistic expectations. Those expectations were set by Intel themselves, and the fact that post-launch (in China) Intel adjusted their claims to be more in line with how it actually performs (including the whole laundry list of non-trivial caveats) is kind of just confirmation of that disappointment. It's even worse than it seems, too - because the target market for an A380 buyer isn't a brand-new machine playing modern AAA DX12/Vulkan titles. It's going to be someone looking to cheaply upgrade an old PCIe 3.0 board and likely playing older games, because the A380 isn't really good enough to give a great experience in the ReBAR-optimised AAA DX12 titles that Intel actually have acceptable performance in.

Let's see if the results from independent reviewers match these official Intel graphs for the games shown when we actually have an official Arc A750 launch....
I only replied to the following you said:
«After discovering that the previous slides of Intel's GPU performance bore little resemblance to the real game performance, how is anyone expected to trust these Intel slides?
Intel, you were caught lying, and you haven't addressed that yet»

Please send me the link to the previous Intel slides/statements that prove Intel was lying about gaming performance, because I'm not aware of them.

I said way, way early, when the rumors were that ARC-512 was between the 3070 and 3070 Ti in performance, that it would be around or below the 3060 Ti, and that if the A380 matched the RX 570 it would be a miracle. I can't answer for the expectations of others.
I'm not aware of Intel setting these expectations, only leakers (like I said in the past, they probably correlated the 3DMark scores that were leaked to them with actual gaming performance).
If you find any official Intel statement supporting these performance claims, please send it to me.
Sure, the hardware should be capable of near-3070 performance in some new DX12/Vulkan games well optimized for Intel's architecture (at some resolutions), and from what I can tell even Intel themselves thought the performance delta between synthetics and games would be lower (so the S/W team underdelivered), but did they actually, officially, in a slide shown to the public, claim much higher performance than the one they claim here?
Edit:
I have a feeling that the original internal Intel performance forecast a year ago was around 3060 Ti performance or slightly above for ARC-512, and around 13% below the GTX 1650 Super for ARC-128, so the software team underdelivered by around 10-12%, imo.
Posted on Reply
#60
Dragokar
You probably have to do the right rain dance, use the right software stack, the proper Windows update version, and of course the right game version to match Intel's claims. Oh, I forgot that you are forced to use a ReBAR system with the newest shiny UEFI so that your OpROM can talk to it.

Don't get me wrong, I would love one more player, but Intel did all the stuff that Raja did when he was at AMD: they pushed the hype train uphill and forgot to build rails downhill. That's typical PR vs. engineering. We will see if they make it to Battlemage or not.
Posted on Reply
#61
john_
DristunWell, mainly, I don't think that nvidia can afford to drop their market share at a time when growth is uncertain - it's their leverage for the near future. If we believe that 1) AMD is competitive and they actually care and 2) they are in a better position margin-wise => Jensen's pricing power should be limited by market rivals. Yeah, sure, they dropped all the way back to a number they haven't seen for more than a decade (you can't slow down CapEx immediately over one quarter like you cut prices!) but they're also in new territory where AMD can undercut at the high end (even if it's by $50) and Intel is about to release (admittedly bad) mainstream products that they promised to price aggressively.
And yes, I can imagine 3090s selling for $550. I mean, there are already used ones popping up at $850-900. Besides, even Rolex prices are going back down this year, if we try to draw parallels with premium products, which these GPUs are. So yeah, I rest my case, I guess we'll see in a few months who's going to be right.
Nvidia mostly needs to expand to other markets. They have been doing it for years, by transforming the dumb graphics chip into a compute monster. They invested heavily in AI and machine learning, they spent billions on Mellanox, and they tried to become the next Intel by buying ARM, so they could dictate which direction the ARM architecture would prioritize - probably focusing on servers, desktops, and laptops instead of mobiles. They failed, so they will have to work with what ARM offers them and what they can build on top of it, like Apple and others are doing. But to do so - to expand to other markets and be the protagonist there, not just the third or fourth option - they need money. They need higher profit margins than those they are "suffering" this quarter. So, selling cheap, even if that would expand their market share, is probably NOT an option. Selling ultra-fast cards cheaply means low profit margins AND that those buying those cards won't be customers again for the next 1-2-3-5 years. Selling at a premium means higher profit margins, another chance to get rid of RTX 3000 and RTX 2000 stock at prices that will not be below cost, and people without unlimited money buying cheaper models, meaning they will be customers again sooner.
Now, selling expensive (let's say a 3090 Ti at $700, a 4070 for $800, a 4080 for $1,200, and a 4090 for $2,000) will open up opportunities for the competition, right? Well, what can the competition do?
Intel. Absolutely NOTHING.
AMD. Follow Nvidia's pricing. Why? Because they don't have unlimited capacity at TSMC, and whatever capacity they have, they prioritize first for EPYC and Instinct, then for Sony and Microsoft console APUs, then CPUs and GPUs for big OEMs (with mobile probably coming before desktop), then retail Ryzen CPUs, and lastly retail GPUs. That's why AMD had such nice financial results: because they are the smaller player, with the least capacity, selling almost EVERYTHING they build to corporations and OEMs, not retail customers. We just get whatever is left. So, can AMD start selling much cheaper than Nvidia? No. Why? Because of capacity limitations. Let's say the RX 7600 is as fast as an RTX 3090 Ti and is priced at $500. A week after its debut, demand will be so high that the card will become unavailable everywhere. Its retail price will start climbing and it will come to cost as much as the RTX 3090 Ti. AMD gets nothing from that jump from $500 to $700; retailers get all of that difference. We've already seen it. The RX 6900 XT, RX 6800 XT, and RX 6800 came out with MSRPs that were more or less logical, if not very good. Then the crypto madness started, and later mid- and low-end models were introduced at MSRPs that looked somewhat higher than expected when compared to the MSRPs of Nvidia's already available models.

So, Nvidia can keep prices up and not fear losing 5-10% of market share, knowing that whatever loss they take to the competition will be manageable. On the other hand, their profits will be much higher, even with that minus 5-10% market share.
Just an opinion of course. Nothing more.
Posted on Reply
#62
Chrispy_
ModEl4I only replied to the following you said:
«After discovering that the previous slides of Intel's GPU performance bore little resemblance to the real game performance, how is anyone expected to trust these Intel slides?
Intel, you were caught lying, and you haven't addressed that yet»

Please send me the link to the previous Intel slides/statements that prove Intel was lying about gaming performance, because I'm not aware of them.

I said way, way early, when the rumors were that ARC-512 was between the 3070 and 3070 Ti in performance, that it would be around or below the 3060 Ti, and that if the A380 matched the RX 570 it would be a miracle. I can't answer for the expectations of others.
I'm not aware of Intel setting these expectations, only leakers (like I said in the past, they probably correlated the 3DMark scores that were leaked to them with actual gaming performance).
If you find any official Intel statement supporting these performance claims, please send it to me.
Sure, the hardware should be capable of near-3070 performance in some new DX12/Vulkan games well optimized for Intel's architecture (at some resolutions), and from what I can tell even Intel themselves thought the performance delta between synthetics and games would be lower (so the S/W team underdelivered), but did they actually, officially, in a slide shown to the public, claim much higher performance than the one they claim here?
Edit:
I have a feeling that the original internal Intel performance forecast a year ago was around 3060 Ti performance or slightly above for ARC-512, and around 13% below the GTX 1650 Super for ARC-128, so the software team underdelivered by around 10-12%, imo.
Ah shit, you're going to ask me to find a specific slide from the last 3 years?
I am perhaps remembering it wrong, but I'm a deeply cynical man who expects the worst from everyone and I'm rarely disappointed. Intel have managed to achieve that.
Intel leaks/PR/investor talks/rumour mill has been strong, but it feels like 2-3 years of me constantly complaining about the flurry of news about a nonexistent product and broken promises.

It's late here; perhaps someone else can go back through 30+ months of Intel noise and dig through the full history of Intel article hype to find one particularly incriminating slide, but I just have this vague feeling that Intel are constantly backtracking on what they said last time. I'll have a proper hunt through the dozens/hundreds of Intel Arc articles tomorrow if workload permits.

Claim-to-claim it's all reasonable, but go back a year or so and they were promising the moon. If A750 is out next week and matches these claims then we're done here - Intel have delivered. Can you buy an A750 yet? No. Can you even buy an A380 in most of the world yet? No. They need to deliver a product to market (and not just a foreign test slice of the market) before it's obsolete and they need to deliver approximately the performance they promised. If the A750 doesn't launch worldwide for another 4-6 months, all of the promises made now are kind of worthless because the price and performance of current competition is in constant flux. As mentioned earlier, Intel's initial claims were comparing to 2017's Pascal cards because that's how overdue this thing is.
64KThe flip side of that coin is that, if it is a turd, you will be rewarding Intel when they don't deserve it.
This. We may need a third player in the market, but if AMD are the pro-consumer underdog compared to the incumbent market leader, Intel are so horrible (both anti-consumer and anti-competition) that they are orders of magnitude worse than AMD and Nvidia combined. They do have a full, varied, and longstanding history of well-documented monopolisation, anti-competitive malpractice, bribery, coercion, and blackmail.

Don't take my word for it; read the news articles from the last 30 years, Wikipedia, archive.org - whatever. You need to have been living under a rock to think Intel are the good guys. PC technology would likely be close to a decade ahead of where we are now if it wasn't for Intel playing dirty.
Posted on Reply
#63
80251
"Later this year" 3.5 months left in this year. Hmmmm.
Posted on Reply
#64
eidairaman1
The Exiled Airman
ThomasKAll I hope is that Intel can pull off the drivers side of the business. We desperately need a third player.
It won't reduce pricing, so it's moot.

unintel is in a big hole.
Chrispy_Ah shit, you're going to ask me to find a specific slide from the last 3 years?
I am perhaps remembering it wrong, but I'm a deeply cynical man who expects the worst from everyone and I'm rarely disappointed. Intel have managed to achieve that.
Intel leaks/PR/investor talks/rumour mill has been strong, but it feels like 2-3 years of me constantly complaining about the flurry of news about a nonexistent product and broken promises.

It's late here; perhaps someone else can go back through 30+ months of Intel noise and dig through the full history of Intel article hype to find one particularly incriminating slide, but I just have this vague feeling that Intel are constantly backtracking on what they said last time. I'll have a proper hunt through the dozens/hundreds of Intel Arc articles tomorrow if workload permits.

Claim-to-claim it's all reasonable, but go back a year or so and they were promising the moon. If A750 is out next week and matches these claims then we're done here - Intel have delivered. Can you buy an A750 yet? No. Can you even buy an A380 in most of the world yet? No. They need to deliver a product to market (and not just a foreign test slice of the market) before it's obsolete and they need to deliver approximately the performance they promised. If the A750 doesn't launch worldwide for another 4-6 months, all of the promises made now are kind of worthless because the price and performance of current competition is in constant flux. As mentioned earlier, Intel's initial claims were comparing to 2017's Pascal cards because that's how overdue this thing is.


This. We may need a third player in the market, but if AMD are the pro-consumer underdog compared to the incumbent market leader, Intel are so horrible (both anti-consumer and anti-competition) that they are orders of magnitude worse than AMD and Nvidia combined. They do have a full, varied, and longstanding history of well-documented monopolisation, anti-competitive malpractice, bribery, coercion, and blackmail.

Don't take my word for it; read the news articles from the last 30 years, Wikipedia, archive.org - whatever. You need to have been living under a rock to think Intel are the good guys. PC technology would likely be close to a decade ahead of where we are now if it wasn't for Intel playing dirty.
Applause lol

By the time they get these out, the newer GPUs will have trampled these
Posted on Reply
#65
Makaveli
RedelZaVednoI'm buying ARC even if it's total crap, just to keep Intel's dGPU division alive. God knows we need a big third player on the dGPU market soooo badly.
Lisa & Huang must be kept in check or their profit margins will keep going up and up and up...
Wrong gif

Posted on Reply
#66
PapaTaipei
I believe AMD and NVIDIA in the GPU market are a cartel. They must have the same shareholders. Why does NVIDIA need AMD? Because otherwise they would be the only GPU-selling company, which is a monopoly and is, in theory, illegal in the US.
Posted on Reply
#67
Makaveli
PapaTaipeiI believe AMD and NVIDIA in the GPU market are a cartel. They must have the same shareholders. Why does NVIDIA need AMD? Because otherwise they would be the only GPU-selling company, which is a monopoly and is, in theory, illegal in the US.
So to follow this logic Intel and AMD are a cartel also?
Posted on Reply
#68
ymdhis
efikkanWhich Vega cards are you talking about?
Vega 56 performed slightly over GTX 1070 but cost $500 (with a $100 "value" of games).

And how was Polaris bandwidth starved?
RX 480 had 224/256 GB/s vs. GTX 1060's 192 GB/s.

Both Polaris and Vega underperformed due to poor GPU scheduling, yet they performed decently in some compute workloads, as some of them are easier to schedule.
I don't know about Polaris, but Vega was very memory inefficient. The Vega 56 performed the same as the Vega 64 once you set the 56 to the same memory clocks, despite the latter having more shaders. The TMUs were not fed fast enough. Apparently Raja's team wanted to give it some form of tile-based rendering in hardware, but they couldn't get it working in silicon, so the card needed a ton of memory bandwidth to perform. AMD fixed that issue with Navi, AFAIK.
They also misconfigured the Vega cards with very high voltages, so they couldn't hold their boost clocks at all. On the Vega 56 I had, I lowered the GPU voltage a bit and it could keep its clock at 1700 MHz indefinitely, plus I bumped the memory speed to around 1100 MHz. It got around 24k in Fire Strike (graphics score), which put it between the 2070 and 2080, or the 1080 and 1080 Ti. This was the $400 Vega 56, mind you.

This same guy who fucked up Vega is now working on Intel Arc, mind you.
Posted on Reply
#69
Pilgrim
I'm honestly impressed by this. It's a good start for Intel.
Posted on Reply
#70
john_
ymdhisThey also misconfigured the Vega cards with very high voltages, so they couldn't hold their boost clocks at all.
I don't think this is misconfiguration. Let's say you produce 10 Vega GPUs and (to use random funny voltages) 5 need 2 V to work without a problem at their base/turbo frequencies, 4 need 2.1 V, and one needs 2.2 V.
You can set the standard voltage for all cards at 2 V and throw away 5 cards, at 2.1 V and throw away only 1 card, or at 2.2 V and sell all of them as good working cards. I think companies just go with the third option. That's why we can undervolt most of our hardware to some degree and have no stability problems with it.
Posted on Reply
#71
efikkan
PapaTaipeiI believe AMD and NVIDIA in the GPU market are a cartel. They must have the same shareholders. Why does NVIDIA need AMD? Because otherwise they would be the only GPU-selling company, which is a monopoly and is, in theory, illegal in the US.
I believe pretty much all the big American tech companies have the "same" shareholders, as the majority of their shares are owned by mutual funds, whether retirement funds or funds owned by individuals. I seriously doubt these shareholders have a coordinated effort for AMD and Nvidia to maintain a duopoly.
ymdhisI don't know about Polaris, but Vega was very memory inefficient. The Vega 56 performed the same as the Vega 64 once you set the 56 to the same memory clocks, despite the latter having more shaders. The TMUs were not fed fast enough. Apparently Raja's team wanted to give it some form of tile-based rendering in hardware, but they couldn't get it working in silicon, so the card needed a ton of memory bandwidth to perform. AMD fixed that issue with Navi, AFAIK.
Polaris and Vega have very similar performance characteristics, and they both have more memory bandwidth and TMU throughput than their Pascal counterparts, so that's not the issue, on paper at least. Their issue is resource management, which is why we see some synthetic workloads and the odd outlier do better than most games. The TMUs are responsible for transforming and morphing chunks of textures fed from the memory bus, but the efficiency of the entire GPU is very dependent on successful scheduling. Whenever you see simpler workloads scale significantly better than challenging ones, GPU scheduling is the major contributor, so this is very similar to the problems Intel have with ARC.

Keep in mind Vega 56 has the same scheduling resources as Vega 64, but fewer resources to schedule. We saw a similar sweet spot with the GTX 970 vs. GTX 980, where the GTX 970 achieved more performance per GFlop and was very close when run at the same clocks.
ymdhisThey also misconfigured the Vega cards with very high voltages, so they couldn't hold their boost clocks at all. On the Vega 56 I had, I lowered the GPU voltage a bit and it could keep its clock at 1700 MHz indefinitely, plus I bumped the memory speed to around 1100 MHz. It got around 24k in Fire Strike (graphics score), which put it between the 2070 and 2080, or the 1080 and 1080 Ti. This was the $400 Vega 56, mind you.
That's a misconception. If you think all Vega 56s would remain 100% stable at such lowered voltages, on all workloads, throughout the warranty period, then you're wrong. The stock voltage is a safety margin to compensate for chip variance and wear; how large this margin needs to be depends on the quality and characteristics of the chip.
ymdhisThis same guy who fucked up Vega is now working on Intel Arc, mind you.
I tend to ignore speculation about the effects of management, regardless of whether it's good or bad.
I know all the good engineering is done at the lower levels, but management and middle management still need to facilitate that, through priorities and resources.

But I think this guy is a good example of someone failing upwards.
Posted on Reply
#72
PapaTaipei
MakaveliSo to follow this logic Intel and AMD are a cartel also?
Yes
Posted on Reply
#73
efikkan
I find it quite misleading when Intel portrays newer games as a better fit for their architecture when their own numbers seem to show the opposite. Take, for instance, F1 2021, one of the first games they showcased, achieving +15% vs. the RTX 3060. But now look at the newer F1 2022, running the same engine: -11%. Now, I don't know whether the newer game is somehow bad, but this doesn't bode well. So let's look at some other examples:
CoD Warzone -> CoD Vanguard: +53% -> -30% (same game engine)
Battlefield V -> Battlefield 2042: -10% -> 0% (same game engine)
So it's a mixed bag to say the least.
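For what it's worth, these per-game deltas fall straight out of the FPS figures on Intel's slides; for example, the CoD Vanguard table values discussed earlier in the thread (75 vs. 107 FPS at 1440p) work out to the -30% quoted above. A quick sketch of the calculation:

```python
# Signed per-game delta of the A750 relative to the RTX 3060.
def delta_pct(a750_fps: float, rtx3060_fps: float) -> float:
    return (a750_fps / rtx3060_fps - 1) * 100

# CoD Vanguard at 1440p, using the table values quoted earlier in the thread.
print(f"CoD Vanguard 1440p: {delta_pct(75, 107):+.0f}%")  # -> -30%
```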

Among the games viewed as favorable to ARC, there are also many that are either very old or not particularly advanced graphically. To mention a few: Fortnite (Unreal), Deep Rock Galactic (Unreal), PUBG (Unreal), WoW, Arcadegeddon, Dying Light 2, Warframe, and a lot more Unreal games. So most of these are either not very advanced games (often Unreal) or ~10-year-old engines with patched-in DX12 support.

I'm not claiming any of these are bad games, or even invalid for a comparison. My point is that Tom Petersen and Ryan Shrout, in their PR tour with LTT, GN, PCWorld, etc., claimed newer and better games will do better on Intel ARC, and I don't see the numbers supporting that. To me it seems like the "lighter" (less advanced) games do better on the A750 while the "heavier" games do better on the RTX 3060, and my assessment based on that is that future games will probably scale more like the heavier ones. I would like to remind people of a historical parallel: years ago AMD loved to showcase AotS, which was demanding despite not-so-impressive graphics, but was supposed to showcase the future of gaming. Well, did it? (No.)

What conclusions do you guys draw?
Posted on Reply
#74
TomasK
DraxivierMaybe I'm dumb, but please, can you check the numbers in the comparison tables and the graphs?

On CoD Vanguard at 1440p, the A750 gets 75 fps and the RTX 3060 gets 107 fps.
But in the graph made by Intel it says that they perform EXACTLY the same.
¿¿¿???
It seems they swapped the Warzone and Vanguard numbers in the 3060 column by mistake.
Posted on Reply
#75
Totally
ixiHow it sounds to me...




Don't speak like that. Intel otherwise will get upset and leave us.
The top one is objectively better content-wise.
Posted on Reply