Wednesday, April 10th 2024

Intel Arc Battlemage Could Arrive Before Black Friday, Right in Time for Holidays

According to the latest report from ComputerBase, Intel had a strong presence at the recently concluded Embedded World 2024 conference. The company officially showcased its Arc series of GPUs for the embedded market, based on the existing Alchemist chips rebranded as the "E series." However, industry whispers hint at a more significant development—the impending launch of Intel's second-generation Arc Xe² GPUs, codenamed "Battlemage," potentially before the lucrative Black Friday shopping season. While Alchemist serves as Intel's current offering for embedded applications, many companies in attendance expressed keen interest in Battlemage, the successor to Alchemist. These firms often cover a broad spectrum, from servers and desktops to notebooks and embedded systems, necessitating a hardware platform that caters to this diverse range of applications.

Officially, Intel had previously stated that Battlemage would "hopefully" arrive before CES 2025, implying a 2024 launch. However, rumors from the trade show floor suggest a more ambitious target—a release before Black Friday, which falls on November 29th this year. This timeline aligns with Intel's historical launch patterns, as the original Arc A380 and notebook GPUs debuted in early October 2022, albeit with a staggered and limited rollout. Intel's struggles with the Alchemist launch serve as a learning experience for the company. Early promises and performance claims for the first-generation Arc GPUs failed to materialize, leading to a stuttering market introduction. This time, Intel has adopted a more reserved approach, avoiding premature and grandiose proclamations about Battlemage's capabilities.
Source: ComputerBase.de

36 Comments on Intel Arc Battlemage Could Arrive Before Black Friday, Right in Time for Holidays

#26
Zendou
Beginner Macro Device: 1080 Ti is faster than 980 Ti by about two thirds. About 50 percent faster per dollar.
2080 Ti is faster than 1080 Ti by about a third, also being more expensive. It was and still is a horrible $ per FPS release.
3080 Ti is faster than 2080 Ti by about 50 to 60 percent. It's mediocre at best considering the even more so increased price. About 40% $ per FPS improvement.
4090 is faster than 3080 Ti by about the same 50 to 70 percent. Nothing impressive about that, either, since 4090 is so ridiculously expensive.
This seems to be an attempt at framing information negatively. There is a 4080 Super, but you jumped to the 4090. If you were just referring to the highest-end card of the consumer series, the 3090 Ti existed and was more expensive than the 4090, but you skipped that one. Not sure what the point was besides denigrating Nvidia for perceived misconduct in pricing.
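As an aside, the FPS-per-dollar figures being traded back and forth here reduce to one line of arithmetic. A minimal sketch of that calculation, using placeholder launch prices and a placeholder performance ratio rather than actual review data:

```python
# Hedged illustration: how a gen-over-gen "FPS per dollar" figure is derived.
# The prices and performance ratio below are placeholders, not review data;
# substitute real launch MSRPs and a measured average-FPS ratio.

def perf_per_dollar_gain(old_price, new_price, new_vs_old_perf):
    """Percent improvement in performance per dollar, gen over gen.

    new_vs_old_perf is the new card's performance relative to the old one,
    e.g. 1.5 means "50 percent faster".
    """
    old_ppd = 1.0 / old_price              # old card: performance normalized to 1.0
    new_ppd = new_vs_old_perf / new_price  # new card: relative performance per dollar
    return (new_ppd / old_ppd - 1.0) * 100.0

# Placeholder example: a card 60% faster that also costs 30% more
print(f"{perf_per_dollar_gain(700, 910, 1.60):.0f}% better FPS per dollar")  # ~23%
```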
Posted on Reply
#27
Vya Domus
Dr. Dro: Just look at its fillrates, compute, amount of execution resources... not be a tie to slight win in raster but a murder scene in RT.
As expected, the argument just boils down to RT performance; fill rates and compute are not particularly related to RT. Worse RT isn't indicative of "serious issues with their architecture" either. I am sure most of these deficiencies come down to a few operations that are a lot slower in RDNA3's RT pipeline. Hardware shortcomings are also obscured by the tons of Nvidia-sponsored titles, where I have no doubt developers spend most of their time, if not all of it, profiling and optimizing code for Nvidia hardware. If you look at the PS5, it's quite impressive that developers can squeeze decent RT effects into games on what is otherwise laughably underpowered RDNA2 hardware; does anyone spend that much effort when they port their games to PC? It's speculation, but I seriously doubt it.
Posted on Reply
#28
Beginner Macro Device
Zendou: There is a 4080 super
Which is only technically not a "normal" 4080. We are yet to see a game where it's faster than a 4080 by more than 10 percent. This is still, however, an xx70-class GPU sold for double the price and praised like it's the pinnacle of innovation, whilst in reality it only shows a mild decrease in NV's greed.
The 4090 is a more cut-down Ada than the 3080 Ti is a cut-down Ampere. Much more so.

My point stands: a 20 percent gen-to-gen FPS-per-dollar improvement is bad, horrid, putrid, you name it. You can't justify it. The only reason it's a thing is that we don't have real competition. AMD don't try to compete (there was absolutely no reason to ask $1,000 for a lame-ass 7900 XTX other than "oh, these greens asked even more for their quote-unquote 4080"). The market proved it: despite being cheaper, the 7900 XTX sold in 5+ times smaller numbers worldwide, and it didn't affect NV SKU pricing whatsoever. Not to mention AMD still use Turing GPUs as their reference point on RDNA3 presentation slides. This alone proves they are just ambient noise.

Intel try, but they currently can't make NV sweat. I don't see how they will do it this year or next, yet fingers crossed they will.
Vya Domus: Worse RT isn't
...Dro's point. He's saying the 7900 XTX underperforms in general, and by a lot. A properly tuned RDNA3 would've had the 96-CU part (7900 XTX) obliterating the RTX 4080 in pure raster while maybe trailing a tad behind in RT, but nothing all too crazy. And by obliteration, we mean a 30+ percent difference, not these puny 3 to 15% wins in AMD-favouring games.

I also disagree on scaling being the real issue here. The 7800 XT has 60 CUs, the 7900 XTX has 96 CUs, and the latter is meant to be about 60 percent faster...

The problem isn't scaling. The problem is RDNA3 itself. It's underdelivering no matter how many CUs there are.
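The CU-count comparison above is easy to sanity-check. A quick sketch using the publicly listed CU counts; the "observed" ratio is a placeholder, not measured data, and only shows how a scaling-efficiency figure would be read off:

```python
# 7800 XT vs 7900 XTX: theoretical uplift from CU count alone vs a placeholder
# observed result. CU counts are the publicly listed specs; "observed" should be
# replaced with an actual benchmark average.

cus_7800xt, cus_7900xtx = 60, 96
theoretical = cus_7900xtx / cus_7800xt          # 1.6 -> "about 60 percent faster"

observed = 1.45                                 # placeholder measured FPS ratio
scaling_efficiency = (observed - 1) / (theoretical - 1)

print(f"theoretical: +{(theoretical - 1) * 100:.0f}%, "
      f"observed (placeholder): +{(observed - 1) * 100:.0f}%, "
      f"scaling efficiency: {scaling_efficiency:.0%}")   # -> 75%
```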
Posted on Reply
#29
L'Eliminateur
"avoiding premature and grandiose proclamations about Battlemage's capabilities."
Read as: the snake-oil salesman and serial wrecker of companies, Raja Koduri, is no longer around to tweet continuous BS and unsubstantiated claims.
Posted on Reply
#30
Vya Domus
Beginner Macro Device: And by obliteration, we mean 30+ percent difference, not these puny 3 to 15 % wins in AMD favouring games.
The 7900 XTX was never going to "obliterate" a 4080. It has 25% more shading power, which doesn't scale linearly anyway; the scaling is much worse going from AD103 to AD102, for example.

AD102 has 70% more shading power than AD103, yet it delivers less than half of that uplift in practice, not even in RT workloads. It's obvious to me that, between the two, Ada is actually the one with the much bigger architectural problems; it's comically bad when you realize the 4090 doesn't even ship with a fully enabled AD102 chip. RDNA3 is doing alright.

Some might still wonder why there was no 4090 Ti; probably because it would most likely struggle to be even 5% faster than a 4090.
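To put rough numbers on the AD103-to-AD102 scaling argument, here is a small sketch using the shipping cards' published shader counts (their boost clocks are near-identical, so clocks are ignored); the realized-performance ratio is a placeholder, not a measured average:

```python
# RTX 4080 (AD103) vs RTX 4090 (cut-down AD102): paper compute uplift vs a
# placeholder realized uplift. Shader counts are the published specs of the
# shipping cards; "realized" is illustrative only.

shaders_4080, shaders_4090 = 9728, 16384
paper = shaders_4090 / shaders_4080            # ~1.68, i.e. ~68% more shading power

realized = 1.30                                # placeholder: "4090 ~30% faster at 4K"
fraction_realized = (realized - 1) / (paper - 1)

print(f"on paper: +{(paper - 1) * 100:.0f}%, "
      f"realized (placeholder): +{(realized - 1) * 100:.0f}%, "
      f"which is {fraction_realized:.0%} of the paper uplift")   # -> ~44%
```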
Posted on Reply
#31
Beginner Macro Device
Vya Domus: it has 25% more shading power
And almost 50% more memory bandwidth. However, with RDNA3 malfunctioning, it's of no importance.
Posted on Reply
#32
Vya Domus
Beginner Macro Device: And almost 50% more memory bandwidth. However, with RDNA3 malfunctioning, it's of no importance.
No, it's not even close to 50%, more like 30%, and that doesn't scale linearly either. RDNA3 is functioning as expected; you're just a troll, as usual, who can't even get his numbers right. This argumentation is completely meaningless anyway: comparing FP32 throughput, ROPs, fill rate and whatever can only give you rough performance expectations, and variations of +/- 10% are perfectly in line. Claims such as "there's something clearly wrong, blah blah blah" are founded on nothing.

Ampere famously doubled FP32 throughput as well, and it obviously wasn't twice as fast. I don't recall any "there is something very wrong with Ampere, hur dur" comments at all; people just love clowning on AMD for literally no reason.
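For anyone wanting to check the bandwidth figure being disputed, it falls straight out of bus width and effective data rate. A minimal sketch using the publicly listed reference-card specs:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate per pin).
# Bus widths and data rates are the publicly listed reference specs.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps   # GB/s

bw_7900xtx = bandwidth_gb_s(384, 20.0)   # 20 Gbps GDDR6    -> 960.0 GB/s
bw_4080    = bandwidth_gb_s(256, 22.4)   # 22.4 Gbps GDDR6X -> 716.8 GB/s

print(f"7900 XTX: {bw_7900xtx:.1f} GB/s, RTX 4080: {bw_4080:.1f} GB/s, "
      f"difference: +{(bw_7900xtx / bw_4080 - 1) * 100:.0f}%")   # -> +34%
```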
Posted on Reply
#33
Beginner Macro Device
Vya Domus: RDNA3 is functioning as expected
By whom? All I see is an X-dollar RDNA3 GPU doing a worse job overall than an X-dollar Ada GPU while also being 25 percent more complex. And it's not my perception; you can see the numbers for yourself.
Vya Domus: you're just a troll
Ambitious statement.
Vya Domus: No, it's not even close to 50% more like 30%
With the higher GDDR6X latency added into the mix, it's closer to 50% than you think. On paper it's only 35%, of course, but effectively it's more. Not that RDNA3 can make use of it regardless.
Posted on Reply
#34
Vya Domus
Beginner Macro Device: With higher G6X latency added into the mix
Latency has nothing to do with bandwidth. GPU workloads are also notoriously tolerant of high latency; it simply doesn't matter as much, which is why GPUs can use much faster memory chips than CPUs in the first place, chips that always come with higher latency, in case you didn't know.
Beginner Macro Device: it's closer to 50% than you think it is. Of course it's only 35% on paper, yet it's more. But RDNA3 can't make use of it regardless
"It's more than you think"... dude, give me a break, and stop talking about this stuff like you have a clue. This is my last comment on the matter; this is getting dumb.
Posted on Reply
#35
Minus Infinity
Beginner Macro Device: By whom? All I see is an X dollar RDNA GPU doing a worse job overall than an X dollar Ada GPU, also being 25 percent more complicated. And it's not my perception, you can see the numbers for yourself.
How are they both X-dollar GPUs? One is X, the other is more than 1.6X. So in your tiny mind, a GPU that sells for well under $1K should perform the same as the $1.7K one, and the only reason it doesn't is that RDNA3 sucks?
Posted on Reply
#36
Solaris17
Super Dainty Moderator
Damn, y'all really turned a Battlemage thread into NVIDIA vs. AMD. The inability to process topics is astounding. I will pray all of you can manage to get your blankets over you tonight.
Posted on Reply