
Intel Shows Off its Arc Graphics Card at Intel Extreme Masters

And how many quarters of losses in the graphics card sector is Intel going to continue to endure before the board cuts their losses?
If the past is any indication, then 8-12 quarters easily, assuming only their consumer cards bear losses, because if the HPC or enterprise ones are also sold on something like "contra revenues", then they'll have to jump ship faster than mice on the Titanic o_O

Of course, knowing Intel, they'll probably charge you for OCing these cards or turning 3D acceleration on via their premium (upgrade) plans :laugh:
 
Nah, their initial batch failed. Didn't meet expectations. So they fix it, respin it and hope for the best. By the time it's released, the next generation is knocking at the door with 120 FPS 4K-ready cards.
120 fps at 4K is already possible with a 3080 or a 6800 XT, so next gen will be more like 165+ fps at 4K
...which is sick, can't wait to eventually get a 144 Hz 4K monitor once they become cheaper lol
 
Agreed. RT is also 100% useless. However, I can attest that 4K is nice for non-competitive games. If the 20xx and 30xx cards didn't have those 100% useless Tensor/RT cores, they could have 30-40% more performance per die. Also, I saw the UE5 demos; yes, it looks cool, but at the cost of MASSIVE storage requirements and a complete rework of the data pipeline, and even then it would still have a bad framerate and, more importantly, extremely bad mouse input for competitive games.
RT could be nice when implemented right. That is, when the whole scene is ray traced. The problem is that current gen RT cores on both AMD's and Nvidia's side are too slow for this. That's why game devs limit what they do with RT (for example, shadows only) and use traditional rasterization for the rest of the effects - which is what makes RT in its current state kind of pointless, imo. Otherwise, it would be great.

As for Tensor cores, I agree. Everybody praises Nvidia for DLSS, when in fact, if RT cores were strong enough to do their job at a minimal performance loss vs rasterization, then nobody would need DLSS in the first place. Besides, AMD has shown that the whole DLSS shenanigans can be done without Tensor cores so... meh. :rolleyes:
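
Just to put rough numbers on why devs settle for something like shadows-only: a quick back-of-the-envelope sketch (the sample counts are my own assumptions, purely illustrative) of the ray budget at 4K 60 fps.

```python
# Back-of-the-envelope only: sample counts below are my own assumptions,
# not measured figures from any game or vendor.

WIDTH, HEIGHT, FPS = 3840, 2160, 60            # 4K at 60 Hz
pixels_per_frame = WIDTH * HEIGHT

# Hybrid approach: rasterize the scene, trace roughly 1 shadow ray per pixel.
hybrid_rays_per_s = pixels_per_frame * 1 * FPS

# Fully ray traced scene: assume 16 samples per pixel, ~4 bounces per sample.
full_rt_rays_per_s = pixels_per_frame * 16 * 4 * FPS

print(f"RT shadows only:        {hybrid_rays_per_s / 1e9:5.1f} Gigarays/s")
print(f"Whole scene ray traced: {full_rt_rays_per_s / 1e9:5.1f} Gigarays/s")
# Nvidia's own marketing figure for a 2080 Ti was roughly 10 Gigarays/s,
# which is why devs pick one effect for RT and rasterize everything else.
```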

They kind of are. There are two things which, in my opinion, are keeping the market alive.

One is demand generated via hardware-exclusive features, e.g. with Nvidia: G-Sync, DLSS and hardware-based RT. Thanks to AMD, all three features have alternatives that don't require hardware lock-in: VRR, FSR and RT using rasterisation hardware.
Exactly. Nearly every single monitor is Freesync capable nowadays, whereas you have to find and pay more for Gsync ones. This is where the slogan "Nvidia - The Way It's Meant To Be Paid" is valid.

I get the merit of VRR, and Nvidia deserves praise for introducing the concept, but they did attempt vendor lock-in, not only via the GPU but also using expensive chips on monitors. I also get the merit of DLSS, and out of the three this is for me by far the most useful tech in terms of potential, but it was initially limited to games where devs implement it (so a fraction of new games released). However, we have more recently seen a new variant of DSR that uses DLSS, so it can now be implemented driver-side; not sure if FSR can be done via the driver (if someone can clarify this for me, that would be great). Finally RT, this one I have little love for. I feel lighting can be done very well in games via traditional methods, and I thought it was lame when RT-developed games would have heavily nerfed non-RT lighting, so to have good lighting you needed RT hardware; AMD have at least proven the hardware isn't needed, which makes this a bit less lame. But ultimately RT has served to increase the level of hardware required to achieve a given quality level, so it's a bad thing for me.
Well, the point of DLSS is to increase performance at a minimal image quality loss. But then the question is, why don't you have enough performance in the first place? Is it the gamer's fault for sitting on the 120+ fps craze train, or is it Nvidia's way of selling their otherwise useless Tensor cores when they could have used the same die space for more raster cores? RT is nice, but like I said above...
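
To put rough numbers on the "render fewer pixels, then upscale" trade-off (the per-mode scale factors below are the commonly cited approximate ones, not official figures):

```python
# Rough sketch of what DLSS/FSR-style upscaling buys you at a 4K output.
# Scale factors are approximate and illustrative only.

TARGET_W, TARGET_H = 3840, 2160
MODES = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.50, "Ultra Perf.": 0.33}

target_pixels = TARGET_W * TARGET_H
for mode, scale in MODES.items():
    w, h = int(TARGET_W * scale), int(TARGET_H * scale)
    shaded = w * h / target_pixels
    print(f"{mode:<12} renders {w}x{h} internally (~{1 - shaded:.0%} fewer pixels shaded)")
```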

The second source of demand for new GPUs has been fuelled by the high FPS/Hz craze, which is primarily driven by first-person shooter and esports gamers. If there were no high refresh rate monitors, then Nvidia and AMD would be having a much harder time now convincing people to buy new-gen GPUs, as we're starting to get to the point where a single GPU can handle, to a degree, anything thrown at it at 60 fps.
They have a much harder time convincing me for sure. I honestly think the high refresh rate craze is probably the stupidest thing the gaming industry has invented so far.
 
I guess that's the only card they have :D :D

But in reality... I guess everyone remembers the "no driver" meme? I have a bad feeling "no driver" will pass on to team blue.
 
Nothing really new here. On the contrary, below is some more interesting Intel news, if it's even considered news; it came up 3 days ago (check the Alder Lake/Raptor Lake frequencies and the performance differences).

https://www.tomshardware.com/news/i...hmarked-20-percent-faster-than-core-i9-12900k
@Tigger made a thread:
 
I've just recently downgraded my 2070 to a 6500 XT because of the noise
Were there other incentives? From noise alone, I'd have just done a fairly heavy-handed undervolt and made a very relaxed fan profile. I'd assume that you made good coin out of the changeover.
 
Intel won't stand tough against RDNA 3 and the RTX 4000 series, but they could at least try to catch up. I remember Intel job openings for discrete GPU graphics driver development in the US and China on an urgent basis. I guess they want the bells to ring, but they just need more and more time to suit up against both the green and red teams.


Well, I'm not saying their hardware is immortal; at least the build quality engineering is a bit ahead of others. At least their hardware carries their reputation: whether it's a processor or even some controller on PCIe x1, it will perform as well as it should even after a decade.
Ah, I see what you mean now. But then again, other smaller chips and controllers on boards the world over perform much the same! And let's not forget the many issues around bent chips/IHS lately. When they have to start doing something new, there are absolutely no guarantees on quality. In fact, we know now 'something is gonna give' and you need a delid, a clamp, or some other workaround to really get what was on the marketing sheet.
 
Were there other incentives? From noise alone, I'd have just done a fairly heavy-handed undervolt and made a very relaxed fan profile. I'd assume that you made good coin out of the changeover.
Nope, I still have the 2070 as a backup in case I need more graphics performance later. I know it doesn't make much sense, but I like saving interesting pieces of hardware, and my bone stock Evga 2070 is interesting enough to be saved as a prime example of the first generation of RT cards. :ohwell:

Apart from noise, my other incentives were heat (as in air temperature inside the PC case and in the room) and curiosity about RDNA 2, not to mention having an Asus TUF-themed build now, which wasn't really a reason to swap, just a bonus.
 
Sorry, the drivers aren't at a state that would allow anything to be shown in public.
Apparently they aren't even at a state that would allow them to show a static picture on a monitor. Which gives me incredible, I mean just massive confidence that this product absolutely isn't vapourware.
 
Apparently they aren't even at a state that would allow them to show a static picture on a monitor. Which gives me incredible, I mean just massive confidence that this product absolutely isn't vapourware.
Intel is launching in the PRC next month, with select partners.
 
I missed the Limited Edition label...

So far all of them seem to be limited editions :D :D :D
 
I typically don't think it is "too late", even if they're one generation behind, but the problem here is that Intel is a new player; they have no consumer base. It will be very difficult, nigh on impossible, to convince people to buy something that not only won't be as good as the competition, but is completely new altogether.
 
Intel is launching in the PRC next month, with select partners.
I've been saying all along that Intel's first-generation desktop GPUs will only launch in obscure, far-off markets in OEM systems. If Intel can keep going, the second generation might get AIB cards for the DIY market. TechPowerUp readers shouldn't expect to buy an Intel GPU anytime soon.
 
Well, I wasn't, but I was hoping it could lower prices from the other two; guess that isn't happening either :wtf:
 
Ah, I see what you mean now. But then again, other smaller chips and controllers on boards the world over perform much the same! And let's not forget the many issues around bent chips/IHS lately. When they have to start doing something new, there are absolutely no guarantees on quality. In fact, we know now 'something is gonna give' and you need a delid, a clamp, or some other workaround to really get what was on the marketing sheet.
Yes, that's another story. It's not the same as it used to be. The IHS used to be top quality back in the day, but now it's not as it should be. Let's confront this from a business perspective. These consumer units are simplified to the extent that they can get away with a low-grade IHS, suitable for the business market but not admired by enthusiasts or gamers. We need more juice!!! Then, if we look at the ratio of working professionals to enthusiast gamers etc., business-wise the professionals will bring in more profit.
It starts to matter when reviewers and users make a point all at once. I remember Intel using a poor IHS on 3rd gen (if I remember correctly); the reviewers and users called it out so badly that later batches had it revised immediately.

Now let's talk about the gamer/enthusiast class. These Arc GPUs are built for them. Intel does keep a very good eye on every market segment, and it's obvious that every bit of an Intel GPU is going to go through a hell of a lot of reviews, breakdowns and loads of tests. If I were in the same business, I would work on things the other competitors do not offer, that is: top quality, low price and the same performance.
 
I've been saying all along that Intel's first-generation desktop GPUs will only launch in obscure, far-off markets in OEM systems. If Intel can keep going, the second generation might get AIB cards for the DIY market. TechPowerUp readers shouldn't expect to buy an Intel GPU anytime soon.
What about the Intel DG1 that has been shipping in OEM systems since January 2021 as Intel Iris Xe Graphics, albeit with limited compatibility? Was that the first generation desktop GPU or Gen 0.5? ;)
 
I typically don't think it is "too late", even if they're one generation behind, but the problem here is that Intel is a new player; they have no consumer base. It will be very difficult, nigh on impossible, to convince people to buy something that not only won't be as good as the competition, but is completely new altogether.
They're a new player in the GPU market, but they have the financial background to endure for one generation. It reminds me of RDNA 1. The top offering, the 5700 XT, had bad driver stability and cooling issues, and could only compete with the RTX 2070 (without ray tracing as well), but it was still successful enough for AMD to build RDNA 2 upon it. I kind of expect the same from first gen Arc, to be honest: not to kick the GPU market in the backside, but to be just good enough for Intel to build the next generation upon. A test run, basically.
 
Can we rename the thread title to "Intel Teases Vaporware Again"?

If it was a working product, they'd be demoing it, not showing off a mock-up that will most likely change before launch anyway.

It's about 3 years late already because Intel started teasing the DG2 in 2018 and it was supposed to tape out on 10nm 2H 2019. There was a little problem with Intel's 10nm in 2019 though - they axed it.

This was the 10nm process that was so unusable that Intel actually scrapped plans for 11th Gen CPUs on 10nm in 2019 and spent 6 months backporting Rocket Lake to 14nm. Xe HPG "DG2" Arc Alchemist was supposed to be on shelves with Rocket Lake CPUs, and Rocket Lake was already 6-9 months late over two years ago.
 
RT could be nice when implemented right. That is, when the whole scene is ray traced. The problem is that current gen RT cores on both AMD's and Nvidia's side are too slow for this. That's why game devs limit what they do with RT (for example, shadows only) and use traditional rasterization for the rest of the effects - which is what makes RT in its current state kind of pointless, imo. Otherwise, it would be great.

As for Tensor cores, I agree. Everybody praises Nvidia for DLSS, when in fact, if RT cores were strong enough to do their job at a minimal performance loss vs rasterization, then nobody would need DLSS in the first place. Besides, AMD has shown that the whole DLSS shenanigans can be done without Tensor cores so... meh. :rolleyes:


Exactly. Nearly every single monitor is Freesync capable nowadays, whereas you have to find and pay more for Gsync ones. This is where the slogan "Nvidia - The Way It's Meant To Be Paid" is valid.


Well, the point of DLSS is to increase performance at a minimal image quality loss. But then the question is, why don't you have enough performance in the first place? Is it the gamer's fault for sitting on the 120+ fps craze train, or is it Nvidia's way of selling their otherwise useless Tensor cores when they could have used the same die space for more raster cores? RT is nice, but like I said above...


They have a much harder time convincing me for sure. I honestly think the high refresh rate craze is probably the stupidest thing the gaming industry has invented so far.

The hardware to ray trace every pixel in Z depth, with every light source, with 16-bit or 24-bit color and transparency plus textures, is so hardware-intensive that it will never be an on-demand real-time function.

We will probably figure out a way to create matrix tables with vertex data precooked in, and use fewer ray samples added to the pipeline alongside current global illumination lighting, to achieve the most realistic effect with far less overhead.

I remember tessellated concrete barriers. Maybe, hopefully, Intel entering the scene with midrange hardware will put the brakes on stupid implementations of RT.
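
In that spirit, here is a toy sketch of my own (not code from any real engine): a precooked indirect term plus a handful of shadow-ray samples, just to show why so few rays per pixel lean so hard on denoising and baked data.

```python
import random

# Toy illustration only: combine a precooked ("baked") indirect lighting term
# with a few shadow-ray samples per pixel and look at how noisy the
# low-sample estimates are. All constants are assumed values.

BAKED_INDIRECT = 0.18      # assumed precomputed GI contribution at this point
LIGHT_INTENSITY = 1.0
OCCLUSION_PROB = 0.4       # assumed chance a shadow ray toward the light is blocked

def shadow_sample() -> float:
    """One jittered shadow ray: full light contribution, or 0 if occluded."""
    return 0.0 if random.random() < OCCLUSION_PROB else LIGHT_INTENSITY

def shade(num_rays: int) -> float:
    """Direct light estimated from a few rays, plus the baked indirect term."""
    direct = sum(shadow_sample() for _ in range(num_rays)) / num_rays
    return direct + BAKED_INDIRECT

random.seed(1)
for n in (1, 4, 64):
    estimates = [round(shade(n), 2) for _ in range(5)]
    print(f"{n:>2} rays/pixel -> five estimates: {estimates}")
# 1-2 rays per pixel is roughly what hybrid RT budgets allow, hence the heavy
# reliance on denoisers and precooked data to hide the noise.
```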
 
The hardware to ray trace every pixel in Z depth, with every light source, with 16-bit or 24-bit color and transparency plus textures, is so hardware-intensive that it will never be an on-demand real-time function.

We will probably figure out a way to create matrix tables with vertex data precooked in, and use fewer ray samples added to the pipeline alongside current global illumination lighting, to achieve the most realistic effect with far less overhead.

I remember tessellated concrete barriers. Maybe, hopefully, Intel entering the scene with midrange hardware will put the brakes on stupid implementations of RT.

I remember a Star Wars demo that was running on a $60,000 DGX Station with 4 Titan V GPUs, and it managed only 24 FPS. It's going to be a long, long time (if ever) before tech advances enough that such a machine becomes affordable.


 
I remember a Star Wars demo that was running on a $60,000 DGX Station with 4 Titan V GPUs, and it managed only 24 FPS. It's going to be a long, long time (if ever) before tech advances enough that such a machine becomes affordable.


That shows just how badly Nvidia manipulated and misled consumers, quite frankly, with deceptive advertising. A lot of people bought into RTRT cards based on that video thinking they would end up with something similar, or modestly similar, and yet look at Cyberpunk: they had to handicap the poly count in the end just to insert the bits of RTRT they did insert into the game. Plus, that demo of Nvidia's ran at 24 FPS, and no one in their right mind is going to consider gaming at 24 FPS reasonably fluid; there is just too much input lag at that frame rate. Input lag is noticeably better even at 30 FPS and still not great, better again at 36 FPS, and it's really not until about a 48 FPS average that things start to approach a generally decent experience that feels fairly responsive. Until GPUs can integrate better frame interpolation around an FPS render target like that to compensate for it, that type of frame rate will never be very satisfying to end users. The fact is even 60 FPS is a bit of a crutch for input responsiveness.
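
The plain frame-time arithmetic backs that up:

```python
# Simple arithmetic behind the input-lag point: the gap between frames alone,
# before any engine, driver or display latency is added on top.

for fps in (24, 30, 36, 48, 60, 120):
    frame_time_ms = 1000 / fps
    print(f"{fps:>3} FPS -> {frame_time_ms:5.1f} ms between frames")
# 24 FPS leaves ~41.7 ms between frames versus ~16.7 ms at 60 FPS.
```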
 