
Intel Arc DG2-512 Built on TSMC 6nm, Has More Transistors than GA104 and Navi 22

btarunr

Editor & Senior Moderator
Staff member
Some interesting technical specifications of the two elusive GPUs behind the Intel Arc "Alchemist" series have surfaced. The larger DG2-512 silicon in particular, which forms the base of the Arc 5 and Arc 7 series, is interesting in that it is larger in every way than the performance-segment ASICs from both NVIDIA and AMD. The table below compares the physical specs of the DG2-512 with those of the NVIDIA GA104 and the AMD Navi 22. This segment of GPUs serves fairly demanding use-cases, including native 1440p gameplay, or 4K gaming with a performance enhancement, which Intel offers in the form of XeSS.

The DG2-512 is built on the TSMC N6 (6 nm) foundry node, the most advanced node among the three GPUs in this class. It has the highest transistor density (53.4 MTr/mm²), the largest die area (406 mm²), and the highest transistor count (21.7 billion). The Xe-HPG graphics architecture is designed for full DirectX 12 Ultimate feature support, and the DG2-512 has dedicated hardware for ray tracing as well as AI acceleration. The Arc A770M is the fastest product based on this silicon; however, it is a mobile GPU with the aggressive power management characteristic of the form factor it serves. Here, the DG2-512 has an FP32 throughput of 13.5 TFLOPS, compared to the 13.2 TFLOPS of the Navi 22 on the Radeon RX 6700 XT desktop graphics card, and the 21.7 TFLOPS of the GA104 maxed out on the GeForce RTX 3070 Ti desktop graphics card.
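As a sanity check, the density and throughput figures quoted above follow from simple arithmetic. A minimal sketch in Python; note that the 4,096-shader count and 1.65 GHz clock below are assumptions chosen to reproduce the quoted 13.5 TFLOPS, not confirmed DG2-512 specifications:

```python
# Transistor density: transistors divided by die area.
transistors = 21.7e9            # DG2-512 transistor count
die_area_mm2 = 406              # DG2-512 die area in mm^2
density = transistors / 1e6 / die_area_mm2
print(f"{density:.1f} MTr/mm^2")   # ~53.4, matching the quoted figure

# Theoretical FP32 throughput: shaders * clock * 2 FLOPs per cycle (FMA).
# Shader count and clock here are illustrative assumptions only.
shaders = 4096
clock_ghz = 1.65
tflops = shaders * clock_ghz * 2 / 1000
print(f"{tflops:.1f} TFLOPS")      # ~13.5
```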



 
A quarter more transistors for roughly equal FP32 performance, while having a wider memory bus? Good for a first product, but not that competitive, especially considering the driver issues Intel has. And another question is TDP/TBP.
 
The transistor count seems higher than I thought; maybe we have an Intel equivalent of Infinity Cache in there? (64 MB?)
 
Hopefully the architecture is good and these actually come out soon, because just looking at the specs, this looks like it would compete with the GPUs they're compared against (6700 XT/3070). I expect that level of performance to be at the low-to-mid range of next-generation AMD & NVIDIA GPUs.
 
A quarter more transistors for roughly equal FP32 performance, while having a wider memory bus? Good for a first product, but not that competitive, especially considering the driver issues Intel has. And another question is TDP/TBP.
Driver issues? I'm really bummed when people say "driver issues" for a product that is not out yet. Literally no one has the card, and yet people have found driver issues with it. The same goes for TDP/TBP. Even if they shared the TDP information, for instance, you would still have to wait for reviews to check its correctness. One TDP is not equal to another; as we have seen countless times, TDP does not equal power consumption, and you don't know what criteria a company uses to define its TDP.
 
Driver issues? I'm really bummed when people say "driver issues" for a product that is not out yet. Literally no one has the card, and yet people have found driver issues with it. The same goes for TDP/TBP. Even if they shared the TDP information, for instance, you would still have to wait for reviews to check its correctness. One TDP is not equal to another; as we have seen countless times, TDP does not equal power consumption, and you don't know what criteria a company uses to define its TDP.
You do know that Intel makes drivers for their iGPUs, right? And that they have done so for the last 20 years? And that those drivers have been, for lack of a better term, horseshit?

Why would that bum you out? If you want to have naive optimism in the face of all available information, go ahead; just don't be surprised when everyone around you says "I told you so" when Arc finally comes out and has drivers that would make late-2000s AMD blush.
 
You do know that Intel makes drivers for their iGPUs, right? And that they have done so for the last 20 years? And that those drivers have been, for lack of a better term, horseshit?

Why would that bum you out? If you want to have naive optimism in the face of all available information, go ahead; just don't be surprised when everyone around you says "I told you so" when Arc finally comes out and has drivers that would make late-2000s AMD blush.
Don't judge yet. Just because the iGPU drivers are not perfect, it doesn't mean the dGPU will share the same fate. It is a different product in every way. The iGPU driver just needs to display your desktop; that's what it is for, and it does it well. Comparing the iGPU to the dGPU just because both are graphics chips is not the best approach. Besides, Intel knows what is at stake here. We don't even know if the iGPU and dGPU will share the same driver. I doubt it. A dedicated driver for the dGPU architecture: that's my bet.
 
Driver issues? I'm really bummed when people say "driver issues" for a product that is not out yet. Literally no one has the card, and yet people have found driver issues with it. The same goes for TDP/TBP. Even if they shared the TDP information, for instance, you would still have to wait for reviews to check its correctness. One TDP is not equal to another; as we have seen countless times, TDP does not equal power consumption, and you don't know what criteria a company uses to define its TDP.
Yea, my magic 8-ball says that drivers will be an issue initially. It's a rough draft at best until Intel takes the time to really iron them out. The focus has been getting the product to market; driver development will take a back seat. Just because Intel makes drivers for its iGPUs doesn't mean these will be the same, and very likely they won't be interchangeable.
 
Don't judge yet. Just because the iGPU drivers are not perfect, it doesn't mean the dGPU will share the same fate. It is a different product in every way. The iGPU driver just needs to display your desktop; that's what it is for, and it does it well. Comparing the iGPU to the dGPU just because both are graphics chips is not the best approach. Besides, Intel knows what is at stake here. We don't even know if the iGPU and dGPU will share the same driver. I doubt it. A dedicated driver for the dGPU architecture: that's my bet.
Since you root for those Intel GPUs, you should be hoping that the delays in bringing them to market since mid-2021 are only due to software problems. Because if not, the product is doomed before it is launched. Software problems are fixable; hardware problems aren't, and if they get fixed through firmware, they tend to trade performance for stability. I would like a third player in this messy GPU market, but things don't look rosy for Intel atm.
 
A quarter more transistors for roughly equal FP32 performance, while having a wider memory bus? Good for a first product, but not that competitive
Exactly what I'd expect from a project with Raja Koduri in the lead.
 
Since you root for those Intel GPUs, you should be hoping that the delays in bringing them to market since mid-2021 are only due to software problems. Because if not, the product is doomed before it is launched. Software problems are fixable; hardware problems aren't, and if they get fixed through firmware, they tend to trade performance for stability. I would like a third player in this messy GPU market, but things don't look rosy for Intel atm.
Why wouldn't I be interested in a new product release and a company joining the GPU market as a new player? I'm not trying to predict anything, since that is just impossible. The thing is, all we can do is assume, nothing more. We are never going to know the exact reason for the delays, and companies will not share it either. Releasing a product like this for the first time, in an environment like this, against opponents like NVIDIA and AMD who have been in the GPU industry for so long, is demanding, and there will be a crapload of changes to the product plan, release timing, etc. I'm 100% sure of that. Pointing to a driver issue as the reason, without even having the product in hand, is immature in my opinion, because this undertaking is so complex, with so many factors that have to be taken into account, that reducing it to a driver issue without even seeing the final product is just wrong.
Obviously there might be some driver glitches; that is inevitable for a first release. But saying now that a driver issue is why the product has been postponed or delayed? It's like saying it will rain next year and, when it happens, bragging to everyone that you were right.
 
You do know that Intel makes drivers for their iGPUs, right? And that they have done so for the last 20 years? And that those drivers have been, for lack of a better term, horseshit?

Why would that bum you out? If you want to have naive optimism in the face of all available information, go ahead; just don't be surprised when everyone around you says "I told you so" when Arc finally comes out and has drivers that would make late-2000s AMD blush.
It's true the iGPU drivers are garbanzo, but I have a feeling that for the dGPU they have a whole different division; I have the perception that they seem to be doing things decently, which is unlike Intel. I mean, AV1 encoding/decoding... Remember Intel hired some of the higher-ups from AMD.

I'm all for hating on Intel, but they do deserve a vote of confidence here; the fact that they have not released something half-cooked means a lot imo, because they could have, just to milk the market.
 
AD103 is at least 40 billion transistors on 4 nm, roughly 99 MTr/mm² at ~400 mm², so that makes this more like an RTX 4060 competitor or something.
 
Why wouldn't I be interested in a new product release and a company joining the GPU market as a new player? I'm not trying to predict anything, since that is just impossible. The thing is, all we can do is assume, nothing more. We are never going to know the exact reason for the delays, and companies will not share it either. Releasing a product like this for the first time, in an environment like this, against opponents like NVIDIA and AMD who have been in the GPU industry for so long, is demanding, and there will be a crapload of changes to the product plan, release timing, etc. I'm 100% sure of that. Pointing to a driver issue as the reason, without even having the product in hand, is immature in my opinion, because this undertaking is so complex, with so many factors that have to be taken into account, that reducing it to a driver issue without even seeing the final product is just wrong.
Obviously there might be some driver glitches; that is inevitable for a first release. But saying now that a driver issue is why the product has been postponed or delayed? It's like saying it will rain next year and, when it happens, bragging to everyone that you were right.
You might want to remember that even though this is their first discrete GPU in years, a GPU from the same family as this one already exists, integrated in Intel CPUs. As far as I know, there are A LOT of problems with its drivers. Also, if I remember correctly, Intel licensed a PowerVR GPU without driver support (with the intention of writing drivers themselves), then sued Imagination Technologies for free driver support. So sue me for being pessimistic.
 
Videocardz is reporting 16 MB of L2 cache for DG2-512 and 4 MB for DG2-128.
If the report is correct, then either they based the 4 MB DG2-128 figure on the mobile part, which has a 64-bit bus (so 6 MB of total L2 on the full 96-bit DG2-128 design, with 2 MB disabled on the 64-bit mobile part), or the L2 is correlated not with the memory bus (like RDNA2) but with the render slices (2 MB of L2 per render slice).
EDIT: if the report is correct and the L2 is that small, then its main purpose is probably the ray tracer rather than alleviating bandwidth needs in classic rasterization (although it should help there too), so it makes more sense for it to be correlated with render slices instead of memory controllers/bus width.
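The two possibilities described above come down to quick arithmetic. A minimal sketch assuming the reported figures; the 8-render-slice DG2-512 count used below is an assumption for illustration:

```python
# Hypothesis A: L2 is tied to the memory bus, 2 MB per 32-bit controller.
# The reported 4 MB would then describe the 64-bit mobile DG2-128 part.
l2_per_controller_mb = 4 / (64 // 32)
full_dg2_128_bus_bits = 96
l2_full_bus_mb = l2_per_controller_mb * (full_dg2_128_bus_bits // 32)
print(l2_full_bus_mb)  # 6.0 MB on the full 96-bit DG2-128 design

# Hypothesis B: L2 is tied to render slices, 2 MB per slice.
# An 8-slice DG2-512 (assumed here) would then carry 16 MB,
# matching the reported DG2-512 figure.
dg2_512_render_slices = 8
l2_per_slice_mb = 2
print(l2_per_slice_mb * dg2_512_render_slices)  # 16 MB
```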
 
Built on TSMC, and if/when NVIDIA goes back to them from Samsung, there should be a real good supply of video cards then. *sarcasm*
 
It's true the iGPU drivers are garbanzo, but I have a feeling that for the dGPU they have a whole different division; I have the perception that they seem to be doing things decently, which is unlike Intel. I mean, AV1 encoding/decoding... Remember Intel hired some of the higher-ups from AMD.

I'm all for hating on Intel, but they do deserve a vote of confidence here; the fact that they have not released something half-cooked means a lot imo, because they could have, just to milk the market.


Every time Intel takes over driver responsibility for any GPU, it's been shit, including this beauty!

What makes any of you think these drivers won't continue to be shit for years? People are willing to put up with AMD's shitty new product launches because they know that over the long term the architecture is superior, but who among us is seriously going to wait years for Arc (an inferior architecture, or else it would have been competitive with the ancient Vega 8 in Cezanne) to top off this pile?
 
Don't judge yet. Just because the iGPU drivers are not perfect, it doesn't mean the dGPU will share the same fate. It is a different product in every way. The iGPU driver just needs to display your desktop; that's what it is for, and it does it well. Comparing the iGPU to the dGPU just because both are graphics chips is not the best approach. Besides, Intel knows what is at stake here. We don't even know if the iGPU and dGPU will share the same driver. I doubt it. A dedicated driver for the dGPU architecture: that's my bet.
Dude, the IGP and the dGPU use the same architecture. Intel can improve their drivers, but they are what they are now. It's like suggesting that if RDNA2 had a driver issue, it magically might not apply to Ryzen 6000.
 
That reminds me of GCN: many, many transistors, yet they only give us a little performance...
Well, to describe it this way isn't entirely fair...
Wait... isn't it the same Raja working on it? lollllll
 
That reminds me of GCN: many, many transistors, yet they only give us a little performance...
Well, to describe it this way isn't entirely fair...
Wait... isn't it the same Raja working on it? lollllll
Yes, and like GCN it appears to be focused on compute throughput first. Not sure what people were expecting.
 
A quarter more transistors for roughly equal FP32 performance, while having a wider memory bus? Good for a first product, but not that competitive, especially considering the driver issues Intel has. And another question is TDP/TBP.
The focus is GPGPU stuff; it just happens to be a pretty good design for gaming. Intel really wants to avoid losing more of the HPC market to AMD and NVIDIA.
 
Dude, the IGP and the dGPU use the same architecture. Intel can improve their drivers, but they are what they are now. It's like suggesting that if RDNA2 had a driver issue, it magically might not apply to Ryzen 6000.
I don't think the iGPU in Alder Lake is the Arc architecture, since Arc has not been released. Those are UHD and HD iGPUs, and that is not Arc. That is something totally different. Just because Intel is releasing both doesn't mean they are the same.

You might want to remember that even though this is their first discrete GPU in years, a GPU from the same family as this one already exists, integrated in Intel CPUs. As far as I know, there are A LOT of problems with its drivers. Also, if I remember correctly, Intel licensed a PowerVR GPU without driver support (with the intention of writing drivers themselves), then sued Imagination Technologies for free driver support. So sue me for being pessimistic.
That does not mean the drivers are already bad, and it is definitely not a reason to say the delay for Arc has been caused by that. That is an assumption, and there is more to the story.
 
Every time Intel takes over driver responsibility for any GPU, it's been shit, including this beauty!

What makes any of you think these drivers won't continue to be shit for years? People are willing to put up with AMD's shitty new product launches because they know that over the long term the architecture is superior, but who among us is seriously going to wait years for Arc (an inferior architecture, or else it would have been competitive with the ancient Vega 8 in Cezanne) to top off this pile?
For one, we still don't know if it's inferior. It's very likely to be considerably inferior to RDNA3 and Lovelace, but currently I'm guessing it has a good chance of being competitive. The fact that it does AV1 encoding is a huge selling point; I don't think you are really aware of the difference that makes.

That beauty, as you said, was the responsibility of the Intel we all know, so there was no surprise as to how that venture went down. The Arc GPUs should be under a different umbrella, at a safe distance from Intel's rain.
 
I don't think the iGPU in Alder Lake is the Arc architecture, since Arc has not been released. Those are UHD and HD iGPUs, and that is not Arc. That is something totally different. Just because Intel is releasing both doesn't mean they are the same.


That does not mean the drivers are already bad, and it is definitely not a reason to say the delay for Arc has been caused by that. That is an assumption, and there is more to the story.
ARC and UHD are brands, not architectures.
 
Don't judge yet. Just because the iGPU drivers are not perfect, it doesn't mean the dGPU will share the same fate. It is a different product in every way. The iGPU driver just needs to display your desktop; that's what it is for, and it does it well. Comparing the iGPU to the dGPU just because both are graphics chips is not the best approach. Besides, Intel knows what is at stake here. We don't even know if the iGPU and dGPU will share the same driver. I doubt it. A dedicated driver for the dGPU architecture: that's my bet.
This was largely true until Intel decided to introduce their Xe graphics, particularly in the mobile space, since that product is supposed to allow 1080p gaming. And really, when you are trying to target the enthusiast market, a lot of work needs to go into the software side. To me, software support is probably the biggest hurdle and barrier to entry. If one has the money, like Intel, getting good hardware is not a problem, since that is something you can control internally. But there is no need to speculate, since the actual hardware release is not that far off.

I don't think the iGPU in Alder Lake is the Arc architecture, since Arc has not been released. Those are UHD and HD iGPUs, and that is not Arc. That is something totally different. Just because Intel is releasing both doesn't mean they are the same.


That does not mean the drivers are already bad, and it is definitely not a reason to say the delay for Arc has been caused by that. That is an assumption, and there is more to the story.
This is not true. From an architecture standpoint, Arc and Xe graphics are likely built on the same foundation, with perhaps some minor differences. Alder Lake's UHD graphics is basically derived from the same Xe graphics solution. These are just naming conventions; fundamentally, I believe they are pretty much the same. I feel the naming is a huge mess as usual from Intel, where they have Arc, Xe, DG1, UHD, etc…
 