
Intel Arc Alchemist DG2 GPU Memory Configurations Leak

These Intel GPUs don't even need to be great, or price-competitive.
Why?


Because Intel will force them into hundreds of thousands of prebuilt systems and push them onto users with their usual shady business practices.

(I look forward to finding out they only work on Intel systems with 11th- and 12th-gen CPUs, or some other such nonsense.)
 
Ahhh, that's the beauty of having a two-horse race, isn't it? I mean, AMD or Intel, take your pick! :(:ohwell:
 
Dude, stop lying; you know these guys are all going to get fired or sued for releasing something that's under NDA. "Oh! Woopsie!"

@Fouquin, see the point now?



Not bad? That's 100 GB/s short of a 3070 Ti and about equal to a 3070.
So the best-case, most optimistic outlook, if Intel has near-perfect drivers, is that their most expensive chip with 16 GB (!) will end up somewhere around a 3070, but likely a good 10% under it, because it's just not quite as refined.

I hope Intel is not looking for any more than 600 bucks for that, because if they are, and they also postpone beyond Q1 2022, it's DOA and you can easily wait until 2023 for something better. And by then, 500 GB/s is lower-midrange territory. So... is Intel going to scale up to 384- or 512-bit then? Hey Raja... did you think of using HBM? Or still didn't make up your mind? :rolleyes::rockout::toast::oops:
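For reference, those bandwidth figures fall straight out of bus width times per-pin data rate. A minimal sketch of the arithmetic; the 256-bit, 16 Gbps DG2 configuration is an assumption based on the leak, while the GeForce entries are public specs:

```python
# Peak theoretical memory bandwidth from bus width and per-pin data rate:
#   GB/s = (bus_width_bits / 8) * data_rate_Gbps_per_pin
def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits / 8 * pin_rate_gbps

# DG2 figures are assumed from the leak; the GeForce entries are public specs.
cards = {
    "DG2 top SKU (256-bit, 16 Gbps GDDR6, assumed)": (256, 16.0),
    "RTX 3070 (256-bit, 14 Gbps GDDR6)": (256, 14.0),
    "RTX 3070 Ti (256-bit, 19 Gbps GDDR6X)": (256, 19.0),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {peak_bandwidth_gb_s(bus, rate):.0f} GB/s")
# Prints 512, 448 and 608 GB/s respectively -- hence "~100 GB/s short of a 3070 Ti".
```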

I'm not even half kidding. DG2 looks like old news already; it's like Raja still has no grasp of time-to-market, or of pre-empting delays like this.

I mean... this just in:

[attachment 233714]
In my own experience trying out the RTX 3070 Ti, the higher bandwidth compared to the 3070 did not make a material difference in game performance. I reduced the VRAM clock speed and did not see a drastic drop in performance. The bulk of the sub-10% improvement in average performance likely comes from the increase in CUDA cores rather than from the memory bandwidth increase. Cards in this range are meant to be solid 1440p performers, and may suit some games at 4K. So, in my opinion, there is little concern about the memory bandwidth.
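That observation lines up with a quick back-of-envelope comparison of the two cards' public specs. A sketch, treating shader throughput as cores × boost clock (a deliberately crude model):

```python
# Compare the relative increase in shader throughput vs. memory bandwidth
# going from RTX 3070 to RTX 3070 Ti (public specs).
spec_3070   = {"cuda_cores": 5888, "boost_mhz": 1725, "bandwidth_gb_s": 448}
spec_3070ti = {"cuda_cores": 6144, "boost_mhz": 1770, "bandwidth_gb_s": 608}

compute_gain = (
    (spec_3070ti["cuda_cores"] * spec_3070ti["boost_mhz"])
    / (spec_3070["cuda_cores"] * spec_3070["boost_mhz"])
    - 1
)
bandwidth_gain = spec_3070ti["bandwidth_gb_s"] / spec_3070["bandwidth_gb_s"] - 1

print(f"Shader throughput: +{compute_gain:.1%}")    # ~ +7%
print(f"Memory bandwidth:  +{bandwidth_gain:.1%}")  # ~ +36%
# A sub-10% real-world gain tracks the ~7% compute increase far more closely
# than the ~36% bandwidth increase, consistent with the post above.
```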

These Intel GPUs don't even need to be great, or price-competitive.
Why?


Because Intel will force them into hundreds of thousands of prebuilt systems and push them onto users with their usual shady business practices.

(I look forward to finding out they only work on Intel systems with 11th- and 12th-gen CPUs, or some other such nonsense.)
This is possible, but I don't think Intel will be silly enough to lock AMD and Nvidia GPUs out of the ecosystem. They know very well that most dedicated GPUs are from Nvidia, and by tying Intel CPUs to Intel GPUs, they would just push people to AMD-based systems. In the end, they could lose sales on both the CPU and the GPU.
 
In my own experience trying out the RTX 3070 Ti, the higher bandwidth compared to the 3070 did not make a material difference in game performance. I reduced the VRAM clock speed and did not see a drastic drop in performance. The bulk of the sub-10% improvement in average performance likely comes from the increase in CUDA cores rather than from the memory bandwidth increase. Cards in this range are meant to be solid 1440p performers, and may suit some games at 4K. So, in my opinion, there is little concern about the memory bandwidth.

Semi-relevant, but I found that overclocking the VRAM on my 3090 helped massively with mining, yet caused a performance loss in gaming, because it ate into the GPU's power limit.

With a raised or unlimited TDP it was different, but there genuinely is a point where higher VRAM clock speeds become a negative.
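For anyone who wants to check this on their own card, here's a minimal monitoring sketch using the pynvml bindings (`pip install nvidia-ml-py`; assumes an NVIDIA GPU with a recent driver). Run it under load and watch whether the power cap kicks in as you raise the VRAM clock:

```python
# Poll power draw vs. the enforced power limit and flag power-cap throttling.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

for _ in range(10):
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000         # mW -> W
    limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000
    sm_mhz  = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)
    mem_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)
    # Bitmask of current throttle reasons (flag support varies by GPU/driver).
    reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)
    power_capped = bool(reasons & pynvml.nvmlClocksThrottleReasonSwPowerCap)
    print(f"{power_w:5.1f}/{limit_w:.0f} W  core {sm_mhz} MHz  "
          f"mem {mem_mhz} MHz  power-limited: {power_capped}")
    time.sleep(1)

pynvml.nvmlShutdown()
```

If `power-limited` flips to True once the memory overclock is applied, the core clocks are being sacrificed to the power budget, which matches the gaming regression described above.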
 
In my own experience trying out the RTX 3070 Ti, the higher bandwidth compared to the 3070 did not make a material difference in game performance. I reduced the VRAM clock speed and did not see a drastic drop in performance. The bulk of the sub-10% improvement in average performance likely comes from the increase in CUDA cores rather than from the memory bandwidth increase. Cards in this range are meant to be solid 1440p performers, and may suit some games at 4K. So, in my opinion, there is little concern about the memory bandwidth.
I hear you, but an Nvidia GPU is not an Intel GPU. Nvidia has deployed several things over the years to dynamically reduce the pressure on VRAM. It's why they think they can make do with less than the competition, too. Nvidia has always had a tighter bus, capacity, and so on, and works around it. We're seeing more and more dynamic trickery applied, so it's becoming more and more of a black box. Your FPS might be similar, but are you seeing the same detail?

Just as a 'core' in one architecture isn't the same as in another, bandwidth or even capacity isn't the whole story when you compare different architectures, or even different situations/games. But there are no true magic bullets here anyway: if Intel thinks it can make do with 500 GB/s, we know they're limiting themselves to that bandwidth, and until we hear of special technology to work around it, it is what it is: a low number given the supposed performance level. AMD has Infinity Cache, for example, to make up for what they lack in GDDR6 bandwidth ;)
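To put a rough number on that last point, here's a first-order sketch of how a large on-die cache stretches a narrow DRAM bus; the hit rates are illustrative assumptions, not measured Infinity Cache figures:

```python
# First-order model: only cache misses consume DRAM bandwidth, so a cache with
# hit rate h makes DRAM bandwidth B look like roughly B / (1 - h) to the GPU
# (ignoring the cache's own bandwidth limits and other second-order effects).
def effective_bandwidth(dram_gb_s: float, cache_hit_rate: float) -> float:
    return dram_gb_s / (1.0 - cache_hit_rate)

for hit_rate in (0.0, 0.3, 0.5):
    print(f"hit rate {hit_rate:.0%}: 500 GB/s behaves like "
          f"~{effective_bandwidth(500, hit_rate):.0f} GB/s")
# hit rate 0%: 500, 30%: ~714, 50%: ~1000 GB/s of apparent bandwidth.
```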
 