A recent Intel job listing for a memory sub-system validation engineer (a person who tests and validates new memory types on prototype silicon) suggests that Intel is implementing the contemporary GDDR7 memory standard in an upcoming GPU. The position calls for the candidate to drive and deliver pre- and post-silicon validation and characterization of GDDR6 and GDDR7 memory types on Intel's Arc GPU products. Given that Intel is putting this listing out now, in Q2 2025, an actual product implementing GDDR7 could be rather far out, considering typical GPU silicon development timelines. We predict such a chip could be taped out only by 2026, if not later.
A possible 2026 tapeout of GPU silicon implementing GDDR7 points to the likelihood of this being a 3rd Gen Arc GPU based on the Xe3 "Celestial" graphics architecture. Given Intel's initial success with the Arc B580 and B570, the company could look to ramp up production of the two through 2025 and continue to press home its advantage of selling well-priced 1080p-class GPUs. Intel could stay away from vanity projects such as trying to create enthusiast-class GPUs, sticking to what works and what it can sell in good volumes to grab market share. This is a business decision even AMD seems to have taken with its RDNA 4-based Radeon RX generation.
21 Comments on Intel Job Listing Suggests Company Implementing GDDR7 with Future GPU
Intel Arc could do just fine with GDDR6. I just don't quite understand why they didn't release anything more powerful this time around, like a B770 or even a B780. It's just a weird business decision. Sure, their bottom end got better, but they don't even offer something higher up that would really compete with the RTX 5060 and RX 9060... and by the time they might do it with a C770, NVIDIA and AMD will release their new thing again.
The 5060 Ti would have been far better with high-clocked GDDR6 memory on a 256-bit bus as well, like the AMD 9070 or 9070 XT.
128-bit and 8 GB VRAM should be banned from existence.
We don't know for sure ofc, but I like the theory.
I swear some people just get fixated on the 128-bit number and start with the predetermined conclusion (this card sucks because 128-bit) and work backwards from there.
Someone will correct me if I'm wrong, but there is no theoretical reason why a given total bandwidth would perform worse on a 128-bit card versus a 192- or 256-bit one. Peak bandwidth is just bus width multiplied by the per-pin data rate, so the overall number is all that matters.
And as was said above, the Infinity Cache on AMD cards and the large L2 cache on NVIDIA cards make bandwidth much less of an issue than it used to be. So, tl;dr: I don't think we need to get hung up on bus width; just look at overall performance. The importance of bandwidth is frequently overestimated, I think. There are many cases of cards released in versions with much faster bandwidth that hasn't translated into much extra performance (quick arithmetic in the sketch after this list), e.g.:
GTX 1660 to 1660 Super
RTX 3070 to 3070 Ti
and now 4060 Ti to 5060 Ti.
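To make the arithmetic behind this concrete, here is a minimal sketch of the peak-bandwidth calculation for the cards mentioned above. The specs are the publicly listed ones; the helper function name is just for illustration, not from any real API.

```python
# Minimal sketch: peak memory bandwidth is bus width times per-pin data rate,
# so a narrow bus with fast memory can match a wide bus with slow memory.
# Card specs are the publicly listed ones; exact numbers are illustrative.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bits / 8 bits per byte) * per-pin Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

cards = [
    ("GTX 1660 (192-bit GDDR5 @ 8 Gbps)",        192,  8.0),
    ("GTX 1660 Super (192-bit GDDR6 @ 14 Gbps)", 192, 14.0),
    ("RTX 4060 Ti (128-bit GDDR6 @ 18 Gbps)",    128, 18.0),
    ("RTX 5060 Ti (128-bit GDDR7 @ 28 Gbps)",    128, 28.0),
    ("RX 9070 XT (256-bit GDDR6 @ 20 Gbps)",     256, 20.0),
]

for name, width, rate in cards:
    print(f"{name}: {peak_bandwidth_gb_s(width, rate):.0f} GB/s")

# The 128-bit RTX 5060 Ti lands at 448 GB/s, the same figure as a
# hypothetical 256-bit bus running 14 Gbps GDDR6. Only the product matters.
```

Note this is peak, not effective, bandwidth; the large caches mentioned above are exactly what lets these narrower buses keep up in practice.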
Mainly, I want an upper mid-range Intel card to try out; sounds like fun.