May 2nd, 2025 06:22 EDT


Monday, April 21st 2025

Intel Job Listing Suggests Company Implementing GDDR7 with Future GPU

A recent job listing at Intel for a memory sub-system validation engineer (a role that tests and validates new memory types with prototype silicon) suggests that Intel is implementing the contemporary GDDR7 memory standard with an upcoming GPU. The position calls for the candidate to drive and deliver pre- and post-silicon validation and characterization of GDDR6 and GDDR7 memory types on Intel's Arc GPU products. Given that Intel is putting this listing out now, in Q2 2025, an actual product that implements GDDR7 could be rather far out, considering typical GPU silicon development timelines. We predict such a chip could be taped out only by 2026, if not later.

A possible 2026 tapeout of GPU silicon implementing GDDR7 points to the likelihood of this being a 3rd Gen Arc GPU based on the Xe3 "Celestial" graphics architecture. Given Intel's initial success with the Arc B580 and B570, the company could look to ramp up production of the two through 2025, and continue to press home its advantage of selling well-priced 1080p-class GPUs. Intel could stay away from vanity projects such as trying to create enthusiast-class GPUs, sticking to what works and what it can sell in good volumes to grab market share. This is a business decision even AMD seems to have taken with its RDNA 4-based Radeon RX 9000 generation.
Sources: Haze2K1 (Twitter), VideoCardz

21 Comments on Intel Job Listing Suggests Company Implementing GDDR7 with Future GPU

#1
RejZoR
AMD used GDDR6 on the RX 9000 series and it's doing just fine. It doesn't always have to be the latest and greatest if more readily available, tested and reliable tech exists and doesn't particularly affect performance in a negative way.

Intel Arc could do just fine with GDDR6; I just don't quite understand why they didn't release anything more powerful like a B770 this time around, or even a B780. It's just a weird business decision. Sure, their bottom end got better, but they don't even offer something higher up that would really compete with the RTX 5060 and RX 9060... and by the time they might do it with a C770, NVIDIA and AMD will release their new thing again.
Posted on Reply
#2
Quicks
Good, just don't put good tech like GDDR7 on a bad 128-bit bus. Then you may as well use GDDR6 on a 256-bit bus.

The 5060 Ti would have been far better with high-clocked GDDR6 memory on a 256-bit bus as well, like the AMD 9070 or 9070 XT.

128-bit and 8 GB VRAM should be banned from existence.
Posted on Reply
#3
Jism
RejZoR: AMD used GDDR6 on the RX 9000 series and it's doing just fine. It doesn't always have to be the latest and greatest if more readily available, tested and reliable tech exists and doesn't particularly affect performance in a negative way.
Infinity Cache says hi.
Posted on Reply
#4
TumbleGeorge
Jism: Infinity Cache says hi.
NVIDIA also has big caches in the 40- and 50-series. The total cache size is not smaller, or at least not much smaller, than in AMD's Radeons.
Posted on Reply
#5
HOkay
RejZoR: AMD used GDDR6 on the RX 9000 series and it's doing just fine. It doesn't always have to be the latest and greatest if more readily available, tested and reliable tech exists and doesn't particularly affect performance in a negative way.

Intel Arc could do just fine with GDDR6; I just don't quite understand why they didn't release anything more powerful like a B770 this time around, or even a B780. It's just a weird business decision. Sure, their bottom end got better, but they don't even offer something higher up that would really compete with the RTX 5060 and RX 9060... and by the time they might do it with a C770, NVIDIA and AMD will release their new thing again.
I like Hardware Unboxed's theory - the CPU overhead problem. If you scale up the performance of the B580, it might be that the CPU overhead issue also scales, to the point where even the top end CPUs hold things back. If it's caused by something in hardware which they can't fix in firmware / software then that could be a reason they didn't do anything more powerful than the B580.

We don't know for sure ofc, but I like the theory.
Posted on Reply
#6
RejZoR
Quicks: Good, just don't put good tech like GDDR7 on a bad 128-bit bus. Then you may as well use GDDR6 on a 256-bit bus.

The 5060 Ti would have been far better with high-clocked GDDR6 memory on a 256-bit bus as well, like the AMD 9070 or 9070 XT.

128-bit and 8 GB VRAM should be banned from existence.
Well, a 256-bit bus is more complex and more expensive (it's why no one does 512-bit anymore); it's sometimes more cost-effective to have a narrower bus with faster VRAM to compensate.
Posted on Reply
#7
Unregistered
Quicks: Good, just don't put good tech like GDDR7 on a bad 128-bit bus. Then you may as well use GDDR6 on a 256-bit bus.

The 5060 Ti would have been far better with high-clocked GDDR6 memory on a 256-bit bus as well, like the AMD 9070 or 9070 XT.

128-bit and 8 GB VRAM should be banned from existence.
128-bit is irrelevant. The 5060 Ti has 55% more bandwidth than the 4060 Ti but is about 11% faster, and some of that can be put down to it having more processing power, not bandwidth.

I swear some people just get fixated on the 128-bit number and start with the predetermined conclusion (this card sucks because 128-bit) and work backwards from there.
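For reference, the peak-bandwidth arithmetic behind that comparison is simple; a minimal sketch, assuming the commonly cited per-pin data rates of 18 Gbps (4060 Ti, GDDR6) and 28 Gbps (5060 Ti, GDDR7), both on a 128-bit bus:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8) bytes moved per
    transfer, times the per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# Assumed specs: 18 Gbps GDDR6 (4060 Ti) vs 28 Gbps GDDR7 (5060 Ti),
# both on a 128-bit bus.
bw_4060_ti = peak_bandwidth_gbs(128, 18)  # 288 GB/s
bw_5060_ti = peak_bandwidth_gbs(128, 28)  # 448 GB/s
print(f"uplift: {bw_5060_ti / bw_4060_ti - 1:.0%}")  # ~56% more bandwidth
```

That roughly matches the "55% more bandwidth" figure quoted above, despite the bus width staying at 128-bit, which is the poster's point: the per-pin data rate moves the total just as effectively as a wider bus does.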
#8
Quicks
RejZoR: Well, a 256-bit bus is more complex and more expensive (it's why no one does 512-bit anymore); it's sometimes more cost-effective to have a narrower bus with faster VRAM to compensate.
Someone needs to compare what the price difference would be between going 128-bit and 256-bit, using GDDR6 vs GDDR7, to see if it's really cost-effective to use 128-bit these days.
SilentPeace: 128-bit is irrelevant. The 5060 Ti has 55% more bandwidth than the 4060 Ti but is about 11% faster, and some of that can be put down to it having more processing power, not bandwidth.

I swear some people just get fixated on the 128-bit number and start with the predetermined conclusion (this card sucks because 128-bit) and work backwards from there.
That would be an interesting test, to see if the same bandwidth over different bus widths makes a difference.
Posted on Reply
#9
TumbleGeorge
RejZoR: it's why no one does 512-bit anymore
Must agree with a little exception. :)
Posted on Reply
#10
TheinsanegamerN
SilentPeace: 128-bit is irrelevant. The 5060 Ti has 55% more bandwidth than the 4060 Ti but is about 11% faster, and some of that can be put down to it having more processing power, not bandwidth.

I swear some people just get fixated on the 128-bit number and start with the predetermined conclusion (this card sucks because 128-bit) and work backwards from there.
They absolutely do; we're now on the third or fourth generation of people crying that the xx60-tier cards are 128-bit and NVIDIA's done, etc.
HOkay: I like Hardware Unboxed's theory - the CPU overhead problem. If you scale up the performance of the B580, it might be that the CPU overhead issue also scales, to the point where even the top end CPUs hold things back. If it's caused by something in hardware which they can't fix in firmware / software then that could be a reason they didn't do anything more powerful than the B580.

We don't know for sure ofc, but I like the theory.
That's an interesting theory, but then I wonder why they don't revise the hardware for the B700 series. It wouldn't be the first time.
Posted on Reply
#11
HOkay
TheinsanegamerN: They absolutely do; we're now on the third or fourth generation of people crying that the xx60-tier cards are 128-bit and NVIDIA's done, etc.


That's an interesting theory, but then I wonder why they don't revise the hardware for the B700 series. It wouldn't be the first time.
I think it's fairly well known that Celestial isn't too far away - the latest rumours say next year, so they're just all-in on that I guess.
Posted on Reply
#12
docnorth
This could mean better availability of GDDR7 with lower cost, and/or much higher-performing GPUs. Both possibilities are welcome, I guess. Besides, it might be GDDR7 for the high-end (if any) GPUs and GDDR6 for the rest. Even Intel's plans probably aren't final yet.
Posted on Reply
#13
Unregistered
Quicks: That would be an interesting test, to see if the same bandwidth over different bus widths makes a difference.
I was trying to think how to conclusively prove it. I can't off the top of my head, but I know there was an 8 GB version of the 3060 with a 128-bit bus and a 12 GB version with a 192-bit bus; unfortunately the 8 GB card also has much slower-clocked RAM and only 2/3rds the L2 cache (2 MB vs 3 MB), so of course you'd expect it to be slower. I suppose theoretically you could downclock the memory on the 12 GB card and test both, but I don't even know if it's possible to underclock it by such a large amount. The RTX 3050 is another one; it came in 128-bit and 96-bit versions, but the 6 GB version has fewer cores, so that is out as well.

Someone will correct me if I'm wrong but there is no theoretical reason why a given bandwidth would be worse on a 128-bit card versus 192 or 256-bit. The overall number is all that matters.

And like was said above, the Infinity Cache on AMD cards and the large L2 cache on NVIDIA cards make bandwidth much less of an issue than it used to be. So tl;dr - I don't think we need to get hung up on bus width, just look at overall performance. I think people frequently overestimate the importance of bandwidth. There are many cases of cards releasing in versions with much higher bandwidth which hasn't translated into much extra performance, e.g.:

GTX 1660 to 1660 Super
RTX 3070 to 3070 Ti
and now 4060 Ti to 5060 Ti.
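The "overall number is all that matters" point can be sketched with the same peak-bandwidth formula; the data rates below are hypothetical round numbers chosen for the arithmetic, not taken from any specific product:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    # Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps
    return bus_width_bits / 8 * data_rate_gbps

# Hypothetical round numbers: 32 Gbps GDDR7-class vs 16 Gbps GDDR6-class.
narrow_fast = peak_bandwidth_gbs(128, 32)  # 128-bit bus, fast memory
wide_slow = peak_bandwidth_gbs(256, 16)    # 256-bit bus, slower memory
assert narrow_fast == wide_slow == 512.0   # identical 512 GB/s either way
```

A narrow bus with fast memory and a wide bus with slower memory land on exactly the same peak figure, so on paper neither configuration is inherently better; cost and cache behaviour decide the rest.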
Posted on Reply
#14
TheinsanegamerN
SilentPeace: I was trying to think how to conclusively prove it. I can't off the top of my head, but I know there was an 8 GB version of the 3060 with a 128-bit bus and a 12 GB version with a 192-bit bus; unfortunately the 8 GB card also has much slower-clocked RAM and only 2/3rds the L2 cache (2 MB vs 3 MB), so of course you'd expect it to be slower. I suppose theoretically you could downclock the memory on the 12 GB card and test both, but I don't even know if it's possible to underclock it by such a large amount. The RTX 3050 is another one; it came in 128-bit and 96-bit versions, but the 6 GB version has fewer cores, so that is out as well.

Someone will correct me if I'm wrong but there is no theoretical reason why a given bandwidth would be worse on a 128-bit card versus 192 or 256-bit. The overall number is all that matters.

And like was said above, the Infinity Cache on AMD cards and the large L2 cache on NVIDIA cards make bandwidth much less of an issue than it used to be. So tl;dr - I don't think we need to get hung up on bus width, just look at overall performance. I think people frequently overestimate the importance of bandwidth. There are many cases of cards releasing in versions with much higher bandwidth which hasn't translated into much extra performance, e.g.:

GTX 1660 to 1660 Super
RTX 3070 to 3070 Ti
and now 4060 Ti to 5060 Ti.
Correct. Hypothetically, it makes no difference. There is an academic difference when you get down to 64 or 32-bit cards where, for example, a 64-bit data call would have to be split in two for the 32-bit bus, holding things up. I think some of the really big instructions run into the same issue with 64-bit, but these cards are typically so slow it doesn't realistically matter.
HOkay: I think it's fairly well known that Celestial isn't too far away - the latest rumours say next year, so they're just all-in on that I guess.
Well, next year, as in the end of 2026, would be two years since the B580 release. To me that's a long time to wait.

Mainly I want an upper mid range Intel card to try out, sounds like fun.
Posted on Reply
#15
HOkay
TheinsanegamerN: Correct. Hypothetically, it makes no difference. There is an academic difference when you get down to 64 or 32-bit cards where, for example, a 64-bit data call would have to be split in two for the 32-bit bus, holding things up. I think some of the really big instructions run into the same issue with 64-bit, but these cards are typically so slow it doesn't realistically matter.

Well, next year, as in the end of 2026, would be two years since the B580 release. To me that's a long time to wait.

Mainly I want an upper mid range Intel card to try out, sounds like fun.
You & me both, assuming no CPU overhead problem I'd have day 1 purchased a B780, just for funsies.
Posted on Reply
#16
Visible Noise
Quicks: Good, just don't put good tech like GDDR7 on a bad 128-bit bus. Then you may as well use GDDR6 on a 256-bit bus.

The 5060 Ti would have been far better with high-clocked GDDR6 memory on a 256-bit bus as well, like the AMD 9070 or 9070 XT.

128-bit and 8 GB VRAM should be banned from existence.
The 5060 Ti isn’t an Intel product. No idea why you would post about it here.
Posted on Reply
#17
R-T-B
Quicks: Good, just don't put good tech like GDDR7 on a bad 128-bit bus. Then you may as well use GDDR6 on a 256-bit bus.
Why? Option one costs less than option two for nearly (if not exactly) identical performance.
Quicks: Someone needs to compare what the price difference would be between going 128-bit and 256-bit, using GDDR6 vs GDDR7, to see if it's really cost-effective to use 128-bit these days.
Increasing bus width will always be more expensive, if that's your question.
Posted on Reply
#18
Quicks
R-T-B: Why? Option one costs less than option two for nearly (if not exactly) identical performance.


Increasing bus width will always be more expensive, if that's your question.
Do you have numbers to prove your point?
Visible Noise: The 5060 Ti isn’t an Intel product. No idea why you would post about it here.
Are you stupid or something? I used it as an observation because it's using a 128-bit bus.
Posted on Reply
#19
R-T-B
Quicks: Do you have numbers to prove your point?
It's basic bandwidth math, no numbers needed other than the chip specs and bus width. This isn't some cosmic unknown. And any vendor will tell you bus width is the chief expense in a design.
Posted on Reply
#20
Quicks
R-T-B: It's basic bandwidth math, no numbers needed other than the chip specs and bus width. This isn't some cosmic unknown. And any vendor will tell you bus width is the chief expense in a design.
You sure about that? So you're saying it would be more expensive to use a 256-bit bus with GDDR6 than GDDR7 on a 128-bit bus?
Posted on Reply
#21
R-T-B
Quicks: You sure about that? So you're saying it would be more expensive to use a 256-bit bus with GDDR6 than GDDR7 on a 128-bit bus?
Most likely, yes. That's been the industry expectation for years, so unless prices of GDDR7 are through the roof, I'd bet on it.
Posted on Reply