
i7-5775C: why did Intel abandon development of eDRAM?

From what I understand, Intel's i7-5775C wasn't produced in large numbers, yet it had performance above and beyond the 4-core/8-thread CPUs of its day, so why did Intel abandon development of eDRAM for CPUs? 128 MB of eDRAM proved its worth with the i7-5775C.
 
@eidairaman1
So the R&D costs of further developing eDRAM were prohibitive? I can't imagine Intel made much profit off the i7-5775C, but it's not that people weren't buying them; it's that Intel wasn't producing them.
 
@eidairaman1
So the R&D costs of further developing eDRAM were prohibitive? I can't imagine Intel made much profit off the i7-5775C, but it's not that people weren't buying them; it's that Intel wasn't producing them.

I don't think it was the R&D costs so much as the market realities at the time. Intel has unmatched R&D capabilities in the industry and a downright insane level of CapEx.

This was a very complex piece of silicon, and like the Ryzen X3D processors today, the eDRAM L4 cache's primary objective was to address poor memory throughput and improve both CPU and, notably, graphics performance. People often forget, but BDW-C had an advanced iGPU, the Iris Pro 6200, which was substantially better than what Intel put on the i7-4790K or the i7-6700K.

However, the timing couldn't have been worse. Development difficulties, coupled with the near-simultaneous launch of DDR4 (which pushed mainstream JEDEC bandwidth well past typical DDR3 configurations) and of Skylake, a processor with comparably very high IPC and clockability, made the eDRAM increasingly hard to justify. Games and most software intended for home computers and desktop-grade processors already saw a major boost on the i7-6700K, which in turn brought up the money question. BDW-C wasn't a cheap CPU, and it essentially required two production lines to make, since the CPU die was fabbed on 14 nm but the eDRAM was not. Add packaging, etc., plus the fact that this wouldn't really scale for the enterprise to begin with, and Intel dropped it as if it were just a proof of concept. Which, really, it was. Its strengths served a very small niche at the time, especially regarding graphics: very few people were interested in advanced integrated graphics at all, especially since the Iris drivers were still garbage. Its Haswell-based predecessor with Crystal Well eDRAM saw even more limited deployment, mostly in some laptops of the time, and suffered from all of the same problems: price, lower general performance, etc.
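For rough context on the DDR3-to-DDR4 point: peak dual-channel bandwidth scales linearly with the transfer rate, and DDR4 simply shipped at much higher JEDEC speed grades. The C snippet below is only back-of-the-envelope arithmetic with illustrative speed grades, not measured figures.

```c
/* Back-of-the-envelope peak bandwidth for a dual-channel DDR setup.
 * Peak GB/s = transfers per second * 8 bytes per 64-bit transfer * channels.
 * Speed grades are illustrative, not measurements. */
#include <stdio.h>

static double peak_gbs(double mt_per_s, int channels)
{
    return mt_per_s * 1e6 * 8.0 * channels / 1e9;
}

int main(void)
{
    printf("DDR3-1600, dual channel: %.1f GB/s\n", peak_gbs(1600, 2)); /* ~25.6 */
    printf("DDR4-2400, dual channel: %.1f GB/s\n", peak_gbs(2400, 2)); /* ~38.4 */
    printf("DDR4-3200, dual channel: %.1f GB/s\n", peak_gbs(3200, 2)); /* ~51.2 */
    return 0;
}
```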

It's not entirely different from the recent Lakefield processor, which debuted the hybrid architecture that eventually found its way into the Alder Lake and Raptor Lake designs, plus the Foveros 3D packaging that, at the time, was used on that CPU and no other. Truth is, Broadwell-C is more of a what-if prototype that somehow made it to market, even if in limited quantities, just like Lakefield.
 
This is also close enough to the topic as another example. :)

Ah, the infamous Kaby Lake-G. That CPU was indeed a very weird collab between AMD and Intel. It worked for the NUC system it was designed for, but its undoing was mostly that the two couldn't agree on who should provide support for the graphics driver. As you know, that Vega M GL or GH is actually a Polaris 22 graphics core tied to a single HBM2 stack (1024-bit bus) through an EMIB package. Not even Vega, but very interesting technology.

AMD provided drivers unofficially for a short while, but then pulled support entirely, claiming Intel owed them money or something. Every now and then Intel would pay them to release an Intel-branded Radeon driver that had the same control panel as AMD's driver, because it really was AMD's driver, just with an Intel skin. That might have given a few diehard Radeon fans a bit of an aneurysm if they had the chance to see it, lol. It sparked some interesting discussions back in the day...
 
From what I've read, the Crystal Well L4 cache was fundamentally different from that of the i7-5775C, and I believe it was only available in BGA form, not socketed.

The rise of DDR4 made the i7-5775C obsolete (its L4 cache wasn't much faster, if at all, than DDR4), but why didn't Intel try to develop a faster and/or larger eDRAM?
 
No Ryzen to compete against.
 
Because it was twice as expensive as the regular model, and they couldn't get past 14 nm at the time. The CPUs were too hot.
 
Because it was twice as expensive as the regular model, and they couldn't get past 14 nm at the time. The CPUs were too hot.

The 5775C was anything but hot; it was actually quite conservatively clocked (the reason it regressed in performance against the 4790K despite higher IPC) and had tons of die area for heat transfer.

It's just that the usefulness of the technology waned as other technologies addressed the weaknesses that eDRAM set out to solve, and it wouldn't be until 2017 that AMD had an answer in the form of Ryzen.

From 2011 to 2017, AMD languished with their substandard Phenom II and FX CPUs, and that was all they could offer.

All in all, it's definitely one of the most interesting CPUs I've owned. I recommend the AnandTech 2020 revisit article; it's a great read.
 
DRAM requires a very different process than SRAM.

eDRAM, however, means building DRAM with the same machines you're making SRAM/CPU logic with, and that seems to have been a much harder problem than Intel expected (or at least, they clearly decided to stop pursuing the technology).

Meanwhile, AMD/Ryzen uses SRAM, so the same TSMC process that makes dense logic can be used to make the SRAM, and AMD just ties it all together later. It's a simpler way of doing things, with fewer problems to solve.
 
From what I understand, Intel's i7-5775C wasn't produced in large numbers, yet it had performance above and beyond the 4-core/8-thread CPUs of its day, so why did Intel abandon development of eDRAM for CPUs? 128 MB of eDRAM proved its worth with the i7-5775C.
Did it?!
Yes, in that implementation, sure, but eDRAM was just a means to an end, i.e. a large L4 cache.

Allegedly something similar is due to return, called Adamantine or something like that.

MLID has a video on it on YouTube.
 
It's still over $120 on eBay; otherwise it would be an interesting alternative to the 4790K.
 
The 5775C was anything but hot; it was actually quite conservatively clocked (the reason it regressed in performance against the 4790K despite higher IPC) and had tons of die area for heat transfer.
eDRAM was higher bandwidth than DDR4 and, more importantly, its latency was about half that of the attached DIMMs. I think the biggest reason it failed is that only Apple was interested in this SKU. By that time the process for the L4 die, 22 nm, should have been cheap, and given that it was a small DRAM die, yields should have been very high.

The graph below is from Chips and Cheese and has a logarithmic scale. The blue line is the 5775C. Notice how its latency is much better than Skylake's at working sets larger than the 7700K's L3.

[Graph: memory latency vs. working-set size, via Chips and Cheese]
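For anyone curious how latency-versus-working-set curves like that are produced, the usual approach is a pointer chase through a random cyclic permutation, so every load depends on the previous one and the prefetchers can't help. A minimal sketch, with arbitrary buffer sizes and iteration counts and POSIX timing assumed:

```c
/* Minimal pointer-chase latency sketch: walk a random cyclic permutation so
 * each load depends on the previous one. Average time per load approximates
 * the latency of whichever cache level the working set fits in.
 * Sizes and iteration counts are arbitrary; timing uses POSIX clock_gettime. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns_per_load(size_t bytes, size_t iters)
{
    size_t n = bytes / sizeof(size_t);
    size_t *buf   = malloc(n * sizeof(size_t));
    size_t *order = malloc(n * sizeof(size_t));

    /* Fisher-Yates shuffle to get a random visiting order... */
    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    /* ...then link that order into one big cycle: buf[x] holds the next index. */
    for (size_t i = 0; i < n; i++)
        buf[order[i]] = order[(i + 1) % n];

    struct timespec t0, t1;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        idx = buf[idx];                        /* dependent load chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = idx;                /* keep the chain from being optimized away */
    (void)sink;
    free(order);
    free(buf);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / (double)iters;
}

int main(void)
{
    /* Sweep from cache-sized to DRAM-sized working sets. */
    for (size_t kb = 16; kb <= 256 * 1024; kb *= 2)
        printf("%8zu KiB: %.1f ns per load\n", kb, chase_ns_per_load(kb * 1024, 20000000));
    return 0;
}
```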
 
eDRAM's competitor is SRAM, not DDR4 (or DDR5) RAM.

SRAM is well known to be power hungry and low density. eDRAM should be higher density, but nobody else in the CPU world tries to make DRAM on these logic processes, so there's a lot of R&D involved.
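To put the density point in rough numbers, here is the textbook comparison of a 6T SRAM cell against a 1T1C eDRAM cell for a 128 MiB array, ignoring tags, ECC and peripheral logic (purely illustrative arithmetic, not die-level figures):

```c
/* Rough cell-count comparison for 128 MiB of cache data, assuming the
 * textbook 6-transistor SRAM cell versus a 1-transistor-1-capacitor eDRAM
 * cell. Tag arrays, ECC and peripheral logic are ignored. */
#include <stdio.h>

int main(void)
{
    const double bits = 128.0 * 1024 * 1024 * 8;   /* 128 MiB of data bits */
    printf("SRAM  (6T):   %.1f billion transistors\n", bits * 6 / 1e9);  /* ~6.4 */
    printf("eDRAM (1T1C): %.1f billion transistors\n", bits * 1 / 1e9);  /* ~1.1 */
    return 0;
}
```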
 
eDRAM was higher bandwidth than DDR4 and, more importantly, its latency was about half that of the attached DIMMs. I think the biggest reason it failed is that only Apple was interested in this SKU. By that time the process for the L4 die, 22 nm, should have been cheap, and given that it was a small DRAM die, yields should have been very high.

The graph below is from Chips and Cheese and has a logarithmic scale. The blue line is the 5775C. Notice how its latency is much better than Skylake's at working sets larger than the 7700K's L3.

[Attachment: the Chips and Cheese latency graph referenced above]

Indeed, it was great. You'll find that reflected in games, as the 5775C aged the best of all of Intel's former quad cores. Sadly, it just didn't shine when it needed to.
 
I think they are going to go with chiplet HBM2e or some variant in the future.
 
I think they are going to go with chiplet HBM2e or some variant in the future.

The future is going to be fully 3D stacked and integrated, that's for sure. Foveros is amazing packaging tech.
 
Indeed, it was great. You'll find that reflected in games, as the 5775C aged the best of all of Intel's former quad cores. Sadly, it just didn't shine when it needed to.
It was intended for the IGP, but benefitted applications that had large working sets. Unfortunately, there were very few of those for regular consumers at that time.
 
The i7-5775C could beat the 7700K with DDR4 at 2400 MHz, but what about 3200 MHz? Even my 5820K system ran its memory faster than 2400 MHz (2900 MHz).
 
The i7-5775C could beat the 7700K with DDR4 at 2400 MHz, but what about 3200 MHz? Even my 5820K system ran its memory faster than 2400 MHz (2900 MHz).
They should test it at that, but off-chip DIMMs cannot beat eDRAM for latency. The gap is too great: DDR4-2400 has roughly twice the latency, and higher RAM speeds don't improve latency nearly as much as they improve bandwidth.
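Some quick arithmetic on why raising the data rate barely moves latency: the CAS portion in nanoseconds is CL x 2000 / MT/s, and it is only one slice of the full load-to-use path (row activation, memory controller, fabric). The CL pairings below are typical assumptions, not any specific kit:

```c
/* Why higher DDR4 data rates help bandwidth much more than latency:
 * CAS latency in ns = CL * 2000 / (MT/s). The CL values are typical,
 * assumed pairings used purely for illustration. */
#include <stdio.h>

static double cas_ns(int cl, int mts) { return cl * 2000.0 / mts; }

int main(void)
{
    printf("DDR4-2400 CL17: %.1f ns CAS\n", cas_ns(17, 2400));  /* ~14.2 ns */
    printf("DDR4-3200 CL16: %.1f ns CAS\n", cas_ns(16, 3200));  /* ~10.0 ns */
    /* Peak bandwidth, by contrast, scales linearly: 2400 -> 3200 is +33%. */
    return 0;
}
```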
 
They should test it at that, but off-chip DIMMs cannot beat eDRAM for latency. The gap is too great: DDR4-2400 has roughly twice the latency, and higher RAM speeds don't improve latency nearly as much as they improve bandwidth.
I agree with you, please tell that to Intel! Get them to further develop eDRAM!
 
Cache is King
  • a brilliant way to reduce the latency of accessing DDRx main memory
  • incredibly wasteful, as duplicate copies of data are held; together with the management overhead of the cache (prefetch, snoop, copy-out, miss, stall, flush), the return per transistor and per added watt of power/heat, divided by the increase in performance, is very poor
  • more layers of cache increase this complexity
  • diminishing returns as cache sizes are increased, therefore bad economics the further out you go
  • just look at those graphs above: cache really helps performance on a hit. In theory an L4 cache makes a big difference compared to DDRx access, but the cache hit/miss ratio puts a question mark over whether it actually helps much in real-world (rather than benchmark) applications; see the sketch after this list
  • L4 helped the Iris Pro graphics significantly, but how much did it help the CPU over L3? Only in very specific situations
  • the production/manufacturing cost of a CPU with L4 is very high: fine when you need to maintain a performance crown or demonstrate an iGPU, but not a profit-maximizing mass-market product
  • in the Xeon workstation/server space, Intel decided to increase L3 rather than adopt L4, and there are good R&D and analytical reasons for that
  • in the end, we prefer bandwidth over latency, in the same way that we prefer quad channel over dual channel, and more cores at lower clocks over a single core at higher clocks
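Here is the average-memory-access-time sketch mentioned above: AMAT = hit time + miss rate x miss penalty, applied with and without an L4. All latencies and hit rates are made-up round numbers, purely to show how a rarely-hit L4 moves the average:

```c
/* Average memory access time (AMAT) with and without an L4, using made-up
 * round numbers: when the L3 miss rate is already low, even a fast L4 only
 * trims the average a little. */
#include <stdio.h>

int main(void)
{
    const double l3_ns = 10.0, l4_ns = 35.0, dram_ns = 80.0;
    const double l3_miss = 0.10, l4_miss = 0.40;   /* assumed rates */

    double amat_no_l4 = l3_ns + l3_miss * dram_ns;
    double amat_l4    = l3_ns + l3_miss * (l4_ns + l4_miss * dram_ns);

    printf("AMAT without L4: %.1f ns\n", amat_no_l4);  /* 18.0 ns */
    printf("AMAT with L4:    %.1f ns\n", amat_l4);     /* 16.7 ns */
    return 0;
}
```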
The i7-5775C could beat the 7700K with DDR4 at 2400 MHz
At what? Sometimes we get carried away by synthetic benchmarks; in a real-world application the percentage gain is relatively small and can be outdone by other simple measures such as higher clocks, more cores, or more channels.


If you scale those results by clock speed, I don't see a material difference.
 
DRAM requires a very different process than SRAM.

eDRAM, however, means building DRAM with the same machines you're making SRAM/CPU logic with, and that seems to have been a much harder problem than Intel expected (or at least, they clearly decided to stop pursuing the technology).
That's it. eDRAM might be the best choice when one needs to integrate DRAM on the same die as the processor, inevitably using the same manufacturing tech. eDRAM as a separate die ... only existed because Intel set out to make it themselves. They could have asked Micron, for example, to manufacture (and help develop) some real DRAM tuned for low latency.

By the way, does anyone know the bus width or MT/s of the "Crystal Well" eDRAM?
 
At what? Sometimes we get carried away by synthetic benchmarks; in a real-world application the percentage gain is relatively small and can be outdone by other simple measures such as higher clocks, more cores, or more channels.


If you scale those results by clock speed, I don't see a material difference.

Games. The 5775C's gaming performance is extremely strong, and it exhibits the same behavioral pattern as the 5800X3D: its performance matches the substantially faster and newer i5-10600K in games in which the 5800X3D also tends to show strong gains, such as Borderlands 3:


In Final Fantasy XIV, you'll observe the same type of behavior:


It's funny, this chip was way ahead of its time. It consistently pulls Comet Lake i5 weight in games known to benefit from cache performance, despite being almost 5 years older. I really recommend perusing the revisit Ian did on this CPU back in 2020. It's probably the most comprehensive, informative and useful resource on Broadwell there is.

That's it. eDRAM might be the best choice when one needs to integrate DRAM on the same die as the processor, inevitably using the same manufacturing tech. eDRAM as a separate die ... only existed because Intel set out to make it themselves. They could have asked Micron, for example, to manufacture (and help develop) some real DRAM tuned for low latency.

By the way, does anyone know the bus width or MT/s of the "Crystal Well" eDRAM?

That'd be hard to say; I don't believe Intel ever disclosed it in detail, but it should have around 100 GB/s of bandwidth, half each way. Finding this information would probably be hard now that the CPUs are 10 years old.
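If the commonly cited figures are in the right ballpark (roughly a 1.6 GHz eDRAM with an on-package link moving about 32 bytes per cycle in each direction, both of which are assumptions here rather than an Intel spec), the arithmetic lands right around that number:

```c
/* Back-of-the-envelope for the Crystal Well link, assuming ~1.6 GHz and
 * ~32 bytes per cycle per direction (assumed figures, not an Intel spec). */
#include <stdio.h>

int main(void)
{
    const double ghz = 1.6, bytes_per_cycle = 32.0;
    double per_dir_gbs = ghz * bytes_per_cycle;        /* GB/s, one direction */
    printf("Per direction: ~%.0f GB/s, aggregate: ~%.0f GB/s\n",
           per_dir_gbs, 2 * per_dir_gbs);              /* ~51 and ~102 GB/s */
    return 0;
}
```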
 