
Intel Rocket Lake-S Platform Detailed, Features PCIe 4.0 and Xe Graphics

ARF

Joined
Jan 28, 2020
Messages
3,934 (2.55/day)
Location
Ex-usa
Someone random posting stuff on a random BBS is not a "leak", it's a rumor. For it to qualify as a leak there needs to be some level of trust backing up the information, which can be either directly attached to the source (has previously given accurate information ahead of time etc.) or through the leak being verified by well-connected journalists. Until then, it's a rumor and nothing more.

I disagree. We don't actually know whether it's a rumour or a leak, because there is no way to verify the information; obviously, there are no CPUs in the wild yet.
But it more closely resembles a leak from someone internal to the matter.

Actually, any employee who has access to this data can leak it anonymously, and you will never know who exactly they were.

A rumour is information from a person who doesn't understand.
A leak is information from a person who has access to the proper data.
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I disagree. We don't actually know whether it's a rumour or a leak, because there is no way to verify the information; obviously, there are no CPUs in the wild yet.
But it more closely resembles a leak from someone internal to the matter.

Actually, any employee who has access to this data can leak it anonymously, and you will never know who exactly they were.

A rumour is information from a person who doesn't understand.
A leak is information from a person who has access to the proper data.
Screenshot_20200327-070922.png

Your first paragraph succinctly explains why this is exactly a rumor and not a leak. It does not in any way qualify as a leak until the source can be somehow shown to be believable. Until then, it is by definition a rumor.

Pretty much anyone can fake the impression of veracity in something like this; all it requires is some superficial knowledge and the ability to type in a style similar to someone else's. All you are saying here is that you are gullible enough to believe anything posted to the internet if it "seems true". Which is a very poor approach to judging the veracity of information. You would do well to turn up your skepticism dial significantly.

There's absolutely nothing in the definition of a rumor that says it originates with someone who "doesn't understand" or doesn't have access to the information. And rumors are perfectly capable of being true. This could be posted by Brian Krzanich himself and it would still be a rumor until it is actually verified that the person posting it is him.

Damn, I'm so tired of the internet mixing these two up. These days you even see articles about "leaked" renders of consoles etc. that are very clearly nothing more than a fan-made model pulled straight out of someone's imagination. I think it's due to the appearance of "professional leakers" like EVleaks who post their stuff directly on Twitter; because they do so and people immediately call it a leak, others come to think "oh, anything posted to Twitter is a leak and not a rumor", disregarding the fact that those posts only qualify as leaks due to the long history of accurate information from those sources. Of course even they are often wrong, as many leaks are based on outdated information or are just poorly sourced. The point being: nothing is a leak until it can be said to come from a believable source. And "random person on a Chinese BBS" is not a believable source.
 

ARF

Joined
Jan 28, 2020
Messages
3,934 (2.55/day)
Location
Ex-usa
No, the Chinese BBS is an established source for information that has a quite high probability of being leaked in origin.

A rumour is when wccftech posts AdoredTV Jim's educated/not-so-educated guesstimates of what exactly to expect.
A rumour is like a game of broken telephone: one source or origin spreads information to another, and during this process the information gets modified or transferred with missing details, added false details, etc.
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
No, the Chinese BBS is an established source for information that has a quite high probability of being leaked in origin.

A rumour is when wccftech posts AdoredTV Jim's educated/not-so-educated guesstimates of what exactly to expect.
A rumour is like a game of broken telephone: one source or origin spreads information to another, and during this process the information gets modified or transferred with missing details, added false details, etc.
A BBS is not "a source". A BBS is essentially a forum. It may or may not be open for anyone to sign up (it might, for example, be invitation-only), but ultimately it is in no way a verified and trustworthy source in and of itself, due to its open and/or unedited nature. Unless the BBS in question only allows verified employees of tech companies to sign up, it cannot possibly be a trustworthy source in and of itself, no matter how many well-connected and knowledgeable users might be there. Individual BBS users might be both trustworthy and well-connected sources, but that doesn't apply a blanket level of trustworthiness to the whole BBS. So unless you can provide some sort of documentation to verify that the specific user writing that post is indeed a trustworthy source, we have zero reason to treat it as anything other than a rumor.

And where do you think WCCFTech and AdoredTV get their rumors from? Sure, from time to time they make up stuff all on their own, but the vast majority of the time it comes precisely from internet randos like the Chinese BBS post you linked. And no, there's nothing in the meaning of the word "rumor" that says the contents and meaning will change in transmission. That can happen with any type of information, verified or not, if transmitted through word of mouth. That's a fault of the means of transmission and not a function of the veracity of the information itself - and as said before, the difference between a rumor and a leak is exactly the veracity of the latter and the uncertainty of the former. A rumor may or may not be true, while a leak must be true or have been true at some point (as there would otherwise not be any information to leak).
 

bug

Joined
May 22, 2015
Messages
13,222 (4.06/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
No, the Chinese BBS is an established source for information that has a quite high probability of being leaked in origin.

A rumour is when wccftech posts AdoredTV Jim's educated/not-so-educated guesstimates of what exactly to expect.
A rumour is like a game of broken telephone: one source or origin spreads information to another, and during this process the information gets modified or transferred with missing details, added false details, etc.
Unless there's a confirmed official document behind it, it's not a leak. Plain and simple.
 
Joined
Jun 10, 2014
Messages
2,901 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
It's remarkable that yet another discussion has descended into a war of semantics, just a few days after the war over the word "obsolete".
Valantar put it pretty nicely: a leak must originate from something real, a piece of evidence from someone with direct access to sensitive information - a standard which most claimed "leaks" fail to meet. This can, of course, include partners of Intel/AMD/Nvidia.

But it's important to remember that the credibility of the source is always important. Making something look official is easy, and so is tampering with something that is genuine.
 
Joined
Apr 12, 2013
Messages
6,750 (1.68/day)
Perhaps "social distancing" is also the way forward around the internet these days? In the interest of everyone (sanity) involved please ignore six posts after yours :toast:
 
Joined
Feb 26, 2016
Messages
546 (0.18/day)
Location
Texas
System Name O-Clock
Processor Intel Core i9-9900K @ 52x/49x 8c8t
Motherboard ASUS Maximus XI Gene
Cooling Corsair H170i Elite Cappelix w/ NF-A14 iPPC IP67 fans
Memory 2x16GB G.Skill TridentZ @3900 MHz CL16
Video Card(s) EVGA RTX 2080 Ti XC Black
Storage Samsung 983 ZET 960GB, 2x WD SN850X 4TB
Display(s) Asus VG259QM
Case Corsair 900D
Audio Device(s) beyerdynamic DT 990 600Ω, Asus SupremeFX Hi-Fi 5.25", Elgato Wave 1
Power Supply EVGA 1600 T2 w/ NF-A14 iPPC IP67 fan
Mouse Logitech G403 Wireless (PMW3366)
Keyboard Logitech G910 Stickerbombed
Software Windows 10 Pro 64 bit
Benchmark Scores https://hwbot.org/search/submissions/permalink?userId=92615&cpuId=5773
Can someone explain all those Intel Lakes? Because I have lost count and have no idea what is what. We have like 1500 different Lakes, but every one seems exactly the same.
Same architecture under the hood, basically marketing BS, but yes, Skylake will work on Z370 and Coffee Lake will work on Z170/Z270.

I helped edit the Skylake and Coffee Lake pages! :)
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
View attachment 149661

via https://wccftech.com/intel-next-gen-tiger-lake-rocket-lake-cpu-platforms-configs-specs-detailed/

Intel Rocket Lake-S Desktop CPUs Feature 8 Cores, GT1 Xe Graphics – Rocket Lake-U 6 Core at 15W, Tiger Lake-U 4 Cores at 15W & Tiger Lake-H 8 Cores at 45W Base TDPs



View attachment 149662
Is there anything new, or not rather obvious, in any of that? Rocket Lake desktop is LGA1200 and spans T- through K-series SKUs. RKL-U seems to be essentially a CML-U refresh, 6c in 15W, while TGL-U seems to be an ICL-U refresh. Slightly boosted LPDDR4X to match AMD. And otherwise ....

One interesting point for speculation I guess (assuming this has any degree of accuracy, which we can't know with rumors like this): if Rocket Lake is a new arch, would they keep doing the current double U-series lineup? I get that there are other differences (10nm vs. 14nm, iGPU, etc.), but why would Intel keep selling two different series of U chips if both are roughly the same arch? Why not then just stick to the one? The only explanation I can see is that they are either unable or unwilling to backport their new iGPU to 14nm, as that would then be the only selling point for TGL. Another possibility I guess would be wanting to use 10nm for something rather than just leaving it idle while refining it towards actual usefulness, but .... that's rather fatalist, no? Either way, if this turns out to be accurate Intel buyers are still stuck with the choice of either maximum CPU or maximum GPU performance. That's too bad.
 
Joined
Jun 28, 2016
Messages
3,595 (1.26/day)
Another possibility I guess would be wanting to use 10nm for something rather than just leaving it idle while refining it towards actual usefulness, but .... that's rather fatalist, no?
First and foremost: why do people on this forum still think Intel's 10nm isn't working properly at this point?
Is there anything Intel could do to convince you?
Either way, if this turns out to be accurate Intel buyers are still stuck with the choice of either maximum CPU or maximum GPU performance. That's too bad.
Bad? That's the only logical option.

Some chips are built around a faster IGP, which makes them more usable for IGP gaming - a use case where a stronger CPU wouldn't be utilized anyway.
Other (most) chips are CPU-oriented, which makes them better for everything outside of IGP gaming - and a much better companion for more powerful discrete GPUs.

If the top-of-the-range SoC offered the best CPU and the best IGP in the lineup, it would mean all other chips are not optimal in their use of package size or power consumption. It would make no sense.
 
Joined
Jun 10, 2014
Messages
2,901 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
if Rocket Lake is a new arch, would they keep doing the current double U-series lineup? I get that there are other differences (10nm vs. 14nm, iGPU, etc.), but why would Intel keep selling two different series of U chips if both are roughly the same arch? Why not then just stick to the one?
I would say, why not?
The majority of laptops and desktops are low-end/cheaper models, so there is certainly a demand for CPUs that are just "good enough", and Intel can't meet demand as of right now. I suspect 14nm will account for a solid portion of Intel's production volume until at least 2022, and I don't see the need for bringing everything quickly to the lower end.

The only explanation I can see is that they are either unable or unwilling to backport their new iGPU to 14nm…
The GPU might be a factor. I know some rumors claim "Rocket Lake" will use a 10nm GPU, but a very cut-down one. Either way, the new iGPU ported to 14nm would probably eat a good portion of the U-series power budget, so it probably makes more sense to just keep the old one, as it is a low-end market anyway.
 
Joined
Jun 28, 2016
Messages
3,595 (1.26/day)
I would say, why not?
The majority of laptops and desktops are low-end/cheaper models, so there is certainly a demand for CPUs that are just "good enough", and Intel can't meet demand as of right now. I suspect 14nm will account for a solid portion of Intel's production volume until at least 2022, and I don't see the need for bringing everything quickly to the lower end.
People forget that even AMD, with just a fraction of Intel's CPU sales, probably has way over half of its manufactured "total die area" on 14nm.
Every Zen2 chip has a massive 14nm I/O die. Mobile SoCs are a generation behind - still 14nm. And of course AMD still makes and sells Zen and Zen+ desktop/server parts as well.

In fact, I'd love to see their ratio of 14nm vs 7nm wafers. I bet it's 2:1 at best at the moment.
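For a rough sense of scale, here's a quick back-of-the-envelope using commonly cited approximate die sizes for Matisse - treat the numbers as assumptions for illustration, not official AMD figures:

[CODE=python]
# Approximate, commonly cited die sizes for Ryzen 3000 "Matisse"
# (assumptions for illustration, not official AMD numbers):
#   Zen 2 CCD (7nm):           ~74 mm^2
#   client I/O die (12/14nm): ~125 mm^2

CCD_MM2 = 74
IOD_MM2 = 125

for name, ccds in [("Ryzen 5 3600 (1 CCD)", 1), ("Ryzen 9 3950X (2 CCDs)", 2)]:
    area_7nm = ccds * CCD_MM2
    old_node_share = IOD_MM2 / (IOD_MM2 + area_7nm)
    print(f"{name}: {old_node_share:.0%} of die area on the older node")
# -> ~63% for a single-CCD part, and ~46% even with two CCDs; a large
#    chunk of every "7nm" Ryzen is actually older-node silicon.
[/CODE]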
 
Joined
Jun 10, 2014
Messages
2,901 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
People forget that even AMD, with just a fraction of Intel's CPU sales, probably has way over half of its manufactured "total die area" on 14nm.
Every Zen2 chip has a massive 14nm I/O die. Mobile SoCs are a generation behind - still 14nm. And of course AMD still makes and sells Zen and Zen+ desktop/server parts as well.

In fact, I'd love to see their ratio of 14nm vs 7nm wafers. I bet it's 2:1 at best at the moment.
True, and two more important facts:
Despite all of Intel's "failures" on 10nm, they are probably close to matching AMD's 7nm in terms of units shipped, while having only a tiny fraction of their total volume on 10nm. By the end of this year Intel will very likely be making more 10nm wafers as well, but even then it will not be enough for the majority of their products.

AMD have been lucky to have TSMC's high power 7nm capacity mostly for themselves, but that is changing.

So the reality is more nuanced than most people think.
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
First and foremost: why do people on this forum still think Intel's 10nm isn't working properly at this point?
Is there anything Intel could do to convince you?
Absolutely! They could make a 10nm product line that isn't outperformed in the same segment by their own 14nm line (according to their own numbers, no less). They could also phase out 14nm entirely for one or more segments (which would make sense to alleviate the ongoing shortage), as that would demonstrate that 10nm yields are ramping properly - it's not like 10nm and 14nm are made at the same fabs, after all. Though at this point they are even admitting in their own financial disclosures that 10nm is likely never going to be profitable, and that 7nm will likely be outperforming it out of the gate when it launches in late 2021 or early 2022 (barring any delays, obviously). If 10nm was "working properly" it would be able to at least match 14nm on performance given that it also has an 18% IPC advantage, or at the very least beat it on perf/W - especially against a 14nm process that has seen several revisions focused on increasing clocks, not efficiency.
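To put rough numbers on that last point - a back-of-the-envelope sketch only, assuming performance scales as IPC × clock, with illustrative (not measured) clock speeds:

[CODE=python]
# Rough parity math: performance ~ IPC x clock.
# The 18% figure is Intel's claimed Sunny Cove IPC uplift over Skylake;
# the clock speeds below are illustrative assumptions, not benchmarks.

ipc_uplift = 1.18   # claimed Sunny Cove IPC gain vs. Skylake
clock_14nm = 4.9    # GHz, illustrative Comet Lake-class boost clock

# Parity condition: ipc_new * f_new = ipc_old * f_old
clock_10nm_needed = clock_14nm / ipc_uplift
print(f"10nm clock needed for parity: {clock_10nm_needed:.2f} GHz")
# -> ~4.15 GHz, while Ice Lake-U actually tops out around 3.9-4.1 GHz,
#    which is how the 14nm parts can still come out ahead at the high end.
[/CODE]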

Bad? That's the only logical option.
No. The logical option for a part meant for diverse use cases (like any CPU) is to give it as much performance as possible in all types of workload, and then let the system balance which part gets how much of the power budget under what workload. That would give the best possible performance under any workload, with the only downside being a minor increase in die size (as long as the chip has fine-grained power and clock gating to keep idle components from drawing too much power).
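As a minimal sketch of what that balancing could look like (all numbers hypothetical, nothing Intel-specific):

[CODE=python]
# Minimal sketch of a shared package power budget: the package enforces
# one power limit, and whichever block (CPU or iGPU) the workload
# stresses gets the larger share. All numbers are hypothetical.

PACKAGE_LIMIT_W = 15.0  # typical U-series TDP

def split_budget(cpu_demand_w, igpu_demand_w):
    """Grant each block its demand if possible; otherwise share pro rata."""
    total = cpu_demand_w + igpu_demand_w
    if total <= PACKAGE_LIMIT_W:
        return cpu_demand_w, igpu_demand_w
    scale = PACKAGE_LIMIT_W / total
    return cpu_demand_w * scale, igpu_demand_w * scale

# CPU-bound workload: the idle iGPU draws almost nothing, so a big iGPU
# costs the CPU essentially no budget.
print(split_budget(14.5, 0.5))   # -> (14.5, 0.5)

# iGPU gaming: the balance shifts the other way.
print(split_budget(9.0, 12.0))   # -> (~6.4, ~8.6)
[/CODE]

The point being: with proper power and clock gating, the "unused" block only costs die area, not performance.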

Some chips are built around a faster IGP, which makes them more usable for IGP gaming - a use case where a stronger CPU wouldn't be utilized anyway.
Other (most) chips are CPU-oriented, which makes them better for everything outside of IGP gaming - and a much better companion for more powerful discrete GPUs.

If the top-of-the-range SoC offered the best CPU and the best IGP in the lineup, it would mean all other chips are not optimal in their use of package size or power consumption. It would make no sense.
Uh ... so Intel's own laptop lineup up until the 10th gen has made no sense? 'Cause up until then they have essentially had what you describe. Sure, there have been Iris-equipped SKUs, but those have been rare to the point of nonexistent (outside of Apple's 13" laptops). The thing is, what you're describing here is a scenario with essentially a single workload for each machine, which is unrealistic in general, but particularly on low-power laptops. For most people a device like this is their only device, and as such it is likely to be used for anything from photo and video editing (mainly CPU-bound, but benefiting greatly from GPU acceleration) to CAD to light gaming to pretty much anything else. This is why forcing users to choose makes no sense for this segment. For higher-power segments a singular focus on CPU performance makes a lot more sense, as the chance of a discrete GPU being present is much higher.

Also, what you're saying seems to take for granted that there have to be several pieces of silicon within a market segment, which ... no, there don't. A single top-end SKU with everything enabled, plus lower-end SKUs with parts of the iGPU disabled, some cores disabled, or just lower clocks, makes perfect sense as long as that top-end SKU isn't an overly large die. That's how chips are binned to reduce waste, after all. If all you need is CPU performance, that top-end SKU will not in any way be handicapped by also having a powerful iGPU - it will simply sit idle. The same goes the other way around. But users will have the option of better performance in other workloads, which you are arguing against for some reason.
True, and two more important facts:
Despite all of Intel's "failures" on 10nm, they are probably close to matching AMD's 7nm in terms of units shipped, while having only a tiny fraction of their total volume on 10nm. By the end of this year Intel will very likely be making more 10nm wafers as well, but even then it will not be enough for the majority of their products.

AMD have been lucky to have TSMC's high power 7nm capacity mostly for themselves, but that is changing.

So the reality is more nuanced than most people think.
That is true, but then Intel is the market leader and recently had a 9:1 or higher sales advantage across all segments (so it would be really weird if AMD somehow had them matched on volume at this point - even with AMD buying wafers from others, that tells us Intel has had the capacity to produce enough chips for 90% of the market), and also has a massive fab operation while AMD contracts chip production to TSMC. Thus AMD has to share fab space with TSMC's other customers, including behemoth Apple, which has until recently likely occupied the majority of TSMC's 7nm fab capacity, or at least a very large portion of it. There's no such thing as "TSMC's high power 7nm capacity" - there are different design libraries and different revisions of each node, but all TSMC 7nm is still made across the same selection of fabs and on the same equipment (though they obviously keep anything that can be made at a single fab to one locale).

So despite Intel shipping Ice Lake in "significant numbers" (though currently limited to a relatively small selection of thin-and-lights, with Comet Lake being far more prevalent), there is currently zero reason to believe Intel's 10nm node is anywhere near the kind of yields you would expect from a high-volume node, nor that its performance is where it should be. If it was, they would be transitioning 14nm fabs to 10nm (to increase chip output per fab on the denser node), which would then likely mean transitioning all mobile chips to 10nm while freeing up 14nm capacity for the lucrative server and enterprise markets, while also readying 10nm designs for all market segments. There is nothing indicating that any of this is happening.

So no, Intel 10nm is by no means working as it should. All signs (including Intel's own public plans) point towards Intel keeping 10nm limping along, with 14nm as the backbone of the company's production, until 7nm arrives to get things going as they should.
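To illustrate the output-per-fab point, here's a minimal sketch using the standard dies-per-wafer approximation and the simple Poisson yield model Y = exp(-A·D0); every die size and defect density below is invented for illustration, not Intel data:

[CODE=python]
import math

WAFER_DIAMETER_MM = 300.0

def dies_per_wafer(die_area_mm2):
    """Standard approximation, with a correction term for edge loss."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def good_dies_per_wafer(die_area_mm2, defects_per_mm2):
    """Poisson yield model: Y = exp(-area * defect_density)."""
    return int(dies_per_wafer(die_area_mm2)
               * math.exp(-die_area_mm2 * defects_per_mm2))

# Hypothetical: the same design at ~half the area after a full node shrink.
print(good_dies_per_wafer(150, 0.001))  # mature node:     ~358 good dies
print(good_dies_per_wafer(80, 0.010))   # struggling node: ~363 good dies
# With these made-up numbers the denser node barely out-produces the
# mature one per wafer - a shrink only pays off once defect density drops.
[/CODE]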
People forget that even AMD, with just a fraction of Intel's CPU sales, probably has way over half of its manufactured "total die area" on 14nm.
Every Zen2 chip has a massive 14nm I/O die. Mobile SoCs are a generation behind - still 14nm. And of course AMD still makes and sells Zen and Zen+ desktop/server parts as well.

In fact, I'd love to see their ratio of 14nm vs 7nm wafers. I bet it's 2:1 at best at the moment.
Not forgetting anything. There is however a significant difference between continuing production of a previous-generation product (which Intel also does) to fulfill long-term contracts and maximise ROI on already taped-out and mature silicon, and making brand-new designs on an old node. AMD absolutely makes a lot of 14nm stuff still, but they aren't announcing new products on the node. Bringing the I/O die into this argument is also a bit odd - it's not a part that would benefit whatsoever from a smaller node (physical interconnects generally scale poorly with node shrinks), so why use an expensive smaller node for that when it's better utilized for parts that would benefit? If Intel did an MCM part I would absolutely not criticize them whatsoever for using 14nm or even 22nm for parts of it - that would be the smart thing to do.
 