Wednesday, April 21st 2010

Intel Sandy Bridge to Introduce New Sockets, Chipsets, Reorganize Platform Further

Intel plans a pair of new sockets to launch alongside "Sandy Bridge", the processor architecture succeeding Westmere, due in 2011. The company uses a "tick-tock" model of product development, in which each processor architecture is built on two successive manufacturing processes, while each process carries two successive architectures. As part of this strategy, Intel is currently transitioning from the 45 nm "tock" (Nehalem architecture) to the 32 nm "tick" (Westmere architecture); in 2011, it will move from the 32 nm "tick" (Westmere) to the 32 nm "tock" (Sandy Bridge). It is now clear that with Sandy Bridge, Intel will also switch to new socket designs, making existing motherboards obsolete. Architecturally, Sandy Bridge will introduce new feature-sets that make the CPU more powerful clock-for-clock, such as AVX (Advanced Vector Extensions), an evolution of the SSE instruction set, while retaining the native AES engine introduced with Westmere.

The present LGA-1156 package, on which Intel builds its value-through-performance processors, including the bulk of mainstream parts, will be succeeded by the LGA-1155 package. Though similar, LGA-1155 and LGA-1156 are not inter-compatible: LGA-1155 processors will not work on existing LGA-1156 motherboards, nor will LGA-1156 processors work on LGA-1155 motherboards. The arrangement of vital components is similar to that of the LGA-1156 package, except that every LGA-1155 processor, dual-core or quad-core, will feature an on-die display controller.

The die itself will be monolithic, with the northbridge component completely integrated into the processor component, leaving only the southbridge outside the package, on the motherboard. Currently, the "Clarkdale" and "Arrandale" dual-core processors have the processor and northbridge components on separate dies, albeit on the same package. LGA-1155 is also designated socket H2 (LGA-1156 is H1); the package is identical in size to LGA-1156 but has a different pin layout and orientation notch.

Chipsets that drive the LGA-1155 platform include P67, H67, H61, and Q67, which will support features conceived before the Ibex Peak platform's launch but then shelved, such as ONFI NAND flash "Braidwood". USB 3.0 still isn't part of the feature-set, though native SATA 6 Gb/s support is on the cards.

The next big platform, succeeding LGA-1366 and catering to processors in the upper performance-through-enthusiast segments, is "Patsburg", which replaces the existing "Tylersburg"-based Intel X58 and 5000 series chipsets. Here, Intel will introduce a massive new socket, LGA-2011. The pin count increases drastically for two reasons: the processor will have a 256-bit wide memory interface (quad-channel DDR3), and the northbridge component (currently the X58 IOH) will be integrated completely into the processor package, adding PCI-Express and DMI pins. The on-die PCI-Express 2.0 root complex will give out 32 lanes for graphics (unlike 16 lanes on LGA-1155), plus a DMI link to the so-called "Intel X68" chipset, which is relegated to being a Platform Controller Hub, just like the P55 or P67. The X68 could have a feature-set similar to the P67's.

Source: bit-tech.net

119 Comments on Intel Sandy Bridge to Introduce New Sockets, Chipsets, Reorganize Platform Further

#1
Wile E
Power User
by: jmcslob
I gave up Intel with the 775 switch to 1366...

AMD has made some switches in that time too... but they have mostly been compatible...
So when Bulldozer comes out I'll switch to that... and never go Intel again...
Too many sockets in so little time... It is BS... Some are willing to accept it; I'm not
You realize that AMD has done the same exact thing, don't you? Remember 939 and 754, and their short lives? It happens in both camps. 1366 will have been out around 3 years when this new socket releases. That's actually reasonable. Sucks from a late adopter's point of view, but it's really not that bad. I'll move to the new socket on the refresh. This 980X will be plenty powerful enough until then.
Posted on Reply
#2
jmcslob
Yeah...I do LOL..
It's just not been nearly as often..and when people got upset and cried about it AMD got the message (to a point) and yet they will be doing a Socket change here pretty soon as well..But AM2/AM2+/AM3 will have lasted about 5 years..

Hey, don't get me wrong, I like that Intel does this, as it keeps the average Joe confused as to what works with what, and gives people jobs; it's just not for me...
Posted on Reply
#3
kid41212003
Sure, they do last 5 years with no significant improvement between CPUs, or you would end up with a "slowed-down" CPU on the old mobo.
Posted on Reply
#4
TheMailMan78
Big Member
I really don't see what the big deal is. People get mad when companies don't progress in technology (AMD), and they get mad when they progress too fast (Intel). The bottom line is that Intel or AMD changing sockets so fast affects a very small group (enthusiasts) but makes no difference to anyone else.

I applaud Intel for advancing the industry, and for us it shouldn't make any difference. Why? Because we change our rigs every other year anyways!

Also, if you get down to brass tacks, what takes advantage of an i7 currently? I mean, what really makes them scream for mercy?........... yeah, that's what I thought.
Posted on Reply
#5
jmcslob
LOL I know I change mine about every 9 months...(if that)
I use the Tick-Tock as well but for me it's CPU-tick-hard drives-tock Video Card-tick Memory-tock
Posted on Reply
#6
Trigger911
by: TheMailMan78
I really don't see what the big deal is. People get mad when companies don't progress in technology (AMD), and they get mad when they progress too fast (Intel). The bottom line is that Intel or AMD changing sockets so fast affects a very small group (enthusiasts) but makes no difference to anyone else.

I applaud Intel for advancing the industry, and for us it shouldn't make any difference. Why? Because we change our rigs every other year anyways!

Also, if you get down to brass tacks, what takes advantage of an i7 currently? I mean, what really makes them scream for mercy?........... yeah, that's what I thought.
I totally agree. That's the world of technology: it's always moving, changing, and merging.
Posted on Reply
#7
DrPepper
The Doctor is in the house
Again, like everyone is saying: big deal, it's not like my i7 will be any less good. Hell, a Q6600 is still a decent processor these days.
Posted on Reply
#8
HillBeast
All these people going 'I'm glad I'm sticking with AMD' need to realise: yes, sure, AMD don't bring out heaps of sockets all the time, but think about when K8 came out. They had 754, 939, then AM2, and none were compatible with each other, and this was happening very close together.

The other thing is, AMD is most likely (like 99% chance) going to bring out a new socket for Fusion, because I personally see no way they can bring out that many features without upgrading the socket. And now that Intel has moved the IOH/MCH into the CPU, this will happen a lot, because features that would have been added to the IOH/MCH, which could have been a simple motherboard update, now require a full-on socket change.

Don't get me wrong, I think it's ridiculous they had to go and change so damned quickly, but when you really think about it, Intel has never been a cheap processor manufacturer. They have always been just the best. It's as simple as that.
Posted on Reply
#9
HillBeast
by: TheMailMan78
Also if you get to brass tacts what takes advantage of an i7 currently? I mean what really makes them scream for mercy?...........yeah thats what I thought.
Prime95 is the only thing that comes to mind, and that's not a real world app.
Posted on Reply
#10
eidairaman1
OK, remember this: for Athlon 64, 754 was a single-channel DDR memory controller, 940 was dual-channel DDR, 939 came about to replace 754, AM2 was for DDR2, and AM3 was to prevent AM2 CPUs from being inserted into AM3 boards, since their memory controllers are DDR2-only while AM3 is DDR3. AM2, AM2+, and AM3 are all technologically the same, just improvements, etc., for PhII and faster. I suspect Bulldozer to be on a different socket in 2011. TBH I'm not holding my breath, as I will get a PhII X6 1090T BE with an 890FX board, a 5870 or 5890, an HT Omega sound card, and 8 gigs of DDR3.

by: HillBeast
All these people going 'I'm glad I'm sticking with AMD' need to realise: yes, sure, AMD don't bring out heaps of sockets all the time, but think about when K8 came out. They had 754, 939, then AM2, and none were compatible with each other, and this was happening very close together.

The other thing is, AMD is most likely (like 99% chance) going to bring out a new socket for Fusion, because I personally see no way they can bring out that many features without upgrading the socket. And now that Intel has moved the IOH/MCH into the CPU, this will happen a lot, because features that would have been added to the IOH/MCH, which could have been a simple motherboard update, now require a full-on socket change.

Don't get me wrong, I think it's ridiculous they had to go and change so damned quickly, but when you really think about it, Intel has never been a cheap processor manufacturer. They have always been just the best. It's as simple as that.
Posted on Reply
#11
kid41212003
That's a Phenom II with 2 extra cores (45nm), which obviously won't OC as well. If you already have a Phenom II quad, there's no reason to upgrade. Besides, dual-channel 8GB < triple-channel 6GB < quad-channel on socket 2011.
Posted on Reply
#12
FordGT90Concept
"I go fast!1!11!1!"
by: TheMailMan78
Also if you get to brass tacts what takes advantage of an i7 currently? I mean what really makes them scream for mercy?...........yeah thats what I thought.
Like any other processor, an app that loads the cores to 100%. I've coded many of those. They make Pidgin run delayed (between message sent and confirmation sound) and IE8 take forever to open. XD

The app attached to this post will make any processor (or multiple processors) beg for mercy for five minutes.
Posted on Reply
#13
Wile E
Power User
by: eidairaman1
OK, remember this: for Athlon 64, 754 was a single-channel DDR memory controller, 940 was dual-channel DDR, 939 came about to replace 754, AM2 was for DDR2, and AM3 was to prevent AM2 CPUs from being inserted into AM3 boards, since their memory controllers are DDR2-only while AM3 is DDR3. AM2, AM2+, and AM3 are all technologically the same, just improvements, etc., for PhII and faster. I suspect Bulldozer to be on a different socket in 2011. TBH I'm not holding my breath, as I will get a PhII X6 1090T BE with an 890FX board, a 5870 or 5890, an HT Omega sound card, and 8 gigs of DDR3.
Wrong. 754 was single-channel, but 939 was not its dual-channel replacement. It was just like Intel is doing now: a lower socket and an enthusiast socket simultaneously. 940 was an Opteron server socket. Then came the rest. They have been able to keep the same socket because they have not added any new features to the CPU, like Intel moving their chipsets onto the CPU.
Posted on Reply
#14
TheMailMan78
Big Member
by: FordGT90Concept
Like any other processor, an app that loads the cores to 100%. I've coded many of those. They make Pidgin run delayed (between message sent and confirmation sound) and IE8 take forever to open. XD

The app attached to this post will make any processor (or multiple processors) beg for mercy for five minutes.
Well, yeah, there are apps that will do it. However, how many people need to make an i7 scream? I think you are overanalyzing again, Ford. :laugh:
Posted on Reply
#15
HillBeast
by: TheMailMan78
Well, yeah, there are apps that will do it. However, how many people need to make an i7 scream? I think you are overanalyzing again, Ford. :laugh:
Yeah. In real-world situations: gaming, office work, video editing, etc., an i7 never gets pushed to its limits. Very few apps can use all 8 threads. Most games barely use 4 of them.
Posted on Reply
#16
FordGT90Concept
"I go fast!1!11!1!"
Video editing could, depending on what you are doing and how the software is programmed.
Gaming only uses 3-4 cores at most.
Office work is fine on 1 core.

New games may use all 8 cores, but 4 of them are at barely more than idle. Any software that does something and fully loads 8 cores means it will take twice as long on a quad-core, four times as long on a dual-core, and so on. Programmers try to avoid creating that much strain on systems unless it is unavoidable (like BOINC/F@H) or intentional (like the app I linked to).
Posted on Reply
#17
Relayer
Do not confuse cores and threads. While your i7s have 8 threads (except the 980X), they only have 4 cores. While splitting a core into 2 threads usually improves performance, it doesn't come close to doubling it, like adding another core would. Assuming one core to two cores, of course. ;)
Posted on Reply
#18
FordGT90Concept
"I go fast!1!11!1!"
When I say 8 cores, it's pretty obvious I mean logical and physical cores. My Core i7 has 8 cores (4 logical and 4 physical). My Xeon 5310 server also has 8 cores (8 physical).

I wrote a multithreaded application for benchmarking using a simple counting scheme. Performance was in excess of four times better with hyperthreading enabled than without. SMT, when done right, means the actual number of cores is irrelevant. The more data thrown onto the CPU as a whole, the better the performance.

On applications that aren't heavily multithreaded, the architecture of Core i# is inferior to that of Core 2 and Phenom II.
Posted on Reply
#19
TheMailMan78
Big Member
by: FordGT90Concept
When I say 8 cores, it's pretty obvious I mean logical and physical cores. My Core i7 has 8 cores (4 logical and 4 physical). My Xeon 5310 server also has 8 cores (8 physical).

I wrote a multithreaded application for benchmarking using a simple counting scheme. Performance was in excess of four times better with hyperthreading enabled than without. SMT, when done right, means the actual number of cores is irrelevant. The more data thrown onto the CPU as a whole, the better the performance.

On applications that aren't heavily multithreaded, the architecture of Core i# is inferior to that of Core 2 and Phenom II.
Good break down my friend. :toast:
Posted on Reply
#20
HillBeast
by: FordGT90Concept
On applications that aren't heavily multithreaded, the architecture of Core i# is inferior to that of Core 2 and Phenom II.
How?
Posted on Reply
#21
Relayer
by: FordGT90Concept
When I say 8 cores, it's pretty obvious I mean logical and physical cores. My Core i7 has 8 cores (4 logical and 4 physical). My Xeon 5310 server also has 8 cores (8 physical).

I wrote a multithreaded application for benchmarking using a simple counting scheme. Performance was in excess of four times better with hyperthreading enabled than without. SMT, when done right, means the actual number of cores is irrelevant. The more data thrown onto the CPU as a whole, the better the performance.

On applications that aren't heavily multithreaded, the architecture of Core i# is inferior to that of Core 2 and Phenom II.
Not trying to get into a pissing contest with you; I'll concede you have more knowledge on the subject. There are just those out there, though, who would misinterpret what you say and think that a quad-core i7 is in actuality an 8-core processor rather than a quad-core with 8 threads.

I'm curious, though: out of, say, the top 100 commercial programs (just picking a number here), what percentage would scale the way the counting program you wrote does, at >4x faster with hyperthreading than without? I'd imagine the percentage would be pretty small?
Posted on Reply
#22
HillBeast
by: Relayer
There are just those out there, though, who would misinterpret what you say and think that a quad-core i7 is in actuality an 8-core processor rather than a quad-core with 8 threads.
People who reckon it is an octa-core chip obviously are blind and haven't read the box. Quoting my i7-930 box:

[Intel Core i7 Inside Logo]
QUAD-CORE
DESKTOP
INTEL CORE i7 PROCESSOR

Also, about the whole Core 2s being better than Core i7s in poorly threaded apps: it's simply not true. I wrote a program years ago for comparing P4s, and it tests the CPU's per-thread performance, and per MHz, Nehalem is faster than Core. Can't remember exact numbers, but if memory serves me right, my old Core 2 E8400 at 3.0GHz got the same score as a Core i7 920 at 2.66GHz.
Posted on Reply
#23
FordGT90Concept
"I go fast!1!11!1!"
by: HillBeast
How?
Ask Intel. :p


by: Relayer
I'm curious though, out of say the top 100 commercial programs (just picking a number here) what percentage would scale the way the counting program you wrote, at >4x faster with hyperthreading than without? I'd imagine the percentage would be pretty small?
How many applications need to use all the power a computer possesses? None. The only programs that do use as much power as is available are the likes of conversion applications (encoding, decoding, compilers, assemblers, converters, etc.) where waiting is a burden. The percentage of "top 100 commercial programs" has a direct relation to what percentage perform those duties. It would be pretty small but at the same time, it is those applications that are pushing the industry towards faster processors.


by: HillBeast
Also, about the whole Core 2s being better than Core i7s in poorly threaded apps: it's simply not true. I wrote a program years ago for comparing P4s, and it tests the CPU's per-thread performance, and per MHz, Nehalem is faster than Core. Can't remember exact numbers, but if memory serves me right, my old Core 2 E8400 at 3.0GHz got the same score as a Core i7 920 at 2.66GHz.
If you have a Core 2 machine we could certainly test that again. I can limit the application I used before to just a single thread.

What makes my app unique is that, theoretically, it can linger in the L1/L2 caches of the processor, eliminating the bottleneck on Core 2 machines (extra latency from going to the northbridge, the RAM, and back again). Low scores (like my Core i7 920 without hyperthreading) are most likely caused by cache collisions, where the core(s) had to run all the way to the RAM instead of staying on the processor. Little faults like that are all it takes to get ahead.
Posted on Reply
#24
HillBeast
by: FordGT90Concept
If you have a Core 2 machine we could certainly test that again. I can limit the application I used before to just a single thread.
I have tested it again. Several times. I didn't believe such a boost could come from a simple architecture change, but it did. Before I even got my i7, I tested it and made sure it wasn't being unfair on the Core 2, but it simply wasn't. I have tested this on heaps of CPUs in the past, and it is a very fair test of the processor.

I highly doubt Intel would be dumb enough to release a chip worse than its predecessor after the fiasco they had with NetBurst.

Bottom line: i7 is more powerful in every way, multi or single threaded.
Posted on Reply
#25
Relayer
by: HillBeast
People who reckon it is an octa-core chip obviously are blind and haven't read the box. Quoting my i7-930 box:

[Intel Core i7 Inside Logo]
QUAD-CORE
DESKTOP
INTEL CORE i7 PROCESSOR
Not all processors come with the box; most come inside an assembled PC. I've seen a professional animator who believed that his i7 920 had 8x 2.67GHz cores, because there are 8 rendering threads and it's a 2.67GHz-rated processor. He didn't build his workstation. He bought it, though, because the person selling it to him led him to believe that while it wasn't really an 8-core processor, it was the same thing. I was just trying to avoid that type of misinformation. If it's that obvious though, carry on. :)
Posted on Reply