Tuesday, February 23rd 2010

AMD Starts Shipping 12-core and 8-core "Magny Cours" Opteron Processors

AMD has started shipping its 8-core and 12-core "Magny Cours" Opteron processors for sockets G34 (2P-4P+) and C32 (1P-2P). The processors mark the debut of several new technologies for AMD, such as a multi-chip module (MCM) approach that increases the processor's resources without complicating chip design beyond refinements of the Shanghai and Istanbul silicon. The new Opteron chips also use third-generation HyperTransport interconnects for 6.4 GT/s link speeds between the processor and host, and between processors in multi-socket configurations, and they adopt registered DDR3 memory. Each processor addresses memory over up to four independent (unganged) memory channels, while technologies such as HT Assist improve inter-die bandwidth on the MCMs. The processors further pack 12 MB of L3 cache on board and 512 KB of dedicated L2 cache per core.

In the company's blog, the Director of Product Marketing for Server/Workstation products, John Fruehe, writes: "Production began last month and our OEM partners have been receiving production parts this month." The new processors come in G34 (1974-pin land-grid array) and C32 packages. There are two product lines: the cheaper, 1P/2P-capable Opteron 4000 series, and the 2P-to-4P-capable Opteron 6000 series. AMD has a total of 18 SKUs planned; some of these are listed below, with OEM prices in EUR:
  • Opteron 6128 (8 cores) | 1.5 GHz | 12MB L3 cache | 115W TDP - 253.49 Euro
  • Opteron 6134 (8 cores) | 1.7 GHz | 12MB L3 cache | 115W TDP - 489 Euro
  • Opteron 6136 (8 cores) | 2.4 GHz | 12MB L3 cache | 115W TDP - 692 Euro
  • Opteron 6168 (12 cores) | 1.9 GHz | 12MB L3 cache | 115W TDP - 692 Euro
  • Opteron 6172 (12 cores) | 2.1 GHz | 12MB L3 cache | 115W TDP - 917 Euro
  • Opteron 6174 (12 cores) | 2.2 GHz | 12MB L3 cache | 115W TDP - 1,078 Euro
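For a rough value comparison, the per-core cost of the SKUs above is easy to derive. A quick sketch, using the OEM prices listed:

```python
# Per-core OEM price (EUR) for the Magny Cours SKUs listed above.
skus = [
    ("Opteron 6128", 8, 253.49),
    ("Opteron 6134", 8, 489.00),
    ("Opteron 6136", 8, 692.00),
    ("Opteron 6168", 12, 692.00),
    ("Opteron 6172", 12, 917.00),
    ("Opteron 6174", 12, 1078.00),
]

for name, cores, price in skus:
    print(f"{name}: {price / cores:.2f} EUR per core")
```

The entry-level 6128 works out to under 32 EUR per core, while the top-end 6174 is roughly 90 EUR per core.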
Sources: AMD Blogs, TechConnect Magazine

125 Comments on AMD Starts Shipping 12-core and 8-core "Magny Cours" Opteron Processors

#1
pantherx12
TIGR said:
Or give all cores the ability to clock up and down as needed.
Even better :laugh:
#2
FordGT90Concept
"I go fast!1!11!1!"
pantherx12 said:
And does that not make you think a hybrid system is the way forward?

Have perhaps 4 cores that run at 4 GHz+ and have the remainder low-clocked (1.5 GHz) for handling non-intensive tasks etc.?
If your objective is to identify people, transistors (and more specifically, binary) are not the way to go. You need a processor that thinks in terms of shapes and other visual cues. The brain can quickly determine if what it is looking at is the shape of a human, the shape of a face, the shape of a hand, etc. It can then rapidly pick abnormalities out of the face like distribution of hair, moles, wrinkles, etc. The trouble with binary is describing any of the above in terms of color differences. It is a real PITA.

Work smarter, not harder.

The best upgrade to a human would be a calculator. Humans are ridiculously bad at seeing things in 1s, 0s, and derivatives thereof. If you could add that capability to the brain, it would be far more efficient at processing numbers (rather, the concept of numbers). Likewise, if we could create a co-processor that works in terms of shapes, it would drastically increase the capability of computers. For instance, it could look at a web page and read everything on it so long as it can identify the character set. It could look at a picture and identify people and what those people are most likely doing. It could also name every object it knows that appears in the picture like cars, bikes, signs, symbols, etc. In order to engineer said processor, we'd have to throw what we know about processing out the window and start from scratch with that goal in mind. As far as I know, that's not going to happen any time soon because they're all too busy milking the ARM, POWER, and x86 cash cows.


They would not be marketed by instructions per second like current CPUs; they would be marketed by shapes per second and detail per shape. And hey, because it works on shapes, it could actually create a seamless arc on an analog display (digital would pixelate it). ;)
#3
pantherx12
Of course, but we're ages away from that sort of thing really.

This is the best we've got for now.
#4
FordGT90Concept
"I go fast!1!11!1!"
I don't think we are; we (the people with the resources) refuse to go there because initial research would be very costly and, because it wouldn't be directly compatible with any existing processor technologies, implementation wouldn't exactly be smooth. Communication between them would need subprocessing of its own (binary would have to be converted from and to symbols). The result would be a major jump in computing though.

After shapes, we'd need a speech processor (decodes sound waves and can produce its own including pitch, tone, and expressiveness). With some good programming, it could completely replace call centers and you'd never be able to tell you were actually talking to a computer.
#5
WarEagleAU
Bird of Prey
Well, I am just blown away that they got these out so quickly. Well done, AMD. Cannot wait to see some type of review if it is possible.
#6
TIGR
FordGT90Concept said:
I don't think we are; we (the people with the resources) refuse to go there because initial research would be very costly and, because it wouldn't be directly compatible with any existing processor technologies, implementation wouldn't exactly be smooth. Communication between them would need subprocessing of its own (binary would have to be converted from and to symbols). The result would be a major jump in computing though.

After shapes, we'd need a speech processor (decodes sound waves and can produce its own including pitch, tone, and expressiveness). With some good programming, it could completely replace call centers and you'd never be able to tell you were actually talking to a computer.
BCIs (Brain Computer Interfaces) are quickly advancing and I agree with you that adding to humans the ability to process data as computers do is coming. By that point I don't think the architecture of the computers connected to the brain will resemble anything like today's systems, but once again I think it will be highly parallel, and with many parallel connections to our highly parallel brains (of course, who knows what modifications we might make to our own organic brains, aside from adding computers to them?).

Speech processing has already advanced far beyond what most people realize, because the text to speech and automated calling systems to which most people have been exposed are far from state of the art. That crap that comes bundled with operating systems and even some of the more expensive speech processing software a regular consumer can buy, are not representative of the speech processing computers are already capable of. Speech processing in real time might be one of those things that is best done without too much parallel processing due to the latency introduced—but then again, it would be small-minded to assume that said latency will always be the issue it is today.
#7
jasper1605
pantherx12 said:
Just imagine a time when 100-core CPUs are available for desktop use, one CPU core per individual program :p

Have 10 of those cores clocked higher than the rest for handling games and heavy-duty apps, and the rest for everything else.

The computer will never slow down (theoretically).
just like computers will never need more than 640K of memory?
#8
pantherx12
jasper1605 said:
just like computers will never need more than 640K of memory?
My statement wasn't implying it would stop at 100 cores :laugh:
#9
sly_tech
When I was a computer science student, I tried to apply the way computers process things to my own brain activity, and it improved little by little: figure out what to do next, spot the repeating processes, organize the steps, cut down the number of steps, and try to use both sides of the brain, essentially re-architecting the way you think. The results were very good; my work as a programmer keeps improving and is more organized than before.

The basic computation a brain can produce is comparing two different things: true or false. So a processor is almost the same as a brain. Why? Humans created it. Comparing ON and OFF, 1 and 0, is the same as the brain's TRUE and FALSE. But with around 100 billion neurons, the brain calculates everything at very high speed and efficiency. It's the same with computers: more cores, more performance. The weakness of a processor is the connection between its cores; the brain has a balanced count of synapses, along with its architecture and the speed of those connections. The brain also has two big blocks of processing, logic and art, just like a computer's CPU and GPU. I know everyone here is talking about CPUs only, but AMD already has the APU on its roadmap, and maybe Intel (with a GPU on-die now, though that's not an APU) and the rest will follow. Humans created the computer; it can never compete with us across the whole thinking process, because it has no soul or desire to think. It is designed for one purpose: to help humans turn very large amounts of raw data into useful information. Simple way to understand it, right? ;D Parallel processing is good, but for now we fail to fully utilize its potential. But wait: in 2011 AMD will perhaps bring Bulldozer, the next step in multi-threading, combining two cores in a better way. I like it. ;D So in conclusion, I'm with pantherx12 and the others on his side.

Parallel computing is more than welcome. We know die shrinks are near their limit, GHz too, so what do we have left to explore? APUs and multi-core have a bright future (though the future and its potential are hard to predict). I think it's fine to buy their products (playing games and so on on multi-core CPUs) to help the manufacturers earn the money to fund their research. Gamers help this industry a lot; they demand more than others when it comes to new technology and features. On the business side, most companies around the world replace their equipment at most once every two years, but PC gamers change parts or upgrade at least once a year. LOL.
#10
pantherx12
Very nice post, sly_tech.

I especially liked the first bit.

It's true, the brain can be trained just as well as muscles can be trained (albeit differently of course heh).
#11
sly_tech
Hehe, thanks pantherx12. Because you could see the real point I wanted to make. ;D
#12
TIGR
Welcome to TPU, sly! Interesting post, gonna read it again. :laugh:
#13
TIGR
sly_tech said:
I think it's fine to buy their products (playing games and so on on multi-core CPUs) to help the manufacturers earn the money to fund their research. Gamers help this industry a lot; they demand more than others when it comes to new technology and features. On the business side, most companies around the world replace their equipment at most once every two years, but PC gamers change parts or upgrade at least once a year. LOL.
:toast: Agreed 100%.
#14
FordGT90Concept
"I go fast!1!11!1!"
Multi-cores are not parallel computing. They can be made to simulate higher clock speeds through synchronous execution but again, that creates a lot of wasteful overhead and massive headaches with desyncing and inter-thread interrupts. Multi-core is today, not tomorrow. The future should move away from threads and move towards non-algorithmic parallel computing or, at bare minimum, hardware synchronization.
#15
eidairaman1
I may just wind up getting a 2 way setup with a properly laid out motherboard
#16
TIGR
FordGT90Concept said:
Multi-cores are not parallel computing. They can be made to simulate higher clock speeds through synchronous execution but again, that creates a lot of wasteful overhead and massive headaches with desyncing and inter-thread interrupts. Multi-core is today, not tomorrow.
Whatever they "simulate," multiple anything working concurrently constitutes parallelism of some form. Splitting a larger problem into smaller ones to be solved simultaneously is the essence of parallel computing. There's bit-level, instruction-level, data, and task parallelism, and so on. Multi-core CPU architecture will be replaced by something else in the future, so sure, you could say it's "today, not tomorrow," but it is a necessary stepping stone to tomorrow, which is why I take exception to your first post in this thread asserting that adding cores is the wrong path to go down.
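To illustrate the distinction between those forms, here is a minimal Python sketch of data parallelism (one operation mapped over partitioned data) versus task parallelism (different, independent operations running concurrently); the `square` and `word_count` functions are invented for the example:

```python
from concurrent.futures import ProcessPoolExecutor

def square(x):
    # Data parallelism: the same operation applied to many inputs.
    return x * x

def word_count(text):
    # Task parallelism: an unrelated job that can run alongside others.
    return len(text.split())

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as ex:
        # Data parallelism: one function mapped over partitioned data.
        squares = list(ex.map(square, range(8)))
        # Task parallelism: different, independent tasks dispatched concurrently.
        f1 = ex.submit(word_count, "parallel computing in action")
        f2 = ex.submit(square, 10)
        print(squares, f1.result(), f2.result())
```

Either way, the larger problem is split into pieces that execute at the same time, which is the point being made above.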

FordGT90Concept said:
The future should move away from threads and move towards non-algorithmic parallel computing or, at bare minimum, hardware synchronization.
That's basically a repeat of what the article I linked to earlier in this thread said.

_______________________________

But okay, my arguments aside. Let's say that the problems of multi-core latency, overhead, etc. are impossible to ever improve or overcome and there's no alternative to a "clog-prone" one-core-managing-many ("master thread") architecture. Let's assume that hardware-managed thread states on multi-core CPUs simply cannot work (you mentioned earlier that that would virtually eliminate software overhead, but you still argue against multi-core CPUs, so that's out). Basically, let's say multi-core is simply unacceptable tech and you get to determine the design of future CPUs, and they will all be single-core monsters that smoke their multi-core inferiors. How are you going to do it?

1. Will the performance come from streamlining processes via new instruction sets?
2. You stated earlier that "a 12 GHz CPU can handle more work than a 4 x 3 GHz CPU because of having no overhead." Will you succeed where AMD and Intel have failed and find ways to overcome the ILP, memory, and power walls that, in our current reality, make such high operating frequencies infeasible?
3. If you were running AMD, starting five years ago, what path would you have set the company down, and what products would they now be releasing instead of these 8- and 12-core CPUs that you criticize?

I ask out of a genuine desire to learn, seriously. I'm completely up for better ways of doing things than the norm.
#17
FordGT90Concept
"I go fast!1!11!1!"
TIGR said:
Multi-core CPU architecture will be replaced by something else in the future, so sure, you could say it's "today, not tomorrow," but it is a necessary stepping stone to tomorrow, which is why I take exception to your first post in this thread asserting that adding cores is the wrong path to go down.
The only time multi-core was innovative is when it debuted with the IBM POWER4 architecture. To keep expanding on what is already known is to let efficiency stagnate.


TIGR said:
Let's assume that hardware-managed thread states on multi-core CPUs simply cannot work (you mentioned earlier that that would virtually eliminate software overhead, but you still argue against multi-core CPUs, so that's out).
That would be the best solution to the current issues, and that is where we should be heading now: not adding more cores that few applications can put to work. The goal should be an architecture in which any program can exploit the full potential of any given number of cores.


TIGR said:
Basically, let's say multi-core is simply unacceptable tech and you get to determine the design of future CPUs, and they will all be single-core monsters that smoke their multi-core inferiors. How are you going to do it?
Think of it as reverse hyperthreading. Instead of one core accepting two threads, two or more cores accept one thread. They share variable states which should allow each core to process a portion of the algorithm while others prepare or dispose of their state. Instructions would still take 2+ cycles to execute but they would be staggered across the cores so that each cycle, one instruction completes. It takes more hardware to accomplish it but the instructions per clock would at least double. I imagine the processor would have four cores exposed to the operating system but under the hood, it could have 16 or more cores internally. The x86 instruction set could still be used.
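The staggering described above can be modeled with a toy simulation. Assume each instruction takes `LATENCY` cycles and one instruction is dispatched per cycle, round-robin across `CORES` internal cores; since `LATENCY` equals `CORES`, each core frees up just in time for its next dispatch, so one instruction retires per cycle once the pipeline fills. All numbers are illustrative, not a real microarchitecture:

```python
LATENCY = 4  # cycles one instruction takes to execute (illustrative)
CORES = 4    # internal cores sharing a single thread, staggered 1 cycle apart

def completions(cycles):
    """Instructions retired within `cycles` cycles: one instruction is
    dispatched per cycle (round-robin across CORES cores, each free again
    just in time because LATENCY == CORES), and each retires LATENCY
    cycles after its dispatch."""
    return sum(1 for start in range(cycles) if start + LATENCY <= cycles)

def single_core(cycles):
    """A lone core must wait LATENCY cycles between instructions."""
    return cycles // LATENCY

print(completions(100), single_core(100))  # staggered: 97, single core: 25
```

In this toy model, the staggered arrangement approaches one retirement per cycle, roughly quadrupling throughput on a single thread, which is the claimed benefit of the scheme.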


TIGR said:
3. If you were running AMD, starting five years ago, what path would you have set the company down, and what products would they now be releasing instead of these 8 and 12-core CPUs that you criticize?
AMD shouldn't have made x86-64. We need a new CISC instruction set that doesn't drag decades of garbage with it.

Other than that, we need to look at Bulldozer before deciding if they screwed up since Athlon 64 or not.
#18
TIGR
So you are really proposing the continuation of multi-core CPUs?
#19
FordGT90Concept
"I go fast!1!11!1!"
Only in the short term (5-10 years). You need multiple cores in order to prevent the system from coming to a standstill, but any more than four or so end up being wasted unless you're using specialized software.

The objective is to accelerate both single- and multi-threaded performance with smarter engineering.
#20
TIGR
What should come after 5-10 years? Single-core CPUs or GPGPUs or something else entirely?
#21
FordGT90Concept
"I go fast!1!11!1!"
Something else entirely. If it doesn't happen, processor performance will flatline due to the limitations of electrons. The transistor has to go. Processor architecture will have to change to whatever structure supports the new physical medium. There's really only question marks 10 years from now. There's a lot of ideas but nothing is gaining traction yet.
#22
pantherx12
Carbon nanotubes should be sorted out by around then, which will nicely fuck up Moore's law : ]
#23
yogurt_21
FordGT90Concept said:
Only in the short term (5-10 years). You need multiple cores in order to prevent the system from coming to a standstill, but any more than four or so end up being wasted unless you're using specialized software.

The objective is to accelerate both single- and multi-threaded performance with smarter engineering.
Since when is the home machine completely practical? This is TPU, right? I haven't wandered into productivity central by mistake, have I?

Last I checked we have several members with quad SLI, quadfire, i7 rigs with 12GB of memory, etc., all of which is overkill for gaming or anything else a home user typically does.

But epeen plays a role in the purchase.

These chips are more than likely going to be used in enterprise and server environments, but a few home users will toss them in as well, because you gain x amount of epeen for every core your machine has.

On an unrelated topic, why oh why is it "Magny Cours"? That's far too close to "mangy cores" if you ask me.
#24
TIGR
yogurt_21 said:
Since when is the home machine completely practical? This is TPU, right? I haven't wandered into productivity central by mistake, have I?

Last I checked we have several members with quad SLI, quadfire, i7 rigs with 12GB of memory, etc., all of which is overkill for gaming or anything else a home user typically does.

But epeen plays a role in the purchase.

These chips are more than likely going to be used in enterprise and server environments, but a few home users will toss them in as well, because you gain x amount of epeen for every core your machine has.

On an unrelated topic, why oh why is it "Magny Cours"? That's far too close to "mangy cores" if you ask me.
I liked that "productivity central" comment. :laugh:

That's true though. Many of us do WCG or Folding@home (I've got ten video cards/fifteen GPUs across five computers running myself, and GT90 himself runs a dual quad-core Xeon system).

GT90: you said yourself that you've written applications that can fully load 8+ cores to 100%. If some but not all software can utilize all these cores, that tells me the problem lies with the software. Anyway, judging multi-core CPU architecture based on how current software utilizes it would be a mistake.

There's always lag between new technology and mainstream software support for it. The first mainstream dual-core CPUs came out in 2005, quads in late '06/early '07 (although there were earlier multi-core CPUs, they were not mainstream enough to get the attention of mainstream software developers). Most applications I know of can already utilize two cores, and many can already fully utilize quads—just three years after they first hit the mainstream consumer market.

There are probably still far more single- and dual-core CPUs than there are quad-core CPUs out there in consumer systems, and you're saying because of how well 4+ cores perform with today's software, it's no good? If the Wright brothers had that attitude about new technology, they would have given up before their first flight because obstacles like gravity were too hard to overcome. But they had faith they could tackle the obstacles, and thus they did. The technology wasn't bad—it just needed time and work to survive its own infancy.
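On the point about software that can load 8+ cores to 100%: embarrassingly parallel work makes this straightforward, as in this minimal sketch (the `burn` workload is an invented stand-in for real CPU-bound work):

```python
import os
from multiprocessing import Pool

def burn(n):
    """CPU-bound stand-in workload: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cores = os.cpu_count()  # one worker per core keeps every core busy
    with Pool(cores) as pool:
        results = pool.map(burn, [100_000] * cores)
    print(f"{cores} workers ran; first result = {results[0]}")
```

One worker process per core sidesteps Python's GIL, so every core stays saturated for the duration of the map; how much real software is structured this way is exactly the lag between hardware and software discussed above.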
#25
Wile E
Power User
HalfAHertz said:
Oh, so your magical ball looked into the problem and found the answer? Well thanks for sharing, might want to drop a ring to AMD's driver department! Unless you did some extensive testing to prove that it is indeed the drivers and not the chipset, you're just as right / wrong as mussels...

BTW, correct me if I'm wrong, but AMD measured their TDP differently from Intel, right? AMD were measuring the average, while Intel was just showing the peak. If memory serves me right, then AMD's 115W equals Intel's 130W chips.
No, it's basic logic. Since the 5k series doesn't have the issue, it is clearly something that ATI was able to solve.