Tuesday, February 23rd 2010

AMD Starts Shipping 12-core and 8-core "Magny Cours" Opteron Processors

AMD has started shipping its 8-core and 12-core "Magny Cours" Opteron processors for sockets G34 (2P-4P+) and C32 (1P-2P). The processors mark the entry of several new technologies for AMD, such as a multi-chip module (MCM) approach that increases the processor's resources without complicating chip design beyond refinements of the existing Shanghai and Istanbul designs. The new Opteron chips also use third-generation HyperTransport interconnect technology for 6.4 GT/s links between the processor and the chipset, and between processors in multi-socket configurations, and they embrace registered DDR3 memory. Each processor addresses memory over up to four independent (unganged) memory channels. Technologies such as HT Assist improve inter-die bandwidth on the MCMs. The processors further carry 12 MB of L3 cache on board, and 512 KB of dedicated L2 cache per core.
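
To software, each Magny Cours package presents itself as two NUMA nodes, one per die, each with its own L3 cache. As an illustration, here is a minimal sketch (assuming a Linux host; the sysfs paths are the standard ones, and the output naturally depends on the machine) that enumerates that layout:

```python
# Minimal sketch: enumerate CPU/cache topology on a Linux host via sysfs.
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "?"

cpus = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*"),
              key=lambda p: int(p.rsplit("cpu", 1)[1]))

for cpu in cpus:
    pkg = read(os.path.join(cpu, "topology/physical_package_id"))
    core = read(os.path.join(cpu, "topology/core_id"))
    # index3 is typically the L3; on an MCM like Magny Cours each die
    # shares its own L3, so one package shows two separate L3 domains.
    l3 = read(os.path.join(cpu, "cache/index3/shared_cpu_list"))
    print(f"{os.path.basename(cpu)}: package {pkg}, core {core}, "
          f"L3 shared with CPUs {l3}")
```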

In the company's blog, the Director of Product Marketing for Server/Workstation products, John Fruehe, writes: "Production began last month and our OEM partners have been receiving production parts this month." The new processors come in the G34 (1974-land LGA) and C32 (1207-land LGA) packages. There are two product lines: the 1P/2P-capable (cheaper) Opteron 4000 series, and the 2P-4P-capable Opteron 6000 series. AMD has planned a total of 18 SKUs; some of these are listed below, with OEM prices in EUR:
  • Opteron 6128 (8 cores) | 1.5 GHz | 12 MB L3 cache | 115 W TDP - 253.49 Euro
  • Opteron 6134 (8 cores) | 1.7 GHz | 12 MB L3 cache | 115 W TDP - 489 Euro
  • Opteron 6136 (8 cores) | 2.4 GHz | 12 MB L3 cache | 115 W TDP - 692 Euro
  • Opteron 6168 (12 cores) | 1.9 GHz | 12 MB L3 cache | 115 W TDP - 692 Euro
  • Opteron 6172 (12 cores) | 2.1 GHz | 12 MB L3 cache | 115 W TDP - 917 Euro
  • Opteron 6174 (12 cores) | 2.2 GHz | 12 MB L3 cache | 115 W TDP - 1,078 Euro
Sources: AMD Blogs, TechConnect Magazine

125 Comments on AMD Starts Shipping 12-core and 8-core "Magny Cours" Opteron Processors

#76
Frick
Fishfaced Nincompoop
FordGT90Concept: Oh, so you want some nameless corporation 1,000 miles from where you live to know everything you did and are doing? That's the Google wet-dream there. They would know everything about you, from how much is in your checking account, to which sites you frequent, to all your passwords and user names, to your application usage, to everything that would exist in your "personal computer." Cloud computing/virtualization is the epitome of data-mining. Google already knows every single search you made in the past six months with their search engine.

Corporations want this. It is vital we not give it to them. Knowledge is power.
That's the way it is, and it looks like it will be more like this in the future. As you say, that is what the cloud is, and more and more stuff is being shoved into it. I for one really like local apps and resources, but the cloud is already here even if I don't like it.

EDIT: BTW panther, how in heck did you manage to get your post count to 4000 in a year?
#77
HalfAHertz
Wile E: If it only happens with 38x0 and 48x0 and only on certain chipsets, it's a driver problem, or a hardware fault by ATI. Either way, it's ATI's fault.
Oh, so your crystal ball looked into the problem and found the answer? Well, thanks for sharing; you might want to drop a line to AMD's driver department! Unless you did some extensive testing to prove that it is indeed the drivers and not the chipset, you're just as right / wrong as Mussels...

BTW, correct me if I'm wrong, but AMD measured their TDP differently from Intel, right? AMD were measuring the average, while Intel was just showing the peak. If memory serves me right, then AMD's 115 W equals Intel's 130 W chips.
#78
pantherx12
Frick: EDIT: BTW panther, how in heck did you manage to get your post count to 4000 in a year?
Unemployed :laugh:

Also have very little to do in town so I'm online a lot.
#79
TIGR
I would like to submit this article to the discussion.

The other articles linked to at the end of this one are worth looking over too.
#80
FordGT90Concept
"I go fast!1!11!1!"
TIGR: In terms of parallel computing, you haven't seen anything yet.

If you want to fight the concept, go build a better system that doesn't utilize it. Otherwise, take a look around. Multi-core CPUs, multi-CPU systems, Crossfire and SLI, RAID ... running components in parallel isn't perfectly efficient, but guess what: neither is anything else.
I have a dual Xeon server sitting right next to me and have written a lot of multithreaded applications (capable of loading 8+ cores to 100%). It has its uses but, if it weren't for BOINC, most of the time it would be sitting around 0-5% CPU usage. A supercomputer without work is little more than an inefficient furnace.
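
Loading the cores is the easy part, incidentally; giving them something useful to do is the hard part. A minimal sketch of that kind of do-nothing 100% load (one process per core, since Python's GIL would otherwise serialize threads):

```python
# Minimal sketch: peg every core with CPU-bound busy-work. The "work"
# here is deliberately useless arithmetic, just to keep the cores busy.
import multiprocessing as mp

def burn(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    cores = mp.cpu_count()
    with mp.Pool(cores) as pool:      # one worker process per logical core
        pool.map(burn, [50_000_000] * cores)
    print(f"pegged {cores} cores at ~100%")
```
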
pantherx12: Unlike humans, that build up muscle memory and automatic responses, a machine has to think about moving, so once its physical speed starts building up it becomes more and more difficult.
Humans require more brain activity than a computer does during isometric contractions (contralateral sensorimotor cortex, premotor areas, and ipsilateral cerebellum light up like a Christmas tree in an EMG): www.ncbi.nlm.nih.gov/pubmed/17394210

All a computer needs is a sensor to tell it what position the motor is in; it calculates how to power a trajectory to get the motor to its needed position, and verifies it arrived. It's just a bunch of 1s and 0s--the language computers eat for lunch.
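
That whole loop fits in a few lines. Here is a minimal sketch against a toy simulated motor (the gain, tolerance, and "physics" are made-up illustration values):

```python
# Minimal sketch of the sense -> compute -> actuate loop described above,
# run against a toy simulated motor so the example is self-contained.
import time

KP = 0.8          # proportional gain (assumed tuning value)
TOLERANCE = 0.01  # how close counts as "arrived"

position = 0.0    # simulated motor state

def read_position():
    return position              # stand-in for a real position sensor

def set_power(power):
    global position
    position += power * 0.1      # toy physics: power nudges the motor

def move_to(target):
    while abs(target - read_position()) >= TOLERANCE:
        error = target - read_position()
        set_power(KP * error)    # push harder the further away we are
        time.sleep(0.001)        # even a 1 kHz loop is trivial work
    set_power(0.0)               # arrived: stop driving

move_to(1.0)
print(f"settled at {read_position():.3f}")
```
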
pantherx12: Whereas having a CPU core for each sensor it has, it will be able to adjust things all that much quicker.
Very wasteful. Monitoring a motor takes only a few cycles every second. An ARM processor is more than sufficient. Hell, your cellphone could probably manage a dozen robots depending on precision.
TIGR: I would like to submit this article to the discussion.

The other articles linked to at the end of this one are worth looking over too.
That's similar to what I said before: cores need to work synchronously at the hardware level, so that at the software level you only see a few cores while many more behind the scenes handle the data.

That would mean a different instruction set (doubt x86 would work) and a new generation of processors.
#81
pantherx12
That was an awful example; conscious movement is a completely different kettle of fish : /
#82
FordGT90Concept
"I go fast!1!11!1!"
You can move your arm without "thinking" about it. You do it all the time in your sleep and when you touch something hot (you have to override the brain's reaction to keep touching it). The brain still does a lot of work to make it happen subconsciously.
#83
pantherx12
Of course, but it's still a bad example.

I've trained Parkour for long enough that I've experienced completely automatic responses.

Say I've vaulted a wall but I can't see what's on the other side, and suddenly there's another obstacle I wasn't expecting. I don't think "HOLY SHIT WALL DO SOMETHING"; my body just reacts to the visual stimulus, like when someone throws a punch and you flinch. Same sort of concept.

A machine can't do anything like that without a lot of cores, as rather than a reaction it will have to see the object and then essentially think about what to do next. The more cores it has, the more potential responses to the obstacle it can think of, and having a core per sensor will also allow it to truly accurately judge its position in space and thus go over the next obstacle with no problems.


With less cores it simply won't be able to plan the next movement efficiently.
#84
Mussels
Freshwater Moderator
no, it'd add extra latency as the right hand literally wouldn't know what the left hand was doing. we evolved with a central brain because it's more effective.
#85
TIGR
A central brain consisting of many synapses, which are rather well-connected.
#86
pantherx12
Mussels: no, it'd add extra latency as the right hand literally wouldn't know what the left hand was doing. we evolved with a central brain because it's more effective.
And what about the spinal cord, which can control automatic responses?


The brain is in one place but it's definitely "multi core" :laugh:

If it wasn't, humans would barely function; I certainly know I can think of more than one thing at once.

Hell, right now I'm typing whilst listening to music and planning what I'm doing tomorrow, all simultaneously in my brain with no slowdowns XD
#87
btarunr
Editor & Senior Moderator
FordGT90Concept: Oh, so you want some nameless corporation 1,000 miles from where you live to know everything you did and are doing?
Huh? Never heard of data-centres? You think everyone who has a website or company VPN has his own server?
#88
WhiteLotus
The brain can not be compared to a computer chip.

The brain has many, MANY locations in it that all work together to do the simplest of tasks. Say you want to bend down and pick up a pencil. Different nerves tell your fingers to open and close compared to the nerves that carry the signal to your shoulder to move into the correct position to allow the movement to happen. You have different routes, parasympathetic and sympathetic: one allows your body to rest and relax, the other allows your body to get up and move around.
If it wasn't for the brain nerves (cranial nerves) telling your heart to slow the fuck down, you'd all be having a heart rate of about 200-odd beats a minute, which ain't healthy.

Again, you can not compare the brain with a computer chip. A brain is just too damn complicated.
#89
TIGR
Parallelism doesn't necessarily require a multi-core CPU architecture. It can take many forms and function in many different ways. Multi-tasking does not require parallel computing if the system is fast enough to be perceived as doing things simultaneously. So a computer (or brain) wouldn't necessarily need multiple CPU cores (or multiple anything operating in parallel) to do the things you have mentioned, panther (BTW I'm tired so hope I'm making sense).
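
To illustrate that first point, here is a minimal sketch of time-sliced multitasking: three tasks interleaved on a single thread, zero parallelism, yet all making progress "at once":

```python
# Minimal sketch: "simultaneous" tasks interleaved on ONE thread.
# Nothing runs in parallel; fast switching makes it look simultaneous.
import asyncio

async def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        await asyncio.sleep(0)   # yield so the other tasks get a turn

async def main():
    await asyncio.gather(
        task("music", 3),
        task("typing", 3),
        task("planning", 3),
    )

asyncio.run(main())
```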

However, massive parallelism is, in one form or another (or more likely, in multiple forms—parallel parallelism?), going to be required for the extremely demanding tasks that humans will be handing to computers in the coming several decades. I doubt that the architecture of a system designed fifty years from now would be recognizable or make sense to any of us discussing this here and now, but I do know it will feature a lot of parallel computation.

And technology like these 8 and 12-core CPUs from AMD is a necessary stepping stone along the way.
#90
Frick
Fishfaced Nincompoop
FordGT90Concept: I have a dual Xeon server sitting right next to me and have written a lot of multithreaded applications (capable of loading 8+ cores to 100%). It has its uses but, if it weren't for BOINC, most of the time it would be sitting around 0-5% CPU usage. A supercomputer without work is little more than an inefficient furnace.
Has anyone stated anything else? Of course you need software for it; that's self-explanatory.

Err... Anyway, I tend to think of a future where massive parallel computing power is stored in a few central locations, with next to no local applications at all. Everything electronic will be connected to the net, for good and bad. Phones, toasters, fridges, everything. Pretty scary, but it seems like it's heading that way.
#91
FordGT90Concept
"I go fast!1!11!1!"
pantherx12: A machine can't do anything like that without a lot of cores, as rather than a reaction it will have to see the object and then essentially think about what to do next. The more cores it has, the more potential responses to the obstacle it can think of, and having a core per sensor will also allow it to truly accurately judge its position in space and thus go over the next obstacle with no problems.
Go to a modern computerized factory and you'll see simple computers with good sensors completing tasks at inhuman speeds (ejecting bad seeds, plucking fertilized eggs out of the line-up, mixing inks, painting and assembling cars, etc.).

Like a computer, the body generally has only one response to save itself from danger. If it guesses wrong, bad things happen; if it guesses right, things end up not so bad. An example is when someone sees a deer 20 feet in front of them. Some do the stupid thing and veer off the road, rolling the car and killing the occupants. Some do the smart thing and brake, accepting the collision (a trained response). The brain isn't developing an extensive list of possibilities and weeding out which is best. It just knows the status quo won't do and takes the first alternative.
pantherx12: With less cores it simply won't be able to plan the next movement efficiently.
With computers, it is prescribed. If this, this, and this conditions are true, do that. That's why they are so efficient.
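
A minimal sketch of what "prescribed" means in practice (the conditions and actions here are invented for illustration):

```python
# Minimal sketch of a prescribed, rule-based reaction: if this, this,
# and this conditions are true, do that. Conditions/actions are invented.
def react(obstacle_ahead: bool, distance_m: float, moving: bool) -> str:
    if obstacle_ahead and distance_m < 0.5:
        return "stop"        # most urgent rule wins; no deliberation
    if obstacle_ahead and distance_m < 2.0:
        return "slow down"
    if moving:
        return "continue"
    return "idle"

print(react(obstacle_ahead=True, distance_m=1.2, moving=True))  # slow down
```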

Where computers are slowest is very human tasks like recognizing a face, determining if someone is "beautiful" or not, recognizing voice tones, identifying body language, and detecting emotions. The brain can do all of these tasks in little more than 100ms. It takes a computer that long just to realize it is looking at a "face." Because of the extreme variety in the real world, checking for a specific list of conditions takes a lot of computing horsepower. The technology is improving but again, it stems from the shortfalls of binary: neurons vs. transistors.
btarunr: Huh? Never heard of data-centres? You think everyone who has a website or company VPN has his own server?
It's the same deal: privacy is non-existent.
#92
Frick
Fishfaced Nincompoop
FordGT90Concept: It's the same deal: privacy is non-existent.
Welcome to the future mate. :(
#93
pantherx12
TIGR: Parallelism doesn't necessarily require a multi-core CPU architecture. It can take many forms and function in many different ways. Multi-tasking does not require parallel computing if the system is fast enough to be perceived as doing things simultaneously. So a computer (or brain) wouldn't necessarily need multiple CPU cores (or multiple anything operating in parallel) to do the things you have mentioned, panther (BTW I'm tired so hope I'm making sense).

However, massive parallelism is, in one form or another (or more likely, in multiple forms—parallel parallelism?), going to be required for the extremely demanding tasks that humans will be handing to computers in the coming several decades. I doubt that the architecture of a system designed fifty years from now would be recognizable or make sense to any of us discussing this here and now, but I do know it will feature a lot of parallel computation.

And technology like these 8 and 12-core CPUs from AMD is a necessary stepping stone along the way.
Of course, but we're a long way off from CPUs that are that fast; maybe when optical computing gets into full swing. (Give it 15 years at the current rate of laser size halving, although that's 15 years for a working prototype, not something public.)


I still think multi-core is the way forward. As I just mentioned, when light computing is out, heat output would be minuscule, so you could pack as many cores into one package as you physically could, and why the hell not: ten 1 THz CPUs would pwn an individual one after all :laugh:

I'm a firm believer that when it comes to hardware there's no such thing as overkill :laugh:
#94
TIGR
WhiteLotus: The brain can not be compared to a computer chip.

The brain has many, MANY locations in it that all work together to do the simplest of tasks. Say you want to bend down and pick up a pencil. Different nerves tell your fingers to open and close compared to the nerves that carry the signal to your shoulder to move into the correct position to allow the movement to happen. You have different routes, parasympathetic and sympathetic: one allows your body to rest and relax, the other allows your body to get up and move around.
If it wasn't for the brain nerves (cranial nerves) telling your heart to slow the fuck down, you'd all be having a heart rate of about 200-odd beats a minute, which ain't healthy.

Again, you can not compare the brain with a computer chip. A brain is just too damn complicated.
Sure, a human brain is more complicated than today's computer chips, but within the next decade, computers will be built that easily exceed the computing power of the human brain, and in the decades that follow, software will be written that will make it look simple by comparison. After all, we [and our brains] are but complex machines, as are computers. The difference is, computer technology is evolving at an exponential rate, while the evolution of our brains is... oh, probably something like going from a Pentium II to a Pentium III over the course of 100,000 years.

I would say that computers not only can be, but must be (and will be) compared to the human brain. Reverse engineering such a marvelous piece of machinery is a powerful tool in the quest for progress, and it will certainly impact the way we develop our technology.
#95
pantherx12
FordGT90Concept: Go to a modern computerized factory and you'll see simple computers with good sensors completing tasks at inhuman speeds (ejecting bad seeds, plucking fertilized eggs out of the line-up, mixing inks, painting and assembling cars, etc.).

Like a computer, the body generally has only one response to save itself from danger. If it guesses wrong, bad things happen; if it guesses right, things end up not so bad. An example is when someone sees a deer 20 feet in front of them. Some do the stupid thing and veer off the road, rolling the car and killing the occupants. Some do the smart thing and brake, accepting the collision (a trained response). The brain isn't developing an extensive list of possibilities and weeding out which is best. It just knows the status quo won't do and takes the first alternative.



With computers, it is prescribed. If this, this, and this conditions are true, do that. That's why they are so efficient.

Where computers are slowest is very human tasks like recognizing a face, determining if someone is "beautiful" or not, recognizing voice tones, identifying body language, and detecting emotions. The brain can do all of these tasks in little more than 100ms. It takes a computer that long just to realize it is looking at a "face." Because of the extreme variety in the real world, checking for a specific list of conditions takes a lot of computing horsepower. The technology is improving but again, it stems from the shortfalls of binary: neurons vs. transistors.

Your two statements sort of conflict there; your first example is cancelled out by "Because of the extreme variety in the real world".

A machine on the production line has consistency, real life does not.

Also, when it comes to sorting seeds, that's actually done by just shaking the hell out of them: seeds that are not ready yet fall through a mesh. It then goes to a second sieve for further sorting, followed by a bath, where dead seeds sink.

So then they just scoop off the good stock and dry it.

BAM: seeds that grow every time.


*edit* If anyone finds it odd that I know that: I spend most of my time researching and reading about things; I love to know how things work :D
#96
btarunr
Editor & Senior Moderator
FordGT90Concept: It's the same deal: privacy is non-existent.
Regardless, a majority use rented servers. So AMD is catering to a majority.
#97
Mussels
Freshwater Moderator
btarunr: Regardless, a majority use rented servers. So AMD is catering to a majority.
ding ding ding.


data centers will gobble these up - going quad core to 12 core gets them 3x the work in the same physical space.
#98
FordGT90Concept
"I go fast!1!11!1!"
Frick: Has anyone stated anything else? Of course you need software for it; that's self-explanatory.

Err... Anyway, I tend to think of a future where massive parallel computing power is stored in a few central locations, with next to no local applications at all. Everything electronic will be connected to the net, for good and bad. Phones, toasters, fridges, everything. Pretty scary, but it seems like it's heading that way.
But what is the software? You can make any program consume 100% of a given processor, but if it isn't doing something useful, it is wasteful. The industry appears to be willfully pulling itself apart. You've got the CPU market trying to achieve parallelism less than 5% of the market can even use; you've got the GPU market attempting to do the same and, in the process, diminishing the ability of the GPU to perform its original task; and you've got developers with all these hardware resources available to them, and either a 1000-page manual on how to use them or nothing that could possibly require that many resources. It's like the whole "32-bit for gaming is all that is needed" argument for the rest of the hardware. It's not good. :(
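
The math behind that complaint is Amdahl's law: with a parallel fraction p running on n cores, the best-case speedup is 1 / ((1 - p) + p / n). A quick sketch (the fractions are made-up examples):

```python
# Amdahl's law: best-case speedup on n cores when only a fraction p of
# the work can run in parallel. The example fractions are made up.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    print(f"p={p:.2f}: 4 cores -> {speedup(p, 4):.2f}x, "
          f"12 cores -> {speedup(p, 12):.2f}x")
# p=0.50 yields under 2x even on twelve cores: the extra cores mostly
# sit idle unless the software can actually feed them.
```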


Oh goodie, so they know how you like your toast and what is in your fridge too. :shadedshu I think I will move to Mars now (or die trying to get there).
pantherx12: Your two statements sort of conflict there; your first example is cancelled out by "Because of the extreme variety in the real world".
That's in human appearance and behavior. Computers have been able to do everything I listed individually. It is just impractical to combine them all.
btarunr: Regardless, a majority use rented servers. So AMD is catering to a majority.
Mussels: data centers will gobble these up - going quad core to 12 core gets them 3x the work in the same physical space.
True and true. They (IBM, Intel, Sun, AMD, etc.) created that industry and they are going to feed it. I never said these wouldn't be good for data centers/enterprises. My concern is sticking them in consumer computers (wasteful and inevitably coming because consumers are creating demand for waste) or moving consumers to clouds of them (privacy).
#99
pantherx12
FordGT90Concept: That's in human appearance and behavior. Computers have been able to do everything I listed individually. It is just impractical to combine them all.
And does that not make you think a hybrid system is the way forward?

Have perhaps 4 cores that run at 4 GHz or more, and have the remainder low-clocked (1.5 GHz) for handling non-intensive tasks, etc.?
#100
TIGR
pantherx12: And does that not make you think a hybrid system is the way forward?

Have perhaps 4 cores that run at 4 GHz or more, and have the remainder low-clocked (1.5 GHz) for handling non-intensive tasks, etc.?
Or give all cores the ability to clock up and down as needed.
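
Linux already exposes exactly that, per core, through the cpufreq interface. A minimal sketch that reads it (the sysfs paths are the standard ones; values depend on the machine and governor):

```python
# Minimal sketch: read each core's current clock and allowed range via
# Linux's cpufreq sysfs interface (needs a cpufreq-capable system).
import glob

def mhz(path):
    with open(path) as f:
        return int(f.read()) // 1000   # sysfs reports kHz

for node in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
    cpu = node.split("/")[-2]          # e.g. "cpu0"
    cur = mhz(f"{node}/scaling_cur_freq")
    lo = mhz(f"{node}/scaling_min_freq")
    hi = mhz(f"{node}/scaling_max_freq")
    print(f"{cpu}: {cur} MHz (allowed {lo}-{hi} MHz)")
```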