
AMD Starts Shipping 12-core and 8-core "Magny Cours" Opteron Processors

This is getting ridiculous. Most applications aren't very good candidates for multithreading, so more per-core performance is still ideal. Someone has to change this trend of gluing on more cores instead of improving per-core performance. Multiple cores create needless overhead, and before long applications will be slower tomorrow than they are today because the overhead exceeds the actual work done.

While multi-core scaling isn't perfect, it's a necessary step to overcome the fact that a single core has its limits. Diminishing performance gains from clock rate increases, power consumption that climbs steeply with every increment in operating frequency, ILP and memory walls, and simple limits to how well a single core can be designed all force us to use multiple cores to keep up the exponential rate of progress that has come to be expected from chip makers.
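To put rough numbers on the power point, here's a back-of-envelope sketch (the cubic rule of thumb and the figures are assumptions for illustration, not vendor data): dynamic power scales roughly with C·V²·f, and since higher clocks generally need higher voltage, power climbs far faster than frequency, which is exactly why two slower cores beat one doubly-clocked core on the power budget.

```cpp
#include <cmath>
#include <cstdio>

// Rule-of-thumb assumption (not measured data): dynamic power ~ C * V^2 * f,
// and voltage has to rise roughly with frequency, so power grows roughly as f^3.
double relative_power(double freq_ratio) {
    return std::pow(freq_ratio, 3.0);
}

int main() {
    // Doubling the clock of one core vs. adding a second identical core.
    std::printf("2x clock on one core : ~%.0fx the power\n", relative_power(2.0)); // ~8x
    std::printf("2 cores at same clock: ~2x the power\n");
    return 0;
}
```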

The human brain (the most powerful computer known) is massively parallel.

Improving per-core performance is still extremely important, and I don't think AMD or Intel are abandoning that in favor of just increasing core count. Look at the per-core difference between C2D and Core lines of CPUs.

Anyway, mainstream multi-core computing is still in its infancy. The main issue seems to be software algorithms and implementation, not some flaw with the concept of multiple CPU cores itself. There will be challenges in the future, such as the jump from multi-core to many-core CPUs, but I see no signs that multi-core computing is a dead end.
 
The human brain (the most powerful computer known) is massively parallel.
The human brain is designed to process senses; computers are designed to process binary. Ask a human brain to process binary and it fails. Ask a computer to process imagery and it fails.


Improving per-core performance is still extremely important, and I don't think AMD or Intel are abandoning that in favor of just increasing core count. Look at the per-core difference between C2D and Core lines of CPUs.
Core performs miserably without Hyper-Threading enabled. The major improvements to Core come from new instruction sets that streamline complex operations.


Anyway, mainstream multi-core computing is still in its infancy. The main issue seems to be software algorithms and implementation, not some flaw with the concept of multiple CPU cores itself. There will be challenges in the future, such as the jump from multi-core to many-core CPUs, but I see no signs that multi-core computing is a dead end.
The hardware causes the software flaws, but maybe that's just it. Instead of having multiple asynchronous cores, why not make the cores themselves synchronous? Actual thread states would be handled at the hardware level instead of the software level. That would virtually eliminate software overhead.

I see the signs, although they generally aren't anything to be worried about now on a quad-core; however, the more cores there are, the bigger the problem becomes. I don't want to imagine how much trouble it will be to multithread the code that handles multiple cores. The potential for errors, collisions, and other problems increases exponentially.
 
Desktop versions in AM3 or a new socket? I still can't see the (above-)average Joe using more than 4 cores, let alone 6.

ditto

These are useless for everyday users. Most games aren't even coded to use 2 cores, let alone 12.
 
Prices are awesome; these are top performance-per-dollar CPUs.
 
ditto

These are useless for everyday users. Most games aren't even coded to use 2 cores, let alone 12.

And at the rate PC devs are jumping ship, there will never be a time when games use them all.
 
And at the rate PC devs are jumping ship, there will never be a time when games use them all.

?

Dual cores have been out for how long now, and we still don't see universal acceptance of them by devs... I <3 people who spew BS
 
?

Dual cores have been out for how long now, and we still don't see universal acceptance of them by devs...

:wtf: And my point is that pretty much all devs are abandoning PC game releases or releasing crap ports that are hardly optimized... I <3 people who can't read
 
Consoles are going the same way as PCs though. The Xbox 360 has a tri-core CPU with SMT (6 threads at a time), and the PS3 has a dual-threaded main core with up to 8 sub-processors. The only exception is the Wii, which still has a single-core CPU (as far as anyone can tell).
 
?

Dual cores have been out for how long now, and we still don't see universal acceptance of them by devs... I <3 people who spew BS

Lay off the booze, bro... :toast: He was agreeing with you :slap:
 
I was under the impression that the 12-core was 2x 6-core CPUs "sticky taped" together. I could be wrong though.

Yeah, I knew about the 12-cores being two 6-cores sticky-taped together (AMD's lingo), but I had no idea about the 8-core, as I hadn't heard anything about the 8-core CPUs till now :ohwell:
 
Well, my next system might just be AMD... it would let me re-use my 4870s in CrossFire at least (my CrossFire problems stem from the Intel chipset).



One thing all you naysayers are forgetting is that DX11 comes with multithreading as part of its basic design... next-gen games are going to use our spare threads quite well :)
 
Meh. Useless for desktop market. And I doubt we'll see this in a desktop variant. Just look at the package size.

And what Intel Crossfire issues? Your Crossfire issues do not stem from the Intel chipset, they stem from the shitty ATI drivers.
 
Meh. Useless for desktop market. And I doubt we'll see this in a desktop variant. Just look at the package size.

And what Intel Crossfire issues? Your Crossfire issues do not stem from the Intel chipset, they stem from the shitty ATI drivers.

No, they stem from a problem where my cards flicker with Vsync off on Intel chipsets. They don't do it on AMD boards.
 
No, they stem from a problem where my cards flicker with Vsync off on Intel chipsets. They don't do it on AMD boards.

And if they coded proper drivers, it wouldn't be an issue.
 
And if they coded proper drivers, it wouldn't be an issue.

It's a chipset issue. It works on X58 boards, just not on 965 through X48/45. It only seems to happen on 38x0 and 48x0 cards too.
 
One thing all you naysayers are forgetting is that DX11 comes with multithreading as part of its basic design... next-gen games are going to use our spare threads quite well :)
But that doesn't alleviate the problem of the master thread (which orchestrates the worker threads) bringing everything else to a crawl; moreover, Windows 7 does a really, really bad job at synchronizing threads. For example, you can't play most games with WCG running because performance will drop like a rock, despite the 4 cores effectively being idle (WCG runs at idle priority). One core gets held back just a tiny bit, then the other cores end up waiting for it. We also can't forget that Windows 7 itself suffers from the same thread-prioritizing problems when dragging and dropping files while the CPU is 100% loaded with idle-priority work.
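Roughly the situation I mean, as a minimal fork/join sketch (the timings and the WCG-style background load are made up for illustration): the batch isn't done until every worker thread finishes, so one thread held back "just a tiny bit" stalls all the rest.

```cpp
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical fork/join sketch: the batch of work isn't finished until every
// worker is done, so one worker held back "just a tiny bit" stalls all the rest.
int main() {
    using namespace std::chrono;
    const int workers = 4;
    std::vector<std::thread> pool;
    const auto start = steady_clock::now();

    for (int i = 0; i < workers; ++i) {
        pool.emplace_back([i] {
            // Worker 0 stands in for the core slowed by a background task (e.g. WCG).
            std::this_thread::sleep_for(milliseconds(i == 0 ? 150 : 100));
        });
    }
    for (auto& t : pool) t.join();  // the join acts as a barrier: wait for the straggler

    const auto total = duration_cast<milliseconds>(steady_clock::now() - start).count();
    std::printf("batch took ~%lld ms, gated by the slowest thread\n",
                static_cast<long long>(total));
    return 0;
}
```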

It's difficult to explain, but multi-core doesn't have a very bright future. Everything about it multiplies complexity, from operating systems to software. Until that is fixed at the hardware level, no one is going to be excited about more cores except Intel/AMD (because it's cheap and easy) and consumers (because it's the new fad for incorrectly gauging performance, like clock speeds were up to the Pentium 4/D).

Call me a pessimist but this trend is more harmful than helpful to developers and by extension, consumers.
 
But that doesn't alleviate the problem of the master thread bringing everything else to a crawl; moreover, Windows 7 does a really, really bad job at synchronizing threads. For example, you can't play most games with WCG running because performance will drop like a rock, despite the 4 cores effectively being idle (WCG runs at idle priority). One core gets held back just a tiny bit, then the other cores end up waiting for it. We also can't forget that Windows 7 itself suffers from the same thread-prioritizing problems when dragging and dropping files while the CPU is 100% loaded with idle-priority work.

It's difficult to explain, but multi-core doesn't have a very bright future. Everything about it multiplies complexity, from operating systems to software. Until that is fixed at the hardware level, no one is going to be excited about more cores except Intel/AMD (because it's cheap and easy) and consumers (because it's the new fad for incorrectly gauging performance, like clock speeds were up to the Pentium 4/D).

Call me a pessimist but this trend is more harmful than helpful to developers and by extension, consumers.

It may not solve it, but it'll help - and in every (DX11) game, too.
 
It's a chipset issue. It works on X58 boards, just not on 965 through X48/45. It only seems to happen on 38x0 and 48x0 cards too.

If it only happens with 38x0 and 48x0 and only on certain chipsets, it's a driver problem, or a hardware fault by ATI. Either way, it's ATI's fault.
 
It may not solve it, but it'll help - and in every (DX11) game, too.
It makes it easier for the developer by letting the GPU render stream be built somewhat asynchronously. The CPU load is the same.
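For reference, this is roughly what DX11's multithreaded rendering looks like from the developer's side (a minimal sketch; RecordChunk/SubmitChunks and the render-target arguments are placeholder names, and error handling is mostly omitted): worker threads record into deferred contexts, and the main thread just executes the finished command lists, so the CPU-side work gets spread around rather than disappearing.

```cpp
#include <d3d11.h>

// Hypothetical helper: one worker thread records its share of the frame on a
// deferred context and hands back a command list for the main thread.
ID3D11CommandList* RecordChunk(ID3D11Device* device,
                               ID3D11RenderTargetView* rtv,
                               const FLOAT clearColor[4])
{
    ID3D11DeviceContext* deferred = nullptr;
    if (FAILED(device->CreateDeferredContext(0, &deferred)))
        return nullptr;

    deferred->ClearRenderTargetView(rtv, clearColor);
    // deferred->DrawIndexed(...);  // ...record the rest of this thread's scene chunk...

    ID3D11CommandList* cmdList = nullptr;
    deferred->FinishCommandList(FALSE, &cmdList);
    deferred->Release();
    return cmdList;
}

// Main thread, once per frame: replay every worker's recorded command list.
void SubmitChunks(ID3D11DeviceContext* immediate,
                  ID3D11CommandList* const* lists, int count)
{
    for (int i = 0; i < count; ++i) {
        immediate->ExecuteCommandList(lists[i], FALSE);
        lists[i]->Release();
    }
}
```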
 
Parallel computing is without a doubt the future; it just needs to mature, like every other technology in the history of humankind.
 
This is getting ridiculous. Most applications aren't very good candidates for multithreading, so more per-core performance is still ideal. Someone has to change this trend of gluing on more cores instead of improving per-core performance. Multiple cores create needless overhead, and before long applications will be slower tomorrow than they are today because the overhead exceeds the actual work done.


You're thinking about this the wrong way, fella.

Firstly, these are for servers at the moment; as people were saying, what used to take 12 CPUs (4 cores each) can be done with 4 CPUs.

That's space saving! (as well as cheaper, eventually)

Also, it means servers can process more incoming requests, etc., so online games could hold many more avatars in one area.

It also means if someone made a modified L4D server, they could have 1,000 or more zombies come at you at once rather than the typical 50 or so :P



On top of that, imagine running several OSes simultaneously: got a program that won't run on Windows? No problem, just switch to Linux instantly.

You need to step outside your current thinking and see the potential.



Oh, also, your statement about computers not being able to recognise imagery is quickly becoming less and less true; hell, Honda's little robot can recognise chairs and cars etc., and can even recognise the model of a car if it's been taught it.

With more powerful CPUs with more cores, it will be able to function even better.

It could use bunches of 10 cores to control individual body parts as well, to give it much greater dexterity, etc.
 
Oh, also, your statement about computers not being able to recognise imagery is quickly becoming less and less true; hell, Honda's little robot can recognise chairs and cars etc., and can even recognise the model of a car if it's been taught it.

I'm too lazy/tired to respond to him on a point-by-point basis at the moment, but this is an important consideration. Research into the human body shows we are more like computers than ever thought before (DNA as a digital code, for example), and R&D into the most powerful and promising future computer systems is being done by reverse-engineering the way the human brain works. Things our brains do well are what we increasingly want our computers to do, so it makes sense: things like pattern recognition (identifying distinct objects in two- or three-dimensional video/simulations), learning (evolutionary programming), etc.

Seeing how effective massively parallel computing makes the human brain at such tasks is teaching researchers that if we want our computers to perform increasingly "intelligent" and profound operations, we're going to have to step out of the box to take computing to the next level. We have to think beyond traditional methods, because they can only take us so far. At this point, the "next level" is massively parallel hardware. The ability of software to utilize it well will come as the technology matures.
 
You're thinking about this the wrong way, fella.

Firstly, these are for servers at the moment; as people were saying, what used to take 12 CPUs (4 cores each) can be done with 4 CPUs.

That's space saving! (as well as cheaper, eventually)

Also, it means servers can process more incoming requests, etc., so online games could hold many more avatars in one area.

It also means if someone made a modified L4D server, they could have 1,000 or more zombies come at you at once rather than the typical 50 or so :P

On top of that, imagine running several OSes simultaneously: got a program that won't run on Windows? No problem, just switch to Linux instantly.

You need to step outside your current thinking and see the potential.

Oh, also, your statement about computers not being able to recognise imagery is quickly becoming less and less true; hell, Honda's little robot can recognise chairs and cars etc., and can even recognise the model of a car if it's been taught it.

With more powerful CPUs with more cores, it will be able to function even better.

It could use bunches of 10 cores to control individual body parts as well, to give it much greater dexterity, etc.

To put that in one word: Virtualization.

In one line: Virtual servers in data centers, where one physical server with one or two physical CPUs can be used to rent 12 web-servers, each suiting the customer's needs.
 
Parallel computing is without a doubt the future; it just needs to mature, like every other technology in the history of humankind.
Parallel computing is the past. Supercomputers have been doing it for decades. It comes to your home and everyone is in awe. Problem is: what use is a screwdriver without screws? Hence, the fad.


Firstly, these are for servers at the moment; as people were saying, what used to take 12 CPUs (4 cores each) can be done with 4 CPUs.
I know that, and they suit server tasks well. The problem is these processors have no use in workstations, because most workstation applications aren't highly scalable like server applications. That's not very likely to change either, so Intel/AMD are trying to convince corporations to virtualize and cloud compute. Well, cloud computing especially doesn't work in homes because very few homes have a server, and gaming through virtualization is nothing more than a pipe dream today.

Intel/AMD are trying to cater to one crowd (enterprise) while consumers get shafted, because workstation/home computers are well-rounded machines, not task-oriented ones.


It also means if someone made a modified L4D server, they could have 1,000 or more zombies come at you at once rather than the typical 50 or so :P
Your GPU will be crying for mercy long before your CPU. And still, there is little 100 slow cores can do that one fast core couldn't. Personally, I think mainstream processors should have no more than four cores. The focus needs to be on core performance. If, as I stated earlier, that takes a synchronous core design, so be it. The point is: most users with quad cores rarely see their CPU usage go over 50%, if not 25%, doing anything they do on a day-to-day basis.


On top of that, imagine running several OSes simultaneously: got a program that won't run on Windows? No problem, just switch to Linux instantly.
Unless you are talking about virtualization, that doesn't work: resource collisions.


You need to step outside your current thinking and see the potential.
I'm looking 10-50 years out here. The prognosis starts getting grim in about 6 years when die shrinks are no longer possible. From there, it's nothing but question marks. Nothing revolutionary has happened in computing since the 1970s. We're still using the same old techniques with different materials.


Oh, also, your statement about computers not being able to recognise imagery is quickly becoming less and less true; hell, Honda's little robot can recognise chairs and cars etc., and can even recognise the model of a car if it's been taught it.
Which demonstrates the brain is falling behind. We can build processors faster, not brains (at least not yet). It is still inefficient because images don't translate well to binary but that's the nature of the beast.


With more powerful CPUs with more cores, it will be able to function even better.
Only if the process is not linear. If step b requires the result from a, step c requires the result from b, step d requires the result from c, and so on, it is doomed to stay slow for the foreseeable future. That is what most concerns me (aside from the manufacturing process).
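That serial-chain ceiling is just Amdahl's law; a quick sketch with made-up fractions shows how fast the returns vanish once part of the work is a strict a-then-b-then-c chain:

```cpp
#include <cstdio>

// Amdahl's law: with a parallel fraction p of the work spread over n cores,
// speedup = 1 / ((1 - p) + p / n); the serial chain sets the ceiling.
double speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const double p = 0.8;  // made-up split: 80% parallel, 20% strict serial chain
    for (int cores : {1, 2, 4, 12, 48})
        std::printf("%2d cores: %.2fx\n", cores, speedup(p, cores));
    // Even with infinite cores, the limit is 1 / (1 - 0.8) = 5x.
    return 0;
}
```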


It could use bunches of 10 cores to control individual body parts as well, to give it much greater dexterity, etc.
A 486 could handle that with lots of room to spare. Computer-controlled robots have been in use for a long time.


To put that in one word: Virtualization.

In one line: Virtual servers in data centers, where one physical server with one or two physical CPUs can be used to rent 12 web-servers, each suiting the customer's needs.
Oh, so you want some nameless corporation 1,000 miles from where you live to know everything you did and are doing? That's the Google wet dream right there. They would know everything about you, from how much is in your checking account, to which sites you frequent, to all your passwords and usernames, to your application usage, to everything that would exist on your "personal computer." Cloud computing/virtualization is the epitome of data-mining. Google already knows every single search you made in the past six months with their search engine.

Corporations want this. It is vital we not give it to them. Knowledge is power.
 
In terms of parallel computing, you haven't seen anything yet.

If you want to fight the concept, go build a better system that doesn't utilize it. Otherwise, take a look around. Multi-core CPUs, multi-CPU systems, Crossfire and SLI, RAID ... running components in parallel isn't perfectly efficient, but guess what: neither is anything else.

Sure, maybe there's overhead, and maybe more than we'd like (although that's improving), but as a car guy [I gather] you should understand very well that sometimes you have to take a loss to make bigger gains (unless you don't believe in forced induction either?).
 