Tuesday, September 18th 2012

AMD Shows Off A10-5800K and FX-8350 Near IDF

It's traditional for AMD to camp outside an ongoing IDF event (at a nearby hotel suite), siphoning off a small portion of its visitors. In the backdrop of this year's IDF event in San Francisco, AMD showed off two of its upcoming flagship client processors, the socket FM2 A10-5800K "Trinity" APU, and socket AM3+ FX-8350 "Vishera" CPU. The two chips were shown powering fully-loaded gaming PCs.

The FX-8350 was shown installed on a machine with an ASUS Crosshair V Formula (-Z?) motherboard, liquid cooling, and a Radeon HD 7970 graphics card. The chip was clocked at 5.00 GHz (4.80 GHz when the picture was taken) and was running popular CPU-intensive benchmarks such as wPrime and Cinebench. The A10-5800K was shown running application demos, including a widget that displays the real-time boost states of the processor and GPU cores.
Source: Hardware.fr

80 Comments on AMD Shows Off A10-5800K and FX-8350 Near IDF

#51
eidairaman1
The Exiled Airman
Fix your posts by combining both.
Posted on Reply
#52
Dent1
Steevo: My Phenom II is faster at Cinebench and most applications than Bulldozer is, clock for clock.

I have proven it with scores.
Good for you.
Steevo: Should I be banned?
Why are you asking me this?
Posted on Reply
#53
HumanSmoke
Andy77: Recognize rhetoric when you see it... Intel beats AMD in optimized compiled code and low task counts while AMD has a good grip on parallel jobs and high task counts... [wall of text containing a lot of "if"] ...If you look at Intel you'll have some improvements, but they are still limited to single-core processing...
So Intel must suck at servers and HPC?... Oh no, that's right, they don't. In fact, AMD's deficit seems to be getting worse


(from Anandtech's E5-2660 review)
Andy77: it's nice to have... your own fab
Hey, AMD used to have their own too. Weird that they actually paid GlobalFoundries to get rid of their remaining stake in the company, no?
Andy77: When GloFo surpasses Intel in fabbing
This is a company that made a complete hash of 32nm while Intel were fabbing 22nm, and are presently ramping 28nm (and still have a tricky 20nm/gate-last transition to come) while Intel are full steam ahead on 14nm. Presumably "GloFo surpassing Intel" involves some far-future date yet to be fixed, a magic wand and a sprinkling of faerie dust.
Posted on Reply
#54
Super XP
HumanSmoke: So Intel must suck at servers and HPC?... Oh no, that's right, they don't. In fact, AMD's deficit seems to be getting worse
You missed the guy's point. Benchmarks are coded to perform better on Intel and to punish non-Intel CPUs. This is a basic and well-known fact.

As for the manufacturing process, Intel has been excellent in that department for many years now. Still, with AMD's limited funds and resources, they have kept up very well.
Posted on Reply
#56
Aquinus
Resident Wat-man
Super XP: You missed the guy's point. Benchmarks are coded to perform better on Intel and to punish non-Intel CPUs. This is a basic and well-known fact.
Find me a benchmark compiled to use FMA3 and XOP for floating-point operations and I bet you will see Bulldozer take off, since it will be using the two 128-bit FP units separately instead of as one unified 256-bit FP unit. We already know that BD's integer performance is pretty reasonable. AVX is also obviously going to run faster on Intel processors, considering Intel developed it.
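For anyone who wants to try that themselves, here's a minimal sketch (hypothetical file name and GCC-style flags; exact code generation depends on the compiler version, so treat it as illustrative only) of the kind of multiply-accumulate loop a compiler can map onto the FMA/XOP units when it's allowed to target the Bulldozer family:

```c
/* fma_loop.c -- hypothetical example: a multiply-accumulate loop that an
 * optimizing compiler can turn into fused multiply-add instructions.
 *   gcc -O3 -march=bdver2 fma_loop.c -o fma_loop   (Piledriver: FMA3/XOP)
 *   gcc -O3 -march=bdver1 fma_loop.c -o fma_loop   (Bulldozer: FMA4/XOP)
 * versus something like -march=corei7-avx for Intel's AVX path. */
#include <stdio.h>

#define N 4096

int main(void)
{
    static float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {      /* simple test data */
        a[i] = (float)i * 0.25f;
        b[i] = 2.0f;
        c[i] = 1.0f;
    }

    for (int i = 0; i < N; i++)        /* c[i] = a[i] * b[i] + c[i] -> FMA candidate */
        c[i] = a[i] * b[i] + c[i];

    printf("c[N-1] = %f\n", c[N - 1]); /* keep the result live */
    return 0;
}
```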

Also, you have to consider the performance benefit per core. Consider for a moment a quad-core Intel processor with Hyper-Threading. You have 8 threads to use, but if all 4 cores are doing the same task using the same resources, HT isn't going to boost the speed very much. BD, on the other hand, has dedicated resources for each thread, so the gain per thread holds up better once you go past four threads.

Hyper-Threading helps when doing different tasks simultaneously, where BD (on paper) is better at doing similar tasks concurrently. BD has its architectural deficiencies, but AMD has more room to improve its IPC while saving a lot of die space. All in all, AMD is trying to use CPU die area efficiently, because they know there will come a point where CPUs can't get any smaller (and we're slowly but surely getting to that point).
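A rough way to see that per-thread scaling for yourself is to time the same floating-point loop at 4 and then 8 threads and compare. This is only a sketch (hypothetical file name, OpenMP), not a proper benchmark:

```c
/* scale_test.c -- hypothetical sketch: run identical FP work across however
 * many threads OpenMP is given and print the wall time.
 *   Build: gcc -O2 -fopenmp scale_test.c -lm -o scale_test
 *   Run:   OMP_NUM_THREADS=4 ./scale_test ; OMP_NUM_THREADS=8 ./scale_test
 * The gap between the two runs shows what the "extra" threads (HT siblings
 * on Intel, the second core of each module on BD) actually buy you. */
#include <math.h>
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const long iters = 40L * 1000 * 1000;
    double sum = 0.0;
    double t0 = omp_get_wtime();

    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < iters; i++)
        sum += sqrt((double)i);        /* identical FP work, split across threads */

    printf("threads=%d  time=%.3fs  checksum=%.3e\n",
           omp_get_max_threads(), omp_get_wtime() - t0, sum);
    return 0;
}
```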

All in all, yeah, Intel is winning the CPU game, but that doesn't mean they will always be winning it. Think about it: last year Intel had 54 billion USD in revenue and AMD had a little under 6.6 billion. The difference in size between these companies is huge. Intel simply has more resources and more money to invest in CPU innovation. I might also add that AMD has a GPU market to satisfy, so CPUs aren't their only game. In the end, I think that experience with GPUs is what will make AMD processors take off. AMD knows how to play the concurrency game.
Posted on Reply
#57
fullinfusion
Vanguard Beta Tester
Glad to see AMD's finally bringing out a new chip for the enthusiasts to play with.

Good luck this time, AMD! I hope it's faster than PII, and much better than BD :eek:
Posted on Reply
#58
Steevo
Andy77: Recognize rhetoric when you see it...


Higher-IPC parts can't really clock high, which high-clocking parts like BD can make up for in performance just because of that. Add the capabilities on top of it, and it's not just a clocking chip for benchers.

The Intel vs. AMD thing is outdated thinking, and past arguments and uArchs have no value here. Sure, Intel beats AMD in optimized compiled code and low task counts while AMD has a good grip on parallel jobs and high task counts. I, for one, won't upgrade just to run a few apps at a time to get full performance.

Higher clocks "normally" need higher voltages, but that's not always the case, and it depends more on the process than on the uArch.

For low power usage in notebooks, it's nice to have tons of cash and your own fab, and to make several architectures in parallel, picking the winning one from each when it's done. When GloFo surpasses Intel in fabbing, we'll see AMD's parts use less power than Intel's.

You're probably stuck in the thinking that today's processing power still relies heavily on IPC alone. That may be true for low-task devices, which Celerons are great for, but today I want to run more on one machine, not just a game and that's it.

BD's longer instruction pipeline was a step toward concurrent processing, which is a "cure" for the limitations silicon brings. Maybe when graphene becomes standard you can revisit the IPC idea with 100 GHz CPUs. But for now, we don't need a faster pipeline as badly as we need better instruction management and more core integration per module for better concurrent processing. A lot of the hours-long work is done in concurrent tasks these days, not in one uber-fast task using only a quarter of the CPU because, hey, it doesn't know what else to do with a short pipeline. This is the important aspect of AMD's module "invention". If you look at Intel you'll have some improvements, but they are still limited to single-core processing with some optimization like HT, unless the programmer invests in multi-core processing, and not all invest as much as is needed to produce the best results. Some don't at all; besides Valve and one other, did anyone else look at a parallel game engine?

We don't need that anymore, unless we're talking about competitions; for actual work this is old tech, and AMD's "module" will bring the needed push forward. I'm still eager to hear a 4-cores-per-module announcement.



Time / completed task.

Unless you care about pointless, little-to-no-value data, think about what a non-tech person is interested in; you know, the guys with the money that push the world forward, including the governments of the big nations, are not interested in the inner workings, but in how much output they get for this much input and whether it's faster than its competitor or the older version.

Damn, car analogy :laugh:
Old VW Scirocco vs. new VW Scirocco: not much of a difference performance-wise. Some like the old one because it feels like a real machine, others the new one because of the comfort. Outside people are interested in those things, not in PS-for-PS.

BTW, are all aspects of the compared CPUs the same, so you can decide how much of the performance is down to IPC? AFAIK, CPUs aren't the only thing that gets upgraded; the rest of the platform is as well, and that eats into the findings, resulting in higher error margins.

Maybe for an engineer it has some meaning, but how many of you actually make CPUs for a living? /s
High IPC can't clock high?


Is that why no one with an Intel Core i-series chip gets above 3 GHz?
Posted on Reply
#59
NeoXF
eidairaman1: Hopefully, Steamroller should literally make it disappear.
Yeah... Like Phenom II did with Phenom... let's hope FX-8350 isn't to FX-8150 JUST what Phenom X4 9650 was to Phenom X4 9600 tho...
TheGuruStud: I'm in. I can run it at this speed on air. Like its purpose, it'll be a stopgap till Steamroller, for me.

This clock is where the limitations in BD start to break down; it scales better than 100%.
With the fixes in Steamroller and even higher clocks, it should do well.
It's shown running on AMD's in-house closed-loop water-cooling kit... of course you have to consider every possibility: 1. It's an ES sample. 2. It's a binned sample, picked to perform as well as possible, ES or not. 3. AMD's in-house water cooler isn't exactly the best cooling out there. 4. It's apparently bench-stable on decent voltages.
nt300: That was back in April of 2012. It's very clear that the desktop Piledriver parts are going to be beasts in performance and priced very well. In this respect, AMD really has no choice but to offer better price/performance vs. anything Intel has out. AMD needs to sell, and keep selling, real bad.

The crazy rumour that Piledriver is really just a higher-clocked Bulldozer is garbage talk. Don't remember where I read this, but that will not be the case; we're talking about the same design, but greatly tweaked, with added MMX instructions for Piledriver.

Piledriver, IMO, will not be a slouch; it should perform very well and finally outperform any Phenom IIs out there.
Well, considering Zambezi could beat an i7 Sandy Bridge, clock for clock, in about 2-3 tests out of 20 (such as the x264 HD second pass or WinRAR)... Piledriver could at least bring the same scenario vs. i7 Ivy Bridge, as well as close the gap in others, while hopefully lowering power usage quite a bit too (which is one of its main weaknesses, IMO).
Posted on Reply
#60
seronx
Cinebench R13 (the benchmark running on the FX-8350) is more optimized for Bulldozer. You'll finish workloads faster, and with less power, on an FX-8150 than on a Phenom II with R13 and R14.
Andy77: BD's longer instruction pipeline was a step toward concurrent processing, which is a "cure" for the limitations silicon brings.
Bulldozer has a shorter pipeline than Nehalem.
Posted on Reply
#61
HumanSmoke
Super XP: You missed the guy's point. Benchmarks are coded to perform better on Intel and to punish non-Intel CPUs. This is a basic and well-known fact.
No, I didn't. AMD fanboys always seem to fall back on the "all benchmarks are Intel-compiled" excuse. As far as I'm aware, GCC 4.7.0, Clang, etc. aren't Intel, and BD still needs some very specialised apps and coding in order to shine.
Posted on Reply
#62
D007
Why mention that the processor runs at 5 GHz but run it at 4.8 GHz? To me that says it doesn't run at 5 at all. If it was stable at 5, they'd have left it at 5. That's my 2 cents anyway. But I like 5. 5 is a nice, round number. :)
Posted on Reply
#63
seronx
D007: Why mention that the processor runs at 5 GHz but run it at 4.8 GHz? To me that says it doesn't run at 5 at all. If it was stable at 5, they'd have left it at 5. That's my 2 cents anyway. But I like 5. 5 is a nice, round number. :)
The processor ran @ 5 GHz
www.techpowerup.com/img/12-09-18/87b.jpg
9.06 = 5 GHz
8.73 = 4.8 GHz

www.xtremesystems.org/forums/showthread.php?276245-AMD-quot-Piledriver-quot-refresh-of-Zambezi-info-speculations-test-fans&p=5137550&viewfull=1#post5137550
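If those are the Cinebench multi-threaded scores from the two runs, they scale almost perfectly with clock: 9.06 / 5.0 ≈ 1.81 points per GHz versus 8.73 / 4.8 ≈ 1.82 points per GHz, so the chip isn't hitting any obvious wall between 4.8 and 5 GHz.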
Posted on Reply
#64
Super XP
I already have an FX-8120 @ 4.40 GHz, 8 cores, running with low volts. I can only imagine 5.0 GHz :eek: Give me Piledriver with higher clocks and lower power.
Posted on Reply
#65
Aquinus
Resident Wat-man
seronx: Bulldozer has a shorter pipeline than Nehalem.
Wrong. Intel has used a 14-stage pipeline ever since the Core 2 series of processors.
Posted on Reply
#66
seronx
Aquinus: Wrong. Intel has used a 14-stage pipeline ever since the Core 2 series of processors.
www.realworldtech.com/nehalem/
At this IDF, Intel is announcing the details of Nehalem, a second generation 45nm microprocessor and the next step in the evolution of their flagship line. Nehalem differs from the previous generation in that it was explicitly designed not only to scale across all the different product lines, but to be optimized for all the different product segments, from mobile to MP server. This implies a level of flexibility above and beyond the Core 2. Nehalem refines almost every aspect of the microprocessor, although the most substantial changes were to the system architecture and the memory hierarchy. This article describes in detail the architecture and pipeline of Nehalem, a quad-core, eight threaded, 64 bit, 4 issue super-scalar, out-of-order MPU with a 16 stage pipeline, 48 bit virtual and 40 bit physical addressing, implemented in Intel’s high performance 45nm process which uses high-K gate dielectrics and metal gate stacks
^-- source Singhal, Ronak. Inside Intel Next Generation Nehalem Microarchitecture. Intel Developers Forum, April 1, 2008.

Bulldozer has a 15 stage pipeline
Nehalem has a 16 stage pipeline
Sandy Bridge has an 18 stage pipeline

No wonder Nehalem and Sandy Bridge have lower power consumption: they have fewer gates per stage!
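For context, and only as a rough rule of thumb rather than anything specific to these chips: splitting roughly the same logic across more stages leaves fewer gate delays per stage, which is what lets a deeper pipeline clock higher, roughly f_max ≈ 1 / (t_logic/N + t_latch) for N stages, where t_latch is the fixed per-stage latch overhead. How that translates into power depends on the voltage and clock the design is actually pushed to.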
Posted on Reply
#67
eidairaman1
The Exiled Airman
General gee-whiz about pipelines:

courses.engr.illinois.edu/cs232/fa2011/lectures/l14.pdf
Posted on Reply
#68
Steevo
Ohez noez. You mean read, AND comprehend?


It's easier to flail around about megahurts and how IPC doesn't matter.


I think about 80% of that PDF is accurate, but I would either need convincing on the other 20%, or would argue the advantages/disadvantages of some items. Such as the compiler removing all hazards: how much time does that take if we run it in real time, versus how much larger do we make a dataset by making items redundant to prevent issues (hard faults and stalls)?

Many things are due to x86 and its own issues, and the lack of programming in pure x64, as well as the almost inherent need to move to a contiguously mapped memory space with OS-controlled and OS-aware redundancy. Add to this hardware that can schedule between OpenCL or CUDA transparently to the software (NOT DRIVER LEVEL!!!) and you increase application performance to the same level that developed platforms get.
Posted on Reply
#69
Aquinus
Resident Wat-man
Steevo: the lack of programming in pure x64
You don't program X64, it just represents 64-bit memory blocks where the CPU can do math operations on numbers up to 2^64 instead of 2^32. X86 has no problem doing operations in 64-bit on a 64-bit processor. Not sure what you're trying to actually say here. Sorry.
Posted on Reply
#71
seronx
Aquinus: Troll less please. :mad:
Ronak Singhal is the lead architect of Nehalem.
Aquinus: You don't program X64, it just represents 64-bit memory blocks where the CPU can do math operations on numbers up to 2^64 instead of 2^32. X86 has no problem doing operations in 64-bit on a 64-bit processor. Not sure what you're trying to actually say here. Sorry.
x86 can only do 32-bit (and smaller) math/ops.
Posted on Reply
#72
Steevo
seronx: Ronak Singhal is the lead architect of Nehalem. x86 can only do 32-bit math/ops.
X86 can do X64 math, but it takes longer. Having more memory addresses available, and addressable without needing translation, means faster processing.
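To make the "takes longer" part concrete, here's a minimal sketch (hypothetical file name): the same 64-bit addition compiled as 32-bit code gets split into an add/adc pair on the two halves, while a 64-bit build does it in one add. Compile the same source both ways and compare the assembly; exact output varies by compiler, and the -m32 build may need multilib support installed.

```c
/* add64.c -- hypothetical example: one 64-bit addition, built two ways.
 *   gcc -O2 -m32 -S add64.c -o add64_32.s   (32-bit: add/adc pair on the halves)
 *   gcc -O2 -m64 -S add64.c -o add64_64.s   (64-bit: a single 64-bit add)      */
#include <stdint.h>
#include <stdio.h>

uint64_t add64(uint64_t a, uint64_t b)
{
    return a + b;   /* one C expression; one or two machine adds, depending on mode */
}

int main(void)
{
    printf("%llu\n", (unsigned long long)add64(3000000000ULL, 3000000000ULL));
    return 0;
}
```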
Posted on Reply
#73
seronx
Steevo: X86 can do X64 math, but it takes longer. Having more memory addresses available, and addressable without needing translation, means faster processing.
Only the reverse is true.
X64 can do X86, but it takes longer.
x86 CPUs cannot run x64 code.
Posted on Reply
#74
Aquinus
Resident Wat-man
seronx: www.realworldtech.com/nehalem/ ^-- source Singhal, Ronak. Inside Intel Next Generation Nehalem Microarchitecture. Intel Developers Forum, April 1, 2008.

Bulldozer has a 15 stage pipeline
Nehalem has a 16 stage pipeline
Sandy Bridge has an 18 stage pipeline

No wonder Nehalem and Sandy Bridge have lower power consumption: they have fewer gates per stage!
I believe my source more than your source. :shadedshu

www.intel.com/content/dam/doc/manual/64-ia-32-architectures-optimization-manual.pdf

Troll less please. :mad:
Posted on Reply
#75
xorbe
You guys aren't going to resolve the # of pipe stages unless someone spells out what each stage does or what opcode it references. 14-18 vs 15, it's all in the same ballpark. It's not like 10 vs 24 or something.
Posted on Reply