Wednesday, April 16th 2014

AMD Demoes its Next-Gen x86 APU Running Fedora Linux

Red Hat Summit 2014 -- AMD today announced another major milestone in the development of its enterprise software ecosystem with the first public demonstration of its second-generation AMD Opteron X-Series APU, codenamed "Berlin," running a Linux environment based on the Fedora Project. The Fedora Project is a Red Hat-sponsored, community-driven Linux distribution, providing a familiar, enterprise-class operating environment to developers and IT administrators worldwide. This is important to companies that want to transition to x86 APU servers but are reluctant to introduce new tools and software platforms into their IT environments. This demonstration also represents a significant step forward in expanding the footprint of x86 APU accelerated performance within the data center.

AMD's premiere demonstration of "Berlin" will showcase the world's first server APU featuring Heterogeneous System Architecture (HSA), ahead of its official launch later this year. The demonstration features advancements incorporated in "Project Sumatra" that enable Java applications to take advantage of graphics processing units (GPUs) within AMD server APUs. The combination of Linux and Java on AMD APU platforms provides an ideal platform for server-based multimedia workloads and general-purpose GPU compute that will help drive new levels of workload efficiency in the data center. AMD will also demonstrate software based on OpenCL and OpenGL running on "Berlin."
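Project Sumatra's approach is worth sketching: rather than requiring a separate GPU language, it aimed to let the JVM compile ordinary Java 8 parallel Stream operations down to the GPU on HSA hardware. A minimal sketch of the kind of data-parallel loop that qualifies (the class and method names below are illustrative, not part of Sumatra's API; on a stock JVM the same code simply runs on CPU threads):

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class SaxpyStream {
    // A SAXPY-style kernel expressed as a Java 8 parallel stream.
    // This is the pattern Sumatra targeted for GPU offload; on an
    // ordinary JVM the forEach just runs on the fork/join CPU pool.
    static float[] saxpy(float a, float[] x, float[] y) {
        float[] out = new float[x.length];
        IntStream.range(0, x.length)
                 .parallel()
                 .forEach(i -> out[i] = a * x[i] + y[i]);
        return out;
    }

    public static void main(String[] args) {
        float[] x = {1f, 2f, 3f};
        float[] y = {10f, 20f, 30f};
        // 2*1+10=12, 2*2+20=24, 2*3+30=36
        System.out.println(Arrays.toString(saxpy(2f, x, y)));
    }
}
```

The appeal for server workloads is that the same bytecode runs everywhere: a Sumatra/HSA-enabled JVM could offload the data-parallel `forEach` to the APU's GPU without source changes.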

"As servers adapt to new and evolving workloads, it's critical that the software ecosystem support the requirements of these new workloads," said Suresh Gopalakrishnan, corporate vice president and general manager of the Server Business Unit, AMD. "We are actively engaged with a broad set of partners in the data center software community who are bringing to market the software infrastructure to seamlessly enable x86 APU based servers."

AMD is a founding member of the HSA Foundation, an organization dedicated to building robust ecosystems to support APU technologies. For more information please visit AMD's booth 621 at the Red Hat Summit today and tomorrow where the HP Moonshot M700 Cartridge based on the AMD Opteron X-Series X2150 APU, the upcoming second generation AMD Opteron X-Series "Berlin" APU, and the AMD SM15000 servers as well as partner technologies will be on display.

17 Comments on AMD Demoes its Next-Gen x86 APU Running Fedora Linux

#1
refillable
They should really be able to develop some revolutionary things (like 128-bit, for example) to be able to compete. Right now, both x86 giants are innovating slowly.
#2
Covert_Death
the point of 128-bit would be what? a Qubit CPU would be much more beneficial for calculations. the advantage of 128 will likely NEVER be needed...
#3
RejZoR
Covert_Death said:
the point of 128-bit would be what? a Qubit CPU would be much more beneficial for calculations. the advantage of 128 will likely NEVER be needed...
Never say never. 640K was also supposedly enough for everything, and we are currently at what, 64GB?
#4
Vinska
btarunr said:
The demonstration features advancements incorporated in "Project Sumatra" that enable Java applications to take advantage of graphics processing units (GPUs) within AMD server APUs. The combination of Linux and Java on AMD APU platforms provides an ideal platform for server-based multimedia workloads and general purpose GPU compute that will help drive new levels of workload efficiency in the data center.
"If Java had true garbage collection, most programs would delete themselves upon execution." —Robert Sewell

refillable said:
They should really able to develop some revolutionary things (Like 128-bit, for example) to be able to compete. Right now both x86 giants are innovating slowly.
Correction: Intel would gladly halt most, if not all, innovation, but for the past [many] years AMD held the innovation ball and kept forcing Intel to push forward, following AMD's innovations.
P.S. That also means whenever You people praise Intel processors for how well they are made, thank AMD for forcing Intel not to slack off all those years.
#5
pr0n Inspector
Vinska said:
"If Java had true garbage collection, most programs would delete themselves upon execution." —Robert Sewell
Correction: Intel would gladly halt most, if not all innovation, but for the past [many] years, AMD held the innovation ball and kept forcing Intel to push forward; following AMD's innovations.
P.S. that also means whenever You people praise Intel processors for how well they are made, thank AMD for forcing Intel to not slack off all those years.
Selective memory. Intel was always the x86 leader with the exception of the Netburst era. AMD has been going downhill since K8 and judging by their road map they are not even trying to get back in the game.
#6
sweet
pr0n Inspector said:
Selective memory. Intel was always the x86 leader with the exception of the Netburst era. AMD has been going downhill since K8 and judging by their road map they are not even trying to get back in the game.
It is something like how the Russians try to be the best at anti-air, because the US air force is superior to theirs. AMD will never catch up to Intel in terms of performance, so they have to build another strategy to stay in the market.
#7
MikeMurphy
I'm most excited about the very wide memory bus this could bring to desktop APUs.
#8
Vinska
pr0n Inspector said:
Selective memory. Intel was always the x86 leader with the exception of the Netburst era. AMD has been going downhill since K8 and judging by their road map they are not even trying to get back in the game.
If not for AMD, on desktops we would still have single-core chips that are x86 only (not x86_64), to name a few things. If You hadn't noticed, the only reason Intel's chips have those is that they had to catch up to AMD, who did something new, really.
#9
pr0n Inspector
Vinska said:
If not for AMD, on desktops we would still have single core chips that are only x86 (not x86_64), to name a few things. If You hadn't noticed, the only reason Intels have those is because they had to catch up to AMD who did something new, really.
You speak as though if Y never happened we would never be able to have an improved X that is just as good.

If AMD64 never happened we would have some other flavors of 64-bit. Who knows, maybe RMS Itanium would actually take off, or something less x86-ish since so many other architectures were fully 64-bit. But AMD64 did happen and it was the easiest way out because it was compatible with IA-32.

And do I even need to point out how illogical it is to say "multi-core because AMD" (IBM was the first, btw)? Multi-core is such a blatantly obvious path that I don't know why you think Intel would never have gone down this road had AMD not done it first.
#10
Yorgos
pr0n Inspector said:
Selective memory. Intel was always the x86 leader with the exception of the Netburst era. AMD has been going downhill since K8 and judging by their road map they are not even trying to get back in the game.
I could add more quotes like the one above, but it would be meaningless.

History teaches us that we do not learn from history, and no one bothers to read history.
Go read what Cyrix was and what amazing things Transmeta did for x86 (and Cyrix, too).
Go read how those companies got ruined by Intel, and see how attached people were to logos and brand names in the NetBurst period.
Most innovations in x86 over the last 20 years were introduced in the labs of three companies: Transmeta, Cyrix, and AMD, from low-power processors to internal RISC pipelines and the powerful AMD64 instruction set for servers.

And to continue with the quote from above:
"Intel was always the x86 leader" in the periods when no one else made x86 processors, or when no one else was allowed to make x86 processors.

One man's garbage is someone else's gold: IBM uses x86 chips as cheap substitutes for its massive processor line.
#11
pr0n Inspector
1. Really? You're now blaming the demise of Cyrix on Intel instead of mismanagement from its buyers?
2. Transmeta didn't even make "real" x86 processors. And how exactly did Intel ruin Transmeta?
3. Another one bringing up AMD64 like it's anything but an evolutionary step of x86. Intel had a different idea: IA-64. We just pretend that never happened because it was too radical of a change for us mortals. Ironic really, their own success backfired on them.
4. Considering how x86 completely changed the landscape of supercomputers, I think it's safe to say that it's gold for lots of people!
#12
RejZoR
pr0n Inspector said:
Selective memory. Intel was always the x86 leader with the exception of the Netburst era. AMD has been going downhill since K8 and judging by their road map they are not even trying to get back in the game.
I have to disagree. The K6 was very competitive against Intel's Pentiums. When the K7 arrived, AMD was the first to breach the 1GHz barrier. I had an AMD K7 1GHz Thunderbird CPU, and it was as fast as a 1.2GHz Intel Pentium. The Athlon XP (Palomino/Thoroughbred) was again chewing up Intel with supreme per-clock efficiency. The Athlon 64 was the first 64-bit x86 CPU and, like the Athlon XP, had very good efficiency. AMD was the only one to ever ship odd core counts (triple cores). AMD was the first to provide true 6-core CPUs, and while those X6s didn't excel that much in single-threaded ops, they were awesome for multithreaded work. The only time we can say AMD sort of failed was Bulldozer, where it was even worse than the older X6 CPUs.

But all in all, they made a lot of "firsts" (many of which I haven't even mentioned here) and they have been pushing Intel forward to compete. Without AMD, Intel would never have released the Core architecture, and later Nehalem, the first high-end i7 that finally properly utilized HyperThreading technology (the older Pentium 4's HT was often rather bad for performance).
#13
pr0n Inspector
The K6 actually has a bit of interesting history. For years AMD was making cheaper clones long after Intel had launched its products; the K5 was their first original design, best described as "meh" by the time it reached the market. The K6 design we know today is actually the work of NexGen, which AMD bought in 1996; the original K6 that AMD had designed themselves was reportedly disappointing. Atiq Raza of NexGen would also go on to be president of AMD and was supposed to be Jerry Sanders' successor (that didn't happen). The K6 made AMD competitive instead of just getting by, but it was the K7 that put them in the lead.

Now, the K7. Dirk Meyer brought his team of Alpha engineers to AMD and, combined with the NexGen people and AMD's "original" staff, created this gem while Intel was busy developing NetBurst (the P III was a half-hearted stop-gap product before the P4 came out). The rest, well, we already know how that turned out.

Like I said, AMD went into a downward spiral after the K8. Phenom was flat-out disappointing; Phenom II merely matched Core 2 clock-for-clock while Intel was already selling Nehalem. Yes, a Core 2-class hexa-core at that price point was pretty cool, which is why I bought a 1090T, but it was a generation behind what Intel had to offer. They were back to "economical alternative" mode. Then Bulldozer happened.

Frankly, the only times AMD was truly competitive were when they took in new blood smarter than their own people. I don't know what they did with those people now that they've given up on the performance desktop and are instead trying to sell APUs that have weak CPUs, and GPUs that can only be described as "suck less than Intel IGP", with the promise of future software improvements that will make the sucky CPU you have today suck less by leveraging your weak GPU.
#14
Vinska
pr0n Inspector said:
You speak as though if Y never happened we would never be able to have an improved X that is just as good.

If AMD64 never happened we would have some other flavors of 64-bit. Who knows, maybe RMS Itanium would actually take off, or something less x86-ish since so many other architectures were fully 64-bit. But AMD64 did happen and it was the easiest way out because it was compatible with IA-32.

And do I even need to point out how illogical it is to say "multi-core because AMD"(IBM was the first btw). Multi-core is such a blatantly obvious path I don't know why you think Intel would've never gone down this road had AMD didn't do it first.
'cept if You'd look more closely, Intel always tried to slack off (heck, I can understand them – slacking off means no R&D costs) as much as they could in the desktop/laptop CPU market. And they would have slacked off a whole LOT more if they could have managed to keep x86, which was/is ubiquitous in desktops and laptops, all for themselves. But OH NOES! AMD kept pushing new things, forcing Intel to stop slacking off.
And yeah, sure, multicore and some flavour of a 64-bit version of x86 would have surfaced, sure. But I bet your ass it would have happened years later and we'd be several years behind on x86 based CPUs compared to now if not for AMD pushing those things. Don't forget multicore && x86-64 ain't the only things AMD was responsible for.
#15
pr0n Inspector
Vinska said:
'cept if You'd look more closely, Intel always tried to slack of (heck, I can understand them – slacking off means no R&D costs) as much as they could on the desktop / laptop CPU market. And they would have slacked off a whole LOT more if they could have managed to keep the x86, which was/is ubiquitous in desktops and laptops, all for themselves. But OH NOES! AMD kept pushing new things, forcing Intel to stop slacking off.
And yeah, sure, multicore and some flavour of a 64-bit version of x86 would have surfaced, sure. But I bet your ass it would have happened years later and we'd be several years behind on x86 based CPUs compared to now if not for AMD pushing those things. Don't forget multicore && x86-64 ain't the only things AMD was responsible for.
I don't think you understood what I said. There were plenty of other 64-bit architectures before either side announced theirs, not other flavors of x86-64. Intel spent the late '90s developing NetBurst and IA-64, an entirely different architecture. The idea was that NetBurst would be the last x86 arch, 64-bit computing would move on to IA-64, and maybe somewhere down the line they would extend the "legacy" x86 into 64-bit. AMD instead spent those years doing what Intel had done once before (16-bit to 32-bit): extending x86 into 64-bit; less ambitious, but it maintained full backward compatibility.
#16
Vinska
@pr0n Inspector that's why I said the desktop/laptop market. If Intel had the x86 market[1] all to themselves, they would have had little interest in improving it, save for process shrinks and maybe some minor improvements here and there every once in a while.
Because MS Windows (along with all those Windows programs and games) was ubiquitous in the desktop/laptop market at the time (now, too, but not as much), and it ran on x86 only, they had a market that was unlikely to just go away even if they stopped bringing significant improvements. So they could have, and I'd say would have, let themselves do little to no innovation there.
And push their Itanium in the enterprise markets, blah blah blah

[1] I don't think Intel intended x86 based CPUs to be used for servers & similar stuff. But that's another story.
#17
pr0n Inspector
No, Intel thought x86 was hitting a dead end. They planned for IA-64 to be the future mainstream architecture; they didn't plan for it to fail spectacularly and end up forever stuck in the tiny little market it has today.
Intel obviously knew software support was important, which is why they helped MS port Windows to IA-64, provided a toolchain, and even offered (slow) x86 emulation. Again, at the time they thought it would all work out by the time 64-bit computing was relevant to the masses.
See, it's not that Intel wasn't doing anything; they just went down the wrong road and wasted their efforts on something that failed.

----
Pentium Pro and its successors were sold in the high-end workstation/server market.