Wednesday, May 11th 2011

AMD A-Series APUs Tested Against Sandy Bridge CPUs at Gaming on IGP

What happens when you pit Intel's "Visually Smart" Sandy Bridge processors against Radeon-enriched AMD Fusion A-Series accelerated processing units? Intel's chips do terribly at gaming on integrated graphics. Surprise! And that is before considering that AMD is pitching its A-Series Fusion APUs as a lot more than CPUs with embedded GPUs: they are meant to make lower-mainstream discrete graphics pointless, and to shift the software ecosystem toward heavier GPGPU use, so applications can benefit from the over 500 GFLOPS of compute power the 400 stream processor DirectX 11 GPU brings to the table.

A leaked presentation slide shows AMD's performance projections for the A-Series GPU. Tests included GPU-heavy DirectX 10 titles such as Crysis Warhead and Borderlands, as well as DirectX 11-ready titles such as DiRT 2. AMD's quad-core A8-3850, A8-3650, and A8-3450 were included alongside Intel's dual-core Sandy Bridge Core i3-2100 and quad-core Core i5-2300 and Core i5-2500K. The Atom-competitive E-350 Zacate dual-core was also in the comparison, perhaps to show that it is nearly as good at graphics as Intel's much higher-segment Core series processors.
30 frames per second (FPS) is considered the "playable" limit by some tech journalists, but AMD defined a playable range between 25 and 30 FPS. Each of the three A-Series chips scored above 25 FPS in every test, while the A8-3650 and A8-3850 reached or crossed the 30 FPS barrier. Going by the test results, AMD certainly achieved what it set out to do, which is to use its immense GPU-engineering potential to lift up its CPU business. The A-Series APUs should make a formidable option for home desktop buyers who want strong graphics for casual gaming: they cost about the same as Intel's dual-core Sandy Bridge chips while offering four x86-64 cores. AMD's new A-Series Fusion APUs will launch in early June.
Source: DonanimHaber

50 Comments on AMD A-Series APUs Tested Against Sandy Bridge CPUs at Gaming on IGP

#26
Bjorn_Of_Iceland
Love that yellow blast graphic with the red "Playable Frame Rates" written on it :D. So '60s commercial
Posted on Reply
#27
cadaveca
My name is Dave
Assimilator: [H] posted a review of 6990 + 6970 (essentially 6970 tri-fire) a while back and they found it scaled excellently, beating out GTX 580 SLI in a number of scenarios and at a much lower price point.
I think you misunderstood what I meant by NOW. I meant upon release, will it work with 2x VGAs, plus the "IGP"? Hence my comment about it only being a single discrete add-in. ;)
Posted on Reply
#28
happita
spynoodle: I somehow doubt this. A lot.
Why is that? The graphs might very well be fake, but we all know how Intel's integrated graphics performs, especially how their earlier procs with IGPs have done. Now, Sandy Bridge has improved quite a bit on that front, but from what I have always noticed, AMD and NVIDIA motherboard IGPs have almost ALWAYS outperformed Intel's counterparts.
Posted on Reply
#29
NC37
Depends what you use your machine for. Personally I'd rather have better graphics than CPU if I had to choose between the two. Generally when I play games, the ones I have the most problems with are the more GPU-bound ones. CPU-intensive games I don't find as difficult, because they tend not to get more demanding as time goes on.

Hydrophobia: Prophecy is a good example right now. It is a GPU eating monster. I have to turn my 460's fan up to 100% just to play it and keep temps out of the 100C range. GPU usage is also almost always above 90%.

Not that I'd be looking to buy a budget level laptop right now, but if I did, Fusion would be very attractive. Actually wish Apple would switch to these in their low end Macbooks cause my old iBook is in need of an update. Only use the thing for writing and web browsing. Haven't wanted to spend the $$ to replace yet till I see some interesting features.
Posted on Reply
#30
xenocide
I'm only interested in these for HTPCs/laptops. Even if this slide is legit, you have to account for "AMD inflation", which means the Llano APUs, which prioritized equal emphasis on CPU and IGP, are barely capable of outperforming SB, which kind of just tacked the IGP on there. That stupid clip-art starburst makes me think this is all fake though...
Posted on Reply
#31
Frizz
If these results are the same as the final product then we're going to be seeing a lot more PC gamers in the future :toast:
Posted on Reply
#32
sethk
Do these APUs or their corresponding chipsets (NB) have a dedicated Video Transcode / encode / decode block, like the Z68's "QuickSync" feature?
Posted on Reply
#33
donanimhaber.com
APU: the definition

An APU integrates a CPU and a GPU on the same die thus improving data transfer rates between these components while reducing power consumption. APUs can also include video processing and other application-specific accelerators.

So, AMD is (or will be) simply but effectively targeting Intel's heart: "the onboard video".
Posted on Reply
#34
Unregistered
donanimhaber.com: An APU integrates a CPU and a GPU on the same die thus improving data transfer rates between these components while reducing power consumption. APUs can also include video processing and other application-specific accelerators.

So, AMD is (or will be) simply but effectively targeting Intel's heart: "the onboard video".
they're already stabbing onboard video and letting it bleed while mutilating it for breakfast
Posted on Edit | Reply
#35
wiak
this is gonna be epic in notebooks and premium netbooks, who wants an Atom now? or some cheap Pentium with on-chip crapstics?

same goes for HTPCs: who wouldn't want a Radeon HD 5570-class GPU and a Phenom II-class CPU in the same chip in their little HTPC in the living room, instead of Intel with sucky built-in crapstics?
Posted on Reply
#36
laszlo
finally office PCs with good integrated graphics
Posted on Reply
#37
btarunr
Editor & Senior Moderator
laszlo: finally office PCs with good integrated graphics
...to play Battlefield when the boss isn't looking.
Posted on Reply
#38
Imsochobo
btarunr: ...to play Battlefield when the boss isn't looking.
My work PC has an i7 2.13 GHz quad with HT, 16 GB RAM and a Mobility HD 5870 (HD 5770-based)
I can play games all nicely :)
Posted on Reply
#39
p3ngwin1
Assimilator: Yes because Plants vs Zombies requires a high-end DX11 GPU! No offense AMD, but unless these CPUs are able to compete with Sandy Bridge in terms of performance, no-one is going to give a hoot about how good the integrated graphics are.



[H] posted a review of 6990 + 6970 (essentially 6970 tri-fire) a while back and they found it scaled excellently, beating out GTX 580 SLI in a number of scenarios and at a much lower price point.
you do know that GPGPU processing is used for more than gaming right?

Apple uses OpenCL in the OS and Windows will use it in Win8; Office 2010 has GPU acceleration, as do all of today's Internet browsers (covering practically everything on and off screen), media players, etc.

it's not only needed for games, it's usable for almost anything you do with your OS, and that usage is increasing every year. the customer may not care about graphics processors; they just need to know price/performance information.

to say a GPGPU is not needed for "normal" applications and casual computer usage is to show ignorance of today's applications and their abilities.
Posted on Reply
#40
Fourstaff
p3ngwin1: to say a GPGPU is not needed for "normal" applications and casual computer usage is to show ignorance of today's applications and their abilities.
On the other hand, to say GPGPU is needed for "normal" applications and casual computer usage is hopelessly optimistic; there are very few, if any, "normal" applications which benefit from the boost provided by GPGPU. Specific applications like Photoshop are another story, though.
Posted on Reply
#41
cadaveca
My name is Dave
laszlo: finally office PCs with good integrated graphics
btarunr: ...to play Battlefield when the boss isn't looking.
With browsers and stuff becoming 3D accelerated, I do think this has a purpose in the office other than gaming.
Posted on Reply
#42
p3ngwin1
Fourstaff: On the other hand, to say GPGPU is needed for "normal" applications and casual computer usage is hopelessly optimistic; there are very few, if any, "normal" applications which benefit from the boost provided by GPGPU. Specific applications like Photoshop are another story, though.
read my post again, the other part you didn't quote:

"Apple use OpenCL in the OS and Windows will use it in Win8, Office 2010 has GPU acceleration, as do all today's Internet browsers (getting towards everything on and off screen practically), media players, etc. "

Apple's "Core Image" for example uses the GPU for acceleration, as well as paid programs like "Pixelmator".

GPUs are ALREADY in use for "normal" operations.

On March 3, 2011, the Khronos Group announced the formation of the WebCL working group to explore defining a JavaScript binding to OpenCL. This creates the potential to harness GPU and multi-core CPU parallel processing from a web browser.

On May 4, 2011, Nokia Research released an open source WebCL extension for the Firefox web browser, providing a JavaScript binding to OpenCL.


the current versions of Internet browsers such as Opera, Firefox and I.E. ALL have GPU acceleration for many aspects of browsing: off-screen web-page composition, CSS, JPEG decoding, font rendering, window scaling, scrolling, Adobe Flash video acceleration, and an ever-growing portion of the pipeline that makes up the internet surfing experience.

everything in the browser is evolving to take advantage of the CPU and GPU as efficiently as possible, using APIs such as DirectX, OpenCL, etc.
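
to make the OpenCL side concrete, here is a rough, hypothetical C sketch (assuming an OpenCL 1.1 SDK providing CL/cl.h and the standard clGetPlatformIDs / clGetDeviceIDs / clGetDeviceInfo calls) of the discovery step every accelerated app performs first: enumerating whatever compute devices the system exposes, whether that's a Sandy Bridge CPU, a Llano IGP or a discrete Radeon, through one and the same API.

```c
/* hypothetical sketch, assuming an OpenCL 1.1 SDK: list every compute device
 * the system exposes (CPU, IGP or discrete GPU) through a single API */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        /* CL_DEVICE_TYPE_ALL: the same call finds CPUs, APU graphics and discrete cards */
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices) != CL_SUCCESS)
            continue;
        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("platform %u, device %u: %s\n", p, d, name);
        }
    }
    return 0;
}
```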

Office apps, internet browsers, video players.....

between office apps and browsers alone, that's most of the planet's "normal" usage.

what definition of "normal" are you thinking of that refutes that GPUs are already used in today's everyday usage scenarios?
Posted on Reply
#43
p3ngwin1
cadavecaWith browsers and stuff becoming 3D accelerated, I do think this has a purpose in the office other than gaming.
that's not even considering the power efficiency of having the best-suited processor perform the task, saving power.

imagine all of the current companies and corporations, even datacentres, clusters etc. that benefit from either better performance, better efficiency, or their desired balance of both.

this is not just for casual customers on the street; these chips are going to be in everything, and the more efficient they are, the better for us in many ways that few people here seem to appreciate.

some people may ask what the point is of having the browser or Office 2010 accelerated? maybe they won't notice the performance, but they are not the whole equation and they shouldn't be treated as such.

even if they don't perceive the performance, application developers now have more potential processing to make the apps better. if that doesn't happen, then the processing becomes more efficient and prices drop, saving money. even if the money saved is not a consideration for these office people, their managers and bosses WILL consider it.

it's a WIN for us any way we look at it.

more efficient processors benefit us one way or another, and to think "i don't need a more efficient processor" demonstrates ignorance of this technology and the applications you use.
Posted on Reply
#44
Fourstaff
p3ngwin1: what definition of "normal" are you thinking of that refutes that GPUs are already used in today's everyday usage scenarios?
I should have phrased it better: The benefit of having a powerful GPU diminishes extremely rapidly as GPU performance increases for normal applications. In other words, the onboard HD4200 and Sandy Bridge's integrated IGP are going to be enough, and Bulldozer's 400 SPs will not provide a noticeable boost (noticeable as in you can see and feel that it's faster, rather than faster only in benchmarks).

As a side note, I don't feel any difference between my Mobility 4570 and my sister's Mobility 5650 in web page performance.

Edit: Currently the main usage of processing power revolves around the x86 architecture, so as long as we are primarily using the x86, I cannot see how we will be able to migrate to GPGPU.
Posted on Reply
#45
Unregistered
Fourstaff: I should have phrased it better: The benefit of having a powerful GPU diminishes extremely rapidly as GPU performance increases for normal applications. In other words, the onboard HD4200 and Sandy Bridge's integrated IGP are going to be enough, and Bulldozer's 400 SPs will not provide a noticeable boost (noticeable as in you can see and feel that it's faster, rather than faster only in benchmarks).

As a side note, I don't feel any difference between my Mobility 4570 and my sister's Mobility 5650 in web page performance.

Edit: Currently the main usage of processing power revolves around the x86 architecture, so as long as we are primarily using the x86, I cannot see how we will be able to migrate to GPGPU.
you know the ironic part is SB doesn't support OpenCL yet, despite it supporting DX10.1.
Posted on Edit | Reply
#46
Fourstaff
wahdangun: you know the ironic part is SB doesn't support OpenCL yet, despite it supporting DX10.1.
Ah :o I thought after that Larrabee nonsense they would have come up with a GPU supporting OpenCL already. Guess not :ohwell:
Posted on Reply
#47
p3ngwin1
the next evolutionary step in homogeneous processing...
Fourstaff: I should have phrased it better: The benefit of having a powerful GPU diminishes extremely rapidly as GPU performance increases for normal applications. In other words, the onboard HD4200 and Sandy Bridge's integrated IGP are going to be enough, and Bulldozer's 400 SPs will not provide a noticeable boost (noticeable as in you can see and feel that it's faster, rather than faster only in benchmarks).

As a side note, I don't feel any difference between my Mobility 4570 and my sister's Mobility 5650 in web page performance.

Edit: Currently the main usage of processing power revolves around the x86 architecture, so as long as we are primarily using the x86, I cannot see how we will be able to migrate to GPGPU.
for your mobility reference, it's good that you have hardware that satisfies you for the present, and foreseeable future. no need to upgrade unless you perhaps need better battery life from more efficient processors.

regarding x86, that may change with ARM and Microsoft changing things with Google's and Nvidia's help (let alone the continuing investments from T.I., Qualcomm, Marvell, Samsung, etc.), and it certainly will change with languages like OpenCL that enable homogeneous acceleration on any hardware.

currently you need to program for either the GPU or the CPU; we're taking our first baby steps with languages like OpenCL that can automatically take advantage of any OpenCL-compatible hardware. with OpenCL the goal is to write code without thinking about the hardware. we're just starting, and we've already got traction and practical benefits today. this will only get better with time.
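
a minimal, hypothetical C sketch of that idea, assuming an OpenCL 1.1 runtime: the same kernel source is handed to whatever device the runtime reports, and only a device-type flag distinguishes GPU from CPU; nothing in the kernel itself cares which one it lands on.

```c
/* hypothetical sketch, assuming an OpenCL 1.1 runtime: build one kernel for
 * whichever device is present; swap the device-type flag and nothing else changes */
#include <stdio.h>
#include <CL/cl.h>

/* a trivial data-parallel kernel: scale every element of a buffer */
static const char *kernel_src =
    "__kernel void scale(__global float *data, const float factor) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] *= factor;\n"
    "}\n";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);

    /* prefer a GPU, but fall back to the CPU: the same source runs on either */
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
    err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    printf("kernel build for the first available device: %s\n",
           err == CL_SUCCESS ? "ok" : "failed");

    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
```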

as for the hardware, the traditional CPU and GPU have been on a collision course for decades, with increasing features being shared such that it's getting more difficult to differentiate them by the day.

things like the GPU becoming programmable with DirectX's programmable shaders, then later unified shaders, then DirectCompute, and OpenGL's similar parallel efforts.

we now have GPUs like Nvidia's Fermi architecture with L1 and L2 caches, ECC and a bunch of other HPC and supercomputing features that rock the previously CPU-only-driven supercomputing world. even Intel with its Sandy Bridge CPUs has evolved from the crossbar memory bus to what GPUs have been using for years: a ring bus.

the defining lines separating CPUs and GPUs are blurring, and there will very soon be a single processor that computes everything. there will be no more GPU; there will only be an evolved new generation of processors that may still be called CPUs, but they will no longer be "general": they will be agnostic, neither specific like a GPU used to be, nor general like the CPU used to be.

the only question left is what architecture these chips will follow: x86, ARM, or some other flavour.

these new-generation processors will no longer be "jack of all trades, yet master of none"; they will evolve into "master of all"

signs of such evolutionary processor design have been seen already.

Sony evolved their thinking along this path when designing the processors for their PlayStation consoles with Toshiba and IBM. their PlayStation 2 console had a MIPS core with custom VU0 and VU1 vector units. these were similar to DSPs that could blaze through vector maths, but were more programmable than traditional DSPs.
game developers used them in all sorts of ways, from improving graphics functions to add to the GPU's hard-wired feature set, to EA creating an accelerated software stack for Dolby 5.1 running solely on the VU0 unit, freeing up the rest of the CPU.

with the PS3, Sony took the idea forward again and evolved the VU0 and VU1 units, generalizing their functionality even further from their DSP heritage, and came up with the "synergistic processing unit" (SPU), sometimes confusingly called SPEs (synergistic processing elements).

these SPEs would not only be more powerful than the previous VU0 and VU1 units, they would also increase in number, such that whereas in the PS2 they were the minority of the processing in the main CPU, in the PS3's Cell BE they would be the majority of the processing potential. these SPEs would amount to 8 units attached to a familiar PPC core.

Sony prototyped the idea of not having a separate CPU and GPU for the PS3, toying with the idea of two identical Cell BE chips, to be used by developers as they wished. the freedom was there, but the development tools to take advantage of massively parallel processors weren't, as seen with Sega's twin SH-2 processors from Hitachi generations ago.

we have the parallel hardware; we simply need to advance programming languages to take advantage of it. this is the idea behind OpenCL.

to find a better balance, Sony approached Nvidia late into development of the PS3 and finally decided on the 7800GT with its fixed-function vertex and pixel shader technology, to go up against ATI's more modern unified shader architecture in the Xbox 360.

it will be interesting to see Sony's plans for the PS4's architecture if they continue their commitment to massively parallel processing.

meanwhile, PC architectures like Intel's "Larrabee" and AMD's "Fusion" projects show that the evolution of processing is heading towards homogeneous computing, with no specialty chips, and all processing potential efficiently used because there are no idle custom functions.

AMD bought ATI, and their Fusion project will eventually merge the GPU's SIMD units together with the CPU's traditional floating-point units to begin the mating of the CPU and GPU into what will eventually be a homogeneous processor.

just as smart folks like PC game-engine designers and Sony have been predicting since 2006 and beyond
Posted on Reply
#48
p3ngwin1
Fourstaff: Ah :o I thought after that Larrabee nonsense they would have come up with a GPU supporting OpenCL already. Guess not :ohwell:
it's really odd, you would think Intel with its 50x more resources than AMD would have at least passable GPU tech in their latest CPUs. the CPUs are great, but damn, how long does it take for Intel to catch up in graphics?

AMD, small as they are in comparison, made a massive bet buying ATI years ago, and while it may have been a little premature by a year or so, and nearly broke the company... it is already paying MASSIVE rewards.

Intel is at least 2 years away from catching up to what AMD is selling THIS YEAR with its Fusion technology. Bulldozer tech + GPU processors have great potential. these current Fusions are based on old AMD CPU designs, so there's even more potential ahead.

Intel seem to be fumbling around with GPU technology as if they don't understand it or something, like it's exotic or alien. why can't they make a simple and decent GPU to start? what's with the ridiculous sub-standard and underperforming netbook-class GPUs?
Posted on Reply
#49
lashton
hmmmm

Well that's strange, because it seems a Llano is physically smaller than a Sandy Bridge!!
Posted on Reply
#50
laszlo
cadaveca: With browsers and stuff becoming 3D accelerated, I do think this has a purpose in the office other than gaming.
my current office PC is a joke; integrated Intel graphics with 8 MB shared video memory,

the mainboard has only PCI slots; I have an AGP card, but where to put it?

it's a pain in the ass as I can barely watch a presentation, not to mention a movie.

companies buy the cheapest configuration, and this is the rule everywhere, so users in this segment will benefit the most from this solution
Posted on Reply