Friday, June 17th 2011

AMD Charts Path for Future of its GPU Architecture

The future of AMD's GPU architecture looks more open: freed from the shackles of a fixed-function, DirectX-driven evolution model, and one that expands the GPU's role in the PC's central processing well beyond merely accelerating GPGPU applications. At the Fusion Developer Summit, AMD detailed its future GPU architecture, revealing that in the future, AMD's GPUs will have full support for C, C++, and other high-level languages. Integrated with Fusion APUs, these new number-crunching components will be called "scalar co-processors".

Scalar co-processors will combine elements of MIMD (multiple-instruction, multiple-data), SIMD (single-instruction, multiple-data), and SMT (simultaneous multithreading). AMD will ditch the VLIW (very long instruction word) model that has been in use across several of AMD's past GPU architectures. While AMD's GPU model will break from a development cycle pegged to that of DirectX, AMD doesn't believe that APIs such as DirectX and OpenGL will be discarded. Game developers can continue to develop for these APIs; C++ support is aimed more at general-purpose compute applications. It does, however, create a window for game developers to venture out of the API-based development model (specifically DirectX). With its next Fusion processors, the GPU and CPU components will share a truly common memory address space. Among other things, this eliminates the "glitching" players might sometimes experience when games load textures as they go over the crest of a hill.
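The C/C++-on-GPU pitch is easier to picture with a toy kernel. Below is a minimal sketch (plain Python standing in for the C-like kernel code, purely illustrative; `saxpy_kernel` and `launch` are made-up names): each index of a data-parallel loop maps to one GPU thread, and under a truly shared address space the host passes references instead of copying buffers across PCIe.

```python
# A data-parallel "kernel": one invocation per index, no loop-carried
# dependencies, so every iteration could run as an independent GPU thread.
def saxpy_kernel(a, x, y, out, i):
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # On a GPU this dispatch happens in hardware across thousands of threads;
    # on a unified-address-space APU, x/y/out would be shared with the CPU
    # rather than copied to a separate device memory.
    for i in range(n):
        kernel(*args, i)

x = [2.0] * 8
y = [1.0] * 8
out = [0.0] * 8
launch(saxpy_kernel, 8, 3.0, x, y, out)
print(out[0])  # 7.0
```

The point of the "scalar co-processor" direction is that code shaped like this could be written directly in C or C++ and compiled for the GPU, instead of being expressed through a graphics API.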
Source: TechReport

114 Comments on AMD Charts Path for Future of its GPU Architecture

#26
Casecutter
Nvidia bought a physics company… AMD bought a graphics company. So yes, it makes sense that Nvidia wanted a lead and got one. And they kept it (as much as they could) as proprietary intellectual property, which is understandable.

AMD got into the graphics side by dusting off ATI and getting them back into contention, all along wanting to achieve this. It just takes time, and the research is bearing fruit.

The best part for us is that AMD appears committed to the open specification, and that will really make more developers want in.
Posted on Reply
#27
bucketface
This could potentially lead to CUDA becoming open. If AMD can get enough support from developers, they'll have to, or risk seeing it fall by the wayside in favor of something that supports both.
Posted on Reply
#28
Benetanegia
W1zzardthis is basically what intel tried with larrabee and failed
I always thought they failed because they didn't have good enough graphics drivers and as an accelerator it would not be financially viable. I remember reading that for HPC Larrabee was good enough, but you know better than me that in order for these big chips to be viable, you need the consumer market in order to have some volume and refine the process, bin chips, etc. Even if it's a small market like the enthusiast GPU market, with less than 1 million cards sold, that's far more than the tens of thousands of HPC cards you can sell. At least for now. Maybe in some years, with more demand, it would make sense to create a different chip for HPC, but then again the industry is moving in the opposite direction, and I think it's the right direction.
bucketfaceThis could potentially lead to CUDA becoming open. If AMD can get enough support from developers, they'll have to, or risk seeing it fall by the wayside in favor of something that supports both.
Eh? No. CUDA will most probably disappear sometime in the future, when OpenCL catches on. OpenCL is 95% similar to CUDA anyway, if you are to believe CUDA/OpenCL developers, and it's free, so Nvidia doesn't gain anything from the use of CUDA. It's not going anywhere now, and probably not in 1 or 2 years either, because Nvidia keeps updating CUDA every now and then and stays way ahead with more features (the advantage of not depending on standardization by a consortium). At some point it should stagnate and OpenCL should be able to catch up, even if its evolution depends on the Khronos Group.
Posted on Reply
#29
pantherx12
BenetanegiaAMD has followed a totally passive approach because that's the cheaper approach. I'm not saying that's a bad strategy for them, but they have not fought for the GPGPU side until very recently.
No they haven't man, they just don't bang on about it.

They talk directly to developers and have had a forum running for years where people can communicate about it.

Go on the AMD developer forums to see : ]
Posted on Reply
#30
bucketface
BenetanegiaEh? No. CUDA will most probably disappear sometime in the future, when OpenCL catches on. OpenCL is 95% similar to CUDA anyway, if you are to believe CUDA/OpenCL developers, and it's free, so Nvidia doesn't gain anything from the use of CUDA. It's not going anywhere now, and probably not in 1 or 2 years either, because Nvidia keeps updating CUDA every now and then and stays way ahead with more features (the advantage of not depending on standardization by a consortium). At some point it should stagnate and OpenCL should be able to catch up, even if its evolution depends on the Khronos Group.
All I was saying is that if Nvidia plans on seeing CUDA through the next 5 years or so, they'll almost certainly have to open it up. I don't know the specifics of CUDA vs OpenCL, but my understanding was that CUDA, as it stands, is the more robust platform.
Posted on Reply
#31
St.Alia-Of-The-Knife
"Full GPU support of C, C++ and other high-level languages"

I know that the GPU is way faster than the CPU,
so does this mean the GPU will replace the CPU in common tasks too?
Posted on Reply
#32
seronx
1. The architecture explained in this diagram is the HD 7000

VLIW5 -> VLIW4 -> ACE or CU




www.realworldtech.com/forums/index.cfm?action=detail&id=120431&threadid=120411&roomid=2
Name: David Kanter 6/15/11

Dan Fay on 6/14/11 wrote:
---------------------------
>Hi David,
>
>If you're not already planning to do so, I'd be really curious how well this architecture
>is expected to perform with GPGPU.
>
>
>Thanks!

At a high level, it appears that their next architecture will exceed Fermi in a number of areas. This is to be expected, and I'll be most interested to see how Kepler pushes programmability forward.

Some of the GPGPU improvements include:
1. Real caches
2. Graphs of data parallel kernels
3. Exceptions, recursion, function calls
4. Better branching, predication, masking, control flow
5. No more VLIW, instead a scalar+vector arch with fewer scheduling rules and more regular code generation
6. Acquire/release consistency model
7. ECC support for some SKUs
8. Substantially better DP performance for some SKUs
9. Faster global atomics

I don't really want to give away too much, since I will be working on an article soon.


David
www.realworldtech.com/

^ wait for the article

2. In 2 years you will see this GPU in the Z-series APU (the tablet APU)
St.Alia-Of-The-Knife"Full GPU support of C, C++ and other high-level languages"

I know that the GPU is way faster than the CPU,
so does this mean the GPU will replace the CPU in common tasks too?
In a cloud future, yes; the CPU will only need to issue commands in the future
BenetanegiaFermi already does most of those things, so it's quite the opposite. Many of the new features in AMD's design were implemented in G80 5 years ago, or later in GT200. AMD is way behind on this, and it's almost funny to see that they are going to follow the same architectural principle Nvidia has been using for the past 5 years. Of course they are going to make the jump instead of doing it gradually like Nvidia did, but that's only possible thanks to Nvidia doing the hard work and opening doors for years.
AMD GPUs have been GPGPU-capable since their high-end GPUs could do DP

This architecture just allows a bigger jump (ahead of Kepler)
NC37Hopefully it won't turn into another DX10.1. ATI does it, but NV says no so the industry caves to NV.

Course this is much bigger. Saw this coming. Our CPUs are gonna be replaced by GPUs eventually. Those who laughed at AMD's purchase of ATI...heh. Nice move and I guess it makes more sense to ditch the ATI name if you are gonna eventually merge the tech even more. Oh well, I still won't ever call their discrete GPUs AMD.
Nvidia was very late; some late 200-series cards can do DX10.1, but not very well
BenetanegiaNvidia has been much more in contact with their GPGPU customers, asking what they needed and implementing it. And once it was implemented and tested, asking what's next and implementing that too. They have been getting the answers, and now AMD only has to implement them. Nvidia has been investing a lot in universities to teach and promote GPGPU for a very long time too. Much sooner than anyone else thought about promoting the GPGPU route.

AMD has followed a totally passive approach because that's the cheaper approach. I'm not saying that's a bad strategy for them, but they have not fought for the GPGPU side until very recently.



In fact, yes. Entrepreneurial companies constantly invest in products whose viability is still in question and whose markets are small. They create the market.

There's nothing wrong with being one of the followers, just give credit where credit is due. And IMO AMD deserves none.



They have had top performers in gaming. Other than that, Nvidia has been way ahead in professional markets.

And AMD did not pioneer GPGPU. It was a group at Stanford who did it, and yes, they used X1900 cards, and yes, AMD collaborated, but that's far from pioneering it, and it was not really GPGPU; it mostly used DX and OpenGL for doing math. By the time that was happening, Nvidia had already been working on GPGPU in their architecture for years, as can be seen with the launch of G80 only a few months after the introduction of X1900.



That for sure is a good thing. My comments were just about how funny it is that after so many years of AMD promoting VLIW, telling everyone and their dog that VLIW was the way to go and a much better approach, and even downplaying and mocking Fermi, they are now going to do the same thing Nvidia has been doing for years.

I already predicted this change in direction a few years ago anyway. When Fusion was first promoted I knew they would eventually move in this direction, and I also predicted that Fusion would represent a turning point in how aggressively AMD would promote GPGPU. And that's been the case. I have no love (nor hate) for AMD for this simple reason. I understand they are the underdog and need some marketing on their side too, but they always sell themselves as the good company, yet do nothing but downplay others' strategies until they are able to follow them, and they ultimately do follow them. Just a few months ago (at the HD 6000 introduction) VLIW was the only way to go, almost literally the godsend, while Fermi was mocked as the wrong way to go. I knew it was all marketing BS, and now it's been demonstrated, but I guess people have short memories, so it works for them. Oh well, all these fancy new features are NOW the way to go. And it's true, except there's nothing new in them...
The reason they are changing is not the GPGPU issue; it's more about scaling.

Theoretical -> Realistic
Performance didn't scale correctly

It's all over the place. Well, scaling is a GPGPU issue, but this architecture will at least allow for better scaling ^
Posted on Reply
#33
Neuromancer
Wow how times are turning backwards.

I got me a new math co-processor!
Posted on Reply
#34
Sapientwolf
St.Alia-Of-The-Knife"Full GPU support of C, C++ and other high-level languages"

I know that the GPU is way faster than the CPU,
so does this mean the GPU will replace the CPU in common tasks too?
The GPU is faster than the CPU at arithmetic operations that can occur in parallel (like video and graphics). The CPU is much faster at sequential logic. The CPU has been tailored toward its area and the GPU to its own as well. However, now we see the gray area between the two growing more and more. So AMD is working hard to make platforms in which the CPU can offload highly parallel arithmetic loads to their GPUs, and to make it easier for programmers to program their GPUs outside the realm of DirectX and OpenGL.

One will not replace the other; they will merge, and instructions will be executed on the hardware best suited for the job.
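The division of labor described above can be sketched in a few lines (a Python stand-in, illustrative only): the first loop's iterations are independent and map naturally onto a GPU's wide SIMD units, while the second carries a dependency from one iteration to the next and therefore suits a latency-optimized CPU core.

```python
data = list(range(16))

# Data-parallel: each element is independent -> ideal for GPU offload.
squared = [v * v for v in data]

# Sequential: each step depends on the previous result -> suits the CPU.
acc = 0
chain = []
for v in data:
    acc = acc * 2 + v   # loop-carried dependency; iterations cannot run in parallel
    chain.append(acc)

print(squared[3], chain[2])  # 9 4
```

The "gray area" is workloads that mix both patterns, which is exactly what a shared address space between CPU and GPU is meant to make cheap to split.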
Posted on Reply
#35
Hayder_Master
So they point to a big improvement in performance, and only benchmarks can prove it.
Posted on Reply
#36
Disruptor4
Hayder_MasterSo they point to a big improvement in performance, and only benchmarks can prove it.
Well, probably not only benchmarks. You will see a decrease in the time it takes to process certain things, similar, for example, to how decoding and re-encoding can be done by the GPU in certain programs.
Posted on Reply
#37
Thatguy
SapientwolfThe GPU is faster than the CPU at arithmetic operations that can occur in parallel (like video and graphics). The CPU is much faster at sequential logic. The CPU has been tailored toward its area and the GPU to its own as well. However, now we see the gray area between the two growing more and more. So AMD is working hard to make platforms in which the CPU can offload highly parallel arithmetic loads to their GPUs, and to make it easier for programmers to program their GPUs outside the realm of DirectX and OpenGL.

One will not replace the other; they will merge, and instructions will be executed on the hardware best suited for the job.
The decoder will handle this job, more than likely.
Posted on Reply
#38
xtremesv
The future is fusion, remember? CPU and GPU becoming one, it's going to happen, I believe it, but are we gonna really "own" it?

These days cloud computing is starting to make some noise, and it makes sense: average guys/gals are not interested in FLOPS performance; they just want to listen to their music, check Facebook, and play some fun games. What I'm saying is that in the future we'll only need a big touch screen with a mediocre ARM processor to play Crysis V. The processing, you know, GPGPU, the heat and stuff, will be somewhere in China; we'll be given just what we need, the final product, through a huge broadband pipe. If you've seen the movie WALL-E, think about living on that spaceship, the Axiom; it'd be something like that... creepy, eh?
Posted on Reply
#39
Wile E
Power User
You know, AMD has always had these great gpu hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on it (GPU accelerated Havok anyone?), but the software never materializes.

I'll get excited about this when it is actually being implemented by devs in products I can use.
Posted on Reply
#40
seronx
Well by 2013

The APU
with

Enhanced Bulldozer + Graphic Core Next

Will be perfect unison

and with

2013
FX+AMD Radeon 9900 series
Next-Gen Bulldozer + Next-Gen Graphic Core Next

and DDR4+PCI-e 3.0 will equal MAXIMUM POWUH!!!

:rockout::rockout::rockout: :rockout::rockout::rockout: :rockout::rockout::rockout:
Posted on Reply
#41
pantherx12
Wile EYou know, AMD has always had these great gpu hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on it (GPU accelerated Havok anyone?), but the software never materializes.

I'll get excited about this when it is actually being implemented by devs in products I can use.
I know it's only one thing, but Futuremark's 3DMark 11 does soft-body simulation on the GPU on both AMD and Nvidia cards.

Only one thing, but it does point to things to come I think.
Posted on Reply
#42
W1zzard
Wile EYou know, AMD has always had these great gpu hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on it (GPU accelerated Havok anyone?), but the software never materializes.
yup .. i dont think amd has successfully brought any software feature to market.. maybe x86_64 if you count intel adopting it
Benetanegiathat in order for these big chips to be viable, you need the consumer market in order to have some volume and refine the process, bin chips, etc. Even if it's an small market like the enthusiast GPU market, with less than 1 million cards sold, that's far more than the 10's of thousands HPC cards you can sell. At least for now. Maybe in some years, with more demand, it would make sense to create a different chip for HPC, but then again the industry is moving in the opposite direction, and I think it's the right direction.
i agree, but why does amd waste their money with useless computation features that apparently have nowhere to go other than video encode and some hpc apps ?
if there was some killer application for gpu computing wouldn't nvidia/cuda have found it by now?
Posted on Reply
#43
RejZoR
And even that hyped video encoding is mostly done on the CPU, which makes it utterly useless, as it's not much faster than pure CPU anyway. They were bragging about physics as well, but they never delivered. Not with Havok, not with Bullet, not with anything. I mean, if you want to make something, endorse it, embrace it, give developers a reason to develop and build on that feature.
Instead they announce it, brag about it, and then we can all forget about it, as it'll never happen.
They should invest those resources into more productive things instead of wasting them on such useless stuff.

The only thing they pulled off properly is MLAA, which uses shaders to post-process the screen and anti-alias it. It works great in pretty much 99.9% of games, it's what they promised, and I hope they won't remove it like they did with most of their features (Temporal AA, TruForm, SmartShaders, etc.). Sure, some technologies became redundant, like TruForm, but others just died because AMD didn't bother to support them. SmartShaders were a good example. HDRish was awesome, giving old games a fake HDR effect that looked pretty good. But it worked only in OpenGL, and someone else had to make it. AMD never added anything useful for D3D, which is what most games use. So what's the point?! They should really get their stuff together, stop wasting time and resources on useless stuff, and start making cool features that can last. Like, again, MLAA.
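For context, the post-process idea behind MLAA is simple to sketch (a toy 1-D version in Python; `mlaa_1d` is a hypothetical name and this is nothing like AMD's actual shader implementation): detect sharp luminance discontinuities, then blend the samples on either side of each detected edge.

```python
def mlaa_1d(row, threshold=0.5):
    # Step 1: mark edges where neighboring samples differ sharply.
    edges = [abs(row[i + 1] - row[i]) > threshold for i in range(len(row) - 1)]
    # Step 2: blend across each detected edge. Real MLAA reconstructs the
    # edge's shape and applies weighted coverage; this toy version just
    # averages the two samples that straddle the edge.
    out = row[:]
    for i, is_edge in enumerate(edges):
        if is_edge:
            blended = (row[i] + row[i + 1]) / 2
            out[i] = out[i + 1] = blended
    return out

print(mlaa_1d([0.0, 0.0, 1.0, 1.0]))  # [0.0, 0.5, 0.5, 1.0]
```

Because it runs on the shader cores over the finished frame, it works regardless of the API the game used, which is why it applies to nearly every title.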
Posted on Reply
#44
swaaye
ATI's support of GPGPU hasn't been as great as some here say. OpenCL support only goes back to the HD 4000 because older chips have limitations that make it basically infeasible. In other words, HD 3000 and 2000 are very poor GPGPU chips. X1900 isn't really even worth mentioning.

You can on the other hand run CUDA on old G80. NV has definitely been pushing GPGPU harder.

On the other, other hand however I can't say that GPGPU affects me whatsoever. I think AMD is mostly after that Tesla market and Photoshop filters. I won't be surprised if this architecture is less efficient for graphics. I sense a definite divergence from just making beefier graphics accelerators. NV's chips have proven with their size that GPGPU features don't really mesh with graphics speed.
Posted on Reply
#45
Thatguy
xtremesvThe future is fusion, remember? CPU and GPU becoming one, it's going to happen, I believe it, but are we gonna really "own" it?

These days cloud computing is starting to make some noise, and it makes sense: average guys/gals are not interested in FLOPS performance; they just want to listen to their music, check Facebook, and play some fun games. What I'm saying is that in the future we'll only need a big touch screen with a mediocre ARM processor to play Crysis V. The processing, you know, GPGPU, the heat and stuff, will be somewhere in China; we'll be given just what we need, the final product, through a huge broadband pipe. If you've seen the movie WALL-E, think about living on that spaceship, the Axiom; it'd be something like that... creepy, eh?
Even at light speed, the latencies will kill you; there is no way around client-side power. Resist the cloud, it's bullshit anyway.
Posted on Reply
#46
Thatguy
W1zzardyup .. i dont think amd has successfully brought any software feature to market.. maybe x86_64 if you count intel adopting it



i agree, but why does amd waste their money with useless computation features that apparently have nowhere to go other than video encode and some hpc apps ?
if there was some killer application for gpu computing wouldn't nvidia/cuda have found it by now?
Because soon enough the hardware will do the work anyway. It's not always about software. As for Nvidia, they painted themselves into a corner years ago.
Posted on Reply
#47
Thatguy
RejZoRAnd even that hyped video encoding is mostly done on the CPU, which makes it utterly useless, as it's not much faster than pure CPU anyway. They were bragging about physics as well, but they never delivered. Not with Havok, not with Bullet, not with anything. I mean, if you want to make something, endorse it, embrace it, give developers a reason to develop and build on that feature. Instead they announce it, brag about it, and then we can all forget about it, as it'll never happen. They should invest those resources into more productive things instead of wasting them on such useless stuff.

The only thing they pulled off properly is MLAA, which uses shaders to post-process the screen and anti-alias it. It works great in pretty much 99.9% of games, it's what they promised, and I hope they won't remove it like they did with most of their features (Temporal AA, TruForm, SmartShaders, etc.). Sure, some technologies became redundant, like TruForm, but others just died because AMD didn't bother to support them. SmartShaders were a good example. HDRish was awesome, giving old games a fake HDR effect that looked pretty good. But it worked only in OpenGL, and someone else had to make it. AMD never added anything useful for D3D, which is what most games use. So what's the point?! They should really get their stuff together, stop wasting time and resources on useless stuff, and start making cool features that can last. Like, again, MLAA.
They should call D3D "round the bend, down the street, up the alley, over two blocks, and in the ditch 3D," because it sure as shit ain't direct. AMD will move away from DirectX; they see where the market is headed.
Posted on Reply
#48
W1zzard
ThatguyThey should call D3D "round the bend, down the street, up the alley, over two blocks, and in the ditch 3D," because it sure as shit ain't direct. AMD will move away from DirectX; they see where the market is headed.
the market is headed toward console games that are directx (xbox360) and that get recompiled with a few clicks for pc to maximize developer $$
Posted on Reply
#49
RejZoR
Exactly. If they try to invent something new and don't push it enough, like they never really did for anything, they are just plain stupid. DirectX is the way to go at the moment, mostly because of what W1z said. Profit.
Posted on Reply
#50
Thatguy
W1zzardthe market is headed toward console games that are directx (xbox360) and that get recompiled with a few clicks for pc to maximize developer $$
If you say so. I think you're off base here, and the Microsoft design will cause huge problems downstream. The company ready for tomorrow will be the winner tomorrow.
Posted on Reply