Friday, June 17th 2011

AMD Charts Path for Future of its GPU Architecture

The future of AMD's GPU architecture looks more open: broken from the shackles of a fixed-function, DirectX-driven evolution model, with the GPU playing a much bigger role in the PC's central processing than merely accelerating GPGPU applications. At the Fusion Developer Summit, AMD detailed its future GPU architecture, revealing that in the future, AMD's GPUs will have full support for C, C++, and other high-level languages. Integrated into Fusion APUs, these new number-crunching components will be called "scalar co-processors".
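To make that concrete, here is a minimal sketch, in plain standalone C++ rather than any actual AMD toolchain or API, of the kind of kernel full C++ support would enable: an ordinary SAXPY loop written once as normal C++ code, with no HLSL/GLSL shader or DirectCompute wrapper, that a future compiler could in principle target at the scalar co-processor.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical illustration only: today this runs on the CPU. AMD's stated
// goal is that code like this could compile for the GPU directly, without
// being rewritten in a shader language.
std::vector<float> saxpy(float a, const std::vector<float>& x,
                         const std::vector<float>& y) {
    std::vector<float> out(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        out[i] = a * x[i] + y[i];  // each iteration is independent: data-parallel
    return out;
}
```

Each iteration is independent, which is exactly the data-parallel shape GPUs exploit; the point of high-level language support is that such a function would not need a shader-language rewrite to run on the co-processor.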

Scalar co-processors will combine elements of MIMD (multiple-instruction, multiple-data), SIMD (single-instruction, multiple-data), and SMT (simultaneous multithreading). AMD will ditch the VLIW (very long instruction word) model that has been in use across several of its past GPU architectures. While AMD's GPU model will break from the shackles of development pegged to that of DirectX, the company doesn't believe that APIs such as DirectX and OpenGL will be discarded. Game developers can continue to develop for these APIs; C++ support is aimed more at general-purpose compute applications. It does, however, open a window for game developers to venture out of the API-based development model (specifically DirectX). With its next Fusion processors, the GPU and CPU components will share a truly common memory address space. Among other things, this eliminates the "glitching" players might sometimes experience when games load textures as they go over the crest of a hill.
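The texture-glitch claim comes down to copies. Below is a purely illustrative C++ model (no real GPU or driver calls; both function names are invented for this sketch) contrasting the discrete-GPU data flow, where a texture must be staged into a separate device memory pool, with a shared address space, where the GPU reads the texture where it already sits.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical model of the data flow only; this is not AMD's actual API.
struct Texture { std::vector<uint8_t> texels; };

// Discrete model: the texture must be staged into a separate "device"
// copy before the GPU can sample it. That bus transfer is the stall.
std::size_t bytes_copied_discrete(const Texture& t) {
    std::vector<uint8_t> device_copy(t.texels);  // simulated PCIe upload
    return device_copy.size();                   // bytes moved across the bus
}

// Shared-address-space model: CPU and GPU dereference the same memory,
// so loading a texture moves zero bytes.
std::size_t bytes_copied_shared(const Texture&) {
    return 0;  // the GPU reads the texture where it already lives
}
```

With a 1 MB texture, the discrete path moves the full megabyte across the bus at load time while the shared path moves nothing, which is the stall the article says a common address space removes.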

Source: TechReport

114 Comments on AMD Charts Path for Future of its GPU Architecture

#1
cadaveca
My name is Dave
Thatguy said:
If you say so. I think you're off base here, and the Microsoft design will offer huge problems downstream. The company ready for tomorrow will be the winner tomorrow.
You MUST keep in mind that all of this is business, and as such, the future of technology is heavily influenced by the businesses behind it. The least amount of work that brings in the most dollars is what WILL happen, without a doubt, as this is the nature of business.


What needs to be done is for someone to effectively show why other options make more sense, not from a technical standpoint, but from a business standpoint.

And as mentioned, none of these technologies AMD/ATI introduced over the years really seems to make much business sense, and as such, they fail hard.


AMD's board now seems to realize this... Dirk was dumped, and Bulldozer "delayed", simply because that made the MOST business sense. They met the market demand, and rightly so, as market demand for those products is so high that they have no choice but to delay the launch of Bulldozer.

Delaying a new product, because an existing one is in high demand, makes good business sense.
Posted on Reply
#2
swaaye
What I see is AMD selling all of their consumer CPUs under $200, even their 6 core chips. They need new CPU tech that they can get some better margins on. Intel charges 3-4x more for their 6 core chips because they have clear performance dominance.

Buying ATI was a good move because both AMD and NV are now obviously trying to bypass Intel's dominance by creating a new GPU compute sector. I'm not sure if that will ever benefit the common user though because of the limited types of computing that work well with GPUs.

Also, Llano and Brazos are redefining the low end in a way that Intel didn't bother to, so that's interesting too.
Posted on Reply
#3
Wile E
Power User
Thatguy said:
Becuase soon enough the hardware will do the work anyways. Its not always about software. As to Nvidia, they painted themselves into a corner years ago.
The hardware needs software to operate. This comment doesn't even make any sense.
Posted on Reply
#4
Thatguy
Wile E said:
The hardware needs software to operate. This comment doesn't even make any sense.
Sure it does. What if the CPU scheduler and the decoder know how to break the workloads across int, FPU, VLIW, etc.? If it gets smart enough, and there's no reason it can't be, then the OS just sees emulated x86 as plain x86, but the underlying microarchitecture handles a lot of the heavy lifting. If you don't see the genius behind Bulldozer, you're looking in the wrong places. How hard would it be for AMD to introduce VLIW-like elements into that modular core design? Not terrifically hard. Better believe that this is the way forward. Traditional x86 is dead.
Posted on Reply
#5
bucketface
RejZoR said:
And even that hyped video encoding is mostly done on CPU which makes it utterly useless as it's not much faster than pure CPU anyway. They were bragging about physics as well but they never made them. Not with Havok, not with Bullet, not with anything. I mean if you want to make something, endorse it, embrace it, give developers a reason to develop and build on that feature.
In stead they announce it, brag about it and then we can all forget about it as it'll never happen.
They should invest those resources into more productive things instead of wasting them on such useless stuff.

The only thing they pulled off properly is MLAA, which uses shaders to process the screen and anti-alias it. It functions great in pretty much 99.9% of games, is what they promised, and I hope they won't remove it like they did most of their features (Temporal AA, TruForm, SmartShaders, etc.). Sure, some technologies became redundant, like TruForm, but others just died because AMD didn't bother to support them. SmartShaders were a good example. HDRish was awesome, giving old games a fake HDR effect that looked pretty good. But it worked only in OpenGL, and someone else had to make it. AMD never added anything useful for D3D, which is what most games use. So what's the point?! They should really get their stuff together, stop wasting time and resources on useless stuff, and start making cool features that can last. Like, again, MLAA.
Most games these days use at least parts of the Havok, Bullet, or whatever libraries. Resident Evil 5 and Company of Heroes are two that mention the use of Havok on the box. Bad Company 2 used parts of Havok or Bullet? Most physics come from these libraries. It's a lot easier for devs than writing their own.
(The below is in reply to someone above; I'm not sure how relevant it is, but it's true nonetheless.)
The whole do-what-makes-the-most-money-now-and-deal-with-the-consequences-later ideology is why the American economy is in the state that it is. Companies are like children: they want the candy, and lots of it, now, but then they make themselves sick because they had too much. A responsible parent regulates them, no matter how big a tantrum they throw, because they know that cleaning up the mess that results from letting them do as they please is much worse. Just saying companies will do what makes the biggest short-term gains regardless of the long-term consequences doesn't help you or me see better games.
Posted on Reply
#6
Damn_Smooth
Speaking of AMD's graphics future, this is a long but interesting read.

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute
Graphics Core Next (GCN) is the architectural basis for AMD’s future GPUs, both for discrete products and for GPUs integrated with CPUs as part of AMD’s APU products. AMD will be instituting a major overhaul of its traditional GPU architecture for future generation products in order to meet the direction of the market and where they want to go with their GPUs in the future.
Posted on Reply
#8
Wile E
Power User
Thatguy said:
Sure it does. What if the CPU scheduler and the decoder know how to break the workloads across int, FPU, VLIW, etc.? If it gets smart enough, and there's no reason it can't be, then the OS just sees emulated x86 as plain x86, but the underlying microarchitecture handles a lot of the heavy lifting. If you don't see the genius behind Bulldozer, you're looking in the wrong places. How hard would it be for AMD to introduce VLIW-like elements into that modular core design? Not terrifically hard. Better believe that this is the way forward. Traditional x86 is dead.
There is no way to do it transparently to the OS. You still need software to tell the scheduler what type of info is coming down the pipeline. It will require a driver at minimum.
Posted on Reply
#9
Thatguy
Wile E said:
There is no way to do it transparently to the OS. You still need software to tell the scheduler what type of info is coming down the pipeline. It will require a driver at minimum.
Why? The driver makes up for the lack of logic on the chip.
Posted on Reply
#10
Wile E
Power User
If they were capable of giving a chip that kind of logic at this point, we would have things like multi-GPU gfx cards that show up to the OS as a single gpu.

We aren't anywhere near the chips being able to independently determine data type and scheduling like that.
Posted on Reply
#11
Thatguy
Wile E said:
If they were capable of giving a chip that kind of logic at this point, we would have things like multi-GPU gfx cards that show up to the OS as a single gpu.

We aren't anywhere near the chips being able to independently determine data type and scheduling like that.
What do you think all this APU nonsense is about? Popcorn on Tuesdays?
Posted on Reply
#12
jagd
Havok is different from the others. Intel bought Havok and decided to use it as a software physics API to advertise Intel CPUs. I don't think anyone would have done anything different from what AMD did about this. No one would use a software API when you could do it in hardware.

Wile E said:
You know, AMD has always had these great gpu hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on it (GPU accelerated Havok anyone?), but the software never materializes.

I'll get excited about this when it is actually being implemented by devs in products I can use.
W1zzard said:
Yup... I don't think AMD has successfully brought any software feature to market. Maybe x86_64, if you count Intel adopting it.
I see cloud computing as a renamed version of the old terminal/thin-client-server concept. With online gaming, the problem is the connection more than the hardware; you'll need a rock-stable connection, which is always hard to find. http://en.wikipedia.org/wiki/Thin_client
xtremesv said:
The future is fusion, remember? CPU and GPU becoming one, it's going to happen, I believe it, but are we gonna really "own" it?

These days cloud computing is starting to make some noise, and it makes sense: average guys/gals are not interested in FLOPS performance; they just want to listen to their music, check Facebook, and play some fun games. What I'm saying is that in the future we'll only need a big touch screen with a mediocre ARM processor to play Crysis V. The processing, you know, GPGPU, the heat and stuff, will be somewhere in China; we'll be given just what we need, the final product, through a huge broadband connection. If you've seen the movie Wall-E, think about living in that spaceship Axiom. It'd be something like that... creepy, eh?
Posted on Reply
#14
a_ump
Thatguy said:
What do you think all this APU nonsense is about? Popcorn on Tuesdays?
It's about getting a CPU and GPU into one package, one die, eventually one chip, which will be way more cost-effective than two separate chips. Oh, and taking over the entry/low end of the market from Intel.

That's what that APU common sense is about :P


Damn_Smooth said:
Speaking of AMD's graphics future, this is a long but interesting read.

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute
Very nice find, sir. I want to read it all, but I might have to bookmark it. :toast:
Posted on Reply
#16
Thatguy
a_ump said:
It's about getting a CPU and GPU into one package, one die, eventually one chip, which will be way more cost-effective than two separate chips. Oh, and taking over the entry/low end of the market from Intel.

That's what that APU common sense is about :P
Long range, it's about coming to grips with serial processing and the lack of compute power you get from it.
Posted on Reply
#17
a_ump
Thatguy said:
Long range, it's about coming to grips with serial processing and the lack of compute power you get from it.
You're talking about GCN then. I was talking short range :P.

Honestly, I definitely think AMD is going to take a leap in innovation over Nvidia these next 5 years or so. I really do think AMD's experience with CPUs is going to pay off when it comes to integrating compute performance into their GPU... well, APU. Nvidia has the lead right now, but I can see AMD loosening that grip.
Posted on Reply
#18
Thatguy
a_ump said:
You're talking about GCN then. I was talking short range :P.

Honestly, I definitely think AMD is going to take a leap in innovation over Nvidia these next 5 years or so. I really do think AMD's experience with CPUs is going to pay off when it comes to integrating compute performance into their GPU... well, APU. Nvidia has the lead right now, but I can see AMD loosening that grip.
I don't think Nvidia is going to come much further than they have thus far. AMD is set to put a whoopin' on Intel and Nvidia in that area. Given the limits of IPC and clock speed, it's the only way to get where they need to go in the first place.
Posted on Reply
#19
Wile E
Power User
Thatguy said:
What do you think all this APU nonsense is about? Popcorn on Tuesdays?
A fancy name for using gpu shaders to accelerate programs. AKA: the same shit we already have for gfx cards in the way of CUDA/whatever Stream was renamed to.

I would bet money this is not hardware based at all, and requires special software/drivers to work properly.
Posted on Reply
#20
pantherx12
Wile E said:
A fancy name for using gpu shaders to accelerate programs. AKA: the same shit we already have for gfx cards in the way of CUDA/whatever Stream was renamed to.

I would bet money this is not hardware based at all, and requires special software/drivers to work properly.
I bet it is hardware based, and it's not just a fancy name either: it's gpu shaders in the cpu (or next to it, in this case), meaning your cpu/gpu (apu) can handle all the physics and your gpu can focus on being a graphics card.

Or, if all of AMD's cpus go this way, it means people don't have to buy a gpu straight away, which is also nice.
Posted on Reply
#21
Thatguy
Wile E said:
A fancy name for using gpu shaders to accelerate programs. AKA: the same shit we already have for gfx cards in the way of CUDA/whatever Stream was renamed to.

I would bet money this is not hardware based at all, and requires special software/drivers to work properly.
No, it's about compute power. These first-generation APUs are about figuring out the basic technology: how to make both sets of transistors on the same piece of silicon. The next step will be more transistors on both sides, cpu and gpu, and the step beyond that will be an integration of x86 cpu logic and gpu parallelism, which will give AMD a massive advantage over Nvidia and Intel in compute power and heavy workloads.

AMD got it right 6 years ago when they started down this road; that's why Bulldozer is modular.
Posted on Reply
#22
Wile E
Power User
pantherx12 said:
I bet it is hardware based, and it's not just a fancy name either: it's gpu shaders in the cpu (or next to it, in this case), meaning your cpu/gpu (apu) can handle all the physics and your gpu can focus on being a graphics card.

Or, if all of AMD's cpus go this way, it means people don't have to buy a gpu straight away, which is also nice.
No it isn't. It's basically a gpu put on the same pcb as the cpu. The concept is exactly the same as current gpu accelerated programs. The only difference is the location of the gpu.
Thatguy said:
No, it's about compute power. These first-generation APUs are about figuring out the basic technology: how to make both sets of transistors on the same piece of silicon. The next step will be more transistors on both sides, cpu and gpu, and the step beyond that will be an integration of x86 cpu logic and gpu parallelism, which will give AMD a massive advantage over Nvidia and Intel in compute power and heavy workloads.

AMD got it right 6 years ago when they started down this road; that's why Bulldozer is modular.
Will give an advantage =/= currently having an advantage.

Again, this is just gpgpu. Same thing we've had for ages. It is not transparent to the OS, and must specifically be coded for. Said coding is always where AMD ends up dropping the ball on this crap. I will not be excited until I see this actually being used extensively in the wild.
Posted on Reply
#23
pantherx12
No, it's on the same silicon, man; there's no latency in the CPU-GPU communication (or very little).

It does have benefits.
Posted on Reply
#24
Thatguy
Wile E said:
No it isn't. It's basically a gpu put on the same pcb as the cpu. The concept is exactly the same as current gpu accelerated programs. The only difference is the location of the gpu. Will give an advantage =/= currently having an advantage.

Again, this is just gpgpu. Same thing we've had for ages. It is not transparent to the OS, and must specifically be coded for. Said coding is always where AMD ends up dropping the ball on this crap. I will not be excited until I see this actually being used extensively in the wild.
Imagine the power of a GPU with the programming front end of x86 or x87, which are widely supported instruction sets in compilers right now.

That's where this is headed: INT + GPU. The FPU is on borrowed time, and that's likely why they shared it.
Posted on Reply
#25
cadaveca
My name is Dave
Wile E said:
It is not transparent to the OS, and must specifically be coded for.
You can thank nVidia for that. Had they actually adopted DX9 and DX10 properly, all the needed software would be part of the OS by now. But due to them doing their own thing, we the consumers got screwed.

I don't know why you even care if it uses software. All computing does... PCs are useless without software.
Posted on Reply