Originally Posted by Aquinus
Except you can't process a regular application through a pipeline the way a GPU does, because GPU data is all uniform, while a computer program issues multiple different instructions per clock cycle. A GPU is given a large set of data and told to perform a single task on all of it, so it does it the same way every time. A CPU runs instruction after instruction; there isn't much in that model that resembles what a GPU does.
A shader is small because it can perform only a limited number of instructions and has no control mechanism and no write-back. There is no concept of threads in a GPU; it is an array of one or more sets of data that will have the same operation performed on the entire set. A shader is also SIMD, not MIMD as you're describing.
Where a CPU can carry out instructions like "move 10 bytes from memory location A to memory location B," a GPU does something more like "multiply every item in the array by 1.43."
If it is so simple, why hasn't anyone else figured it out? I'm still convinced that you don't quite know what you're talking about.
I do have a bachelor's degree in computer science, not to mention that I'm employed as a systems admin and a developer.
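The scalar-vs-data-parallel contrast in the quote above can be sketched in plain Python. This is a toy illustration only: the function names (`cpu_style_copy`, `simd_multiply`) are made up, and on real hardware the "SIMD" part runs in lockstep across many lanes rather than as a Python loop.

```python
# Contrast: a CPU-style scalar instruction vs. a GPU-style SIMD operation.
# Plain-Python sketch; all names here are illustrative, not a real API.

def cpu_style_copy(memory, src, dst, n):
    """CPU flavour: 'move n bytes from location src to location dst',
    executed step by step, one element per iteration."""
    for i in range(n):
        memory[dst + i] = memory[src + i]

def simd_multiply(data, factor):
    """GPU flavour: apply the SAME operation to every element of the
    array. Real hardware does this across many lanes at once; here we
    just map it over the list."""
    return [x * factor for x in data]

memory = list(range(20))
cpu_style_copy(memory, src=0, dst=10, n=10)
print(memory[10:20])                     # the ten copied values

print(simd_multiply([1.0, 2.0, 3.0], 1.43))  # "multiply every item by 1.43"
```

The point of the sketch is the shape of the work, not the speed: the first function is inherently sequential control flow, the second is one operation fanned out over a whole data set.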
True, unless you can get the CPU to sort things out and let the GPU do what it's best at: the CPU fetches and decodes, then execution is routed to whichever unit is more efficient, CPU or GPU.
I think that's the approach AMD is taking with APUs in the future (HSA).
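The idea above (a front end that routes each piece of work to the better-suited unit) can be sketched as a toy dispatcher. To be clear, everything here is hypothetical: the names and the routing heuristic are invented for illustration, and HSA's real runtime and queueing model is far more involved than this.

```python
# Toy sketch of heterogeneous dispatch in the spirit of HSA:
# a front end inspects each task and routes it to the unit that
# suits it best. All names and heuristics are made up.

def run_on_cpu(task):
    # Scalar, control-heavy work: execute step by step.
    op, data = task
    return sum(data) if op == "sum" else data

def run_on_gpu(task, factor=1.43):
    # Data-parallel work: one operation across the whole array.
    op, data = task
    return [x * factor for x in data]

def dispatch(task):
    op, data = task
    # Heuristic: a uniform operation over a reasonably large array
    # goes to the "GPU"; everything else stays on the "CPU".
    if op == "scale" and len(data) >= 4:
        return run_on_gpu(task)
    return run_on_cpu(task)

print(dispatch(("sum", [1, 2, 3])))        # CPU path -> 6
print(dispatch(("scale", [1.0] * 4)))      # GPU path: scaled array
```

The design point is that the decision happens up front, per task, which is roughly the split the post describes: the CPU handles fetch/decode and control, and uniform array work is handed off.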