I'm saying that GPUs are probably NOT at 100% load nowadays...
There are only two situations where they aren't: 2D desktop work and vsync-limited rendering (depending on how the frame rate is capped). This is why the frame rates get hideous when you try playing a game with anything CUDA-based running, but also why messing around on your desktop isn't a problem.
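Rough numbers to illustrate the vsync case (assuming a 60 Hz display; the render time is just an example): vsync allows one frame per 16.7 ms refresh, so a frame the GPU finishes in 5 ms leaves it idle for the remaining ~11.7 ms, roughly 70% of the time. That idle 70% is exactly the headroom that CUDA work, or anything else, could be eating into.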
CPU-based physics (Havok) hasn't improved in almost 10 years and never will, at least not until Intel can do it better than anyone else. Still, GPUs (guided by the CPU) are far better suited to physics calculations and have 20x the raw power. GPU physics >>>> CPU physics, always. You have clearly stated you don't need or want better physics, but many do, and many of those who don't either don't understand what faster physics means or have never seen a good example of massive physics in action.
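To make the "GPUs are built for this" point concrete, here is a minimal sketch of what massively parallel physics looks like; it's my own toy example, not PhysX or Havok code, and every name in it is made up. Each GPU thread integrates one particle, and the card runs tens of thousands of those threads at once, while a CPU has to grind through the same loop a few cores at a time.

#include <cuda_runtime.h>

// Toy data-parallel physics step: one thread advances one particle
// with a simple semi-implicit Euler integration. Illustrative only.
__global__ void integrateParticles(float3 *pos, float3 *vel,
                                    const float3 *force,
                                    float mass, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // a = F / m
    float ax = force[i].x / mass;
    float ay = force[i].y / mass;
    float az = force[i].z / mass;

    // v += a * dt, then x += v * dt
    vel[i].x += ax * dt;  vel[i].y += ay * dt;  vel[i].z += az * dt;
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

int main()
{
    const int n = 100000;                      // 100,000 particles
    float3 *pos, *vel, *force;
    cudaMalloc(&pos,   n * sizeof(float3));
    cudaMalloc(&vel,   n * sizeof(float3));
    cudaMalloc(&force, n * sizeof(float3));
    cudaMemset(pos,   0, n * sizeof(float3));
    cudaMemset(vel,   0, n * sizeof(float3));
    cudaMemset(force, 0, n * sizeof(float3));

    // One 60 Hz physics tick for all particles, mass 1, 256 threads per block
    integrateParticles<<<(n + 255) / 256, 256>>>(pos, vel, force, 1.0f, 1.0f / 60.0f, n);
    cudaDeviceSynchronize();

    cudaFree(pos); cudaFree(vel); cudaFree(force);
    return 0;
}

The point isn't this particular kernel, it's that every particle is independent, which is exactly the kind of work a GPU's hundreds of stream processors chew through and a quad-core CPU doesn't.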
Because physics, in terms of gaming, is all about getting a "passing grade." That is, when you do something like jump, does it react in a way that is believable? Or when you fire a bullet, does the weapon recoil as you expect, and does the bullet behave as expected in terms of trajectory? I can't name one game that actually had bad pseudo-physics. Really, what do gamers gain from scientific-grade physics calculations? Cartoon physics are half the fun in a lot of games. For instance, in Nightfire, the fluidity of the physics made gunplay feel more like an elegant salsa dance than the gritty, slow movement that Quantum of Solace has. Most people who still play Nightfire are repulsed by Quantum of Solace's attempt to be realistic.
Scientific-grade physics calculations really only have a home in simulation games, which have been waning in popularity over the years.
For more examples of why we should "steal" GPU clocks for things other than graphics, take COD4 or BioShock (or UT3, HL2, L4D). They still have good graphics, and what's the difference between using a GF 7900/X1900 card or a GTX 295 to play them? NONE, really, beyond resolution and AA levels, since all of them can already run with details at MAX and play smoothly. Lower the details a bit, and the difference still isn't huge; you can play them even on a 7600GT. We are talking about cards with a power difference of 8x to 12x, and that doesn't really make the games better. Something is wrong there...
A higher frame rate, which means next to no hiccups. I can't name a single game in recent times that has zero hiccups, but if I go back and play the oldies like Mafia, they run smooth as butter. I think some people are more sensitive to those hiccups than others. I, for one, can't stand them. I'd rather it look like crap and play without hiccups than look brilliant and get them all the time.
I think you underestimate how much power it takes to get dozens of textures on screen with all the real-time rendering that's taking place. Real-time ray tracing is the direction NVIDIA needs to be going, not general processing. Why doesn't NVIDIA buddy up with Intel and stick a GT200 core on an Intel chip? Would that not be more useful?
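Just to put a shape on what "real-time ray tracing" actually asks of the hardware, here's a toy sketch of my own (not anything NVIDIA ships): one thread per pixel fires a primary ray and tests it against a single hard-coded sphere. A real scene repeats that test against millions of triangles, plus shadow and reflection rays, every frame, which is why it needs GPU-scale parallelism rather than a few CPU cores.

#include <cuda_runtime.h>

// Toy primary-ray pass: one thread per pixel, one hard-coded sphere.
// Real ray tracers add acceleration structures, materials and bounces.
__global__ void primaryRays(unsigned char *image, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Pinhole camera at the origin looking down -z
    float u = 2.0f * x / width  - 1.0f;
    float v = 2.0f * y / height - 1.0f;
    float dx = u, dy = v, dz = -1.0f;
    float inv = rsqrtf(dx*dx + dy*dy + dz*dz);
    dx *= inv; dy *= inv; dz *= inv;

    // Unit sphere at (0, 0, -3): solve |o + t*d - c|^2 = r^2 with a = 1
    float ocx = 0.0f, ocy = 0.0f, ocz = 3.0f;        // origin - center
    float b = 2.0f * (ocx*dx + ocy*dy + ocz*dz);
    float c = ocx*ocx + ocy*ocy + ocz*ocz - 1.0f;
    float disc = b*b - 4.0f*c;

    // White where the primary ray hits the sphere, black elsewhere
    image[y * width + x] = (disc >= 0.0f) ? 255 : 0;
}

int main()
{
    const int w = 1920, h = 1200;
    unsigned char *d_image;
    cudaMalloc(&d_image, (size_t)w * h);

    dim3 block(16, 16);
    dim3 grid((w + 15) / 16, (h + 15) / 16);
    primaryRays<<<grid, block>>>(d_image, w, h);
    cudaDeviceSynchronize();

    cudaFree(d_image);
    return 0;
}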
Oh, and "The IBM T221-DG5 was discontinued in June 2005."
source
It cost something like $2,000 USD, so not many were willing to buy it. I believe higher DPI is the direction the industry will go when the cost of producing high-DPI monitors comes down; however, it also requires a corresponding jump in graphics horsepower, since the pixel count grows with the square of the linear resolution. One can't thrive without the other.
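To put rough numbers on that (panel specs as I remember them): the T221 drives 3840 x 2400 = 9,216,000 pixels, against 1920 x 1200 = 2,304,000 on a typical high-end desktop panel, so that's 4x the pixels to shade and fill every frame just to hold the same frame rate.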
There is no demand for stream processing in mainstream computers. IBM would probably love the technology, but because NVIDIA is too tight-lipped about everything, they'll just keep on building 100,000+ processor supercomputers. The benefit of the 100,000+ processor approach is that they aren't only high in terms of FLOPS; they're also very high in arithmetic operations per second.
Development of The Sims was put off for over a decade because there wasn't enough processing power. Spore was put off by at least two decades for the same reason. There are lots of ideas out there for games that haven't been created because there still isn't enough power in computers. The next revolution I'm looking for is text-to-speech algorithms. Just as graphics did, that will probably require its own processor.