
PhysX uses x87 code?

That's my reasoning too. If PhysX on the CPU is so crippled, how is it that it runs just as well as any other game using other APIs like Havok or Bullet? Before any claim is made, all three APIs have to be compared, and CPU PhysX has to be part of that comparison.




I agree that Crysis makes better use of physics than other games, including those with PhysX, but it uses the CPU to run them, and everybody knows how the fps drops when there's just a bunch of explosions and such.

Also: http://www.youtube.com/watch?v=YG5qDeWHNmk - Crysis PhysX 3000 barrels: read the description; the actual framerate with 3000 barrels falling was 0.2 FPS, or 1 frame every 5 seconds.

Or if you prefer Havok: http://www.youtube.com/watch?v=7f33GYOC2as

Compare those to: http://www.youtube.com/watch?v=s_2Klve_2VQ - the PhysX screensaver running off a 9500GT, doing both graphics and physics at a smooth fps.

To date, the PhysX screensaver continues to be the best example of what could be done with PhysX. For example, if Bad Company 2 used GPU PhysX instead of what it uses, the buildings could be destroyed more realistically and not in the same crappy way every single time; just like the piles of bricks in the screensaver, the walls could be broken into realistic bricks. Everything would be the same as it is in BC2 except that every explosion would have different results, and the bricks could be thrown off the building and cause damage like real explosions do, where it's not the blast itself that does most of the damage but the shrapnel, which reaches a far greater radius than the blast.

It's good that you give examples, but you also have to remember that the PhysX screensaver uses sprites and still images for the environment and just a few low-polygon objects, whereas the Crysis example had to render the 3D environment, simulate all those moving trees, simulate all those barrels, add environmental effects and the moving water on top of that, keep the sound synced with what's happening on screen, and calculate how they all interact with one another. On top of that, the scene was well over a few million polygons. I don't see how that's a fair comparison, to be honest.

Edit: as long as we're giving examples, I like this one: http://www.youtube.com/watch?v=vGKXPhFUKFw&feature=related
 
It's good that you give examples, but you also have to remember that the PhysX screensaver uses sprites and still images for the environment and just a few low-polygon objects, whereas the Crysis example had to render the 3D environment, simulate all those moving trees, simulate all those barrels, add environmental effects and the moving water on top of that, keep the sound synced with what's happening on screen, and calculate how they all interact with one another. On top of that, the scene was well over a few million polygons. I don't see how that's a fair comparison, to be honest.

Edit: as long as we're giving examples, I like this one: http://www.youtube.com/watch?v=vGKXPhFUKFw&feature=related

It is a fair comparison. A normal Crysis scene, despite all the polys, trees, textures, effects, etc., runs at 24 fps or more on something like an 8800; call it 20 fps if you wish, as that's better for comparison. Introduce 3000 barrels and the framerate is almost exactly the same. Activate the physics for those barrels and the fps plummets to 0.2 fps, a 100-fold decrease in performance.

In the other link we have a 9500GT (a 9500GT, for god's sake!) doing both graphics and physics, my point being that if a 9500GT can handle that PhysX load and do graphics* as well, any modern mainstream/performance card (which has 10-20x more shader power) can do it with relatively modern graphics like those in COD6 or BC2, AND THEN SOME!!

* The screensaver does have reflections, and shaders to recreate the water (the water is volumetric, or made of metaparticles, call it what you wish), and every single object has its own shadow and some other effects. It doesn't look as amazing as modern games, but that's because it lacks proper textures. The performance penalty for using those effects is still mostly there, especially for the shadows, which remain one of the things that most affect performance in games.
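
For what it's worth, a rough back-of-the-envelope calculation of that "10-20x more shader power" figure (using the commonly quoted shader counts and clocks, so treat it as a sketch rather than a benchmark, and the card choices are just examples):

# Rough single-precision shader throughput: cores * shader clock (GHz) * 2 FLOPs per clock (MAD)
# Spec-sheet figures only; real-world throughput will differ.
def gflops(shader_cores, shader_clock_ghz):
    return shader_cores * shader_clock_ghz * 2

gt9500 = gflops(32, 1.40)    # 9500GT: 32 shaders @ ~1.4 GHz   -> ~90 GFLOPS
gtx460 = gflops(336, 1.35)   # GTX 460: 336 shaders @ ~1.35 GHz -> ~907 GFLOPS
hd6850_gflops = 1490         # HD 6850 is usually quoted at roughly 1.5 TFLOPS peak

print(gtx460 / gt9500)         # ~10x the 9500GT
print(hd6850_gflops / gt9500)  # ~17x the 9500GT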

And regarding your link, yeah, that's another example of how GPU-accelerated physics are superior. Havok is as good as or even better than PhysX, hard to say (each one has some features that are better than the other's), but in its current form it is far more limited in its capabilities, for the simple fact that it is not GPU accelerated and never will be.

And that video really does add a lot of weight to my point. I mean, in 2006 Havok could do that, of course with the help of a GPU, probably a 7900GTX, maybe an 8800GTX. And in 2010 the best we can get from them is something like BC2, and that with heavily modified Havok binaries... Again, if that could be done on a CPU, why don't we have such a thing in every game using Havok?
 
Benetanegia, why are you so concerned with the promotion of proprietary software? I have to ask as, from the point of view of the consumer, it makes little sense.
 
It's good that you give examples, but you also have to remember that the PhysX screensaver uses sprites and still images for the environment and just a few low-polygon objects, whereas the Crysis example had to render the 3D environment, simulate all those moving trees, simulate all those barrels, add environmental effects and the moving water on top of that, keep the sound synced with what's happening on screen, and calculate how they all interact with one another. On top of that, the scene was well over a few million polygons. I don't see how that's a fair comparison, to be honest.

Edit: as long as we're giving examples, I like this one: http://www.youtube.com/watch?v=vGKXPhFUKFw&feature=related

I like that one too, but I was kinda interested in the question being asked right before the video ended. Any part 2 to it?
 
Benetanegia, why are you so concerned with the promotion of proprietary software? I have to ask as, from the point of view of the consumer, it makes little sense.

We have discussed that many times. I am not concerned with promoting proprietary software. I am concerned with promoting innovative solutions. If a free alternative existed, I would promote that. But a free alternative does not exist, and it won't exist until AMD and Intel release their hybrid CPUs, and even then we'll see. There are projects, yes, but they are nothing more than vaporware. I had high hopes for those alternatives, really high hopes. That was 2 years ago, or 1 year after those alternatives were presented and hyped. After 3 years without seeing anything (not even demos), there are no hopes left, no credibility.

If I have to choose between something that exists, something that works, something that offers me a benefit now, and something that screams to be the next Duke Nukem Forever, I take the one thing that exists. The thing that was presented 4 years ago and has existed ever since. For me Nvidia/ATI is all the same. Unlike those who falsely claim to be impartial, I do not care if I have to stick to Nvidia in order to have access to those advantages; the option is in my hands at purchase time. Impartial is the one who buys what best suits his needs, not those who wish for the demise of tech because it doesn't work on their hardware and use the false and hypocritical claim of "it's the best for the consumer, the best for all". lol, the best for the consumer is to at least have the option to pay for something they like; the mere existence of new tech is a much greater benefit than every option being equal. And the best for all is unattainable, but the best for the majority is PhysX indeed, since Nvidia's market share is still close to 60% (down from 70%), meaning that the best for 2/3 of consumers is that PhysX stays and evolves.

Stop being hypocrites.
 
We have discussed that many times. I am not concerned with promoting proprietary software. I am concerned with promoting innovative solutions. If a free alternative existed, I would promote that. But a free alternative does not exist, and it won't exist until AMD and Intel release their hybrid CPUs, and even then we'll see. There are projects, yes, but they are nothing more than vaporware. I had high hopes for those alternatives, really high hopes. That was 2 years ago, or 1 year after those alternatives were presented and hyped. After 3 years without seeing anything (not even demos), there are no hopes left, no credibility.

If I have to choose between something that exists, something that works, something that offers me a benefit now, and something that screams to be the next Duke Nukem Forever, I take the one thing that exists. The thing that was presented 4 years ago and has existed ever since. For me Nvidia/ATI is all the same. Unlike those who falsely claim to be impartial, I do not care if I have to stick to Nvidia in order to have access to those advantages; the option is in my hands at purchase time. Impartial is the one who buys what best suits his needs, not those who wish for the demise of tech because it doesn't work on their hardware and use the false and hypocritical claim of "it's the best for the consumer, the best for all". lol, the best for the consumer is to at least have the option to pay for something they like; the mere existence of new tech is a much greater benefit than every option being equal. And the best for all is unattainable, but the best for the majority is PhysX indeed, since Nvidia's market share is still close to 60% (down from 70%), meaning that the best for 2/3 of consumers is that PhysX stays and evolves.

Stop being hypocrites.

Whilst I disagree strongly, thank you for clarifying your stance.
 
As long as AMD continues slowing down the evolution of GPGPU through inaction, I will continue supporting the only company that fights for that evolution.

And yes, AMD is slowing it down seriously. They just released their 6-core CPUs. ANY task (or 99% of them) that benefits from the added CPU cores is parallelized enough to benefit from GPU acceleration, except that the GPU does it 4x faster and sometimes much more. But does it make sense for them to show off and promote GPGPU and inherently say "hey guys, don't buy my new flashy $300 6-core CPU and $150 motherboard, because our $150 GPU beats them badly, and hey, what's more, the competition's GPUs are even better at these tasks!"?
 
That's my reasoning too. If PhysX on the CPU is so crippled, how is it that it runs just as well as any other game using other APIs like Havok or Bullet? Before any claim is made, all three APIs have to be compared, and CPU PhysX has to be part of that comparison.

Let's say this about CPU power for PhysX: if a game needs 100% to run fine, it doesn't matter if a GPU can do 10,000,000% - they have to make the game run smoothly with whatever is available in ALL systems (which means a weak CPU, for the lower-end users).

If they'd coded it better and hadn't crippled it, it would just mean we'd be getting better effects, something more than broken glass, fog and non-interactive debris in our PhysX games.
 
As long as AMD continues slowing down the evolution of GPGPU through inaction, I will continue supporting the only company that fights for that evolution.

And yes, AMD is slowing it down seriously. They just released their 6-core CPUs. ANY task (or 99% of them) that benefits from the added CPU cores is parallelized enough to benefit from GPU acceleration, except that the GPU does it 4x faster and sometimes much more. But does it make sense for them to show off and promote GPGPU and inherently say "hey guys, don't buy my new flashy $300 6-core CPU and $150 motherboard, because our $150 GPU beats them badly, and hey, what's more, the competition's GPUs are even better at these tasks!"?

This must be an example of "inaction" or "slowing it down".

You're saying that a CPU's direct competitor is a GPU and not another CPU from another company?

There's a MAN truck with a 680 hp engine. Must be faster than an Impreza WRX STi with a 350 hp engine.
 
This must be an example of "inaction" or "slowing it down".

You're saying that a CPU's direct competitor is a GPU and not another CPU from another company?

There's a MAN truck with a 680 hp engine. Must be faster than an Impreza WRX STi with a 350 hp engine.

Indeed!

Fusion was first announced for release in 2008, and because Fusion has not been available until "now", AMD has absolutely slowed down GPGPU in these 2-3 years. I bet it's going to be like night and day once Fusion is released. AMD's GPGPU initiatives are so going to pop up out of nowhere in 2011... Mark my words.
 
Indeed!

Fusion was first announced for release in 2008, and because Fusion has not been available until "now", AMD has absolutely slowed down GPGPU in these 2-3 years. I bet it's going to be like night and day once Fusion is released. AMD's GPGPU initiatives are so going to pop up out of nowhere in 2011... Mark my words.

Tell you what, they're going fast enough for my wallet ;)
 
Despite protestations of impartiality from different forum members, it is clear that certain people will defend either company. However, the one realm of non-contention is the disabling of PhysX when a non-NV GPU is detected.
I don't know enough to argue the case for/against x87 versus SSE; however, one point seems obvious, and that is that it is not in NV's interest to develop it in a way that diminishes its own advantage. PhysX is, after all, a basic commandment of NV PR, insomuch as every review will list PhysX as a plus, even if the card is piss poor.

However, disabling PhysX when a non-NV card is present is anti-consumer, as it shows NV's fear that people would flock to ATI if they could buy a cheaper NV card just for PhysX (like you could have done with Ageia PhysX cards).

It's all so very obvious. NV doesn't need to share its PhysX baby. It uses it as a marketing crowbar. Fortunately, it doesn't make me want to buy games that use PhysX. I still prefer games to be 'fun' and 'involved' - uber gfx is secondary to that.

And please don't say NV loves to innovate. Once it made the most excellent 8800 GTX, it sat on its hands for years and rehashed it over and over again without developing it much.

Neither ATI nor NV are saints (if ATI were saints they'd lower their prices, FFS!). I wish we could all just grow up and see them both for what they are - BIG CORPORATE DOUCHEBAGS.

But yeah, NV suck ass when it comes to the whole PhysX shambles - they had the foresight to buy a company and integrate that company's 'idea' into their 'own' image so nobody could use it without a green card. Then they pay, I mean help, to code games to gain market share.

Money talks.
 
Tell you what, they're going fast enough for my wallet ;)

GPGPU has nothing to do with how many cards are released. In fact, if GPGPU really kicked off, you would need to upgrade fewer parts than you do now, maybe only the GPU, which is what AMD and especially Intel are against.

They could make a platform where, instead of PCIe, a far better interface was used so that GPUs could be used as co-processors for floating point without any performance penalty (something like QuickPath), and CPUs would be much "simpler" and optimized for integer work, memory management, etc. That way idle power consumption for the system would be far better, and when FP performance was required it would be excellent, running on the GPU. That can be done NOW. It could have been done 3 years ago, in fact, but will they do it? No, because it's not in their interest to stop selling the way, way overpriced CPUs (and chipsets). Take everything you know about chip manufacturing and tell me how it is possible that CPUs cost so much more than GPUs, when the price of a GPU includes the PCB, memory and all, not to mention that the die size of CPUs is smaller, they own the fabs and don't have to outsource, etc., etc. They force us to change the complete platform every odd year and that's something they are not willing to concede. They want GPUs to be only for gaming. If a GPU were all you needed to make your PC faster, they would not make (rob) as much money.
 
I agree that Crysis makes better use of physics than other games, including those with PhysX, but it uses the CPU to run them, and everybody knows how the fps drops when there's just a bunch of explosions and such.

Also: http://www.youtube.com/watch?v=YG5qDeWHNmk - Crysis PhysX 3000 barrels: read the description; the actual framerate with 3000 barrels falling was 0.2 FPS, or 1 frame every 5 seconds.

Or if you prefer Havok: http://www.youtube.com/watch?v=7f33GYOC2as

Compare those to: http://www.youtube.com/watch?v=s_2Klve_2VQ - the PhysX screensaver running off a 9500GT, doing both graphics and physics at a smooth fps.

Like I said, I realize my personal opinion about how PhysX looks is relative. Even though I do agree it is more fluid than BC2, the programming still makes it feel blocky. Idk if you know what I mean. While BC2 isn't the best example, I still prefer the physics of that game over something like Cryostasis.


For example, if Bad Company 2 used GPU PhysX instead of what it uses, the buildings could be destroyed more realistically and not in the same crappy way every single time; just like the piles of bricks in the screensaver, the walls could be broken into realistic bricks. Everything would be the same as it is in BC2 except that every explosion would have different results, and the bricks could be thrown off the building and cause damage like real explosions do, where it's not the blast itself that does most of the damage but the shrapnel, which reaches a far greater radius than the blast.

This is an unsubstantiated claim which you can't prove unless the team programming BC2 suddenly decides to use PhysX. Whether it would be different, or even how, is unprovable. In this case you are wrong, as you don't know the outcome.
 
This is an unsubstantiated claim which you can't prove unless the team programming BC2 suddenly decides to use PhysX. Whether it would be different, or even how, is unprovable. In this case you are wrong, as you don't know the outcome.

Of course I know how it could be. Physics is just physics (don't confuse it with PhysX, which is the API) and follows the same rules in real life and in the computer. On the PC it's all mathematical approximations (tricks), but the result is the same. When two bricks sit next to each other in a wall, there are several forces acting on them that make them stay put: gravity, friction, cohesion (in the case of a wall, the mortar between them), inertia (not a force, but still a property that affects other forces)... In real life an external force (an explosion) has to be greater than the sum of all the others in order to destroy the wall, to make a hole in it. On a PC it's all the same, when physics is used and not animations. BC2 uses precomputed keyframe animation and some sprites to make the effect; it's not real time, but with enough computing power it is more than possible to do it for real. There are plenty of examples, like the bricks in the PhysX screensaver or the Havok FX one that Half A Hertz posted, except that in those examples the bricks are not bonded together. But that's just another force to take into account, and both PhysX and Havok can do that. In fact, fluids and cloth are all about that cohesion force.
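
To put that into concrete terms, here is a minimal sketch of the kind of per-brick check an engine would run (the numbers, names and the crude blast falloff are all mine, purely illustrative, not anything from PhysX or Havok): the brick only breaks loose when the force from the blast exceeds the sum of the forces holding it in place.

# Minimal sketch: does an explosion dislodge a brick from a wall?
# Illustrative values; a real engine works with impulses and contacts per time step.
GRAVITY = 9.81  # m/s^2

def holding_force(mass_kg, friction_coeff, bond_strength_n):
    # Forces keeping the brick in place: friction (driven by gravity) plus the mortar bond.
    friction = friction_coeff * mass_kg * GRAVITY
    return friction + bond_strength_n

def blast_force(blast_strength, distance_m):
    # Very crude inverse-square falloff for the force the blast applies to the brick.
    return blast_strength / (4 * 3.14159 * distance_m ** 2)

brick_mass = 2.5       # kg
mortar_bond = 1500.0   # N, cohesion with the neighbouring bricks
force_on_brick = blast_force(blast_strength=2.0e6, distance_m=3.0)

if force_on_brick > holding_force(brick_mass, friction_coeff=0.6, bond_strength_n=mortar_bond):
    print("brick breaks loose and becomes a free rigid body")
else:
    print("brick stays put")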

Why did I write all this? Well, I just wanted to make sure you understand that there's no difference between physics in real life and physics on a PC, except that on a PC the realism is going to be limited by the available computational power. With a GPU, which is orders of magnitude more powerful than a CPU (for this kind of task), that computing power is available and hence a physically correct model is feasible. You do have to make the effort of coding it, though.

Even that last sentence is only half true, because if they took the time to implement a completely realistic model once, the work would be done and that would save them a lot of effort afterwards. For instance, they would not have to create an animation for every different building, tree, or whatever object you want to be destructible; it's destructible simply because you are in a realistic and physically correct game world. Similar to what Crytek does with its editor for vegetation, but for walls, where they would just need to specify a volume and tell the engine that the volume is a wall made of <this brick sample>. That is possible to do today thanks to the power of current GPUs, and the necessary tools are there in the PhysX and Havok libraries; it just needs to be done. And it must be done on the GPU, because this thread's OP is about whether a CPU can handle the amount of PhysX present in Cryostasis, and maybe it can, but that's because the amount is very, very low (almost a joke) in comparison to what we see in the PhysX screensaver, and universes away from the 1,000,000 interactive particles that Nvidia showed in real time in the Supersonic Sled demo. You don't need that many on screen at the same time to make realistic buildings (you just need about 2500 per wall), but the power is there in a GTX480. That's how powerful GPUs are in comparison to CPUs for parallel work. You have it there: 1 million objects at acceptable framerates versus 3000 barrels at 0.2 fps.
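
As a sketch of the "specify a volume and tell the engine it's a wall made of <this brick sample>" idea (the function, names and dimensions are mine, purely illustrative): a tool only has to tile the volume with brick-sized rigid bodies, and around 2500 of them already covers a decent-sized wall.

# Illustrative sketch: turn a wall volume into a grid of brick rigid bodies.
# The physics engine would then simulate the bricks; this only generates them.
def wall_to_bricks(wall_w, wall_h, wall_d, brick=(0.20, 0.10, 0.10)):
    bw, bh, bd = brick
    # assumes the wall dimensions are (near) multiples of the brick size
    nx, ny, nz = round(wall_w / bw), round(wall_h / bh), round(wall_d / bd)
    bricks = []
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                # centre position of each brick-sized rigid body
                bricks.append((i * bw + bw / 2, j * bh + bh / 2, k * bd + bd / 2))
    return bricks

# A 10 m x 5 m wall, one brick (0.10 m) thick -> 50 * 50 * 1 = 2500 bricks
print(len(wall_to_bricks(10.0, 5.0, 0.10)))  # 2500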
 
another "omg physx is a sham" thread, these have been around since nvidia bought ageia before that people were all about this new "physx" for games etc. It's just funny to me how nvidia buys it and people cal shenanagans on it. what if nvidia bought asus would people atart calling all nvasus products a sham?? probably. nvidia is the dude everyone loves to hate. personally physx cuda whatever as long as the gpu performs im happy
 
another "omg physx is a sham" thread, these have been around since nvidia bought ageia before that people were all about this new "physx" for games etc. It's just funny to me how nvidia buys it and people cal shenanagans on it. what if nvidia bought asus would people atart calling all nvasus products a sham?? probably. nvidia is the dude everyone loves to hate. personally physx cuda whatever as long as the gpu performs im happy

This is what's funny to me about the argument: they're saying that because Ageia offered it to all consumers regardless of their primary GPU adapter, Nvidia should do the same.

Okay, let's apply that same logic to SLI, or any other of the technologies Nvidia has purchased over the years. Better yet, why not just take all the money they spent on buying the tech and give it to AMD and Intel? Then we could watch Nvidia pay for everything while the others sit back and reap the benefits. It might become an issue when Nvidia runs out of money, but I'm sure Intel could take a turn, give out x86 licensing for free, etc.

Won't be long and our utopia will be complete!













Come out of the clouds yet? If Exxon Mobil buys a new technology to improve the conversion rate of oil to gasoline, guess what they do with it? Use it to make a profit! If Walmart finds a supplier that makes their product lines and stock cheaper, guess what they do with it? Use it to make a profit! I could go on and on and on.

Guess what, Nvidia is a corporation; if they pay for something, they're not going to share it for free. And if you think for one minute that AMD would if the tables were turned, you're a fool.
 
If a free alternative existed, I would promote that. But a free alternative does not exist, and it won't exist until AMD and Intel release their hybrid CPUs, and even then we'll see.

Well then you're in luck, there is a free alternative. DirectX 11 has it built in: compute shaders. Now you can support that.
 
PhysX is a sham, and after another crash from the Flash plug-in, it came to my mind that it's just like it. There are differences, though, since Flash isn't discriminatory (even if it's proprietary too).

Flash is to video streaming what PhysX is to physics. In the future, we can still have streaming and physics without either, and THAT is innovation.

Indeed!

Fusion was first announced for release in 2008, and because Fusion has not been available until "now", AMD has absolutely slowed down GPGPU in these 2-3 years. I bet it's going to be like night and day once Fusion is released. AMD's GPGPU initiatives are so going to pop up out of nowhere in 2011... Mark my words.

You're still not addressing my other point, though, of you comparing Nvidia GPUs with an AMD CPU. Heck, even compared to an Intel CPU, the Nvidia GPU is apparently better. But wait, it's not a CPU, it's a GPU, so go ahead and build a system without a CPU from AMD or Intel, since an Nvidia GPU would be better.
 
Of course I know how it could be..........

Exactly my point.

I understand your argument.

One could also make the argument that BC2 didn't follow through like Crytek did without PhysX, so why expect them (the BC2 developers) to do more with PhysX?
 
Exactly my point.

I understand your argument.

One could also make the argument that BC2 didn't follow through like Crytek did without PhysX, so why expect them (the BC2 developers) to do more with PhysX?

Neither Crysis nor Bad Company uses PhysX.
 
I never said they did.

I did say that it is impossible to tell factually whether BC2 plus nVidia PhysX would be better, as such an arrangement doesn't exist. He implies that the developers would spend an equal amount of time on game physics. I said that (again) this is impossible to know unless you spent time developing the game and knew what the goal was for physics in BC2.

It could have been that a certain level of in-game physics was the goal. If nVidia's PhysX allowed them to reach it in less time, they may have opted not to develop more realistic physics, but would instead have similar in-game physics (which in this argument was the goal), just with less programming time spent.
 
GPGPU has nothing to do with how many cards are released. In fact, if GPGPU really kicked off, you would need to upgrade fewer parts than you do now, maybe only the GPU, which is what AMD and especially Intel are against.

In the case of proprietary technology, such as PhysX, I would argue that GPGPU has everything to do with how many cards are released and, more importantly, sold. Moreover, I do not believe that the PC conglomerate (Intel, AMD, Nvidia, software developers) would ever collectively seek to bring about a situation where we are not encouraged to upgrade, less so one where the situation benefitted one company (Nvidia via its PhysX) whilst adversely affecting everybody else. Indeed, the standards laid out in DirectX stand in direct opposition to such practices.

Whilst we're comparing CPUs and GPUs, how would off-loading work to the graphics card in any way benefit me? Wouldn't I eventually be required to go SLI or Crossfire simply because physics calculations had completely tied up one card? I only look at benches when I am in the market for new hardware, but as far as I'm aware games continue to tax the GPU much more than the CPU. I am unsure to what extent, if any, a Phenom II or a 775-socket quad would act as a bottleneck, but I have a feeling that much more work is expected of the graphics card, and few games are able to take advantage of multiple threads. Doesn't that mean that even on quads that are a few years old, we have a couple of cores twiddling their thumbs in most games, whilst the graphics card is being increasingly taxed? Isn't it inefficient to off-load further tasks to the graphics card in these circumstances, assuming that my naive and rather simplistic appraisal of the situation is correct?


another "omg physx is a sham" thread, these have been around since nvidia bought ageia before that people were all about this new "physx" for games etc. It's just funny to me how nvidia buys it and people cal shenanagans on it. what if nvidia bought asus would people atart calling all nvasus products a sham?? probably. nvidia is the dude everyone loves to hate. personally physx cuda whatever as long as the gpu performs im happy

How is PhysX not a sham? At the last count, I think it was relevant (and even that is open to argument) in 16 or 17 titles, and they have ensured that their cards are incompatible where an ATI card is detected: is this commitment to ensuring wider adoption of PhysX?

The OP suggests that the antiquated coding provides further evidence of Nvidia's lack of commitment to advancing PhysX. I would argue two things: first, it doesn't really matter, providing it gets the job done in a more or less efficient manner; secondly, Nvidia has no need to spend in this area, providing they can convince people that PhysX is a good deal as it stands.

This is what's funny to me about the argument: they're saying that because Ageia offered it to all consumers regardless of their primary GPU adapter, Nvidia should do the same.

Why should they not do the same? If they have studied sales of DirectX graphics cards, they would probably be aware that ATI has a much larger market share. They themselves are denying those users the possibility of using PhysX in the form of a second card, which suggests that they are not really that committed to extending PhysX use, thereby reducing it to a species of "emperor's new clothes".

Well then you're in luck, there is a free alternative. DirectX 11 has it built in: compute shaders. Now you can support that.

I agree, I would like to see further support for DirectX. Microsoft have sold enough copies of Windows 7 to warrant further commitment, and whilst I do not see inactivity on their or ATI's part as enough reason to support PhysX, I do feel that they have both failed to deliver on past promises.

PhysX is a sham.

I agree, PhysX is a sham, but I attribute that to other factors, over and above its coding.
 
Well then you're in luck, there is a free alternative. DirectX 11 has it built in: compute shaders. Now you can support that.

Compute shaders: first of all, it's not free or open, and most importantly it is NOT a physics engine or middleware. Just like CUDA or OpenCL, it's just a tool that lets you code for the GPU in addition to the CPU. You still have to write the code, and most game developers are just not willing to write their own physics engine. That's why most games use Havok, PhysX, Bullet and similar middleware physics engines.

Compute Shaders is no different from x86, in that you just have a bunch of instructions you can use to make your program, but you still have to write that program. Compute shaders do nothing on their own.
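
To illustrate the point, here is a deliberately tiny sketch in plain Python standing in for the kernel you would have to write yourself in HLSL/CUDA/OpenCL (the names are mine): even something as basic as moving particles for one time step is code the developer has to supply, because the compute API only gives you a way to run it, not the physics itself.

# What a middleware engine gives you for free and a bare compute API does not:
# even this trivial per-particle integration step has to be written by the developer.
# (In a real compute shader, the loop body would be the kernel, run once per particle on the GPU.)
GRAVITY_Y = -9.81

def integrate(positions, velocities, dt):
    for i in range(len(positions)):
        vx, vy, vz = velocities[i]
        vy += GRAVITY_Y * dt                                     # apply gravity
        x, y, z = positions[i]
        positions[i] = (x + vx * dt, y + vy * dt, z + vz * dt)   # explicit Euler step
        velocities[i] = (vx, vy, vz)

# ...and that's before broad-phase collision, contacts, joints, fluids, cloth, etc.,
# which is exactly the work Havok/PhysX/Bullet already package up as middleware.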

Whenever I see a physics engine that uses CS or OpenCL, that is not vaporware and that is as good as PhysX/Havok, I will support it. I'm not going to ditch something that works for something that does not exist yet. Or may I ask you to sell/throw away your GPU and stop playing games now, because in a few months something better will be released? At least in that case a better GPU is a certainty; a middleware API based on OpenCL/Compute is a pipe dream right now. Like I said, I've been waiting for alternatives for 3 years and no one has even shown a damn demo.

In the case of proprietary technology, such as PhysX, I would argue that GPGPU has everything to do with how many cards are released and, more importantly, sold. Moreover, I do not believe that the PC conglomerate (Intel, AMD, Nvidia, software developers) would ever collectively seek to bring about a situation where we are not encouraged to upgrade, less so one where the situation benefitted one company (Nvidia via its PhysX) whilst adversely affecting everybody else. Indeed, the standards laid out in DirectX stand in direct opposition to such practices.

Whilst we're comparing CPUs and GPUs, how would off-loading work to the graphics card in any way benefit me? Wouldn't I eventually be required to go SLI or Crossfire simply because physics calculations had completely tied up one card? I only look at benches when I am in the market for new hardware, but as far as I'm aware games continue to tax the GPU much more than the CPU. I am unsure to what extent, if any, a Phenom II or a 775-socket quad would act as a bottleneck, but I have a feeling that much more work is expected of the graphics card, and few games are able to take advantage of multiple threads. Doesn't that mean that even on quads that are a few years old, we have a couple of cores twiddling their thumbs in most games, whilst the graphics card is being increasingly taxed? Isn't it inefficient to off-load further tasks to the graphics card in these circumstances, assuming that my naive and rather simplistic appraisal of the situation is correct?

You are not understanding my point. I'm talking about future developments, possible future developments. The scalar performance of CPUs has barely increased one bit since 2005. All they have done is increase parallel and vectorized execution and make parallel tasks faster. That is good when you can parallelize your program, but my uncle uses some old programs which do not parallelize, and guess what? His P4 at 3.8 GHz is WAY faster than his quad core at 2.6 GHz.

Intel and AMD should try to increase scalar performance, putting all their efforts and silicon into making CPUs faster at those tasks instead of adding and adding more FPUs and cores. A 6-core CPU does nothing faster than a dual-core CPU, except for those tasks that are parallel, and those tasks run much faster on a GPU.
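
A quick Amdahl's law calculation (illustrative fractions, not measurements) shows why the extra cores only pay off when the work is already parallel, and work that parallel is exactly what maps onto the hundreds of shader units in a GPU:

# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel fraction of the work.
def speedup(parallel_fraction, n_units):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

# Mostly serial workload (e.g. an old single-threaded program): extra cores barely help.
print(speedup(0.10, 2))    # dual core: ~1.05x
print(speedup(0.10, 6))    # six core:  ~1.09x

# Heavily parallel workload (rendering, transcoding, physics): cores help...
print(speedup(0.95, 2))    # dual core: ~1.9x
print(speedup(0.95, 6))    # six core:  ~4.8x
# ...but the same workload runs even better on a processor with hundreds of execution units.
print(speedup(0.95, 300))  # GPU-like:  ~18.8x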

So make a fast dual-core CPU, or make a fast 6-8 core asymmetrical CPU which has 2 complete cores and 4-6 small cores that lack complex FPUs and vector units and can accelerate the execution of simple tasks, and hand parallel and vectorized FP work off to GPUs, which are orders of magnitude faster at it. Also improve the platform so that the GPU is not bottlenecked.

Take any review and you will see where quads are faster than duals and 6-cores are faster than quads: it's always 3ds Max, Maya, Photoshop, video conversion, etc. A 4/6-core CPU is completely useless for anything that is not that kind of task, and a GPU can do those tasks 10 times faster, so why not pass all those tasks to the GPU and streamline the CPU so that it's faster in a linear fashion and much smaller and cheaper?

Same with games: the games that require you to have a quad are those which have lots of physics and AI running on the CPU, and both tasks can be done faster and more efficiently on the GPU.

You think it's a problem to need SLI/Crossfire, but you are completely wrong. I don't know how much you paid for your i7, but I do know that I have a Q6600 and I would have to pay at least 500 euros to buy an i7, and it wouldn't be faster than my quad (at similar MHz) except on those tasks that would benefit from running on a better parallel processor like a GPU. What if I could buy a specialized $50 dual-core CPU like I described and a GPU in the lines of the HD6850/GTX560, and that combination was much faster than a future 12-core CPU at video transcoding, 3ds Max rendering, game physics, AI, etc., etc.? What's left? Where would a 12-core CPU be faster than the option that I'm presenting? Nowhere.

All we need is fast dual cores to speed up linear performance, plus the ability to move parallel data to GPUs more efficiently, so they can act like math co-processors. In the past, moving the co-processor, the FPU, inside the CPU was a big move, but that's because it allowed decreased latencies and higher clocks. Well, now we don't need faster clocks (we can't actually get them, we hit a physical wall) and the FSB is much faster than it was. The clock limit was reached 5 years ago, and the biggest limitations nowadays are die size and heat, so there's no reason not to move things onto different dies and no need to duplicate efforts, adding useless vector FP performance to CPUs (adding cores that only kick in for vector tasks, or with AVX or whatever means) when there's already a processor that does vector FP tasks much, much faster than CPUs and that is idling 90% of the time: the GPU. Even in games, not a single game pushes GPUs to 100% shader utilization, because other parts like the ROPs and texture units are a bottleneck, so use that spare 10% of shader power. 10% of the FP performance of any modern GPU is way more than what even the fastest i7 is capable of in parallel tasks.
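
As a rough sanity check on that last claim (back-of-the-envelope peak numbers from the usual spec sheets, so only a sketch):

# Peak single-precision throughput, very roughly:
# GTX 480: 480 CUDA cores * 1.401 GHz shader clock * 2 FLOPs (MAD) per clock
gtx480_gflops = 480 * 1.401 * 2   # ~1345 GFLOPS

# Quad-core i7 around 3.33 GHz with 4-wide SSE (one add + one mul per core per cycle):
i7_gflops = 4 * 3.33 * 8          # ~107 GFLOPS

print(0.10 * gtx480_gflops)       # ~134 GFLOPS of "spare" GPU shader power
print(i7_gflops)                  # ~107 GFLOPS peak for the whole CPU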
 