
PhysX uses x87 code?

I'm not going to ditch something that works for something that does not exist yet.

Well that's just it, people really need to make an active choice to support the compute shader option and denounce the closed-off PhysX route. If nobody makes a stand nothing will change. As of now PhysX will always be limited to one brand, but DirectX 11 and whatever comes after it will without doubt be in a position to be used on a greater range of systems than PhysX. In roughly 5 years, after enough new PCs have been bought, market penetration should be sufficient. Once that happens it seems like an obvious choice: unless you're getting paid under the table, it will always make more sense to pick the open solution that covers the most systems. Until then it's not even worth messing with PhysX; it will never have real full game integration because of the exclusivity. And let's not forget about the eventual next console generations, which will most likely be DirectX based. You can imagine what an impact that will have on the choice of physics options; it will bleed through to the PC, considering all games will be some sort of crappy console port. However tricky it is to use, compute shader physics seems to have no real long-term challenger.
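
For context, the math behind compute shader physics is nothing exotic. Here is a rough sketch in plain C of the kind of per-particle update a compute shader would run, one GPU thread per particle instead of a loop; the struct and function names are made up for illustration, this is not code from any real engine:

#include <stddef.h>

typedef struct { float x, y, z; } vec3;

/* One explicit-Euler integration step. A compute shader runs this loop
   body once per particle, with thousands of GPU threads in parallel. */
static void step_particles(vec3 *pos, vec3 *vel, size_t n, float dt)
{
    const float g = -9.81f;              /* gravity, m/s^2 */
    for (size_t i = 0; i < n; i++) {
        vel[i].y += g * dt;              /* accelerate */
        pos[i].x += vel[i].x * dt;       /* integrate position */
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}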
 
On the contrary, it makes ALL the sense to support PhysX until that free alternative comes. Do you think Compute Shaders or OpenCL would have ever seen the light of day if it were not for Stanford's first Stream initiative on the X1900, and later the same team's effort to make and promote CUDA? (One of the first things AMD did after the purchase was abandon that effort, which is why that team turned to Nvidia, and Nvidia made the G80 and plans for Fermi.) If some of us didn't support it, do you think anyone would have made the effort to create them? A standard is created as long as there is a market for it, and in order to create a market a product has to be released. Everybody should be grateful to Nvidia for taking something amazing that was failing and promoting it to the point that now everybody knows it's the future of gaming. People want to ditch PhysX, but there's no doubt they want the benefits that PhysX offers. That's being a hypocrite.

Do you think we would ever have had GPUs, or that they would be anything close to what they are, if it weren't for 3DFX and their closed API Glide? If you think so, you are naive, or you have not been around long, or you didn't pay attention. When the first 3D accelerator cards were released most people believed they were useless, because CPUs could do almost the same and future CPUs would be even better at the task, so... Well, we know history.
 
Benetanegia has a simple point: companies won't waste their time making an OpenCL/DC-based physics engine unless they think there is actually a demand for it. PhysX's current shortcomings (and non-popularity in certain circles) are going to make them think very hard before spending money on competing with it.
 
Yeah, exactly. If PhysX is ditched before something else exists, no one will release anything, or in the best-case scenario that thing will be no better than what PhysX is today but will come 2 years later, meaning a period of stagnation and a waste of time.
 
I see, so you are basically arguing that the best means of ensuring advancement in physics is to support PhysX and thereby stimulate the competitors into action? Something akin to ensuring the development of Firefox by using Internet Explorer? Fine, I'm willing to support PhysX, but wait, I can't, I use an ATI card as a primary display, whose fault is that?
 
The Firefox/IE comparison doesn't work, because that's an existing market.

This is about proving that a NEW market is worth investing time into - and yeah, Nvidia is doing a bad job of it. ATI users can't run PhysX, so they're killing themselves, and the hardware-accelerated physics market.
 
I see, so you are basically arguing that the best means of ensuring advancement in physics is to support PhysX and thereby stimulate the competitors into action? Something akin to ensuring the development of Firefox by using Internet Explorer?

No. Firefox exists and I use it; that's how I tell them "this is what I like". What you guys do is akin to abandoning, flaming and burying IE before Firefox, or any other browser for Windows besides IE, ever existed. The only message that gives is: people don't like the Internet.

Fine, I'm willing to support PhysX, but wait, I can't, I use an ATI card as a primary display, whose fault is that?

AMD's for refusing to adopt and support PhysX when it was offered for free, but mainly yours and only yours:

- It's solely your choice which card you buy. Each card has its pros and cons. If PhysX is good enough for you, it may carry enough weight in the decision. In any case the existence of PhysX does not change the choice you made, and most importantly, it doesn't change your gaming reality. If PhysX didn't exist, which is what you are all asking for, there would be no GPU-accelerated games. Back in the real world, PhysX exists but you chose Ati, so you have no accelerated games: the same exact result. How did the existence of PhysX affect you? It didn't.

- It's also your fault, because you are flaming it. Support it, or at least don't flame it and don't buy the anti-Nvidia, anti-PhysX PR crap from AMD and Intel, and you might create enough pressure on AMD to support PhysX or to create an alternative that is free or that works on your card.

EDIT: And seriously, you are barking up the wrong tree regarding open source/proprietary tech here. I use open source alternatives as much as I can. I'm a freelancer, and where possible I use open source video and audio transcoding alternatives, and believe me, there are free (yet not open source) alternatives that are far better. At 27 years old, with my career pretty much stabilized, I'm trying to learn to use Blender and Gimp to the same level I know 3ds Max and Photoshop, and I don't need to. My discussions pro OpenGL and anti DX are well known on TPU, and there are many other examples. I always support open source alternatives as long as they ARE alternatives. You're never going to see me downplaying something that is the only one of its kind though; it's as simple as that. I like innovation and I couldn't care less who that innovation comes from. I support Nvidia because it's promoting things that I like, and has been promoting them long before anyone else did. Things like GPGPU, accelerated physics and Stereo 3D.
 
No. Firefox exists and I use it. What you guys do is akin to abandoning, flaming and burying IE before Firefox, or any other browser for Windows besides IE, ever existed. The only message that gives is: people don't like the Internet.

Who are "you guys", what have they done and why do you think I belong to this group?

- It's solely your choice which card you buy

That's precisely my point, it is my choice. If advanced physics options require me to use a single brand, well, that reeks too much of console for my liking: Nvidia are not content with offering consumers the possibility of additional physics options, rather they demand complete and exclusive allegiance to their brand. I am unwilling to profess such allegiance to any of the brands, I buy what best suits my needs at the time.

- It's also your fault, because you are flaming it. Support it, or at least don't flame it and don't buy the anti-Nvidia, anti-PhysX PR crap from AMD and Intel, and you might create enough pressure on AMD to support PhysX or to create an alternative that is free or that works on your card.

I think you will find that you are confusing me with your good self. I have asked questions and raised points: one of your posts finished by identifying various people as "hypocrites".

I find your belief that supporting PhysX is the best form of advancing open-source physics to be curious, but you are free to defend yourself without resorting to petty accusations of flaming, which at no point has been my intention.

EDIT: And seriously, you are barking up the wrong tree regarding open source/proprietary tech here.

Then we at least have that much in common; however, it is impossible to discuss PhysX in isolation from proprietary marketing.
 
Who are "you guys", what have they done and why do you think I belong to this group?



That's precisely my point, it is my choice. If advanced physics options require me to use a single brand, well, that reeks too much of console for my liking: Nvidia are not content with offering consumers the possibility of additional physics options, rather they demand complete and exclusive allegiance to their brand. I am unwilling to profess such allegiance to any of the brands, I buy what best suits my needs at the time.



I think you will find that you are confusing me with your good self. I have asked questions and raised points: your initial post finished by identifying various people as "hypocrites".

I find your belief that supporting PhysX is the best form of advancing open-source physics to be curious, but you are free to defend yourself without resorting to petty accusations of flaming, which at no point has been my intention.



Then we at least have that much in common; however, it is impossible to discuss PhysX in isolation of propietary marketing.

My posts are always written for everybody in the thread. I was not necessarily putting you in the same bag as the others, but this is just another PhysX flaming thread, and you have flamed it in others; don't try to say otherwise now. Sorry regarding this one thread anyway.

Regarding the support of PhysX as the one and only way to support accelerated physics: yes, it is, and I have history to back it up. Every single successful technology has a proprietary background, every single one. No consumer GPUs without the push that 3DFX and others made with their APIs. No downloadable music without MP3. In fact, no PC without MS-DOS. The list is long. You can stay up in your clouds thinking a free open standard can succeed without the money and interest of someone behind it*; I'm fine here in the real world knowing that I'm right.

Another example: Linux. It has never been as successful as with Ubuntu, which, although still open and free, has a rich person behind it pushing its adoption and spending absurdly high amounts of money on something that is free. He believes in the cause, I won't argue with that, but at the same time he is demonstrating why proprietary tech has to exist. That, or we change the world, brainwash every single rich person in the world and make him a philanthropist...

* In a market that does not exist yet. A future open physics engine will kick off because PhysX exists. Ogg kicked off because MP3 created the market, and so on.

That's precisely my point, it is my choice. If advanced physics options require me to use a single brand, well, that reeks too much of console for my liking: Nvidia are not content with offering consumers the possibility of additional physics options, rather they demand complete and exclusive allegiance to their brand. I am unwilling to profess such allegiance to any of the brands, I buy what best suits my needs at the time.

That's how the market works. It's not something Nvidia or Ati or anyone has invented. If you want something, you will have to buy it.

I have told you that yes, PhysX for everybody is better than PhysX for half the people, but the OPTION to have it if you want it is thousands of times better than not having the option at all. The demise of PhysX does not benefit anyone. Not at all. And Nvidia is doing no wrong promoting it. And it's not doing wrong keeping it for themselves NOW: now that they already tried to give it to AMD for free and were told no, now that they tried to support a guy who was making it possible on AMD GPUs and were again told no. Now that AMD will not collaborate with early hardware samples so that PhysX would work properly when e.g. Northern Islands is released. Now that Intel and AMD have downplayed PhysX so much that it has been relegated to a point where supporting it on AMD+Nvidia configs costs far more than the market that effort would generate.

It's amazing how people on the internet see something obscure in Nvidia's decision to cut that support after 2 years of trying to make it possible, but saw nothing wrong, or worth a one-liner post, when AMD cut support for X1900 cards and older. In both cases they are just cutting costs... it's terrible...
 
You have flamed it in others, don't try to say otherwise now.

Horseshit: I have criticised it, expressed my views on it and advanced alternative approaches, but at no time have I attempted to provoke unrest for the sheer satisfaction of creating discontent. In short, I discuss, I do not flame.

The Firefox/IE comparison doesn't work, because that's an existing market.

No, it is not a good comparison on reflection.

This is about proving that a NEW market is worth investing time into - and yeah, Nvidia is doing a bad job of it. ATI users can't run PhysX, so they're killing themselves, and the hardware-accelerated physics market.

Surely much of the onus also rests with ATI and above all Intel (coming back again to the number of copies of Win7 they have sold). Nevertheless, we entirely agree on the outcome of Nvidia's marketing practices.
 
And as far as ATI goes, it's not free to run PhysX on their hardware; Nvidia would get a share of every ATI GPU sold, we all know that, not to mention Nvidia's little logo on all the boxes. Free marketing from your competitor is always a nice thing :roll:
 
Folks, keep in mind that there have been a lot of posts defending PhysX but none that actually refutes the claims that its code is not as efficient as it should be. Let's put this to practice instead of making hypothetical posts about how good PhysX is. Let's use the upcoming game Mafia II as an example. If PhysX is all that it's supposed to be, we should be able to play with a single video card and no less than a dual-core CPU and get all the PhysX effects for that game. Let's take a look at the PC system specs:

MINIMUM SYSTEM REQUIREMENTS
Operating System: Microsoft Windows XP (SP2 or later) / Windows Vista / Windows 7
Processor: Pentium D 3Ghz or AMD Athlon 64 X2 3600+ (Dual core) or higher
RAM: 1.5 GB
Video Card: nVidia GeForce 8600 / ATI HD2600 Pro or better
Hard Disc Space: 8 GB
Sound Card: 100% DirectX 9.0c compatible sound card
Peripherals: Keyboard and mouse or Windows compatible gamepad

With a higher-end PC you should get better results. Now let's take a look at what kind of PC you will need in order to run PhysX on high:
PHYSX/APEX ENHANCEMENTS SYSTEM REQUIREMENTS
Operating System: Microsoft Windows XP (SP2 or later) / Windows Vista / Windows 7
Minimum Processor: 2.4 GHz Quad Core processor
Recommended Processor: 2.66 GHz Core i7-920
RAM: 2 GB

Video Cards and resolution: APEX medium settings
Minimum: NVIDIA GeForce GTX 260 (or better) for Graphics and a dedicated NVIDIA 9800GTX (or better) for PhysX
Recommended: NVIDIA GeForce GTX 470 (or better)

Video Cards and resolution: APEX High settings
Minimum: NVIDIA GeForce GTX 470 (or better) and a dedicated NVIDIA 9800GTX (or better) for PhysX
Recommended: NVIDIA GeForce GTX 480 for Graphics and a dedicated NVIDIA GTX 285 (or better) for PhysX
NVIDIA GPU driver: 197.13 or later.
NVIDIA PhysX driver: 10.04.02_9.10.0522. Included and automatically installed with the game.

As you can see, in practice, in order to use it you need 2 high-end video cards (not one) and a much higher-end CPU to run the game: the very thing that PhysX is supposed to prevent or be better at. Funny they recommend a GTX 285, because that card is EOL. Good luck trying to find one at a price you are willing to pay, lol.

Anyway, all the posts defending PhysX are clearly debunked, because its code is clearly inefficient for today's games. Remember, people: PhysX isn't on all the time in Mafia II. You only see it when triggered (i.e. breaking something, etc.), and the game still utilizes CPU physics as well.

So in practice, actually seeing it in action clearly shows you that it's not as efficient or practical as it should be. Source of the system specs here
 
Just an overglorified math coprocessor.
 

PhysX is as efficient as any other physics engine on the market; that's all that matters. No game is as efficient as it could be, so what are we complaining about here? The recommended specs are just marketing; everybody knows you don't need the highest-end GPU to run the PhysX games, much less 2 GPUs. I've run every PhysX game maxed out with a single 8800GT except Metro 2033, which for me is the best game overall this year but is an unoptimized piece of crap too.

Don't confuse marketing with actual value. Just because the marketing sucks, that doesn't make the tech any worse. Besides, half the people flaming PhysX in this thread (and any other forum) have never seen it in action, which is laughable at best...
 
LOL, all you can do is make up hypothetical excuses. Unfortunately for you, you blur the line between fact and fiction so much that when shown a practical example you can't rebut it, even though you still try. This thread discusses the merit of PhysX's efficiency, and per Mafia II it clearly is not a very efficient API.
 
PhysX is as efficient as any other physics engine on the market; that's all that matters. No game is as efficient as it could be, so what are we complaining about here? The recommended specs are just marketing; everybody knows you don't need the highest-end GPU to run the PhysX games, much less 2 GPUs. I've run every PhysX game maxed out with a single 8800GT except Metro 2033, which for me is the best game overall this year but is an unoptimized piece of crap too.

I call bullshit on that one. I had a GTX 260 (216), and at the time that was second only to the GTX 280 or GTX 285. I played Mirror's Edge hoping to see PhysX in action and it was balls: so very slow and choppy. That was when NV didn't stipulate that you really ought to have a second card for PhysX, as just one of their cards really can't do it, even though you buy a card and it states PHYSX!

I got a GTX 295 and it played it fine, because it has two GPUs: one for the game and one for PhysX (in principle). NV putting the PhysX badge on graphics cards is a downright con, seeing as you really need a second 'dedicated' GPU for it (i.e. it might as well be the old Ageia PPU card).

Fact is, if you go out and buy a GTX 2xx or low-end GTX 4xx series card that states 'PhysX' and then expect your single lovely card to play games at a good res with PhysX, guess what? Not happening. NV should just start shipping GPUs for PhysX only and that way start making money from ATI users too.
 
LOL, all you can do is make up hypothetical excuses. Unfortunately for you, you blur the line between fact and fiction so much that when shown a practical example you can't rebut it, even though you still try. This thread discusses the merit of PhysX's efficiency, and per Mafia II it clearly is not a very efficient API.

All that I can tell is that other games which use Havok or their own engines don't run better or have improved physics compared to the ones using CPU-only PhysX. It's not any hypothetical excuse; like I said, let's see if they make the same tests with other APIs and see what happens, there may be surprises. Like I've said before, if PhysX is so crippled because it uses x87, woooooah, it must be the best physics engine by far, as it's not at any disadvantage to the others at all when running on the CPU in games like UT3 (see the 25,000-barrel video posted earlier), Mass Effect, Borderlands...

Like many have said, x87 is not deprecated at all, and it's not as old and useless as the article pretends it to be. There are no proofs whatsoever that using SSE would make PhysX any faster, just the wishful thinking of a guy who used to make better articles than this one. He used to do things scientifically, not like this poor excuse for a benchmark.
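
For what it's worth, the kind of test that would actually settle the SSE question is not hard to sketch: time the same particle update written scalar (roughly what x87-style code generation amounts to) and with SSE intrinsics, four floats at a time. Everything below is illustrative, not PhysX code; build with auto-vectorization off (e.g. gcc -O2 -fno-tree-vectorize) so the scalar loop stays scalar:

#include <stdio.h>
#include <time.h>
#include <xmmintrin.h>                    /* SSE intrinsics */

#define N (1 << 20)                       /* 1M particles */
static float pos[N], vel[N];

/* scalar: one float per iteration */
static void step_scalar(float dt)
{
    for (int i = 0; i < N; i++)
        pos[i] += vel[i] * dt;
}

/* SSE: four floats per iteration */
static void step_sse(float dt)
{
    __m128 d = _mm_set1_ps(dt);
    for (int i = 0; i < N; i += 4) {
        __m128 p = _mm_loadu_ps(&pos[i]);
        __m128 v = _mm_loadu_ps(&vel[i]);
        _mm_storeu_ps(&pos[i], _mm_add_ps(p, _mm_mul_ps(v, d)));
    }
}

int main(void)
{
    for (int i = 0; i < N; i++) { pos[i] = 0.0f; vel[i] = 1.0f; }

    clock_t t0 = clock();
    for (int r = 0; r < 100; r++) step_scalar(1.0f / 60.0f);
    clock_t t1 = clock();
    for (int r = 0; r < 100; r++) step_sse(1.0f / 60.0f);
    clock_t t2 = clock();

    printf("scalar: %ld ticks  sse: %ld ticks\n",
           (long)(t1 - t0), (long)(t2 - t1));
    return 0;
}

Note that a memory-bound loop like this one may show little gain from SSE, which is essentially the "bottleneck is elsewhere" argument; a compute-heavy kernel would show more.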

I call bullshit on that one. I had a GTX 260 (216), and at the time that was second only to the GTX 280 or GTX 285. I played Mirror's Edge hoping to see PhysX in action and it was balls: so very slow and choppy. That was when NV didn't stipulate that you really ought to have a second card for PhysX, as just one of their cards really can't do it, even though you buy a card and it states PHYSX!

I got a GTX 295 and it played it fine, because it has two GPUs: one for the game and one for PhysX (in principle). NV putting the PhysX badge on graphics cards is a downright con, seeing as you really need a second 'dedicated' GPU for it (i.e. it might as well be the old Ageia PPU card).

Fact is, if you go out and buy a GTX 2xx or low-end GTX 4xx series card that states 'PhysX' and then expect your single lovely card to play games at a good res with PhysX, guess what? Not happening. NV should just start shipping GPUs for PhysX only and that way start making money from ATI users too.

And I call BS on yours instead:

http://www.pcgameshardware.com/aid,...-plus-comparative-screenshots/Reviews/?page=3

[Mirror's Edge benchmark chart at 1920 (ME_1920.PNG)]


I don't know what you did, but you can play ME easily on a GTX 260, and I can tell you 100% that I played Mirror's Edge maxed out with an OCed 8800GT, and as you can see it's possible.

I call BS on needing a dedicated card as well, as I have never seen any significant improvement from adding my 8800GT to my newer 9800GTX+ as a dedicated card, and:

http://www.pugetsystems.com/articles.php?id=69

[benchmark chart from the Puget Systems article]


So you know what I think? You've never really seen PhysX and you are full of BS.

EDIT: I didn't play Cryostasis maxed out, btw. I just remembered that I had to lower the resolution to 1280x960 and 2xAA. But Cryostasis is another unoptimized game coming from Eastern Europe. They make the best games of the last few years IMO, but they are highly unoptimized. Cryostasis just doesn't run well whether you use accelerated PhysX or not. I tried disabling PhysX to see if that increased performance but it didn't at all (a couple fps difference), so I left it on.
 
I wonder if it would make any difference if an 8600 or 8400 was used with a 285 in that graph for all PhysX-enabled games. TBH, who cares what instruction set is used; x87 is almost as old as x86, and the majority of x87 instructions have been superseded by SSE2, but read the info here for quick reference: http://en.wikipedia.org/wiki/X87

Now, what is not right is forcing people to get a PhysX PPU or an NV card solely for that kind of processing.
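
To put the instruction set point in concrete terms: which floating-point unit gets used is just a code generation choice for the same C source. A minimal sketch (the function and file name are made up for illustration; the 32-bit x86 GCC flags are real):

/* fpmath_demo.c - same source, two float pipelines on 32-bit x86:
 *
 *   gcc -m32 -O2 -mfpmath=387 -S fpmath_demo.c        -> x87 code (fld/fmul/fstp)
 *   gcc -m32 -O2 -msse2 -mfpmath=sse -S fpmath_demo.c -> SSE code (mulss/addss)
 *
 * Compare the two generated fpmath_demo.s files to see the difference. */
float dot3(const float *a, const float *b)
{
    /* the kind of tiny kernel a physics engine evaluates constantly */
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}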
 
http://arstechnica.com/gaming/news/...cpu-gaming-physics-library-to-spite-intel.ars

NVIDIA: the bottlenecks are elsewhere

We spent some time talking through the issue with NVIDIA's Ashutosh Rege, Senior Director of Content and Technology, and Mike Skolones, Product Manager for PhysX. The gist of the pair's argument is that PhysX games are typically written for a console first (usually the PS3), and then they're ported to the PC. And when the games go from console to PC, the PC runs them faster and better than the console without much, if any, optimization effort.

"It's fair to say we've got more room to improve on the CPU. But it's not fair to say, in the words of that article, that we're intentionally hobbling the CPU," Skolones told Ars. "The game content runs better on a PC than it does on a console, and that has been good enough."

NVIDIA told us that it has never really been asked by game developers to spend any effort making the math-intensive parts of PhysX faster—when it gets asked for optimization help, it's typically for data structures or in an area that's bandwidth- or memory-bound.

"Most of the developer feedback to us is all around console issues, and as you can image that's the number one consideration for a lot of developers," Skolones said.

Even after they made their case, we were fairly direct in asking them why they couldn't do a simple recompile and use SSE instead of x87. It's not that hard, so why keep on with the old, old, horrible x87 cruft?

The answer to this was twofold: first and least important, is the fact that all the AAA developers have access to the PhysX source and can (and do) compile it for x87 on their own.

The second answer was more important, and surprised me a bit: the PhysX 2.X code base is so ancient (it goes back to well before 2005, when x87 was deprecated), and it has such major problems elsewhere, that they were insisting that it was just kind of pointless to change over to SSE.

When you're talking about changing a compiler flag, which could have been done at any point revision, the combination of "nobody ever asked for it, and it wouldn't help real games anyway because the bottlenecks are elsewhere" is not quite convincing. It never occurred to anyone over the past five years to just make this tiny change to SSE? Really?

Of all the answers we got for why PhysX still uses x87, the most convincing ones were the ones rooted in game developer apathy towards the PC as a platform. Rege ultimately summed it up by arguing that if they weren't giving developers what they wanted, then devs would quit using PhysX; so they do give them what they want, and what they want are console optimizations. What nobody seems to care about are PC optimizations like non-crufty floating-point and (even better) vectorization.

"It's a creaky old codebase, there's no denying it," Skolones told Ars. "That's why we're eager to improve it with 3.0."

We are again at the same point we reach every time Nvidia is accused of crippling something. And the answer is always that developers are lazy regarding the PC, because the PC is just powerful enough that performance does not matter (or they are so strangled by their publisher that they have no time to make anything better on the PC). The most important point here, IMO, is that developers do get the code and do compile it themselves, and they still use x87. The bottleneck is elsewhere, which has already been mentioned, and that's why we don't see any significant difference with other physics APIs, which may or may not use SSE.
 
But still, when you boil it all down, we keep coming back to the point that it's so easy to switch from x87 to SSE, and x87 has been deprecated for so long, and it's so much to NVIDIA's advantage to be able to tout the GPU's superiority over the CPU for physics, that it's very hard to shake the feeling that there's some kind of malicious neglect going on...
source
This is from the same article. And if you read carefully, they blame developers but also say that PhysX needs a complete re-write :laugh:, which should happen with 3.0 next year. A re-write that should have happened a long time ago by now. So there is no relation between the developers and the inefficiencies of PhysX, as developers are not responsible for the maintenance, upkeep, etc. of PhysX. x87 code was in use before the 386 CPU came out, if I remember correctly.
 
Honestly, I have yet to see a game where PhysX needed a dedicated GPU to render correctly that wasn't a gimmick. Batman: AA needed one (a dedicated PhysX GPU) and it had HALF the physics of BC2. It's all smoke and mirrors on Nvidia's end, IMHO.
 
Well, it's smoke, sparks, flying paper and broken stuff. Oh, and don't forget a wavy flag and other small stuff :rolleyes:. All of which needs 2 (high-end) video cards to do.
 
Hi Ben. I shouldn't have implied you were talking BS. My experience of PhysX is very real though. Mirror's Edge was poor. I played Cryostasis on my GTX 295 though, and that was fine, but an incredibly shit game.
There's no way around the fact though that NV are being asses with their attitude to PhysX.
 
That's odd. Mirror's Edge runs great with PhysX enabled on my E2200/9800 GT.
 
What makes me laugh though is that if you look back at the CellFactor demo from when PhysX was first released, they haven't really improved on any of that. And what's funny is that that particular demo, with all those effects, ran at the same speed on a single-core CPU as it did on the PPU, and you're telling me all those effects, which haven't changed for the most part, now need 100x the power and have to be run on a GPU? I call shenanigans. The fact is, in CellFactor, PPU vs. CPU was a 1-5 fps difference with mass sandbox physics. The fact that in 5 years nothing has changed at all, the effects haven't gotten bigger or better, they've stayed the same, yet somehow the power to run them has to increase 20x: again, shenanigans.
 