Tuesday, April 15th 2008
NVIDIA CUDA PhysX Engine Almost Complete
Although NVIDIA bought AGEIA Technologies only two months ago (on February 13, 2008), the GeForce maker recently announced that the port of AGEIA's PhysX API engine to CUDA, the programming language that interfaces with its GPUs, is almost complete. Upon completion of the port, owners of GeForce 8 and 9 series graphics cards will be able to play PhysX-enabled games without needing an additional AGEIA PhysX PCI card. The big question is how much this PhysX workload will hurt frame rates in games. For now we only know that NVIDIA showed off a particle demo at its recent analyst day that was apparently similar to Intel's Nehalem physics demo from IDF 2008. For the record, the Nehalem demo managed 50,000 - 60,000 particles at 15-20 fps (without a GPU), while NVIDIA's demo on a GeForce 9800 card achieved the same number of particles at an amazing 300 fps, quite a boost. NVIDIA's next-gen parts (G100: GT100/200) could in theory double this score to top 600 fps. Manju Hegde, co-founder and former CEO of AGEIA, added that in-game physics will be the "second biggest thing" in 2008.
Source:
TG Daily
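To give a sense of the workload behind those particle numbers, here is a minimal sketch of the kind of kernel a CUDA physics port runs, with one thread integrating one particle per frame. This is purely illustrative, not NVIDIA's demo code; the kernel name and the plain Euler step are assumptions.

```cuda
// Hypothetical sketch of a CUDA particle-physics step: one thread per
// particle, a simple Euler integration under gravity. Real demos add
// collisions and inter-particle forces on top of this.
__global__ void integrateParticles(float3 *pos, float3 *vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;                    // guard the last partial block

    vel[i].y -= 9.8f * dt;                 // gravity
    pos[i].x += vel[i].x * dt;             // integrate position
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// Host-side launch for ~60,000 particles in 256-thread blocks:
//   integrateParticles<<<(60000 + 255) / 256, 256>>>(d_pos, d_vel, 60000, dt);
```

At 300 fps, a step like this (plus the expensive collision work) runs 300 times per second over all 60,000 particles, exactly the kind of embarrassingly parallel job GPUs are built for.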
53 Comments on NVIDIA CUDA PhysX Engine Almost Complete
You put it as if you were all for competition, but you'd rather see AMD (or ATI, I can't really decide what to call them anymore) follow Intel's path than collaborate with Nvidia on physics.
Why would AMD capitulating to Intel's way of doing physics be better than AMD collaborating with Nvidia on physics processing? They don't have to support Nvidia in this, but should they instead hand even more power to Intel?
IMHO the veil in front of your eyes doesn't let you see reality. What Nvidia is doing with physics is one of the best moves in the gaming industry, and everyone should be supporting it, or implementing their own physics on the GPU.
You base your thinking on the assumption that AMD couldn't compete with Nvidia if they tried to do their own physics engine, and you could be right, but the idea that AMD siding with Intel is better than siding with Nvidia, just because in the latter case Nvidia could have some advantage in the market, is stupid. And it only demonstrates your hatred of Nvidia.
Besides, although ATi might be owned by AMD, they're not under an iron rule in any sense - AMD has given them more than enough rein to do as they wish.
But I don't remember anyone singing ATI's praises when they demonstrated their physics processing capabilities in the past. It was dropped in the meantime, as there wasn't any way for it to reasonably go forward. TBH, do I think AMD/ATI would've purchased a physics company if they had the resources? Truly, I doubt they would have, even if they could. The market was too iffy, and no one was sure how game dev support would pan out.
I think you've totally misconstrued what I was trying to say - I didn't claim that this move on nVidia's part isn't great for the gaming industry - quite the contrary. Physics have been growing by leaps and bounds over the last few years, and the move to a dedicated/partially dedicated processing unit is inevitable.
I never said that I don't foresee ATI and nVidia collaborating down the road if it means the betterment of the technology within the gaming market itself; they have collaborated on different issues before. That's nothing new. But as far as competition goes, not collaboration, I don't see the two working together in such a new market as this. I don't foresee ATi approaching nVidia, nor do I see nVidia approaching ATI; I find it more reasonable that Intel would approach ATI, or ATI would approach Intel. Now, on the other hand, should Intel push forward by themselves, and should their solution end up being majorly better than nVidia's (which I doubt overall), then yes, nVidia partnering with ATi to compete against Intel is highly probable.
Again, this is all speculation at this point, and trying to read further into my previous posts for some kind of subliminally encoded message is rather pointless, IMO, as there isn't one. It's all speculation and my opinion. Not trying to start a flame war here, as lord knows we've seen more than enough in various threads, just correcting a misunderstanding.
That's why I originally stated that I hope ATI stays out of it for the time being. I know they're more than capable of competing on the hardware side of it, but they can't afford to lose out to nVidia in a whole new league of game benchmarking, and ATI has neither the funding nor the marketing to push their side at the moment.
I don't want Intel doing with GPUs what they did to the PPU (understanding the GPU as the complete system that Nvidia is developing right now). Intel tried very hard to demonstrate that CPUs could do the same work the Ageia PPU could do. To do that they did two things: one was showing some multi-core CPUs doing the same work as the PPU, and the second was using their fastest Quad to do, not the same work, but just "enough" work that a PPU wasn't worth it (with the poor use of physics in games, AKA poor support of Ageia, that was easy). They built a big campaign around this, but they conveniently forgot that those multi-core chips wouldn't be released for at least another four years, and that the Quad required to get "enough" physics processing cost $1500. Not to mention that the whole idea of a PPU was not doing just 2x the physics, but a lot more than that. Intel has always tried to slow down the adoption and evolution of physics in games, so that physics didn't get out of reach of their CPUs, and they are doing the same right now, not only with physics but with graphics too. They are trying to establish the idea that a $1500 CPU is better than a $500 CPU + $500 GPU + $500 PPU or any other combination of those, and that will never be true. But they are trying hard to make the gaming industry move towards a model that would make the above easier to accomplish.
In the end, I think it's only a matter of which side you choose to lead the future of gaming. The only feasible physics in the future are either software physics (Intel's way) or hardware-accelerated physics (Ageia's way, and now Nvidia's). And contrary to what Intel is trying to show, software (CPU) physics are not a revolution, but a linear evolution of the physics we have today. Ageia/Nvidia are aiming at 10x bigger physics. For that reason, I really hope that if Ati can't do their own physics, they go towards Ageia and not towards Intel. I'm always amazed at how some people dislike Nvidia because of their supposed tactics (primarily based on rumors and outright lies about TWIMTBP), but have no problem with Intel, the one facing litigation over unfair competition and monopolistic tactics, this time based on actual evidence...
EDIT: BTW, there were numerous rumors about Ageia and DAAMIT negotiating a purchase before Nvidia had even thought about it. The same rumors say it was Ageia who approached Nvidia after that negotiation failed.
I think they should implement both. Of course a dedicated card would do things faster, but you would need an SLI-capable board. I don't want physics as a way to promote SLI, I want it as a feature that anybody can benefit from. Also, if someone is running, for example, 9900 GTX SLI, they should have the option to use 1/8 of each card and get better visuals + better physics than someone with a 9900 GTX for graphics + a 9600 GT for physics, and not have to either give up physics (both cards for graphics) or graphics (one of the 9900 GTXs used for physics), or buy an extra card for physics. I think there should be an option in the drivers to use as much power as you want for physics, as sketched below. Following the same example, someone with just one 9900 GTX might want to sacrifice a lot of visuals in exchange for great physics.
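Conceptually, that driver slider could amount to something like the following. This is a hypothetical sketch, not how any shipping driver works: the function name and the approach of capping the physics launch at a fraction of the GPU's multiprocessors are my assumptions, and a small grid only bounds the work per launch rather than truly partitioning the chip.

```cuda
#include <cuda_runtime.h>

// Stand-in for whatever the actual per-frame physics step is.
__global__ void physicsKernel(float *state, int n) { /* ... */ }

// Launch the physics work sized to a user-chosen fraction of the GPU,
// e.g. fraction = 1.0f / 8.0f for the "1/8 of each card" idea above.
void launchPhysicsSlice(float *state, int n, float fraction)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    int blocks = (int)(prop.multiProcessorCount * fraction);
    if (blocks < 1) blocks = 1;        // always run at least one block

    physicsKernel<<<blocks, 256>>>(state, n);
}
```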
Anyway, probably the best and cheapest solution for the moment would be using the IGP in a Hybrid SLI setup for physics and dedicated cards for graphics. Given the advantages, I don't see any reason for them not to build all their chipsets with IGP and Hybrid SLI anymore. The possibility of using the IGP for physics only adds one more advantage, because otherwise the IGP would be useless alongside a high-end card. But I'd still prefer the ability to set whatever power you want for physics in the drivers, and for games to have scalable physics, as Crysis does, rather than a fixed solution.
+1 on implementing both, they'll really need to work on drivers for that sort of thing. but i'd love the advantage of selecting exactly how to allocate your power.
and +1 for the IGP idea, the IGP could give you nice entry-mid level physics and you'd have the whole gfx card for visuals, or with your ideas, anywhere in between virtually!
still for myself i have a 680i board so having 2 cards in sli with one in the middle for physics would be nice, or even just my 9800GTX for gfx and 2x8600GT for physics, or maybe even my 8800GT.... also my mAtx board has an open ended pci-e 4x slot, which should harness everything an 8600GT has to offer :D
so at least in my situation (P5N32-E SLi / G33M-DS2R and 9800GTX - 8800GT and 2x8600GT) i'll be completely set for CUDA PhysX :pimp:
and i likes what you says Darkmatter :cool:
rock on nvidia, rock on :rockout:
But I stoutly agree that I would really not like to see Intel get involved with physics processing, for the specific points you mentioned. Intel hasn't done anything "innovative," IMO, in the last 5-6 years. Their method of approach is typically brute force. It's not a matter of being efficient or effective (which requires some serious thought during development), but more a matter of whether they can solve the problem with the current hardware at hand.
TBH, I'd truly love to see ATI and nVidia pioneer this new technology together, instead of falling into competition. Both companies have key areas where their hardware excels as far as physics processing is concerned, and both are poised to benefit from a mutual agreement - but they don't always see eye to eye, and it's easier for them to just fall to competition instead of collaboration. Time will tell.
As to the Ageia thing - I vaguely remember hearing something like that, but I was nowhere near sure about it.
And as to my feelings towards nVidia: great hardware, drivers can be dodgy at times (but whose aren't now and then?); but will I ever purchase their products? No. After the way they treated us 3DFX owners back in the day, my dealings with their support divisions over some of my father's hardware, and how they do business sometimes, I refuse to purchase their products. For me it has nothing to do with their TWIMTBP campaign - on the contrary, I feel it was (and has been) a brilliant marketing move. But I've been treated better by ATI, and will continue to stay loyal to them without resorting to fanboyishness (at least I try not to :p).
Intel - I've owned their products for years and years, but I don't always agree with their methods and tactics. I've never had any bad run-ins with their support or otherwise, I just don't always agree with their campaign maneuvers.
But Havok physics already run in Valve games with little performance penalty for the calculations. I see this as more of an attempt by Nvidia to sell more cards, not so much an actual performance breakthrough for the gaming community.
But that's exactly what I was saying. Havok physics are as good as Ageia's, but they lacked hardware acceleration, and without it the best you can do on a CPU, even a Quad, is what Crytek did in Crysis, which has IMO the best physics seen in a game to date, thanks to their own physics engine.
Havok did see they needed some kind of hardware acceleration and started to design GPU physics along with Ati and Nvidia. But before they could achieve anything, Intel acquired Havok. Coincidence? Remember what I said about Intel trying to slow down physics adoption?
I guess, based on what you said, that physics are not for you. You might be OK with HL2's level of physics, but that's by no means close to what most gamers could expect from a game. If you can "survive" with HL2 physics, you could survive with HL2 graphics forever too. I had wanted some kind of physics and interactivity with the world since Duke 3D, and got nothing relevant until Severance: Blade of Darkness in 2001. From then until HL2 there was nothing new, and from HL2 until Crysis the same.
Now I want truly fully destructible environments, like the one shown in the revamped Unreal Engine 3.5 demo, but with a ton more particles and realism. I want smoke or water or any fluid being displaced by wind, NPCs, objects, bullets (hell, I want to see blue smoke displacing red smoke in real time, and realistically mixing with it at the same time, of course)... I want decent cloth applied to NPCs, curtains, posters on the walls, newspapers on the ground... I want realistically deformable objects based on their expected properties. And I want all this applied to everything in the game world without frame rates going down. Crysis does better and more abundant physics than other games, but when lots of particles interact with each other, even Quads drop to single-digit frame rates. I want more, and I want it to run smooth.
I never bought Ageia's PhysX card, because of the lack of support in games and the high price. An add-in card would never work in the market, in my opinion, but if two or three games had shown anything really interesting, something that couldn't be emulated on the CPU, I would have bought it. The problem with the games was that developers couldn't justify the expense of making two separate games, one for Ageia and the other for the CPU, and they couldn't make a game that only ran on Ageia hardware, because the user base was minimal. So they ended up making the same game with some minor enhancements, using the hardware just to free up CPU cycles. The problem with this is that the game was probably using only 10% of the PPU's power for physics, so it was pointless. Nvidia can do both price and support a lot better. Making a game that can only run with hardware-accelerated Ageia physics makes sense now, considering that any 8-series and up card can run it.
But, for example, in 2004 the best game I can think of off the top of my head was Thief: Deadly Shadows, making use of the Havok engine. In-game physics were really good, and if you had the chance to check out fan missions, or delve into designing a mission, you'd see the physics engine wasn't utilized by the game devs to its fullest potential. I've seen some amazing things done with the fan missions, though.
The next great physics implementation, IMO, was FEAR in 2005, also using Havok. One could interact with 90% of the game world in some shape or form, and a lot of that interactivity was needed for those "FEAR moments."
My only issue with Havok games revolves around dead AI being way too rubbery (the "rag doll" effect).
Crysis set the bar higher - now everything could be interacted with, and in a more realistic way as well.
Sadly, though, in most newer games I don't even find the physics all that memorable, which is a surefire sign for me that the game was lacking in that area. In STALKER, for example, the physics just weren't that memorable.
HL2 was memorable in that a lot of things were interactive: weight, reactions, fluids, dynamics. Yes, not as much as today, but much more than we were previously used to. A stream-processing-capable X1K-series card or a $64 HD2400 would be well worth the money. But a $200 upgrade that necessitates another graphics card upgrade to process the extra overhead is a loop of shit.
Games could have a dedicated thread running on a GPU through DirectX, or through an API developed by the game makers or hardware vendor, just as easily or more easily than the horseshit associated with a single company developing a proprietary interface only supported by a few game makers. This was the reason Ageia failed; Nvidia bought them but is still looking to make it a green-camp-only thing. There is a large chance they will fail.
Looking at hardware in general, at distribution as well as hardware advancement, you will see that most users who play Steam games do use Nvidia (something like 70%), but those with high-end hardware represent a minority in the whole scheme of things. So in a market of 10 million gamers: 70%, or 700,000; only 20% of those with the hardware to support your new gadgetry, 14,000; and of those, the percentage that will purchase an additional card to support the extra overhead or to perform the task might be 50%, or 7,000 people.
So in real life, where companies have to make ventures profitable, this will probably be a failure just like Ageia was, until the market is ready to bear it, or until it becomes an open standard supported by both top dogs. But knowing Nvidia, they will throw money at the problem and claim victory.
and as i understand it, games that leave the gpu with a lot of excess power (hundreds of fps) would be able to use PART of the gpu power to run PhysX, adding detail without causing undue slowdowns (nobody "needs" more than 80fps... or vsync for their monitor)

ok everybody stop, throw away your ati and amd hardware, Dan says it sucks so we should all run out and grab intel+nvidia rigs right now....
Second, an engine specifically made by the developer could (Crysis) or could not be better than an API like Ageia's or Havok, because those are middleware, and a purpose-built solution always has the potential to beat middleware. But it can also happen that the developer doesn't know how to make an engine efficient enough to compete with them, despite the overhead associated with middleware; most developers use Havok or Ageia for a reason. If you are talking about an API made by a developer for use in all their games, the same applies, but in that case Ageia and Havok (or the like) have even better chances of winning. Swap "game developer" for "hardware vendor" in the above and it applies too.
Finally, you talk about the current user base, but this is all about the future. Developers could start making games that use heavy physics right now, because they know that in one or two years a lot of people (I would say at least half of gamers) could have Nvidia 8-series and above cards, regardless of why people chose to buy them. They couldn't do that with Ageia's card. It's a rolling wheel: developers can make such games because they know that at least 50% of the graphics market share is going to be physics-capable, and Nvidia (and Ati, if they jump in) can put more emphasis on physics once they know those games are coming out.
EDIT: BTW, in a market of 10 million, your own percentages would actually give 7 million, then 1.4 million, then 700,000. And there were more than 100 million discrete graphics cards sold in 2007 alone. There are way more than 10 million gamers in the world, believe me...
www.xbitlabs.com/news/video/display/20080404234228_Shipments_of_Discrete_Graphics_Cards_on_the_Rise_but_Prices_Down_Jon_Peddie_Research.html
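The funnel math is easy to check. A trivial sketch in plain host code, using the 70%/20%/50% rates from the post above (variable names are mine):

```cuda
#include <cstdio>

int main()
{
    double gamers  = 10000000.0;      // assumed market of 10 million gamers
    double nvidia  = gamers  * 0.70;  // ~70% on Nvidia: 7,000,000
    double capable = nvidia  * 0.20;  // ~20% with capable hardware: 1,400,000
    double buyers  = capable * 0.50;  // ~50% adding a physics card: 700,000

    printf("nvidia=%.0f capable=%.0f buyers=%.0f\n", nvidia, capable, buyers);
    return 0;
}
```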
EDIT2: I have thought about this further, and I realize that I (we) have been looking at this wrongly, as if you had to have an Nvidia card doing the rendering for this to work, plus an SLI-capable board if you were to use a second card for physics. That might not be true. They are doing physics through CUDA, so essentially you could use an Ati graphics card on an Intel board and a cheap Nvidia card for physics. In theory this is possible, and it doubles the user base. A Quad is not enough by a great margin, and Nehalem (with 16 threads, IIRC) is not enough. How could the much slower Atom be enough then? It's enough for what we have today, and might be enough for what Intel wants the future of physics to be, but it's not enough to compete with Ageia's hardware solutions, be it their card or CUDA running on Nvidia cards.
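That reading fits how the CUDA runtime works: it enumerates CUDA-capable devices on its own, independent of which card is rendering. A minimal sketch using standard CUDA runtime calls; the choice of "last device for physics" is just an assumption for illustration:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);               // all CUDA-capable GPUs present

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);    // G8x+ parts show up here
        printf("device %d: %s (sm %d.%d)\n", d, prop.name, prop.major, prop.minor);
    }

    if (count > 0)
        cudaSetDevice(count - 1);             // e.g. dedicate one GPU to physics
    return 0;
}
```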
49K, or 4.56% of users, have an 8800, and 190K have lower-end video cards.
Users with an additional card: 12.4K, or 1.19% of total users.
Pie chart time.
No Nvidia 9XXX-series cards are listed yet.