Discussion in 'Graphics Cards' started by EastCoasthandle, Feb 4, 2008.
This is the real question.
No, it's more of "Buy GeForce, get PhysX as well". I really don't see the PhysX API running on a GPU doing wonders. In turn, it could slow down graphics.
So a smart buyer could be convinced into "PhysX API with GeForce, Havok API with Intel" and end up with an Nvidia + Intel system. Sure, Phenom does Havok too, but people will choose a cheap Intel Q6600 over a Phenom.
Another message would be "Buy ATI, lose PhysX features".
I hate NVidia now. Two good brands, ULi and 3dfx, were killed, and now this.
One of the most disputed features of 3dfx was the "blur" feature. I forget the technical name, but it got a lot of flak back then.
ULi made good chipsets, but I never used them, so I really can't comment on their quality.
However, in what way have we seen ULi or 3dfx in the G80 or G92 video cards? I really don't see it.
Exactly. They bought them so that no one else could benefit.
The ULi chipsets made good third-party SBs for ATi motherboards, and their NBs were getting much better.
3dfx was bought out mostly to get the features for themselves, but I can't remember what else I was going to add... urm... nope...
How badass are games going to be if we need 3 or 4 GPUs and PPUs and a 4-core CPU? All I can say is they'd better look like live people floating in bowls of jello being shot at with bolts of lightning.
If games look any more realistic, we'll start viewing life as a fake.
Frankly, DX9c already has good looking games made with those specs, IMO.
Remember that computer hardware just keeps evolving. We may need two graphics cards and a powerful CPU for Crysis now, but in another year or two we'll be able to use a mid-range, maybe even low-end, system and run Crysis like it was just another retro game.
Nvidia killed those companies. Their engineers now work for the GPU and chipset divisions of NVidia, most likely. Besides, it would be interesting to see if the PhysX PPU survives at all. Even if the API runs on an NVidia GPU's shaders, it will clearly hit the GPU's graphics performance.
I liked ULi for the fact they always provided Win9x drivers for their stuff, something nVidia, ATI, and later on VIA and SiS stopped doing. I guess they wanted all the edge they could get. They were on the way up for sure.
I know this post is long, sorry for that, but I think I have some good points worth debating. Please read and tell me what you think, guys.
In what way has Nvidia used Uli and 3dfx? That's your question?
Well, in G80 and G92, very few things, if any, I guess.
But IIRC, before Nvidia bought ULi, they only offered one chipset, and it was an enthusiast one. Later on, Nvidia offered the high-end chipset along with its smaller brother.
3dfx: They designed the GeForce FX series entirely. Any doubts about why we don't see their "touch" on later GPUs? I'm sure there are some features in them developed by the people who came from 3dfx; it's just not apparent on the surface. Not sure, but I think the Lumenex engine has something to do with them. I think I read that somewhere, but can't find a link to it.
BUT if you want to see 3dfx in Nvidia, you don't have to look far. SLI is a 3dfx invention and was first developed by 3dfx. That's where 3dfx's influence is clearest.
Now on topic. IMO they bought Ageia because they had already integrated some kind of support for physics in their future designs, maybe in conjunction with Havok, and now, without them, they had hardware support without any API or compiler to work with. This support would come in the shape of the GPU's instruction set and maybe how the SPs work internally. Of course, both GPU and PPU internal units are no more than floating-point units, but the difference lies in how they operate. As an example, think of AMD's FireStream stream processor based on RV670: it does double-precision (FP64) at 1/4 the speed of single-precision. Or you can think of the SSE extensions on the CPU.
There are some posts in this thread claiming that physics calculation on the GPU would cripple graphics performance. That's not exactly true. I'm of the opinion that G80 was bottlenecked by SPs, but this doesn't happen with G92, meaning there's some spare shading power, especially on the GTS. Ageia PhysX has a peak of 50 GFlops; current GPUs are around 500 GFlops. You can see where I'm going. And if we are to believe the leaked info about the 9800GTX, my point is much more feasible:
- 384 SPs and over 1 TFlop.
That translates to:
- Double the SP/TMU ratio compared to the current gen: 384/96 = 4 vs. 128/64 = 2
- A 50% increase over G92 in SP/ROP ratio and a 125% increase over G80: 384/32 = 12 vs. 128/16 = 8 vs. 128/24 = 5.33
- A hell of a lot higher memory bandwidth.
With those specs (if true), it's easy to conclude that Nvidia was expecting a big increase in SP usage and data traffic. Are games in the near future going to see such an increase in shading usage for their graphics alone? I don't think so, because that would render the current generation, a big base for their games, useless. But what about a feature like Ageia's physics? I mean something that would be more of an added feature, a bonus, just like PhysX is now?
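The ratio arithmetic above can be double-checked with a quick sketch. All the unit counts here are the rumored, unconfirmed figures quoted in this thread, not official specs:

```python
# Unit counts as quoted in the thread: rumored 9800GTX vs. G92 (8800 GTS 512)
# and G80 (8800 GTX). These are rumor figures, not confirmed specs.
sp_new, tmu_new, rop_new = 384, 96, 32   # rumored 9800GTX
sp_g92, tmu_g92, rop_g92 = 128, 64, 16   # G92
sp_g80, rop_g80 = 128, 24                # G80

print(sp_new / tmu_new, sp_g92 / tmu_g92)                     # 4.0 2.0 -> double the SP/TMU ratio
print((sp_new / rop_new) / (sp_g92 / rop_g92) - 1)            # 0.5  -> 50% higher SP/ROP than G92
print(round((sp_new / rop_new) / (sp_g80 / rop_g80) - 1, 2))  # 1.25 -> 125% higher SP/ROP than G80
```

So the 50% and 125% numbers in the list above check out, given those rumored counts.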
Let me know what you think of this, guys...
Hmm, just read about this, and I think Nvidia wants to get the edge over their competition. I actually think they want AMD to scrap their graphics division. Only one company can be in the lead, and there must be a reason for that, but when there are more than two companies there is choice, and it allows other companies to not be in the lead but still be successful.
In all honesty, yeah, it does make sense, because we might see a graphics card with a PPU on it. Honestly, I'd like to see a multi-core graphics card so we don't need SLI.
Just another possibility for the near future: Hybrid SLI.
I don't see why they wouldn't make all of their next chipsets with this feature, as it's wonderful for our electric bills, our PCs' heat output, and noise. The thing is that, as it stands right now, it doesn't have any real benefit (performance-wise) when paired with a high-end graphics card. But mix the ability to do GPU physics with Hybrid SLI and voila!
What do you think?
DarkMatter, you have an interesting POV. It's really hard to say how this will turn out. But time will tell if we will actually see the use of the PPU in future video cards.
Now I'm confused.
Is it going to be that NVidia will continue production of PPU cards, or will the PPU be scrapped and the API implemented in a way that the shaders do the physics processing? BTW, just so you know, a PhysX card at full load consumes 25W~30W. source
Some are speculating that it may be scrapped (to prevent Intel from getting it), while others believe it may be incorporated into future video cards. But if that happens, it's possible it wouldn't be at the capacity we see today. As we discussed earlier, we can't find any evidence of ULi and 3dfx in the G80 or G92. Maybe it's there and its functionality is so minute that we don't notice it.
Future Forceware drivers with the PhysX API so even current cards can run it? After all, we're dealing with fully programmable shaders. The NVidia press release talks about "millions of people being able to use the API", and there is a mention of the 8800 GT, which is current.
Either way, it's not going to matter much for a while. Realistically speaking, aside from an occasional big-name title every now and then, the kind of physics done by Ageia is not widespread or used that often nowadays. Although, 5-10 years from now that will probably change. Heck, I wouldn't be surprised if nVidia did absolutely nothing with it for years. They sat on SLI tech for years before implementing it themselves (and yes, I know 3dfx's SLI was not exactly the same as nVidia's implementation that followed).
I would say both. They will probably continue selling PPU cards for a while and in the meantime create GPUs with physics shader capabilities, instead of creating an API that runs on existing shaders.
Current GPUs COULD do physics, but they'd do it in software, in drivers, because they lack the specific hardware paths, or shaders as we call them. And once we start needing the CPU too much, that's when GPU physics starts to fade as a desirable solution. PPUs are better suited for this because they have those hardware paths, just as GPUs have theirs for T&L, color blending, etc. It's just like how you can use a knife to take out screws, but a screwdriver is much better suited to the job.
Implementing those "physics shaders" into GPUs is easy. So what's the problem then? Even if you implement those shaders, you need an API and a compiler. You could make your own API, but that would mean adding another one in a market that has just started, has very few followers, and where you are the underdog. It's easier to use one of the existing ones (it seems they first went for Havok). You still have two problems with this: licensing and the compiler. The first is clear. The compiler is another matter, but it's tied to licensing too: will they let you create the compiler according to your hardware, will they create it for you, or will they create one common to all companies and ask you to change your hardware accordingly? I guess in this situation the best you can do is buy the company before it takes off.
That picture could change in only one PhysX-capable GeForce generation. There are few games using PhysX hardware because there are probably only thousands of owners, not even hundreds of thousands I'd dare say. But we are talking about millions of graphics cards sold every year. That's a huge user base. It wouldn't be any different from EAX: they could make a hardware version and a software version. If developers can stand out by using it, they will use it.
Huh! I made another long post... I will never learn.
Sure, millions with a decent CPU can use the API; we've seen it already. If you can do it with a CPU, I'm sure a dual-core GPU solution could do the same (or some variant). However, the PhysX API won't work unless the game is made to use it, e.g. GRAW/GRAW2. We have seen very few games built to use PhysX, and honestly I don't see that changing for now. The best games that use it are the GRAW series and UT3 through its mod pack.
Before Ageia released their product, it was assumed that PhysX could take any existing game and enhance it in one way or another. But once it was released, that turned out not to be true at all. Then we all hoped that certain games would get some sort of mod pack. However, so far only UT3 received such a pack, and then Nvidia bought them. So it's really hard to say how things will turn out now.
Are DMM and Euphoria technologies in themselves? Who owns the rights to them?
EDIT: Are they both owned by Lucas Arts?
How would you push the calculations of a PPU and a GPU through one PCIe x16 slot?
They are both owned by Lucas Arts. Right now, they are releasing this on consoles. Listen to this (The Force Unleashed is the game being talked about).
DMM and Euphoria are CPU-intensive, which means a PPU really isn't needed. Read here for more information about it. It looks like we have until Sept 2008 before PC games can be licensed to use DMM and Euphoria. I haven't the slightest clue why they did that. IMHO, DMM and Euphoria should offer the best physics solutions for PC games to date. Make sure you watch the video.
I have seen those videos and there's nothing impressive about them. Ageia has far more impressive demos (that run on the CPU too, but using almost 80% of my CPU) than that DMM thing. Not to mention that the whole PPU thing is not about the effects you can create, but about how many you can create. With a CPU you can make a piece of wood break into 30 pieces; with the same CPU utilisation plus a PPU or a physics-capable GPU, you could break it into thousands of pieces, and all of them would follow Newton's laws.
The second one is similar to some demos from Meqon (bought by Ageia) that I saw back in 2003-2004, featuring a similar physics-ragdoll-AI thing to what they show there, and by no means better IMHO.
Also, Crysis has something really similar in nature to DMM, but it's obvious that when many particles or objects are affected by physics, the CPU struggles to handle them unless you have a quad core.
It's commonly known that you can run great physics on the CPU; indeed, a general-purpose CPU can do physics a lot better than a PPU or GPU. But it will never catch up to what those can do in terms of the number of effects. Intel's 80-core monstrosity boasted 1 TFlop; you can achieve that with 2 GPUs today, and a single 9800GTX is supposed to have close to 1.5 TFlops. The PhysX processor has only 50 GFlops in comparison, but you need the fastest quad Core 2 to come close to that number. You would need two of them to match a system with the Ageia processor if it were fully used. A PhysX card is $100, the fastest quad is $1400, a 500-GFlops GPU is $200, and this difference will always be there. Imagine what they could do if you could use one of the GPUs in an SLI configuration only for physics.
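Putting the post's own ballpark numbers side by side as throughput per dollar makes the point plainer. The peak GFlops and 2008 street prices below are exactly as quoted in the post, not verified benchmarks:

```python
# Peak GFlops and street prices as quoted in the post (2008-era, unverified).
hardware = {
    "Ageia PhysX PPU":     (50, 100),    # roughly matches a fast quad-core's peak
    "500-GFlops GPU":      (500, 200),
    "fastest quad Core 2": (50, 1400),
}

for name, (gflops, usd) in hardware.items():
    print(f"{name}: {gflops / usd:.2f} GFlops/$")
```

By these (quoted) numbers, the GPU delivers about 5x the PPU's throughput per dollar and vastly more than the CPU's, which is the gap the post is pointing at.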
Now physics will always need CPU power to run, as stream processors of any nature can't handle the kind of operations required for gameplay physics (physics that affect gameplay).
But I expect (I want) great advances in effects physics in the near future. Imagine a game like FEAR: when you shoot at the walls, the chamber fills with smoke, but instead of that smoke being 20 (whatever, a few) big particles, it's formed by thousands of smaller particles that move when someone passes through them or when you shoot... That would add to the gameplay (while not being gameplay physics), because in FEAR you couldn't see anything when you filled rooms with smoke, but that way you could "see" them. It would be the same difference as the FarCry/Crysis jungle fights: in FarCry you couldn't see them in the jungle, in Crysis you can if you see some leaves moving.
I never had faith in Ageia and thought they would never succeed unless they found a way to integrate it into motherboards or graphics cards, or sell it for <$50. But I never questioned the need for some sort of dedicated solution for advanced physics, and I'm not talking about science classes... Well, let's see what comes next.
I've seen what Ageia can do and was never impressed with it, especially when you have to buy a PPU to see those effects. With DMM/Euphoria you don't have to buy a PPU, which gives it a better advantage in my book.
How do you do it with GPU-only solutions? It's the same. Plus, current cards don't desperately need more than 8x; they benefit a bit from going 16x, a 5% increase or so, but 16x isn't fully used, only a bit more than 8x is needed. And taking into account that PCI Express 2.0 has double the bandwidth of PCIe 1.1, we have almost 4x the bandwidth we need available in stores.
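A rough sketch of that "almost 4x" claim. The per-lane rates are the standard PCIe 1.1 and 2.0 figures (250 and 500 MB/s per lane per direction, after 8b/10b encoding); the "x8 is enough" premise is the post's own estimate, not a measurement:

```python
# Usable bandwidth per lane, per direction, after 8b/10b encoding (MB/s).
MB_PER_LANE = {"1.1": 250, "2.0": 500}

needed = 8 * MB_PER_LANE["1.1"]       # the post's premise: ~PCIe 1.1 x8 suffices
available = 16 * MB_PER_LANE["2.0"]   # a PCIe 2.0 x16 slot

print(needed, available, available / needed)  # 2000 8000 4.0
```

So a PCIe 2.0 x16 slot carries four times what the post assumes a GPU (or a GPU doing physics on the side) actually needs.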
Come on, Ageia's cloth simulations are by far more impressive than that breaking wood, especially the cloth you can crumple. And it runs on the CPU too: 30% of max utilisation of my 4800+.
EDIT: For the record: I don't own an Ageia card, I never had one and I never will.
EDIT2: Anyway, how do you know DMM and Euphoria don't still belong to their creators, Pixelux Entertainment and NaturalMotion Ltd, rather than Lucas Arts owning them?