
Nvidia Demo Engineer explains Fermi architecture using Rocket Sled Demo

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.18/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700MHz @ 0.750V (375W down to 250W)
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Philips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Philips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
I wouldn't say it doesn't mean squat.
As far as I know, PhysX for Nvidia GPUs is written in CUDA, and OpenCL is basically 90%+ CUDA.

Therefore, if future physics engines are written in OpenCL, they'll probably do the same kinds of calculations as PhysX, and GPU performance should be at least equivalent.

As a comparison, a GPU that's very fast rendering a scene in DirectX should also be very fast rendering the same scene in OpenGL. There are performance differences due to driver optimizations, but those are usually very small.


Of course, if a generic physics engine is well coded (as in, not TWIMTBP-infected), it shouldn't be as GeForce-focused as PhysX. And maybe, if all those extra theoretical FLOPS from Cypress are actually used, ATI's implementation could turn out faster.

If they make it OpenCL, it won't matter. Game developers will not make a game that won't run on 95% of their target market's systems - my point is that Fermi's extra PhysX potential is going to get wasted.
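As an aside on the "OpenCL is basically 90%+ CUDA" point in the quoted post above, here's a rough, hypothetical illustration (not from the thread, host-side setup omitted): the same element-wise kernel written for both APIs differs mainly in the qualifiers and how each thread finds its index, which is why a physics kernel written for one maps fairly directly onto the other.

#include <cstdio>

// CUDA C version of a simple vector-add kernel, kept here as a source string
// because actually building it would need nvcc / the CUDA runtime.
static const char* kCudaKernel = R"(
__global__ void vec_add(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // CUDA thread index
    if (i < n) c[i] = a[i] + b[i];
}
)";

// The same kernel in OpenCL C: the body is identical, only the qualifiers and
// the way the work-item index is obtained change.
static const char* kOpenCLKernel = R"(
__kernel void vec_add(__global const float* a, __global const float* b,
                      __global float* c, int n)
{
    int i = get_global_id(0);                        // OpenCL work-item index
    if (i < n) c[i] = a[i] + b[i];
}
)";

int main()
{
    std::printf("CUDA version:%s\nOpenCL version:%s\n", kCudaKernel, kOpenCLKernel);
    return 0;
}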
 

Wile E

Power User
Joined
Oct 1, 2006
Messages
24,318 (3.79/day)
System Name The ClusterF**k
Processor 980X @ 4Ghz
Motherboard Gigabyte GA-EX58-UD5 BIOS F12
Cooling MCR-320, DDC-1 pump w/Bitspower res top (1/2" fittings), Koolance CPU-360
Memory 3x2GB Mushkin Redlines 1600Mhz 6-8-6-24 1T
Video Card(s) Evga GTX 580
Storage Corsair Neutron GTX 240GB, 2xSeagate 320GB RAID0; 2xSeagate 3TB; 2xSamsung 2TB; Samsung 1.5TB
Display(s) HP LP2475w 24" 1920x1200 IPS
Case Technofront Bench Station
Audio Device(s) Auzentech X-Fi Forte into Onkyo SR606 and Polk TSi200's + RM6750
Power Supply ENERMAX Galaxy EVO EGX1250EWT 1250W
Software Win7 Ultimate N x64, OSX 10.8.4
If they make it OpenCL, it won't matter. Game developers will not make a game that won't run on 95% of their target market's systems - my point is that Fermi's extra PhysX potential is going to get wasted.

No, because it still translates into more overall GPGPU performance as well, regardless of API. They just chose their own in-house PhysX engine to show it off, for obvious marketing reasons. Why would they demo a generic GPU physics implementation when they already have a fully documented and supported in-house implementation? Of course they are going to push two agendas simultaneously if they can.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.18/day)
Location
Oystralia
No, because it still translates into more overall GPGPU performance as well, regardless of API. They just chose their own in-house PhysX engine to show it off, for obvious marketing reasons. Why would they demo a generic GPU physics implementation when they already have a fully documented and supported in-house implementation? Of course they are going to push two agendas simultaneously if they can.

I'm not saying the performance of the GPU will go to waste. What's not used for PhysX is used for FPS. I'm just saying, talking specifically about PhysX, that boosting its performance in Fermi doesn't mean it's actually going to be USED in any games.
 

Wile E

Power User
Joined
Oct 1, 2006
Messages
24,318 (3.79/day)
I'm not saying the performance of the GPU will go to waste. What's not used for PhysX is used for FPS. I'm just saying, talking specifically about PhysX, that boosting its performance in Fermi doesn't mean it's actually going to be USED in any games.

No, but you are focusing on PhysX itself, when you shouldn't be. You should be focusing on the fact that its GPGPU capabilities are increased overall. PhysX was just the vessel used for showing that.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.18/day)
Location
Oystralia
No, but you are focusing on PhysX itself, when you shouldn't be. You should be focusing on the fact that its GPGPU capabilities are increased overall. PhysX was just the vessel used for showing that.

Well, they used a PhysX example. They should have used a video encoding example and had it pump out H.264 streams like it was going out of fashion.
 

Wile E

Power User
Joined
Oct 1, 2006
Messages
24,318 (3.79/day)
Well, they used a PhysX example. They should have used a video encoding example and had it pump out H.264 streams like it was going out of fashion.

PhysX is easier for people to relate to. Only those who encode a lot would appreciate the difference in speed when encoding H.264 streams. I'm willing to bet the number of people who take encoding performance seriously is even smaller than the number who take gaming seriously. I fall into the encoder category, and I'm always way outnumbered by gamers on tech sites.

Not only that, you are missing the point that it still benefits them to push the PhysX API instead of using some generic bench or test.
 

shevanel

New Member
Joined
Jul 27, 2009
Messages
3,464 (0.64/day)
Location
Leesburg, FL
Well, they used a PhysX example. They should have used a video encoding example and had it pump out H.264 streams like it was going out of fashion.

Plus they are probably trying to salvage PhysX, and not demonstrating it doesn't help.

But on the other hand, an encoding example would be badass too.
 
Joined
Nov 4, 2005
Messages
11,689 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
An encoding example would show the card's relative performance and how it stacks up against the competition in an unbiased way. So of course they will not do that; it would show their hand.

I thought some more about the 1 million particles last night. You would have to start with a proximity algorithm, the vertex information for each particle, and the velocity of each (the reason they are all the same is that there is then no extra mass/velocity calculation for each interaction), then a simple trajectory vertex. Unless you used some form of liquid/solid algorithm and applied it to the particles, you could use something like wireframe points for each particle and use the line distance as a simple distance check; then you could either give it a run through the Z-buffer to cull the ones the user will not see for the frame, or make it immune to the Z-buffer and calculate each one individually.

So: if the distance between a and b is less than (X),
get the vectors of a and b,
and the velocities of a and b,
then apply the "proprietary algorithm".


This calculation would become simpler if a or b were a solid surface and a simple gravity-bounce equation were applied as a rule instead of being independently calculated.

By limiting the particles to one particular size, shape, and mass, you limit the calculations needed for semi-accurate physics rendering.

I'm not calling the rendering overhead easy or cheap. But really, it doesn't add much to a game for me to see little pebbles fly off of something when I shoot it, especially when there is an enemy trying to decapitate, infect, or mate with my game character.
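As a rough, hypothetical C++ sketch of the brute-force idea in the pseudocode above (this is not how PhysX actually does it, and every name here is made up): if two particles are within a threshold distance, hand their positions and velocities to some interaction routine.

#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical minimal particle: identical size, shape and mass, as described above.
struct Particle {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
};

// Stand-in for the "proprietary algorithm": a crude equal-mass elastic response
// that simply swaps the velocities of the two particles.
static void interact(Particle& a, Particle& b)
{
    std::swap(a.vx, b.vx);
    std::swap(a.vy, b.vy);
    std::swap(a.vz, b.vz);
}

// Naive O(n^2) proximity pass: if distance(a, b) < threshold, run the interaction.
// A real engine would use a broad phase (spatial grid, sweep-and-prune) instead of
// testing every pair, which is exactly why a million particles gets expensive.
void proximityPass(std::vector<Particle>& particles, float threshold)
{
    const float t2 = threshold * threshold;          // compare squared distances
    for (std::size_t i = 0; i < particles.size(); ++i) {
        for (std::size_t j = i + 1; j < particles.size(); ++j) {
            const float dx = particles[i].px - particles[j].px;
            const float dy = particles[i].py - particles[j].py;
            const float dz = particles[i].pz - particles[j].pz;
            if (dx * dx + dy * dy + dz * dz < t2)
                interact(particles[i], particles[j]);
        }
    }
}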
 

shevanel

New Member
Joined
Jul 27, 2009
Messages
3,464 (0.64/day)
Location
Leesburg, FL
Yeah, true... most people on the 360 or PS3 wouldn't care about the pebbles, but on PC it can be a little more immersive to have environmental physics, especially if you have the hardware to manage it.

It just adds to the realism and immersion of being in the game and interacting with an environment full of life.

IMO, of course. I like to play games and experience the role of the character being played, and having all these extra DX11 features (and PhysX) just makes that experience much more memorable.
 
Joined
Jul 2, 2008
Messages
3,638 (0.63/day)
Location
California
People should check out this video too

old as shit

Havok/Euphoria http://www.youtube.com/watch?v=3bKphYfUk-M


OMG!

Havok has made super-realistic cloth
http://www.youtube.com/watch?v=daZoXzBGea0&NR=1
Can't wait for that in games (no, seriously); one of my pet peeves is that clothes don't move as they should.

I read a few of your posts, and I saw that you missed the point. PhysX is just like any other physics engine; the difference is that it runs on both CPUs and GPUs, allowing more realistic physics at playable frame rates without requiring a 4GHz or 5GHz CPU to do it.

Yup, this is the point: a GPU-accelerated physics engine.

There are many physics engines out there, and all of them can run on the CPU. NVIDIA only added GPU support to an engine that already supported CPUs, and I have no idea why people hate it.

The PS3 uses PhysX too; it's just not on the GPU.

If consoles are allowed to have exclusive games, then why can't NVIDIA make PhysX exclusive to their video cards?

I have to buy a new console to play an exclusive game, but let's think about it. Do we need to buy an NVIDIA card to play a PhysX game? No, we only need to turn PhysX off, or lower the physics level and let it run on the CPU. PhysX is merely a feature; you don't have to turn it on to play a game.

MS and Sony are bigger assholes than NVIDIA. Why the fuck does everyone keep releasing new consoles? It's fucking pissing me off! Why can't I play Xbox games on the PS3?! :banghead:

BS.

I've heard people complaining about benchmarks that aren't related to gameplay (AMD CPU vs. Intel), and now I'm hearing people wanting to see them.
 

3volvedcombat

New Member
Joined
May 10, 2009
Messages
1,514 (0.28/day)
Location
South California, The desert.
System Name My Computer
Processor Core 2 Q9550 4Ghz 1.23volts
Motherboard Gigabyte
Cooling Corsair
Memory OCZ
Video Card(s) Galaxy
Storage Western Digital
Display(s) Acer
Case Lian li
Audio Device(s) Asus
Power Supply Corsair
Software Microsoft
Benchmark Scores 25,000 3dmark06 at 4.35Ghz processor, 835core card!
I noticed that throughout the demo (the latest video posted), it looked like it was running at 100 FPS.

Second of all, everything on the ship was moving and interacting; the particles were so plentiful that my SLI 260s would have crashed with all that PhysX, and the shading and physics of the ship itself were spot on and could easily get better. You also have to take into account that the screen is moving so fast that all the ground and detail was literally like a Crysis bench moving 50 times faster than it usually does, and that's basically going to tank frame rates and crash video cards ATM.


And they're probably still working on the cards' BIOSes right now. I remember seeing that the first GTX 280 OEM models with the first BIOS installed didn't overclock that high, and when benched were up to 40-50% slower than a current GTX 280 with a newer BIOS.

So I bet the single Fermi is going to be a beast. But honestly, I want a new card from Nvidia to compete, and I'm not paying $420+ and scrounging to find an HD 5870 just for DX11, when it's overpriced by $50 because of the demand and because it's the only card in its market.

I expect Fermi to be 30-50% faster than my GTX 260s, not be 12 inches long, use less power, have DX11 support, have load temperatures under 80-85°C, and have other technologies supported on top of that too!

Right now there are almost no games that would make me sell my GTX 260s to get a DX11 card ATM, and I want a good folding card for my needs too. :D
There are many physics engines out there, and all of them can run on the CPU. NVIDIA only added GPU support to an engine that already supported CPUs, and I have no idea why people hate it.
^^^^ Why are we hating PhysX from Nvidia right now? It's because they're shoving their "Physhit" down tech enthusiasts' throats and saying we should buy their slower, more power-hogging cards for that support alone. They're marketing PhysX wrong now and showing weakness by desperately trying to take one little technology and make it look big, while the facts show that it's almost pointless and isn't needed for the most popularly played games out there today.
 
Last edited:

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
An encoding example would show the card's relative performance and how it stacks up against the competition in an unbiased way. So of course they will not do that; it would show their hand.

I thought some more about the 1 million particles last night. You would have to start with a proximity algorithm, the vertex information for each particle, and the velocity of each (the reason they are all the same is that there is then no extra mass/velocity calculation for each interaction), then a simple trajectory vertex. Unless you used some form of liquid/solid algorithm and applied it to the particles, you could use something like wireframe points for each particle and use the line distance as a simple distance check; then you could either give it a run through the Z-buffer to cull the ones the user will not see for the frame, or make it immune to the Z-buffer and calculate each one individually.

So: if the distance between a and b is less than (X),
get the vectors of a and b,
and the velocities of a and b,
then apply the "proprietary algorithm".


This calculation would become simpler if a or b were a solid surface and a simple gravity-bounce equation were applied as a rule instead of being independently calculated.

By limiting the particles to one particular size, shape, and mass, you limit the calculations needed for semi-accurate physics rendering.

I'm not calling the rendering overhead easy or cheap. But really, it doesn't add much to a game for me to see little pebbles fly off of something when I shoot it, especially when there is an enemy trying to decapitate, infect, or mate with my game character.

You are describing effect physics, and that's not what Ageia is or ever has been from the start. Ageia has always been about physically correct simulations, and that's what they are showing there. Every particle has its own calculations and interacts with the others based on its geometry and the surrounding geometry and forces. In that situation, the particles being identical is irrelevant; it's not a pre-baked simulation like the one in the Toyshop demo, so how a "particle" will interact is not known beforehand and has to be calculated. You accuse them of faking that, so it's clear that once again it's just a matter of believing Nvidia is lying or not, and I have seen enough PhysX demos running perfectly even on my uncle's 8400 GS (with like 1/100th the GPGPU performance) to believe it's true.

You are correct in the sense of pure geometry and rendering workload, but if you are only talking about the rendering part of the equation, then you are missing the point by far.

^^^^ Why are we hating PhysX from Nvidia right now? It's because they're shoving their "Physhit" down tech enthusiasts' throats and saying we should buy their slower, more power-hogging cards for that support alone. They're marketing PhysX wrong now and showing weakness by desperately trying to take one little technology and make it look big, while the facts show that it's almost pointless and isn't needed for the most popularly played games out there today.

PhysX is nice, but honestly I see no point to it, and it's hilarious to see a game like Bad Company 2 without GPU physics look better than any physics-enabled game I've ever seen.

It's no different from AMD promoting DX11 or DX10.1, or Nvidia promoting DX10 or DX9.0c before that, or ATI promoting DX9 before that, and so on.

Nvidia is not forcing you to buy any of their cards to enjoy the improved physics. You can always turn GPU PhysX off and enjoy the same physics that you find in any other game, be it Havok, Bullet or a developer's own engine - physics you would have enjoyed anyway if PhysX had never existed. What every PhysX hater always forgets is that without PhysX there would never have been better physics in games. Physics had almost stagnated since the first games to use them in the 90s, and it didn't really start to change until Ageia demonstrated there was something more out there. Now you want those extra physics, but you want them for free. Don't worry, that's human nature - the wrong kind, though. (I'm not saying wanting something for free is wrong, mind you, but getting worked up and hating because you want something for free is.)

Like kid said, you can play every game, and that's the exception rather than the norm in today's industry.
 
Last edited:
Joined
Aug 17, 2008
Messages
2,190 (0.38/day)
Location
Minnesota, USA
System Name TaichiTig
Processor i7 6800K
Motherboard ASRock X99 Taichi
Memory 32GB DDR4 3200
Video Card(s) RTX 4070
Storage SSD + Misc. HDDs in DrivePool
Display(s) BenQ PD3200U, Samsung C32HG70
Case Antec Twelve Hundred
Audio Device(s) Behringer UMC404HD, LSR308, Shure SRH840, AKG K612, Etymotic HF5
Power Supply Corsair 750TX
Mouse Logitech G502
Keyboard Deck Legend Ice Tactile
Software Win10
If you had a watercooled tri-SLI machine, would you take the two extra cards out, or would you just disable the extra GPUs? :shadedshu

To show off what a single card can do? I'd definitely take the extra cards out. :wtf:
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
To show off what a single card can do? I'd definitely take the extra cards out. :wtf:

Yeah, you have one day to show off everything, and you're going to spend that time on site (time that you paid for, if I'm not mistaken) dealing with a water loop. Sure, that makes a lot of sense. First and foremost, it was a showcase and the purpose was to show a cool thing. I can think of very few better-looking things than what they showed.
 

3volvedcombat

New Member
Joined
May 10, 2009
Messages
1,514 (0.28/day)
Location
South California, The desert.
It's no different from AMD promoting DX11 or DX10.1, or Nvidia promoting DX10 or DX9.0c before that, or ATI promoting DX9 before that, and so on.

^^^It is, it is^^^. When a company promotes a whole new DX experience, that's a whole other story compared to just marketing the processing of geometric calculations on the graphics card in a few supported games. When a company markets and advertises a DX refresh or a new DX, that is a BIG deal, and that's usually when a whole new series of VIDEO CARDS comes out that blows past the previous cards, doubling performance with every new series, according to Moore's law of technology progression ;).

I mean, when a company promotes a little technology like "physics" for 3-4 years, and there are only a handful of "popular" games that support Nvidia PhysX, and their marketing for PhysX gets so WIDE that it's all the company's supporters and CEOs talk about in interviews, and on top of that everybody finally understands that it isn't that big of a deal and it died less than a year after it came out - that's a shame.


BUT when a company advertises a new DX, that's like a revolution in gaming and graphics, a whole new era that's supposed to change what we see in games for the rest of our lives - WELL, THEY can promote that as much as they want. I mean, you're comparing a change in the graphics card industry with a little technology like physics, which has been and still is almost pointless, because all you need is the right engine and you're set when it comes to "physics". :D


***EDIT*** When it comes to the norm of physics today, it did originate from AGEIA Technologies, not Nvidia's own team, so Nvidia PhysX really didn't change the outcome in games today, for the simple fact that AGEIA Technologies would have gotten physics into video games any way possible, and people were going to notice what physics was and how it changed gaming. DX revisions and refreshes would have allowed physics to be in games as they are today, but being Pac-Man, Nvidia chomped the technology up and is still marketing it today. So I stand by the view that physics would still have been in games today whether Nvidia was here or not. :D
 
Last edited:
Joined
Nov 4, 2005
Messages
11,689 (1.73/day)
You are describing effect physics, and that's not what Ageia is or ever has been from the start. Ageia has always been about physically correct simulations, and that's what they are showing there. Every particle has its own calculations and interacts with the others based on its geometry and the surrounding geometry and forces. In that situation, the particles being identical is irrelevant; it's not a pre-baked simulation like the one in the Toyshop demo, so how a "particle" will interact is not known beforehand and has to be calculated. You accuse them of faking that, so it's clear that once again it's just a matter of believing Nvidia is lying or not, and I have seen enough PhysX demos running perfectly even on my uncle's 8400 GS (with like 1/100th the GPGPU performance) to believe it's true.

You are correct in the sense of pure geometry and rendering workload, but if you are only talking about the rendering part of the equation, then you are missing the point by far.



It's no different from AMD promoting DX11 or DX10.1, or Nvidia promoting DX10 or DX9.0c before that, or ATI promoting DX9 before that, and so on.

Nvidia is not forcing you to buy any of their cards to enjoy the improved physics. You can always turn GPU PhysX off and enjoy the same physics that you find in any other game, be it Havok, Bullet or a developer's own engine - physics you would have enjoyed anyway if PhysX had never existed. What every PhysX hater always forgets is that without PhysX there would never have been better physics in games. Physics had almost stagnated since the first games to use them in the 90s, and it didn't really start to change until Ageia demonstrated there was something more out there. Now you want those extra physics, but you want them for free. Don't worry, that's human nature - the wrong kind, though. (I'm not saying wanting something for free is wrong, mind you, but getting worked up and hating because you want something for free is.)

Like kid said, you can play every game, and that's the exception rather than the norm in today's industry.

Perhaps you need to read up on PhysX?

You may not add angular torque; you specify the time each interaction has and limit the number of iterations an interaction gets to limit your processing workload; you may predefine each interaction beforehand; mass/velocity is calculated and then stored to be accessed later, or it can be predefined.

From the Nvidia developer site for PhysX:

#1 If you do not need to manually generate the mass of your object, I suggest you use NxActor::updateMassFromShapes( density, totalMass ), which takes in EITHER density OR totalMass and recomputes mass properties like the inertia tensor, mass frame and center of mass based on your input.
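As a rough sketch of how that call is typically used (PhysX 2.x-era SDK; only updateMassFromShapes comes from the quote above, the header name and the crate setup around it are assumptions): pass either a density or a total mass, with 0 for the argument you are not using, and the SDK derives the mass properties from the actor's attached shapes.

#include "NxPhysics.h"   // PhysX 2.x SDK umbrella header (assumed include name)

// Hypothetical helper: the actor is assumed to already have its shapes attached.
void setupCrateMass(NxActor* crate)
{
    // Recompute inertia tensor, mass frame and center of mass from the attached
    // shapes using a density of ~400 kg/m^3 (roughly wood); totalMass unused (0).
    crate->updateMassFromShapes(400.0f, 0.0f);

    // Alternatively, force a specific total mass and let the SDK distribute it:
    // crate->updateMassFromShapes(0.0f, 25.0f);
}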

From the way it looks, there are rigid and soft joints, cloth, fluid, and particle interactions, and at the most basic real-time game level most of your interactions are predefined to eliminate the processing workload. True real-time interaction on all three axes with more than one flat planar surface makes current-generation cards, even the high-end ones, crawl.


Sorry to say, PhysX in its current form is built from a lot of predefined rules, datasets and calculations, and predefined mass objects interacting through a limited number of functions.


So that single-layer cloth object you are "interacting" with is just a lot of crap; it is mostly predefined, prerendered, and limited. PhysX is still FAIL.


It does, however, let people who want to run things through many iterations for more accuracy do so at crawling speed.

"I'm unclear from the docs whether it is possible to have a triangle mesh that is dynamic (i.e. not static and does not have to be dynamically created) interact with fluids. I can get static triangle meshes to do so, but need dynamic ones working as well.

From the docs (Fluids Interaction with Rigid Bodies) it mentions pre-cooking, but is this required or optional, ie can the SDK handle dynamic TM's (albiet slower than pre-processed)?

The following is from the PhysX dSDK documentation.


QUOTE
Fluids contacting meshes (i.e. triangle meshes, heightfields or convexes) require special processing of the triangles for fluid collision. Instead of relying on the SDK performing these calculations automatically, you can pre-cook areas you know will contact fluids using the NxScene::cookFluidMeshHotspot method. This way you can avoid performance spikes."


"I want to know , how to do physx particle fusion together , I can't to do for UDK , anyway , the particle can't fusion together , UDK maybe can't to do this , I mean is UDK can't create dynamic fluid particle (the point to point fusion together effect ) .


Hmm, you can probably get quite good effects with sprites or a screen space effect(eg similar to the blobs dx sdk sample). I dont think the fluid surface stuff is available for PhysX anymore, ie marching cubes like mesh generation (plus udk wouldnt have it integrated even if it was).

David"



Code showing a predetermined gravity interaction:


"struct ParticleUpdateSDK
{
NxVec3 force;
NxU32 flags;
};
////////////////////////////////////////////////////////////////////
std::vector<ParticleUpdateSDK> gUpdates;
////////////////////////////////////////////////////////////////////
void SampleCollision::update()
{
NxVec3 defaultGravity(0,-9.8,0);

if (!gMyFluids.size())
return;

//Get particles from MyFluid and compute forces.
MyFluid* fluid = gMyFluids[0];
const ParticleSDK* particles = fluid->getParticles();
unsigned particlesNum = fluid->getParticlesNum();

gUpdates.resize(particlesNum);

for (unsigned i=0; i<particlesNum; i++)
{
gUpdates.force= -defaultGravity;
gUpdates.flags=0;
}

NxParticleUpdateData updateData;
updateData.forceMode = NX_FORCE;
updateData.bufferForce = &gUpdates[0].force.x;
updateData.bufferForceByteStride = sizeof(ParticleUpdateSDK);
updateData.bufferFlag = &gUpdates[0].flags;
updateData.bufferFlagByteStride = sizeof(ParticleUpdateSDK);

fluid->getNxFluid()->updateParticles(updateData);
"


This is a snippet of code that feeds per-particle updates into a set of particles for a game interaction.
 
Last edited:

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
^^^It is, it is^^^. When a company promotes a whole new DX experience, that's a whole other story compared to just marketing the processing of geometric calculations on the graphics card in a few supported games. When a company markets and advertises a DX refresh or a new DX, that is a BIG deal, and that's usually when a whole new series of VIDEO CARDS comes out that blows past the previous cards, doubling performance with every new series, according to Moore's law of technology progression ;).

I mean, when a company promotes a little technology like "physics" for 3-4 years, and there are only a handful of "popular" games that support Nvidia PhysX, and their marketing for PhysX gets so WIDE that it's all the company's supporters and CEOs talk about in interviews, and on top of that everybody finally understands that it isn't that big of a deal and it died less than a year after it came out - that's a shame.


BUT when a company advertises a new DX, that's like a revolution in gaming and graphics, a whole new era that's supposed to change what we see in games for the rest of our lives - WELL, THEY can promote that as much as they want. I mean, you're comparing a change in the graphics card industry with a little technology like physics, which has been and still is almost pointless, because all you need is the right engine and you're set when it comes to "physics". :D

That's how you see it and nothing more - an opinion. A new DX is in no way more relevant than better physics; you should have learnt that from DX9 and DX10. There's absolutely nothing you can do in DX11 that you can't do in DX9. Nothing, and that includes tessellation. Obviously you need the hardware to be able to do it (actually it's just a matter of performance; it can be done without dedicated hardware, same situation as with PhysX), but there's nothing preventing it from being included in DX9.

If game developers don't use PhysX, it's because Intel (AMD and M$ too) have been using their influence from the beginning - I'm talking about the Ageia era especially. Physics can be GPU-accelerated TODAY and can offer much, much better things than what we see in games. CPU physics are not better because the CPUs just can't handle more. Nothing prevents GPU physics from simulating entire maps brick by brick. Imagine if you could destroy every building down to its most elemental pieces. Well, technically that has been possible since 2006, but it won't happen, because powerful companies don't want it and they have been directly lying to everybody with promises for the future.

If M$ wanted HW physics to take off, they would have included it in DX a long time ago; they would have included it in DX11 at least! But did they? No, because there are interests at play: once again Intel and AMD. It's so obvious they don't want HW physics that it's not even funny. How long has it been since AMD announced GPU-accelerated Havok? Seriously, it's not that hard; Ageia created everything from scratch in 2 years and Nvidia ported it in less than 3 months. It's been almost 2 years since they first mentioned the AMD-Havok collaboration (soon after Ageia's acquisition by Nvidia) and a full year since "they already had it"... Where is it? If that was the better option because it was open (pff :laugh:), where the hell is it?

And the same can be said about Bullet. They've supposedly been porting it for a long time, but I don't see it. Honestly, I refuse to believe that Nvidia is able to do something in 2 months while AMD and Intel need 2+ years.

***EDIT*** When it comes to the norm of physics today, it did originate from AGEIA Technologies, not Nvidia's own team, so Nvidia PhysX really didn't change the outcome in games today, for the simple fact that AGEIA Technologies would have gotten physics into video games any way possible, and people were going to notice what physics was and how it changed gaming. DX revisions and refreshes would have allowed physics to be in games as they are today, but being Pac-Man, Nvidia chomped the technology up and is still marketing it today. So I stand by the view that physics would still have been in games today whether Nvidia was here or not. :D

Who talked about Nvidia at all? :laugh: Some people are soooooo confused. :slap:

I'm defending HW-accelerated physics, and I defend PhysX because it's the only one! The only reason Ageia never took off was Intel's and AMD's pressure at the time. There was no other reason, and that has not changed. Intel was so desperate about the matter, in fact, that they bought Havok to ensure HW-accelerated physics would never take off. M$ was in on the "conspiracy" too, for reasons I can't understand, but surely they have them ($$).

Your last two sentences are just hilarious. Read above: if M$ or anyone else wanted better physics, they would have included it long ago. Nvidia had nothing to do with it. Intel and AMD are the only ones to blame, because they used their influence against Ageia back then, and refused to let PhysX run on their hardware when Nvidia offered it...

@Steevo

Yeah, you can do a lot of things in PhysX, but not ALL of it is what you are describing. That's a workaround, an optimization, and not the only way of doing physics under the API. In any case, that's irrelevant. They are explicitly saying what's being done in the demo, and it's not that, so once again it all boils down to:

- You believe what they say they are doing.
- You don't.

You've made clear what you think. Move along...
 
Last edited:
Joined
Nov 4, 2005
Messages
11,689 (1.73/day)
That's how you see it and nothing more - an opinion. A new DX is in no way more relevant than better physics; you should have learnt that from DX9 and DX10. There's absolutely nothing you can do in DX11 that you can't do in DX9. Nothing, and that includes tessellation. Obviously you need the hardware to be able to do it (actually it's just a matter of performance; it can be done without dedicated hardware, same situation as with PhysX), but there's nothing preventing it from being included in DX9.

If game developers don't use PhysX, it's because Intel (AMD and M$ too) have been using their influence from the beginning - I'm talking about the Ageia era especially. Physics can be GPU-accelerated TODAY and can offer much, much better things than what we see in games. CPU physics are not better because the CPUs just can't handle more. Nothing prevents GPU physics from simulating entire maps brick by brick. Imagine if you could destroy every building down to its most elemental pieces. Well, technically that has been possible since 2006, but it won't happen, because powerful companies don't want it and they have been directly lying to everybody with promises for the future.

If M$ wanted HW physics to take off, they would have included it in DX a long time ago; they would have included it in DX11 at least! But did they? No, because there are interests at play: once again Intel and AMD. It's so obvious they don't want HW physics that it's not even funny. How long has it been since AMD announced GPU-accelerated Havok? Seriously, it's not that hard; Ageia created everything from scratch in 2 years and Nvidia ported it in less than 3 months. It's been almost 2 years since they first mentioned the AMD-Havok collaboration (soon after Ageia's acquisition by Nvidia) and a full year since "they already had it"... Where is it? If that was the better option because it was open (pff :laugh:), where the hell is it?

And the same can be said about Bullet. They've supposedly been porting it for a long time, but I don't see it. Honestly, I refuse to believe that Nvidia is able to do something in 2 months while AMD and Intel need 2+ years.



Who talked about Nvidia at all? :laugh: Some people are soooooo confused. :slap:

I'm defending HW-accelerated physics, and I defend PhysX because it's the only one! The only reason Ageia never took off was Intel's and AMD's pressure at the time. There was no other reason, and that has not changed. Intel was so desperate about the matter, in fact, that they bought Havok to ensure HW-accelerated physics would never take off. M$ was in on the "conspiracy" too, for reasons I can't understand, but surely they have them ($$).

Your last two sentences are just hilarious. Read above: if M$ or anyone else wanted better physics, they would have included it long ago. Nvidia had nothing to do with it. Intel and AMD are the only ones to blame, because they used their influence against Ageia back then, and refused to let PhysX run on their hardware when Nvidia offered it...

@Steevo

Yeah, you can do a lot of things in PhysX, but not ALL of it is what you are describing. That's a workaround, an optimization, and not the only way of doing physics under the API. In any case, that's irrelevant. They are explicitly saying what's being done in the demo, and it's not that, so once again it all boils down to:

- You believe what they say they are doing.
- You don't.

You've made clear what you think. Move along...

I won't move along, but thanks for telling me to. Nvidia has a "cook" option for all of the pretties in the game to be precomputed - the same thing you accuse the Toyshop demo of doing, in a condescending way.

Ageia never took off because it required you to buy a dedicated PhysX card from them to play only a few games, just to see pebbles fly off a wooden crate or make the cloth look more realistic.

DX standards are just that - standards. They exist so a GPU maker can build a GPU that runs the standard in hardware instead of in software, without extra steps to alter code while a real-time 3D application is running. Before this we didn't have enough GPU power to really run a multithread-aware application on the CPU and GPU simultaneously. If DX9 could do tessellation, then all the cards with DX9 compliance would no longer be compliant, so what is your point again? I'm aware that with the pure rendering power of a modern GPU you can make a DX9 application look like a DX10 application, but you do so at a huge performance hit. Cinematic Mod 10 is a good example.

AMD/Nvidia were probably more to blame for Ageia not taking off at first than MS is. Why would MS care about physics in a game when the typical gamer is also your worst customer in terms of piracy? Really, people need to get off the "MS is evil" bandwagon; saying it does not make any other part of your argument any more valid. Also, ATI is holding hands with MS on some things, but how many years was there a tessellation unit in their GPUs before it became a DX standard? I fail to see any valid points in your ATI/MS argument.

Yes, Nvidia accelerated it by using their GPU for it. They have done nothing that ATI hasn't already, nothing that a smaller company didn't do before them - then they bought said company and shat on its users.

As far as what I believe:

I believe Nvidia has GPU-accelerated PhysX technology. I believe they were not the first. I believe standardization on a common platform is better than a closed platform. I believe the PhysX simulation of certain objects can add depth to a game. I believe we are still at a point where 90% of users, and therefore games, will not use it. I believe the video I saw was unimpressive, as wood and metal do not become pebbles in real life when destroyed. I believe it was unimpressive because there was no AI present, no other characters, no gameplay - nothing showing the actual rendering work being done in a real game to help users grasp the true advantage beyond the sales pitch.

CPUs can handle a lot more physics than they used to. Think back to running Windows 98 on a 550MHz CPU with 512MB of RAM: everyone had a sound card, and if it was onboard it used part of the CPU time, and 5% of a single-core 550 was a horrific amount. Now quad cores and 3GHz are becoming the norm... Let's relate this to CPU physics: HL2, when released in 2004, used physics unlike anything we had seen at the time - still single core, still 9600/9800 cards running it. The average user was running a 2.4GHz Intel or the AMD equivalent, and we have since gained 5 times the CPU processing power, so why not use it? It seems silly to me not to.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
I won't move along, but thanks for telling me to. Nvidia has a "cook" option for all of the pretties in the game to be precomputed - the same thing you accuse the Toyshop demo of doing, in a condescending way.

Ageia never took off because it required you to buy a dedicated PhysX card from them to play only a few games, just to see pebbles fly off a wooden crate or make the cloth look more realistic.

DX standards are just that - standards. They exist so a GPU maker can build a GPU that runs the standard in hardware instead of in software, without extra steps to alter code while a real-time 3D application is running. Before this we didn't have enough GPU power to really run a multithread-aware application on the CPU and GPU simultaneously. If DX9 could do tessellation, then all the cards with DX9 compliance would no longer be compliant, so what is your point again? I'm aware that with the pure rendering power of a modern GPU you can make a DX9 application look like a DX10 application, but you do so at a huge performance hit. Cinematic Mod 10 is a good example.

AMD/Nvidia were probably more to blame for Ageia not taking off at first than MS is. Why would MS care about physics in a game when the typical gamer is also your worst customer in terms of piracy? Really, people need to get off the "MS is evil" bandwagon; saying it does not make any other part of your argument any more valid. Also, ATI is holding hands with MS on some things, but how many years was there a tessellation unit in their GPUs before it became a DX standard? I fail to see any valid points in your ATI/MS argument.

Yes, Nvidia accelerated it by using their GPU for it. They have done nothing that ATI hasn't already, nothing that a smaller company didn't do before them - then they bought said company and shat on its users.

As far as what I believe:

I believe Nvidia has GPU-accelerated PhysX technology. I believe they were not the first. I believe standardization on a common platform is better than a closed platform. I believe the PhysX simulation of certain objects can add depth to a game. I believe we are still at a point where 90% of users, and therefore games, will not use it. I believe the video I saw was unimpressive, as wood and metal do not become pebbles in real life when destroyed. I believe it was unimpressive because there was no AI present, no other characters, no gameplay - nothing showing the actual rendering work being done in a real game to help users grasp the true advantage beyond the sales pitch.

CPUs can handle a lot more physics than they used to. Think back to running Windows 98 on a 550MHz CPU with 512MB of RAM: everyone had a sound card, and if it was onboard it used part of the CPU time, and 5% of a single-core 550 was a horrific amount. Now quad cores and 3GHz are becoming the norm... Let's relate this to CPU physics: HL2, when released in 2004, used physics unlike anything we had seen at the time - still single core, still 9600/9800 cards running it. The average user was running a 2.4GHz Intel or the AMD equivalent, and we have since gained 5 times the CPU processing power, so why not use it? It seems silly to me not to.

- First of all, DX is not a standard; there's no real working group behind it. It's a forced way of doing things, but it's a long way from being a standard, since only M$ is in charge of making it and only M$ has power over it.

- The old way of doing things under DX was far better for the consumer. It had a performance hit, sometimes a huge one, but, tessellation aside, it's moronic to have to upgrade to a DX11 card in order to enjoy the "new features" when your card (e.g. a GTX 295 or HD 4870 X2) is more than capable of handling them. It's moronic to have to pay 150 euros for an HD 5750 when a 120-140 euro GTX 260/275 or HD 4870/4890 is much faster, and would remain faster even with the performance hit.

- I said better for the consumer. Well, I don't care about the developers anymore. If they want $60 from me, they'd better work for it. I couldn't care less whether their life is easier or not; the easier the API is, the crappier their games are. In fact, I want a return to the days when making a game required skill, when not just anyone could throw together a wannabe game and expect to sell it for $60.

- CPUs. The top-end ones have 5 times the power; the lower-end ones don't. In the past the difference between top and low end was a few hundred MHz, which meant the slow one was 30-40% slower (e.g. a P4 at 2GHz vs. 3GHz). Nowadays the low-end CPUs are way slower. In short, games ARE already using all the power they have available. The same argument used against Ageia being viable applies to fast CPUs: there just aren't enough of them to justify it.

-
Also, ATI is holding hands with MS on some things, but how many years was there a tessellation unit in their GPUs before it became a DX standard? I fail to see any valid points in your ATI/MS argument.

It took as long as it had to, until it was no longer pointless and a waste of time. The HD 5770 is unable to run at playable settings when tessellation is enabled, and that has nothing to do with the tessellator unit, which is exactly the same as in the HD 58xx; it's down to the power of the card. Only the HD 5870 and faster cards can tessellate at common settings (1680x1050, 4xAA and above) at decent framerates, so it's obvious that tessellation in HD 4xxx and HD 3xxx cards was a waste of time. And so is tessellation in the Xbox 360, which has never been used, for obvious reasons too.
 
Last edited:

TVman

New Member
Joined
Dec 29, 2009
Messages
308 (0.06/day)
Processor E8400 3.6 ghz
Memory 4 GB
Video Card(s) HD4850 512 MB
Storage 250 GB
Software win 7 64bit
I like these little flame wars; they are so fun to watch :p Keep at it, boys.
 
Joined
Jun 17, 2007
Messages
7,335 (1.19/day)
Location
C:\Program Files (x86)\Aphexdreamer\
System Name Unknown
Processor AMD Bulldozer FX8320 @ 4.4Ghz
Motherboard Asus Crosshair V
Cooling XSPC Raystorm 750 EX240 for CPU
Memory 8 GB CORSAIR Vengeance Red DDR3 RAM 1922mhz (10-11-9-27)
Video Card(s) XFX R9 290
Storage Samsung SSD 254GB and Western Digital Caviar Black 1TB 64MB Cache SATA 6.0Gb/s
Display(s) AOC 23" @ 1920x1080 + Asus 27" 1440p
Case HAF X
Audio Device(s) X Fi Titanium 5.1 Surround Sound
Power Supply 750 Watt PP&C Silencer Black
Software Windows 8.1 Pro 64-bit
Joined
Nov 4, 2005
Messages
11,689 (1.73/day)
No one forces you to buy a game; no one forces you to use Windows and DX. OpenGL, Linux.

- First of all, DX is not a standard; there's no real working group behind it. It's a forced way of doing things, but it's a long way from being a standard, since only M$ is in charge of making it and only M$ has power over it.

http://blogs.msdn.com/directx/
http://developer.nvidia.com/page/directx.html
" The API was developed jointly between Microsoft and Nvidia, who developed the custom graphics hardware used by the original Xbox"
http://www.nvidia.com/object/windows_7_overview.html

"“We expect Windows 7, coupled with NVIDIA’s graphics technologies, to deliver the fundamental performance our mutual customers have come to expect. NVIDIA’s graphics technologies enable applications to take advantage of the visual capabilities in Windows 7 such as manipulating photos, video, and 3D graphics. With the expertise NVIDIA has in visual computing, they have been a key contributor to the development of Windows 7.”

Mike Nash
Corporate Vice President of Windows Product Management
Microsoft Corporation"


Whoops, your beloved Nvidia has something to do DIRECTly with DirectX and the APIs created? NO WAYYY!!!
DX is a standard; MS has a team that develops it, works with hardware manufacturers, and then releases it.

None of the Nvidia cards has a tessellator, so how could it be incorporated? A DX standard has to be cut off at some point. It would be like building a customer a computer and having them come back demanding free upgrades forever. There is a point at which you have to say: this is your spec, deal with it. All the ATI 4000 and 3000 series cards have a tessellator, but due to the way it was implemented they don't meet the DX11 specification.

Sure, things like Intel integrated solutions are crap. No one is debating that; we are talking about the average GAMER, not an office worker running a Celeron and integrated Intel graphics.

As far as the 5770 being unplayable? So is this: http://forums.techpowerup.com/showthread.php?t=115132. But they still sell it for the price they set. Again, if buying a 5770 bothers you that much, then tell the man holding the gun to your head no.

CPU vs. Ageia: the market has spoken.

http://store.steampowered.com/hwsurvey
 
Joined
Feb 21, 2008
Messages
4,985 (0.84/day)
Location
Greensboro, NC, USA
System Name Cosmos F1000
Processor i9-9900k
Motherboard Gigabyte Z370XP SLI, BIOS 15a
Cooling Corsair H100i, Panaflo's on case
Memory XPG GAMMIX D30 2x16GB DDR4 3200 CL16
Video Card(s) EVGA RTX 2080 ti
Storage 1TB 960 Pro, 2TB Samsung 850 Pro, 4TB WD Hard Drive
Display(s) ASUS ROG SWIFT PG278Q 27"
Case CM Cosmos 1000
Audio Device(s) logitech 5.1 system (midrange quality)
Power Supply CORSAIR HXi HX1000i 1000watt
Mouse G400s Logitech
Keyboard K65 RGB Corsair Tenkeyless Cherry Red MX
Software Win10 Pro, Win7 x64 Professional
Why are you always in a bad mood, Steevo? It's clear that you are always just trying to offend the people you debate with. It really needs to stop. Benetanegia is not trying to be offensive, from what I can see so far in this thread.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
No one forces you to buy a game; no one forces you to use Windows and DX. OpenGL, Linux.

- First of all, DX is not a standard; there's no real working group behind it. It's a forced way of doing things, but it's a long way from being a standard, since only M$ is in charge of making it and only M$ has power over it.

http://blogs.msdn.com/directx/
http://developer.nvidia.com/page/directx.html
" The API was developed jointly between Microsoft and Nvidia, who developed the custom graphics hardware used by the original Xbox"
http://www.nvidia.com/object/windows_7_overview.html

"“We expect Windows 7, coupled with NVIDIA’s graphics technologies, to deliver the fundamental performance our mutual customers have come to expect. NVIDIA’s graphics technologies enable applications to take advantage of the visual capabilities in Windows 7 such as manipulating photos, video, and 3D graphics. With the expertise NVIDIA has in visual computing, they have been a key contributor to the development of Windows 7.”

Mike Nash
Corporate Vice President of Windows Product Management
Microsoft Corporation"


Whoops, your beloved Nvidia has something to do DIRECTly with DirectX and the APIs created? NO WAYYY!!!
DX is a standard; MS has a team that develops it, works with hardware manufacturers, and then releases it.

None of the Nvidia cards has a tessellator, so how could it be incorporated? A DX standard has to be cut off at some point. It would be like building a customer a computer and having them come back demanding free upgrades forever. There is a point at which you have to say: this is your spec, deal with it. All the ATI 4000 and 3000 series cards have a tessellator, but due to the way it was implemented they don't meet the DX11 specification.

Sure, things like Intel integrated solutions are crap. No one is debating that; we are talking about the average GAMER, not an office worker running a Celeron and integrated Intel graphics.

As far as the 5770 being unplayable? So is this: http://forums.techpowerup.com/showthread.php?t=115132. But they still sell it for the price they set. Again, if buying a 5770 bothers you that much, then tell the man holding the gun to your head no.

CPU vs. Ageia: the market has spoken.

http://store.steampowered.com/hwsurvey

You are lost, man. As Daedalus said above, you just want to argue and be offensive, and apart from that you don't know what you are talking about 99% of the time. And I'm not saying you lack knowledge; I'm just saying that you babble all the time and change the subject or the perspective as you see fit, with no real objective except bothering people, apparently.

1- I know it's hard for you to understand that not everyone who disagrees with your hate towards that company is a fanboy, but don't worry, you will eventually come to understand these things.

2- Nvidia and AMD taking part in the development has nothing to do with it being a standard. A spokesperson from M$ saying so has even less to do with it; it's just business. "It was great working with bla bla...", "Their contribution blablabla..." - it just means they want to keep them happy. Business. ;) But returning to the point: Nvidia or AMD taking part means nothing. How many game developers take part in the process of making DX? And Matrox? VIA? How much power do these companies have in reality? Even AMD or Nvidia? None. They just play the role of advisors, some with more success than others, and that's all. M$ does and always did whatever they wanted with DX, and everyone else just had to abide or avoid it; it's no different from many other "standards" made and maintained by M$. They make sure you can't avoid them, so in the end everyone abides, but that doesn't make it a standard.

3- Tessellation could be done in the SPs at a big performance cost. (Although I'm not so sure it would be that big in the end, given that the HD 5000 series seems to be more crippled by the hull and domain shaders, which are performed in the SPs anyway.) Anyway, in my claim I specifically excluded tessellation and was talking about the rest. In DX9 and before, there were many vendor-specific workarounds to achieve the features present in the various DX versions, and many features that developers wanted and hence ATI/Nvidia implemented in their hardware. That's mainly how the 9.0a/b/c variants were created. In the case of DX10, for example, Nvidia's GPUs supported most of the DX10.1 features, but since you need to support 100% of them in exactly the way M$ specified, and Nvidia didn't reach that 100%, everyone was left without any of the DX10.1 features, because it's not profitable to implement something for 5% of the market. In the past both Nvidia and ATI would have created (in fact they did, many times) their own workarounds, and as a result consumers had access to those features. Not all of them always, but most of them. E.g. the X800 series would have been at a serious disadvantage if M$ had used the same policy as they do today, but they didn't, and ATI users got many features that they wouldn't have had otherwise. That was far better for the consumer, although it sometimes meant more work and QA on the developer's end. Since apparently making that QA easier has resulted in no QA at all for most developers, I'd gladly return to the old days. Less time doing QA has not resulted in more time making content, just in starting sooner on the next POS, so seriously, I don't care; I want them to be forced to do QA, and maybe then they might even say: "Now that we're at it, now that we have to ensure this feature works, we might as well ensure the whole game works! What the hell, now that we have to spend at least 3-6 months doing QA, we might even spend 3 extra months making a good game."

4- I don't know what you mean with that link when you talk about the HD 5770. My point is simple and has nothing to do with the power of that card in particular. Those cards are what they are, and no one should expect more from them. I'm not bothered by the HD 5770, but the fact is it can't play the games (and benches) when tessellation is on, and that makes it very clear that an HD 3850 just couldn't do it either. It was far from being able to do it, in fact. Period. Drop the paranoia that everyone is attacking the products of your beloved company.

5- Once again, I don't know what you mean with the link to the Steam survey. The only thing that caught my attention is that there are far more people using DX10 Nvidia cards (and hence PhysX-capable ones) than there are people with quads (24% quads; DX10 cards = 48.94% + 27.21% = 76.15%; 65% of that being Nvidia makes about 49.5% PhysX-capable cards). Furthermore, 76% of cards are capable of PhysX-like (GPU-accelerated) physics, because ATI cards can do it too. As a game developer it makes much more sense to use the GPU than to use a quad. If that's the point you were trying to make - hey!
 
Last edited:
Joined
Aug 17, 2008
Messages
2,190 (0.38/day)
Location
Minnesota, USA
Yeah, you have one day to show off everything, and you're going to spend that time on site (time that you paid for, if I'm not mistaken) dealing with a water loop. Sure, that makes a lot of sense. First and foremost, it was a showcase and the purpose was to show a cool thing. I can think of very few better-looking things than what they showed.

If I wanted to show off both multi-card and single-card configurations, I'd bring a couple of different rigs, ready to go. That's just me. Doesn't really matter though, I'll let others do the arguing in this thread. :laugh:
 