Discussion in 'Games' started by AphexDreamer, Mar 11, 2010.
I don't think it matters if any of us buy into it... it only matters if NV or ATI does
is pretty close
not quite the same, but...
Anyway, the tech is plausible, but I don't see it running at any high resolution without some form of acceleration (or a whole lot of CPU cores at its disposal).
By "buy into" I mean believe it's the next big thing...
I don't think ATI or nVidia would "buy into it" since by its nature, this would hurt business for them pretty badly.
If this can bring us photorealistic gaming within two years, awesome. If it's vaporware, well that wouldn't really be a surprise. Truth be told, I'd almost prefer that it be vaporware, because while it'd be nice to have this kind of rendering available without the GPU power, I like the fact that rendering complicated scenes is computing-intensive simply for the effect it has on driving the progress of technology. The need for more and more GPU power to make sure games get increasingly visually stunning year by year has produced fantastic progress that has an effect even outside of gaming in the present (i.e. Folding@home), and which is a part of making new kinds of virtual experiences possible in the future.
so did i
This was posted on Jan 30, 2009.
Yeah but not everyone knew about it here.
Take 1920×1200 pixels at 32-bit color depth:
2,304,000 pixels per frame
73,728,000 raw bits per frame, with no overhead
4,423,680,000 raw bits per second at 60 FPS
And we still have no order to the pixels, just pixels.
So how deep do we want to calculate the field of view? I know it will be a relative value, but how far? How far do you want to see for a sniper shot at a field-of-view equivalency of 20×?
Let's see what a 16-bit value gets us: from -32768 to 32767, applied as pixel depth, that allows 65,536 Z draw-depth levels. So each pixel must carry 32 bits of color information and 16 bits of position information, and then we have to add vectors for motion calculation to any moving items.
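The arithmetic above can be sanity-checked in a few lines. This is a plain back-of-envelope calculation on the poster's numbers, not a claim about any particular engine's data format:

```python
# Raw framebuffer math from the post above: 1920x1200, 32-bit color,
# plus the proposed 16-bit per-pixel depth channel, at 60 FPS.
WIDTH, HEIGHT = 1920, 1200
COLOR_BITS = 32           # 32-bit color per pixel
DEPTH_BITS = 16           # signed 16-bit Z: -32768..32767 = 65,536 levels
FPS = 60

pixels_per_frame = WIDTH * HEIGHT                      # 2,304,000
color_bits_per_frame = pixels_per_frame * COLOR_BITS   # 73,728,000
color_bits_per_second = color_bits_per_frame * FPS     # 4,423,680,000

# Adding the 16-bit depth channel on top of color:
full_bits_per_frame = pixels_per_frame * (COLOR_BITS + DEPTH_BITS)
full_megabytes_per_second = full_bits_per_frame * FPS / 8 / 1e6

print(pixels_per_frame, color_bits_per_frame, color_bits_per_second)
print(full_bits_per_frame, round(full_megabytes_per_second))
```

With depth included, the raw stream works out to roughly 830 MB/s before any motion vectors or overhead, which is the poster's point about this being beyond a CPU's reach at the time.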
Face it, we are well beyond what a CPU is capable of approaching it this way. Yes, there are programs out there that are 64 KB and have pretty fractal patterns and techno sounds, but few or none that you interact with on the same level as a '90s game.
Not meant to sound harsh... I just did some googling, since this thing has surfaced in many forums, and found that.
Well then, in that case it doesn't matter what ATI or Nvidia thinks about this tech. If it doesn't exist and this guy is just making up a load of crap, then the point is moot. If the tech does exist and is as awesome as he says it is, then any number of companies would grab onto it even if ATI/Nvidia did not. Intel has its own graphics solution, Matrox is still out there, and God only knows the number of partners looking for a piece of the pie. And even they would not buy into it unless they saw some real-world examples.
Remember, this is just a video for us; he would have an actual live demonstration if he had a meeting with a company.
Still don't think ATI/NV will buy into it as they've invested so much in the polygon system.
Since everyone thinks short term instead of long term these days, it will probably fall flat on its face even if it's 100% real and could work with animation etc. as well.
But if he manages to get funding from another source: as people mentioned, this would be insane for medical use, so he should try in places like that.
As far as rendering only what you see, I'm sorry to say that's how the 3D industry works in general, so that aspect of rendering only what's on the screen has been around a while, and in theory this will work. As of now? Not a chance. Give it 5 to 6 years, aka 2 more CPU generations and 3-4 GPU generations, and it will become a viable method. Sigh, I still remember when this was just speculation and theory in my computer animation bachelor's courses in college. Either way, I can say it is a viable alternative; I doubt this guy's implementation will work, but you always need a base to start with.
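The "render only what lands on the screen" idea the post describes can be sketched as a one-point-per-pixel depth test: project every point, keep only the nearest one at each pixel, and ignore everything off screen. This is a generic toy illustration, not Unlimited Detail's actual search algorithm; the projection constants and point set are made up:

```python
# Minimal point-splat sketch: project 3D points through a pinhole camera
# and keep only the nearest point per screen pixel (a dict-based z-buffer).
W, H, FOCAL = 640, 480, 500.0

def splat(points):
    """points: iterable of (x, y, z, color) in camera space.
    Returns {(px, py): (z, color)} holding the nearest point per pixel."""
    zbuf = {}
    for x, y, z, color in points:
        if z <= 0:
            continue  # behind the camera: never considered
        px = int(W / 2 + FOCAL * x / z)   # simple pinhole projection
        py = int(H / 2 - FOCAL * y / z)
        if not (0 <= px < W and 0 <= py < H):
            continue  # off screen: never touched, never "rendered"
        hit = zbuf.get((px, py))
        if hit is None or z < hit[0]:
            zbuf[(px, py)] = (z, color)   # nearer point wins the pixel
    return zbuf

# Two points landing on the same pixel: only the nearer one survives.
frame = splat([(0.0, 0.0, 10.0, "red"), (0.0, 0.0, 5.0, "blue")])
print(frame[(320, 240)])  # -> (5.0, 'blue')
```

The work is bounded by the screen's pixel count rather than the scene's point count only if you can avoid visiting every point, which is exactly the hard part the claimed "search algorithm" is supposed to solve; this brute-force sketch still loops over all points.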
Ageia made PhysX, Nvidia bought it and adapted it, etc. The same will happen to this, or, as has been stated, it will disappear, but not entirely. The reason it won't disappear is that the movie industry already has smaller pieces of this kind of tech at work, and has for years; it's just the evolution of one way of doing things.
thanks for the interesting post.
I don't get it: if he's running unlimited blah-blah-blah data, why aren't there unlimited models in all the videos? Instead of 13 pyramids of fugly-looking things, unlimited pyramids of fugly-looking things...??
Coding for something and using it effectively are two very, very different things. Example: I have a MEL script coder make my tools in Maya 2010 for me, but he has no idea how to use them or why I need them to do what they do. He writes them, I test them. The same could apply here: he was able to write the code, but that doesn't mean he understands how to harness it. A good example of this is solar energy: we have the know-how, but we don't exploit it. Same applies here. As I said, this isn't something new; the core aspects of this tech have been around for a while and in some cases are already implemented, but very few know how to utilize them in any effective way.
this hopefully will get better.
Might be more of a technical person than a creative person???
Which he mentions... He's all like, "I'm not an artist," so you can just imagine what could be done with the work of a good artist.
Maybe he needs one. Hopefully we can see some really good examples of this; we will eventually need tech like this. I am remaining neutral and always pack my salt block when it comes to everything, but I am ready and willing to accept new tech if it is seriously innovative.
Have any of you heard about any progress concerning this technology?
I've been reviewing it and find it really fascinating. It would indeed be incredible if this technology could increase the performance of GPU processing by a magnitude of a thousand times.
Surely at least AMD, who claim to want open standards and the progression of technology even if it doesn't necessarily mean personal gain, should see promise in this and act upon it?
Point cloud data is already used in LIDAR tech.
LIDAR and other scanning tech use FRICKIN LAZOR BEAMS to scan everything.
It makes 3D point cloud models of the objects you scan. Scan in a sword and you've got a very realistic model of it, at natural scale.
On a HUGE MOFO SCALE:
NAVTEQ is using this to literally make a 3D model of cities. Some of it is already used on the Bing Maps Silverlight edition, but to a wayyyyy lesser extent. They want to map the world like this, and it would be awesome; Microsoft is helping them with this tech as well. It's also good for giving the exact distances of key attributes, like bridge heights and the exact distance between buildings; it's all there. They had an example of a guitar on a storefront, and they were able to tell you its exact dimensions and whether it would fit on another building's storefront.
Some 3D artists are using this to make models, but with traditional polys instead of all-out point cloud data. Pretty cool stuff. You can buy one of those scanners, but they cost an arm and a leg and your first born. I want one!
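The "exact distances" use case mentioned above boils down to plain Euclidean distance between two scanned points. The coordinates here are invented for illustration; in real LIDAR data the units would be whatever the scanner recorded (typically meters):

```python
import math

# Euclidean distance between two points in a scanned point cloud.
def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Hypothetical scanned points: a bridge deck and the road beneath it.
bridge_deck = (12.0, 4.0, 30.5)
road_below  = (12.0, 4.0, 25.3)
print(round(distance(bridge_deck, road_below), 1))  # -> 5.2  (clearance)
```

Since every scanned point carries real-world coordinates, any measurement like the guitar-on-a-storefront example reduces to this same calculation between the relevant points.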
The head of UD Tech did mention in his videos that point cloud data is used in laser-scanning objects into 3D images, so I assume he was talking about the likes of LIDAR?
When you say several times that they're "using this," do you mean the actual UD Tech or just a point cloud data system?
Point cloud data, sorry for the mix-up.
There are several ways to do point-cloud data. LIDAR is one of many. There is way more scanning tech available for smaller models. I've seen it. Do want!
Those are just scanning tech to make it happen.
Imagine getting LIDAR scanning data for a real-size city like NYC and making a game take place in it; it would be an exact copy of that city. That would be neat.
Same with objects of any size. Gamers are demanding more realism from games, and this would be the easiest way to do it for some objects: a 15-minute scanning and texturing session vs. 6+ hours of modeling time. Much more cost effective.
You can just take the data from the point-cloud scanning session and import it into UD Tech's engine; it would be treated like any other map/model format, just like importing 3DS files into a regular game engine.
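Importing scan data really can be as mundane as parsing a text file. A sketch of a loader for ASCII "xyzrgb" point data (one `x y z r g b` line per point, a common plain-text convention); the sample values and any engine-side import step are hypothetical:

```python
# Toy loader for ASCII point-cloud data, one point per line: x y z r g b.
def load_xyzrgb(lines):
    points = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        x, y, z, r, g, b = line.split()
        points.append(((float(x), float(y), float(z)),
                       (int(r), int(g), int(b))))
    return points

sample = [
    "# sword scan, trimmed",
    "0.00 0.00 0.00 200 200 210",
    "0.01 0.00 0.00 198 199 205",
]
cloud = load_xyzrgb(sample)
print(len(cloud), cloud[0])  # -> 2 ((0.0, 0.0, 0.0), (200, 200, 210))
```

Real LIDAR exports are usually binary formats like LAS for size reasons, but the information per point is the same: a position plus optional color/intensity, which is exactly what a point-based renderer would consume.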
That does sound great, but it would take UD Tech becoming a legitimate player in the gaming industry, and judging by the timescale, i.e. when this was all talked about, it seems dead in the water.
I'm actually thinking about harassing AMD into acknowledging and progressing this technology. We need to get a petition or something going!
The reality is that we're always going to need/want more power, regardless of UD Tech or not. I believe that at most this could make a minor dent in graphics card companies initially, until people realise, "If my weak computer can do this, what will a powerful one do?!"
Come on guys, let's do something to push AMD to make good on its claims to promote the furthering of technology!
Maybe AMD and Nvidia are trying to silence this because they know it would be too new for them and they would be scurred! They are scared shitless.
They are clinging to their polys. And milking them too!
No doubt they are. I, however, think (or would like to think) that the software could be incorporated somewhat at a hardware or driver level, thus benefiting one or both of them. I think if it's employed at an OS level, they could be debunked.
Also, if Intel, whose graphics we all hate so much, gets hold of this and makes it their own, then truly the GPU makers could find themselves crying rivers.
I think only a preemptive strike, i.e. some form of implementation in their drivers/hardware/whatever by AMD/NV, could quell this potential upset for them; if it ever was/is to be a threat, that is.
Don't you think it could be possible to petition AMD (I keep saying them because of the claims I mentioned) into it, or at least into looking further into the feasibility of injecting it into their... GPU technology suite?