
Radeon HD4800 Series Supports a 100% Ray-Traced Pipeline

@Weer nvidia bullshit gpu can't do a shit on ray-tracing due to lack of a tessellation unit. Read the article

this guy's gonna be great here :toast:
 
Ray-tracing doesn't strictly require hardware tessellation. Didn't you read this part:

JulesWorld’s technology also works on Nvidia GeForce 8800 cards and above, but the lack of a tessellation unit causes a bit more work on the ray-tracer side.

It is possible, but the performance takes a beating. This is probably why they chose the HD2900 XT over NVIDIA's offerings last year to work on their technologies.
 
Does anyone know how they did the CGI in Jurassic Park back in 1992-1993? Because I still don't understand how they managed to create such huge and realistic animals with the computers of that time.

Don't think desktop computers, think of a computer that takes up the space of a small bedroom.

They had an SGI supercomputer at Amoco when I was growing up (Dad worked there) about the time Jurassic Park came out. Of course they used them to make 3D models of oil under the ground, but there were a few demo games on it :eek: Let's just say those demo games looked close to what games look like today. I saw the future and I knew it when I was playing a racing game on an SGI supercomputer as a kid in the early 90s.
 
@Weer nvidia bullshit gpu can't do a shit on ray-tracing due to lack of a tessellation unit. Read the article

i officially anoint u the counter-defense force against the militia of General Green Camp a la Weer in these forums =)
 
AMD is seriously kicking some butt this go-around. I'd still like an uber high-end card from them though.
 
Don't think desktop computers, think of a computer that takes up the space of a small bedroom.


yeah, that sounds about right.

and as a comparison of how far CGI has come: Final Fantasy: The Spirits Within (2001), a very detailed, all-CGI movie, used massive amounts of computing power; taken from wiki:

Square accumulated four SGI Origin 2000 series servers, four Onyx2 systems, and 167 Octane workstations. The basic movie was rendered at a home-made render farm which consisted of 960 Pentium III-933MHz workstations. The render farm was made by Square Pictures located in Hawaii. The film had cost overruns during the end of production.

If anyone has seen any of the behind-the-scenes/making-of features, etc - I remember a bit where the developers were showcasing just how much was rendered for each model; each character was modeled down to fingerprints - if needed, they could zoom in on a finger of a rendered character, and you'd be able to see the ridges on the fingertip that give you a fingerprint.

here's a pic of an example of the rendering that was done for the main character in the film:

[image: realisticphotos.jpg]


A bit of overkill, IMO - but the capability to do that wasn't possible 10 years prior, compared to what we saw in Jurassic Park, for example.

Anyhow, back OT - I think this is wonderful news, hearing what the HD4000 series is capable of. This kind of news speaks very well of the new hardware, and should truly help both AMD and ATI regain a very strong foothold.
 
I've no clue as to why the ray-tracing performance doubled from the 2900 XT to the 3870. All specs and synthetic benchmarks give no significant advantage to RV670 (http://www.digit-life.com/articles3/video/rv670-part2-page1.html) except ONE: exposure of the tessellator under DX10.1.

So my guess is one or more of the following:

1./ Proprietary tessellation routines couldn't access the R600 tessellator for acceleration, due to the hardware tessellator not being exposed under the DX9 API/compiler.
2./ Under DX10.1 the hardware tessellator is exposed, resulting in double the performance.
3./ There may have been other changes to the proprietary algorithms they coded for AA, and they didn't notice they were testing different code on the different hardware.

Dark... could you give us a 3-sentence summary of how the tessellator works?

Any change in the tessellator would definitely affect ray-tracing performance, as tessellation is vital for proper ray-tracing.

Keep in mind that ray-tracing is a physically correct rendering method, so the geometry has to be correct for each rendered pixel. This is especially true for curved surfaces. Under rasterization, shaders take care of making a succession of adjacent flat polygons look like a curved surface, but with ray-tracing you need each pixel to have a different, physically correct polygon, or it will render a succession of flat surfaces - which is what the model really is.
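
A tiny Python sketch of what I mean (my own illustration, not from JulesWorld or the article; the triangle and the numbers are made up). It shades one coarse triangle of a sphere with Lambert's N·L term twice: once with the flat face normal, which is what ray-tracing the raw model sees, and once with the interpolated "faked smooth" normal a rasterization shader would use:

import numpy as np

def lambert(normal, light_dir):
    # Diffuse intensity: clamp(N . L) to a minimum of 0
    return max(np.dot(normal, light_dir), 0.0)

light = np.array([0.0, 0.0, 1.0])  # light straight down the view axis

# Three vertices of one coarse triangle lying on a unit sphere
v = [np.array(p) / np.linalg.norm(p) for p in
     [(1, 0, 0.2), (0, 1, 0.2), (-1, -1, 0.2)]]

face_n = np.cross(v[1] - v[0], v[2] - v[0])
face_n /= np.linalg.norm(face_n)  # one normal for the whole polygon

for bary in ((1, 0, 0), (0, 0, 1), (1/3, 1/3, 1/3)):
    p = sum(b * vi for b, vi in zip(bary, v))
    smooth_n = p / np.linalg.norm(p)  # true sphere normal at that point
    print(f"flat: {lambert(face_n, light):.2f}   "
          f"smooth: {lambert(smooth_n, light):.2f}")

The flat value comes out identical at all three sample points (the whole polygon shades as one facet), while the smooth value changes across the surface - which is exactly the difference between pictures 2 and 1 below.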

[image: bigballskm8.jpg]


1- This is how a shaded sphere looks under rasterization. Notice how the shader makes the surface appear smooth even though the geometry is a lot less detailed, as you can see on the borders.

2- This is the actual geometry. This is also how the model would look when using ray-tracing. Because the rays cast from the light to the surface and then to the camera encounter the same angle everywhere on a polygon, each polygon is shaded as a flat surface. The bright spot is gone too, as none of the polygons have the required angle to reflect the light to the camera.

3- One iteration of tessellation applied to the model. Definitely not enough, but at least 2 of the polys are bright and the geometry is more detailed, as you can see on the borders.

4- Four iterations of tessellation. Now the sphere is much more detailed and looks better overall than the rasterized one, except for the bright accent. To get the proper look you need each pixel to render a different polygon. In essence you can "fake" the proper look as in rasterization, but what would be the point of going to ray-tracing then? (A rough sketch of the subdivision loop follows below.)
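
For the curious, here's a rough sketch of one way such iterative tessellation can work (my own Python illustration; I don't know the exact algorithm ATi's hardware tessellator uses): split every triangle into four using the edge midpoints, then push the new vertices back onto the sphere so the model actually gets rounder, not just denser.

import numpy as np

def tessellate(tris):
    # One iteration: each triangle becomes 4; midpoints are re-projected
    # onto the unit sphere so the geometry converges toward the curve.
    out = []
    for a, b, c in tris:
        ab, bc, ca = (m / np.linalg.norm(m) for m in
                      ((a + b) / 2, (b + c) / 2, (c + a) / 2))
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# Start from an octahedron (8 triangles) as the coarse "sphere"
x, nx, y, ny, z, nz = (np.array(p, float) for p in
    [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)])
mesh = [(x,y,z), (y,nx,z), (nx,ny,z), (ny,x,z),
        (y,x,nz), (nx,y,nz), (ny,nx,nz), (x,ny,nz)]

for i in range(5):
    print(f"iteration {i}: {len(mesh)} polygons")
    mesh = tessellate(mesh)

Each iteration quadruples the polygon count (8 -> 32 -> 128 -> 512 -> 2048 here), which is also why generating all of it on the CPU and shipping the result over the bus gets expensive fast.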
 
Holy crap. Ray tracing!?!?!?!? lol, looks like Intel is screwed when they release Larrabee :p That was one of the touted features of it, and now the HD4850 already has it!


Erm the difference being, Larrabee is integrated, ATi cards are not :/
 
Erm the difference being, Larrabee is integrated, ATi cards are not :/

Larrabee was going to be both integrated and discrete, last time I heard. Anyway, this really threatens Intel, as ray-tracing was the only advantage they had in graphics, that we know of. IMO there's no way they are going to be able to compete in performance with an x86 design. GPUs have evolved into what they are for a reason, but I suppose we can wait for this one and see how it ends up. But every time I hear of Larrabee I can't help but think of Cell, and the rumors that it was first developed with the idea of doing the graphics too, and how that failed. More so, if Larrabee is only integrated, it has already failed: forget about ray-tracing, because you have to already forget about (high-end) gaming on a fully integrated chip. CPU + high-end GPU + NB in the same chip? We will remember how ideal the yields of GT200 were if that happens. :p That's my opinion anyway.
 
So the R6xx and R7xx "tessellators" are the same as ATi "TruForm"? (old terminology). But how would that help ray-tracing? That's just geometry instancing/scaling. Can't see how that would boost ray-trace performance. And that's the question.
I still don't see how the tessellators are improving the ray-tracing FPS benchmarks between the 2900 XT and the 3870. I see that by using the tessellators you get a better quality picture. Or is the issue as follows:

1./ With the 2900XT you need to do gazillions of polys on the CPU and send this data to the GPU. The CPU is the bottleneck.
2./ With the 3870 under DX10.1 the tessellator is exposed, meaning you can take CPU/geometry shortcuts and the 3870 will do the geometry much faster than the CPU could.

BUT if the same geometry info is sent from the CPU to the GPU, i STILL CANNOT SEE how/why the 3870 is so much faster than the 2900XT.

THEREFORE, if the geometry was precalculated, then the 2900XT and 3870 would have very similar performance at ray-tracing. However, if the geometry is calculated on the fly, [CPU+3870] is much faster than [CPU+2900XT] due to the 3870 assisting with geometry instancing.
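
Some back-of-envelope numbers for that hypothesis (my own illustration; every figure here is an assumption, not from the article):

VERTEX_BYTES = 32          # assumed: position + normal + UV, packed
COARSE_TRIS  = 10_000      # assumed coarse model held by the CPU
ITERATIONS   = 4           # tessellation iterations, 4x triangles each

fine_tris = COARSE_TRIS * 4 ** ITERATIONS

# Case 1: CPU pre-tessellates, streams the fine mesh every frame
cpu_bytes = fine_tris * 3 * VERTEX_BYTES

# Case 2: hardware tessellator exposed - only the coarse mesh (plus a
# tiny tessellation factor) crosses the bus
gpu_bytes = COARSE_TRIS * 3 * VERTEX_BYTES

print(f"fine mesh:   {fine_tris:,} tris, {cpu_bytes / 2**20:.0f} MiB/frame")
print(f"coarse mesh: {COARSE_TRIS:,} tris, {gpu_bytes / 2**20:.1f} MiB/frame")

With those made-up numbers it's roughly 234 MiB/frame versus 0.9 MiB/frame, so if the geometry really is generated on the fly, the CPU-and-bus side dominates exactly as you describe.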

Is that the story?
 