Monday, August 20th 2018

NVIDIA's Move From GTX to RTX Speaks to Belief in Revolutionary Change in Graphics

NVIDIA at its Gamescom presentation finally took the lid off its long-awaited refresh of the GeForce lineup - and there's more than a thousand-point model number jump (and a consonant change) to it. At the Palladium venue in Cologne, Germany (which was chock-full with press and NVIDIA-invited attendees), Jensen Huang took the stage to present a video on the advancement of graphics simulation, one that ran from milestones such as Tron, the first Star Wars, the original Tomb Raider, and multi-texturing on the RIVA TNT through to Hollywood special effects - every incarnation of the pixels and triangles we've grown accustomed to.

We already know the juicy tidbits - the three models being released, when, and their pricing (with a hike to boot on the RTX 2070, which sees its price increased by $100 compared to last generation's GTX 1070). We know the cooling solution NVIDIA's official cards will sport, and how the company is pairing up with game developers to ensure the extra hardware it has invested time, money, and a name change into bears fruit. But what's behind this change? What brought us to this point in time? What powered the company's impressive Sol demo?
It's been a long road for NVIDIA ever since its contributor Turner Whitted's work on multi-bounce recursive ray tracing started way back in 1978. Jensen Huang says GPU development has been improving at ten times the pace Moore's Law demanded of CPUs - 1,000 times every ten years. But ray tracing is - or was - expected to require petaflops of computing power: yet another step that looked like it would take some ten years to achieve.
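To put the term in context, Whitted-style ray tracing shoots a ray per pixel, shades the hit point, and recursively spawns secondary rays for reflections until a bounce limit is reached. The snippet below is a minimal, illustrative sketch of that recursion over a made-up two-sphere scene - our own toy example, not NVIDIA's code - just to show where the multi-bounce cost comes from.

```python
# Minimal Whitted-style recursive ray tracing sketch (toy scene, illustration only).
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def hit_sphere(origin, direction, center, radius):
    """Distance to the nearest forward intersection with a sphere, or None.
    Assumes 'direction' is unit length."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # small epsilon avoids self-intersection

SPHERES = [  # (center, radius, base color, reflectivity)
    (np.array([0.0, 0.0, -3.0]), 1.0, np.array([0.8, 0.2, 0.2]), 0.5),
    (np.array([0.0, -101.0, -3.0]), 100.0, np.array([0.2, 0.8, 0.2]), 0.3),
]
LIGHT_DIR = normalize(np.array([1.0, 1.0, 0.5]))

def trace(origin, direction, depth=0, max_depth=3):
    """Local diffuse shading plus a recursively traced reflection ray."""
    if depth >= max_depth:
        return np.zeros(3)
    nearest = None
    for center, radius, color, refl in SPHERES:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, color, refl)
    if nearest is None:
        return np.array([0.6, 0.7, 0.9])  # background "sky" color
    t, center, color, refl = nearest
    point = origin + t * direction
    normal = normalize(point - center)
    local = color * max(np.dot(normal, LIGHT_DIR), 0.0)
    refl_dir = direction - 2.0 * np.dot(direction, normal) * normal  # mirror bounce
    return (1.0 - refl) * local + refl * trace(point, refl_dir, depth + 1, max_depth)

print(trace(np.array([0.0, 0.0, 0.0]), normalize(np.array([0.0, 0.0, -1.0]))))
```

Doing this per pixel, per bounce, for millions of pixels every frame is exactly the cost that made real-time ray tracing look a decade away.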
NVIDIA, naturally, didn't want any of that. According to Jensen Huang, that meant the company had to achieve an improvement equivalent to 1,000 times more performance - ten years early. The answer to that performance conundrum is RTX: a simultaneous hardware, software, SDK, and library push, united in a single platform. RTX hybrid rendering unifies rasterization and ray tracing, with a first, highly parallel rasterization pass and a second ray-tracing pass that acts only upon the rendered pixels, yet allows effects, reflections, and light sources that lie outside the visible scene to materialize - elements that are virtually nonexistent with pre-ray-tracing rendering techniques. Now, RT cores can work in tandem with the rasterization hardware to achieve reasonable rendering times for ray-traced scenes that would, according to Jensen Huang, take ten times longer to render on Pascal-based hardware.
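Structurally, the hybrid approach described above boils down to two passes: a raster pass that fills a G-buffer, and a ray pass that only fires rays for pixels the rasterizer actually covered. The skeleton below is our own simplified illustration of that flow, with dummy geometry and a stand-in reflection function - not NVIDIA's RTX pipeline.

```python
# Structural sketch of hybrid rendering: raster pass first, rays only where needed.
import numpy as np

WIDTH, HEIGHT = 64, 64

def raster_pass(width, height):
    """Stand-in raster pass: returns per-pixel coverage, albedo and normals.
    A real renderer would rasterize triangles here; we just fill a dummy quad."""
    coverage = np.zeros((height, width), dtype=bool)
    albedo = np.zeros((height, width, 3))
    normals = np.zeros((height, width, 3))
    coverage[16:48, 16:48] = True  # pretend a quad covers the middle of the frame
    albedo[coverage] = np.array([0.7, 0.7, 0.7])
    normals[coverage] = np.array([0.0, 0.0, 1.0])
    return coverage, albedo, normals

def trace_reflection(x, y, normal):
    """Stand-in ray pass for one pixel: a real implementation would cast a
    reflection ray into the full scene, including off-screen geometry."""
    return np.array([0.2, 0.3, 0.5])  # dummy reflected radiance

def render_frame():
    coverage, albedo, normals = raster_pass(WIDTH, HEIGHT)
    frame = albedo.copy()
    ys, xs = np.nonzero(coverage)  # only rendered pixels spawn rays
    for y, x in zip(ys, xs):
        frame[y, x] = 0.7 * frame[y, x] + 0.3 * trace_reflection(x, y, normals[y, x])
    return frame

frame = render_frame()
print("pixels that spawned rays:", int((frame.sum(axis=2) > 0).sum()))
```

The point of the split is that the cheap, massively parallel raster pass decides where rays are worth casting, so the expensive ray pass never touches empty pixels.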
(NVIDIA CEO Jensen Huang quipped that for gamers to achieve ray tracing before RT cores were added to the silicon and architecture design mix, they'd have to pay $68,000 for the DGX with four Tesla V100 graphics cards. He even offered to let them do so in 3,000 easy payments of $19.95.)
Turing has been ten years in the making, and Jensen Huang says this architecture and its RT Cores represent the company's greatest jump in graphics computing - and he likely meant the industry's as well - since CUDA. The pairing of three new or revised processing engines inside each piece of Turing silicon brings about this jump: the Turing SM, which allows for 14 TFLOPS and 14 TIPS (Tera Integer Operations per Second) of concurrent FP and INT execution; the Tensor Cores, with their 110 TFLOPS of FP16, 220 TOPS of INT8, and a doubling again to 440 TOPS of INT4 performance; and the RT Core, with its 10 Giga Rays/sec (a figure Jensen Huang loves saying). For comparison, the 1080 Ti can achieve, under peak conditions, 1.21 Giga Rays per second - almost ten times lower.
And the overall effect on performance is nothing short of breathtaking, at least in the terms put forward by Jensen Huang: a single Turing chip replaces the four V100 GPUs found within the DGX, with a render time of just 45 ms against the V100 setup's 55 ms for a ray-traced scene. Pascal, in its 1080 Ti rendition no less, would take 308 ms to render the same scene.
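For reference, here is the arithmetic behind those ratios, using NVIDIA's own quoted figures (none of these numbers are independently measured):

```python
# Quick ratios from the figures quoted above (NVIDIA's claims, not measurements).
turing_rays, pascal_rays = 10.0, 1.21                 # Giga Rays per second
turing_ms, volta_ms, pascal_ms = 45.0, 55.0, 308.0    # ms per ray-traced scene

print(f"Ray rate, Turing vs 1080 Ti:   {turing_rays / pascal_rays:.1f}x")    # ~8.3x
print(f"Scene time, Turing vs 1080 Ti: {pascal_ms / turing_ms:.1f}x faster")  # ~6.8x
print(f"Scene time, Turing vs 4x V100: {volta_ms / turing_ms:.2f}x faster")   # ~1.22x
```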
A New Standard of Performance
Ray tracing happens continuously within a single Turing frame, concurrently with the FP32 shading work - without RT cores, the green ray-tracing bar in NVIDIA's frame-time breakdown would be ten times larger. Now it fits entirely alongside FP32 shading, followed by INT shading, and there are resources left over to add some DNN (Deep Neural Network) processing to boot. NVIDIA is looking to generate artificially inferred pixels with that DNN processing: the 110 TFLOPS delivered by the Tensor Cores - some 10x the equivalent 1080 Ti throughput - will be used to fill in pixels, true to life, as if they had actually been rendered. Super Resolution applications may well follow, since this is effectively a way of increasing pixel density by filling in additional pixels in an image.
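The general shape of that idea - render fewer pixels, then reconstruct the rest - can be sketched without any deep learning at all. Below, a plain bilinear filter stands in for the trained network that DLSS runs on Tensor Cores; this is our own illustration of the concept, not NVIDIA's algorithm.

```python
# Render at a lower resolution, then reconstruct the missing pixels.
# A bilinear filter stands in here for the learned DLSS-style model.
import numpy as np

def render_low_res(h, w):
    """Stand-in for the expensive renderer: a simple procedural test image."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.sin(xs * 0.4) * np.cos(ys * 0.4) * 0.5 + 0.5

def upscale_bilinear(img, scale):
    """Placeholder reconstruction: bilinear interpolation. A DLSS-style network
    would instead infer detail learned from high-resolution 'ground truth' frames."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, h - 1), np.clip(x0 + 1, 0, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

low = render_low_res(270, 480)     # shade only a quarter of the final pixel count
high = upscale_bilinear(low, 2)    # reconstruct a 540x960 frame from it
print(low.shape, "->", high.shape)
```

The difference, of course, is that a trained network can add plausible detail that a fixed filter cannot, which is why NVIDIA pitches the reconstructed result as approaching the fully rendered ground truth.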
Perhaps one of the least "sexy" tidbits of NVIDIA's new generation launch is also one of the most telling. The change from GTX to RTX pays its respects to years of history while leaving them behind, unapologetically, for a full push towards ray tracing. It means leaving behind years upon years of pixel-rasterization refinement in pursuit of what was only theoretically possible not that long ago: real-time ray tracing of lighting across multiple, physically-based bodies.

The move from GTX to RTX means NVIDIA is putting its full weight behind the RTX platform for product iterations and for the future of graphics computing. It manifests in a re-imagined graphics pipeline, where costly, intricate, but ultimately faked solutions give way to genuine improvements in image quality. And it speaks of a dream where AIs can write software themselves (and maybe improve themselves), where the perfect, ground-truth image is approximated via DLSS by deep-learning networks trained away from your local computing power and sent your way - true cloud-assisted rendering, of sorts. It's bold, and it's been emblazoned on NVIDIA's vision, professional and gamer alike. We'll be here to see where it leads - with actual ray-traced graphics, of course.
Sources: Ray Tracing and Global Illumination, NVIDIA Blogs, Image Inpainting

65 Comments on NVIDIA's Move From GTX to RTX Speaks to Belief in Revolutionary Change in Graphics

#51
Fluffmeister
Yeah, sadly for 7 nm Vega, poor Volta is already powering the world's most powerful supercomputer.
Posted on Reply
#52
ViperXTR
efikkan: Raytracing is no more a "gimmick" than DirectX 12 was a "gimmick". We're still waiting for the good games to use it natively, yet many jumped on the hype train and bought cards that were supposed to be more "future proof". Most of those buyers have already or are about to replace their hardware anyway…
This. Most likely I'll just pick up whatever RTX 3070 comes along, after I replace my ancient Ivy Bridge build.
I remember that years ago the majority of games were supposed to be using DX12 and Vulkan by now; it turns out DX11 is still the dominant API. Nobody wants to spend too much time optimizing for DX12 when they could just let NVIDIA or AMD optimize their drivers for them.
Doom/Wolfenstein 2's Vulkan is amazing though.
Posted on Reply
#53
Fluffmeister
Turing brings mixed precision too, and with Vega having been on the market for a while, presumably loads of games support Rapid Packed Math by now?
Posted on Reply
#54
Prima.Vera
lexluthermiester: Do you understand just how useful ray-tracing is? When every movie studio on the planet uses it for their CGI, it's big and has been for decades. And now it's come to the consumer. If you fail to understand what ray-tracing has done for the world and its potential for gaming, that is only your failure.
lexluthermiester: How about every major Hollywood production that uses CGI? Welcome to the real world. The future is now.
Raunhofer: People here calling ray-tracing a scam.
After commenting they go back to watch the new Netflix movie filled with CGI they don't even know is CGI... because of ray-tracing. Ignorance never goes away, does it?
You guys really fail, over and over, to understand something. The CGI made for Hollywood's movies is NOT real-time; it's pre-rendered and post-processed over multiple layers, over very long periods of time. And yet it's not very realistic for living characters, like Thanos from Avengers, who was so fake and unrealistic I thought I was watching a CGI cartoon. Extremely poor rendering compared to the Autobots from Transformers, for example, where the CGI was exemplary. - Sure, it's not all ray tracing, but...
Posted on Reply
#55
ViperXTR
videocardz.com/newz/nvidia-geforce-rtx-2070-does-not-have-nvlink

so it doesn't support dual gpu, big dea-
The story continues. The GeForce RTX 2070 might not even be using TU104 GPU, but a mid-range TU106 instead. The card clearly has a different board and different Device ID. It seems that the whole GPU segmentation has shifted and we are now paying more for the same GPU-classes than before.
wait what!? Are we in a time where a low-end codename now costs 500 USD? (106 was used for low-end models before, 104 for mid-range, and 100/102 for high end)
Posted on Reply
#56
Xzibit
PCWorld reported, during their Full Nerd show, that RTX demos still exhibited noise.
RTX also relies on a fancy new denoising module. Denoising is very important in ray tracing because you can only cast a limited number of rays from each pixel in the virtual camera. So unless you leave your ray tracer running long enough to fill in the scene, you have a lot of unpleasant-looking “bald spots” or “noise.” There are some great research projects that help optimize which rays are cast, but you still wind up with a lot of noise. If that noise can be reduced separately, you can produce quality output much faster than if you need to address the issue by sending out that many more rays. Nvidia uses this technique to help it create frames more quickly.
Posted on Reply
#57
ViperXTR
Isn't that supposed to be resolved by the Tensor Cores running a denoising algorithm, or is it still not enough?
Posted on Reply
#58
BluesFanUK
lexluthermiester: Your comments will fall on deaf ears here. We don't care about consoles. Until consoles have the versatility and configurability of a PC, they will never have what it takes to compete.
Except for the millions of customers they have...

PC gaming has always been the luxury buyer's choice; the graphical power stomps all over consoles, no debate there, and you can customise to your heart's content.

The difference IS shrinking with each generation though. Put the average gamer in a room with a PS4/XB1 and a PC with a 1070 or better, and they'd be hard pressed to find the enormous differences you'd have gotten a decade ago.

A PS5/XB2 is going full 4K next gen; the newest Xbox is already capable of it. It's pure 'fanboyism' to think otherwise. If NVIDIA continues this trend of overpriced cards, there's only going to be one winner.
Posted on Reply
#59
Raunhofer
Prima.Vera: You guys really fail, over and over, to understand something. The CGI made for Hollywood's movies is NOT real-time; it's pre-rendered and post-processed over multiple layers, over very long periods of time. And yet it's not very realistic for living characters, like Thanos from Avengers, who was so fake and unrealistic I thought I was watching a CGI cartoon. Extremely poor rendering compared to the Autobots from Transformers, for example, where the CGI was exemplary. - Sure, it's not all ray tracing, but...
I've done my fair share of realistic 3D modeling, with ray tracing and without. I am well aware that it used to take many hours to render a scene. You know why it takes so long? Usually because of ray tracing, a.k.a. lighting! And NVIDIA does it in real time! Post-processing is easy and requires next to no performance; nearly all games are post-processed to some extent.

The reason Thanos looks unrealistic is the 3D model, the texturing, and especially the animation. Ray tracing does nothing for those things. Ray tracing doesn't mean photorealism - it is a key ingredient of photorealism, but it alone is not enough. Honestly, when was the last time you watched a modern Hollywood movie and shouted "that shadow is unrealistic!"? Never, because it is ray-traced. I said something about ignorance, and this is what I meant. People don't even realize how amazeballs it is to have real-time ray-traced lights.
Posted on Reply
#60
Prima.Vera
Raunhofer: People don't even realize how amazeballs it is to have real-time ray-traced lights.
It's not so amazing anymore when you have to pay something like €1,300 for ~30 FPS at 1080p ... :laugh::laugh::laugh::laugh::banghead:
Posted on Reply
#61
lexluthermiester
BluesFanUK: Except for the millions of customers they have...

PC gaming has always been the luxury buyer's choice; the graphical power stomps all over consoles, no debate there, and you can customise to your heart's content.
I am one of those people. I have 8 Nintendo consoles, 5 Nintendo portables, 3 Sony consoles, 2 Sony portables and a number of classic gaming consoles. I am well aware of how popular and well-loved console gaming is. What I meant by the above comment was that consoles will never compete with a well-made mid-range PC in raw power.
BluesFanUK: The difference IS shrinking with each generation though. Put the average gamer in a room with a PS4/XB1 and a PC with a 1070 or better, and they'd be hard pressed to find the enormous differences you'd have gotten a decade ago.
My current PC is built around a 7-year-old CPU/mobo/RAM combo with other parts that are up to 12 years old (sound card); the newest part is the video card, a 1080. Yet even with my older GTX 770, this system still kicks the Xbox and PS4 Pro (which I own) in the nads performance-wise. Sure, a well-made PC is pricier, but you get more bang with it too. And you will always be able to do more with a PC than just game.
Posted on Reply
#62
R0H1T
lexluthermiester: I am one of those people. I have 8 Nintendo consoles, 5 Nintendo portables, 3 Sony consoles, 2 Sony portables and a number of classic gaming consoles. I am well aware of how popular and well-loved console gaming is. What I meant by the above comment was that consoles will never compete with a well-made mid-range PC in raw power.

My current PC is built around a 7-year-old CPU/mobo/RAM combo with other parts that are up to 12 years old (sound card); the newest part is the video card, a 1080. Yet even with my older GTX 770, this system still kicks the Xbox and PS4 Pro (which I own) in the nads performance-wise. Sure, a well-made PC is pricier, but you get more bang with it too. And you will always be able to do more with a PC than just game.
Mid-range PC topping out at $1,000, i.e. excluding the monitor? Don't you think that's an unfair comparison? What's the latest price of these consoles?
Posted on Reply
#63
lexluthermiester
R0H1T: Mid-range PC topping out at $1,000, i.e. excluding the monitor? Don't you think that's an unfair comparison? What's the latest price of these consoles?
The last mid-range PC I built for a client came in at $676 for the tower alone, all new parts. I don't count the monitor because, when comparing to a console, you still have to buy a display/TV anyway, and those prices can range all over the place - unless you already have one, which is often the case.
Posted on Reply
#64
Prima.Vera
"Relative Shader Performance" ?? nGreedia's leather guy never fails to amaze with those bug words saying absolutely nothing...
Jeegarayyyzzz!!!
Posted on Reply