
NVIDIA Announces Partnerships With Multiple Studios to Bring RTX Tech to Gamers

Seems like we may still be a long way from games using it at high levels of detail, though maybe a cartoon-style game could pull it off?
That's not the way raytracing works. Light source -> bounces -> eye. What it bounces off of (be it a highly detailed model or a matte colored wall) doesn't matter in terms of compute. All it changes is the information the ray carries.
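To make that concrete, here's a toy sketch (hypothetical C++, one matte sphere plus a sky "light", nothing like a real renderer — and traced in reverse, camera toward light, the way games do it): the cost per ray is bounce count times intersection tests, while the surface only changes what the ray carries.

```cpp
// Toy backward path trace: camera ray -> bounces -> light. Illustrative only.
#include <cstdio>
#include <cmath>
#include <random>

struct Vec { double x, y, z; };
Vec operator+(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec operator*(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec norm(Vec a) { double l = std::sqrt(dot(a, a)); return a * (1.0 / l); }

// Ray/sphere intersection: returns hit distance, or -1 if missed.
double hitSphere(Vec orig, Vec dir, Vec center, double radius) {
    Vec oc = orig - center;
    double b = dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - c;
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    return t > 1e-4 ? t : -1;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(-1.0, 1.0);
    Vec sphereC{0, 0, -3};
    double albedo = 0.7;             // matte gray -- material detail doesn't change the loop
    double radiance = 0, throughput = 1;
    Vec pos{0, 0, 0}, dir{0, 0, -1}; // one camera ray

    // Per-ray cost = number of bounces x one intersection test,
    // regardless of what the surface it hits looks like.
    for (int bounce = 0; bounce < 4; ++bounce) {
        double t = hitSphere(pos, dir, sphereC, 1.0);
        if (t < 0) { radiance += throughput * 1.0; break; } // escaped to the "sky light"
        pos = pos + dir * t;
        Vec n = norm(pos - sphereC);
        // Crude diffuse bounce: random direction in the hemisphere around the normal.
        Vec d;
        do { d = {uni(rng), uni(rng), uni(rng)}; } while (dot(d, d) > 1.0);
        dir = norm(n + norm(d));
        throughput *= albedo;        // the material only scales what the ray carries
    }
    std::printf("radiance along this ray: %f\n", radiance);
}
```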


I see what NVIDIA is trying to do: end the use of traditional lighting methods. But this is a problem because of backwards compatibility. They need a crapload of shaders to keep older games running while they also soak a crapload of transistors into RT and Tensor cores to raytrace and denoise it. Turing could have been a lot better at either one of those jobs on its own. NVIDIA isn't wrong that we're at a branching point in computer graphics: either we keep faking it until we make it, or we start raytracing.
 
We have PCSS/HFTS and screen space reflections; what NVIDIA showed with ray traced shadows and reflections wasn't that astonishing. The global illumination though, that was some next level shit.
 
I think when the dust settles, raytracing isn't going to become mainstream (at least not now) because midrange cards (where the bulk of the gamers are at) do such a terrible job at it. However, I hope developers do implement raytracing code in their games so that graphics cards 10 years (or more) from now can go back and raytrace the classics without patching.
 
I think when the dust settles, raytracing isn't going to become mainstream (at least not now) because midrange cards (where the bulk of the gamers are at) do such a terrible job at it. However, I hope developers do implement raytracing code in their games so that graphics cards 10 years (or more) from now can go back and raytrace the classics without patching.
That ain't happening; either it is adopted or it isn't. That's why NVIDIA has to follow up by introducing a 7 nm GTX 3060 with at least RTX 2070 levels of performance.
 
AMD is going to make that costly for NVIDIA because they're not wasting transistors on RT/Tensor. AMD can already manage Mrays with GCN, using OpenCL on their shaders. The only way the transition will be smooth is if shaders can be upgraded (using few transistors) to accelerate raytracing instead of being relegated to traditional rendering. Since it obviously takes Grays to do RTRT and make it look acceptable, current hardware tech requires dedicated hardware for it. This is a no-win situation: either NVIDIA commits to it and keeps spending silicon on hardware there's no software to use, or NVIDIA acts like AMD and keeps charging forward with packing in more shader performance. Intel isn't likely to implement dedicated raytracing hardware either because (a dedicated card excepted) none of their GPUs can get remotely close to doing it in real time.

RTX 2070 is too slow. We're basically looking at 20 billion transistors and >10 Grays/s to do RTRT. That's literally $1000 right now and 250 W worth of power consumption. With traditional lighting methods, corners can be cut and the game stays very playable (think grayscale: it scales to the hardware). Reducing rays in raytracing can make the game unplayable, either through ridiculously low framerates or through grainy, confusing visuals (think black and white: either it is acceptable or it isn't).
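Rough back-of-the-envelope math behind those Grays figures (my own sketch, assuming 1080p/4K, 60 fps, and a handful of rays per pixel — numbers I picked, nothing official): rays per second = pixels x fps x rays per pixel.

```cpp
// Back-of-the-envelope ray budget: how many Grays/s a given target needs.
#include <cstdio>

int main() {
    struct Target { const char* name; long long pixels; int fps; int raysPerPixel; };
    Target targets[] = {
        {"1080p, 60 fps, 2 rays/px",  1920LL * 1080, 60, 2},
        {"1080p, 60 fps, 10 rays/px", 1920LL * 1080, 60, 10},
        {"4K, 60 fps, 10 rays/px",    3840LL * 2160, 60, 10},
    };
    for (const Target& t : targets) {
        double graysPerSec = double(t.pixels) * t.fps * t.raysPerPixel / 1e9;
        std::printf("%-28s -> %.2f Grays/s\n", t.name, graysPerSec);
    }
    // 1080p at 60 fps with ~10 rays per pixel already lands around 1.2 Grays/s,
    // and offline-quality renders use hundreds to thousands of rays per pixel,
    // which is where the "needs orders of magnitude more" argument comes from.
}
```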
 
No way they can do anywhere near what RT cores + Tensor acceleration do.
Whatever happens now, I hope the industry will follow; it's a step towards much better visual quality.
 
Tensor cores are just 4x4x4 matrix units. Yes, they can denoise, but traditional rendering methods have no noise--noise is injected via AA to soften edges.
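For reference, the op a Tensor core is built around is a fused multiply-accumulate over small matrix tiles, D = A*B + C; here's a plain C++ sketch of a single 4x4 tile (my own illustration of the math, not NVIDIA's hardware — the real unit does this on half-precision inputs with FP32 accumulation, many tiles per clock).

```cpp
// What one tensor-core op computes: D = A*B + C on 4x4 matrices.
#include <cstdio>

void mma4x4(const float A[4][4], const float B[4][4],
            const float C[4][4], float D[4][4]) {
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = C[i][j];                 // accumulate on top of C
            for (int k = 0; k < 4; ++k)
                acc += A[i][k] * B[k][j];        // 4x4x4 = 64 multiply-adds per op
            D[i][j] = acc;
        }
}

int main() {
    float A[4][4] = {}, B[4][4] = {}, C[4][4] = {}, D[4][4];
    for (int i = 0; i < 4; ++i) { A[i][i] = 2.0f; B[i][i] = 3.0f; C[i][i] = 1.0f; }
    mma4x4(A, B, C, D);
    std::printf("D[0][0] = %.1f (expected 2*3 + 1 = 7)\n", D[0][0]);
}
```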

The machine in this video has 4xTesla V100
https://www.pcworld.com/article/328...nvidia-rtx-real-time-ray-tracing-demo-e3.html

NVIDIA provides this graph (presumably RTX 2080 Ti):
[graph image]


They don't say what resolution the demo is running at. It's also panning very slowly, which reduces the workload per frame. The author noticed a frame rate dip with raytracing enabled. Remedy (the developer of the demo) likely didn't put the effort in to make traditional methods look almost as good as the ray traced version (e.g. they didn't even try to do reflections in water, which I just saw in Dishonored: Death of the Outsider...it was pretty...without ray tracing).



Pre-rendered on Linux server farms.

That demo was showcased on the DGX Station at 1080p, 24 fps. If Turing is capable of 45 ms per frame, we are still talking below 30 fps (1000 ms / 45 ms ≈ 22 fps) with dedicated hardware.
 
In other words it is, in fact, a bridge to nowhere. NVIDIA jumped the gun and is launching a feature set that is quite useless to gamers. Turing at best can do 10 Grays/s; realistically, it needs to do over 1000. That should put into context how out of place this is. It's eye candy for eye candy's sake, not practical for gaming whatsoever.
 
They need a crapload of shaders to keep older games running while they also soak a crapload of transistors into RT and Tensor cores to raytrace and denoise it.

Those RT cores probably don't occupy that much die space; we are talking about what's effectively a fixed-function DSP at the end of the day. This isn't as impressive as they make it out to be; the reason no one had done it before is that it's nearly impossible to effectively integrate such a thing in an era where everything is programmable.
 
https://www.zhihu.com/question/290167656/answer/470311731
Yubo Zhang said:
The RT core essentially adds a dedicated pipeline (ASIC) to the SM to calculate ray-triangle intersections. It can access the BVH and configure some L0 buffers to reduce the latency of BVH and triangle data access. The request is made by the SM: the instruction is issued, and the result is returned to the SM's local registers. The intersection instructions can run concurrently with other arithmetic or memory I/O instructions. Because it is ASIC-specific circuit logic, performance/mm2 can be increased by an order of magnitude compared to using shader code for the intersection calculation. Although I have left NV, I was involved in the design of the Turing architecture; I was responsible for variable rate shading. I am excited to see the release now.
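For a concrete sense of the work that dedicated pipeline hardwires, here's the classic Möller-Trumbore ray/triangle test in plain C++ (my own illustrative sketch of what the fixed-function unit replaces, not NVIDIA's implementation).

```cpp
// Möller-Trumbore ray/triangle intersection -- the inner loop an RT core hardwires.
#include <cstdio>
#include <cmath>

struct V3 { float x, y, z; };
V3 sub(V3 a, V3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
V3 cross(V3 a, V3 b)  { return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }
float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and the hit distance t if the ray (orig, dir) hits triangle (v0, v1, v2).
bool rayTri(V3 orig, V3 dir, V3 v0, V3 v1, V3 v2, float& t) {
    const float eps = 1e-7f;
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;        // ray parallel to the triangle
    float inv = 1.0f / det;
    V3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;        // outside the triangle (barycentric u)
    V3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;    // outside the triangle (barycentric v)
    t = dot(e2, q) * inv;                          // hit distance along the ray
    return t > eps;
}

int main() {
    V3 v0{-1, -1, -5}, v1{1, -1, -5}, v2{0, 1, -5};
    float t;
    bool hit = rayTri({0, 0, 0}, {0, 0, -1}, v0, v1, v2, t);
    std::printf("hit=%d t=%.1f\n", hit, t);        // expects hit=1 t=5.0
}
```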

Tensor cores are fairly large because they include both logic and storage. I'm not finding similar insight into them like the above.
 
You should realize that your opinion is the opposite of objective. It's subjectively mediocre; it's mediocre in your opinion.

No man, the game is mediocre objectively. It's something almost everyone forgot, it didn't leave a mark in any way, and it was a technical mess at launch. It's a mediocre game, and it's not like I'm trying to convince you of anything.
 
Bah, throwing facts at a perfectly good rant. This should be punishable by ban :D

You see ray tracing every single time you watch a movie with CGI in it. Ever wondered why that is?

Because they have the luxury of spending days, weeks, months or however long they need to churn out fixed-perspective CGI scenes on a render farm, unlike games, where frames need to be generated immediately, the perspective is variable, and the hardware is significantly lesser.
 
Because they have the luxury of spending days, weeks, months or however long they need to churn out fixed-perspective CGI scenes on a render farm, unlike games, where frames need to be generated immediately, the perspective is variable, and the hardware is significantly lesser.
Could it be it's a technique that's worth spending days, weeks, months and building server farms for?
 
Sidenote: Is Quantum Break good? I didn't even realize it was Remedy. Now I feel bad.
Very underrated. It was serviceable in the mechanics of play, had a good premise backed by a decent story, excellent voice acting (and acting by real, known actors), and was technologically a beautiful game. Overall I give it a B, almost a B+. Now that prices are reduced, I give it a buy recommendation.
 
Everspace is the newest one on there. Played it without an NVIDIA GPU, works fine. Xaled may be referring to games that actually require it, like Crazy Machines 2: Fluid. Try to run it without an NVIDIA GPU and it runs like trash.
 
Everspace is the newest one on there. Played it without an NVIDIA GPU, works fine. Xaled may be referring to games that actually require it, like Crazy Machines 2: Fluid. Try to run it without an NVIDIA GPU and it runs like trash.
Yep, but he was saying only one game SUPPORTED PhysX after it came out.
 
Could it be it's a technique that's worth spending days, weeks, months and building server farms for?
I'm not saying it isn't when it comes to movies, but in games it's not worth it, nor is the power available to do it properly at such a scale. NVIDIA knows that, but their aim here is to build the RTX brand and to be first with a pro-consumer raytracing solution, regardless of how half-assed it is.
 
I'm not saying it isn't when it comes to movies, but in games it's not worth it, nor is the power available to do it properly at such a scale. NVIDIA knows that, but their aim here is to build the RTX brand and to be first with a pro-consumer raytracing solution, regardless of how half-assed it is.
Didn't everything start "half-assed"? I'm assuming this will be the same (though we don't really know at this point). But games with movie-quality lighting? I can't understand how so many people can say that's a gimmick.
 
:roll: Wow! The raytracing must seriously be a resource hog if the top-of-the-line 2080 Ti can only manage those frames at 1080p.
I know this is piss poor, but I expected it could be even worse, in the 30s, whereas the average here is probably more than 40. Not worth the performance hit though, that's for sure. I'm eyeing the 2080s, but only if the general performance per dollar is better than the 1080 Ti; they're not that much more costly here, about 10% over the 1080 Ti price, and that's for preorders. The only way I can swallow the cost of that card is if it does as well as a 1080 Ti at the same price but with a lower TDP. The RTX stuff I'm gonna treat more like an extra novelty: run a few scenes, snap a couple of incredible Ansel shots, that's it, go back to RTX off.

Didn't everything start "half-assed"? I'm assuming this will be the same (though we don't really know at this point). But games with movie-quality lighting? I can't understand how so many people can say that's a gimmick.
It's always a high performance hit for better visual quality with stuff like that. Look at the first DX11 card, the GTX 480, in Metro's DX11 mode:

[Metro 2033 benchmark charts: 1680x1050 and 1920x1200]


And frankly that didn't feature as huge a leap as real time RT is now.
 
:roll: Wow! The raytracing must seriously be a resource hog if the top-of-the-line 2080 Ti can only manage those frames at 1080p.
That's actually playable territory on first-gen hardware. Still, for the asking money, I'd rather not have to choose between RTX and 4k.
 
RTX 2070 is too weak though, unless you lower some other settings first. Plus, I think Tomb Raider was the only one that featured two RTX technologies at the same time, lighting and shadows, both extremely taxing. Games running lighting only, with no ray traced shadows, may have a better chance to actually perform well. And I think lighting is probably the feature where we can immediately see an impact on visual quality, unlike shadows and reflections, which get lost in fast-paced gameplay. I think Metro is the one that implements lighting only, and given the character of the game, it may be quite a big visual difference. Plus, something tells me that Tomb Raider isn't gonna have stellar performance without those features to begin with. Look at the case of Rise of the Tomb Raider on the 980 Ti and its performance:
[Rise of the Tomb Raider charts: GTX 900 series performance, VXAO ambient occlusion, PureHair, and shadow quality]
 