NVIDIA Develops Tile-based Multi-GPU Rendering Technique Called CFR

AA would probably be one of the postprocessing methods done at the end of rendering a frame.
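To make that concrete, here's a toy CPU-side sketch (my own example, not NVIDIA's actual pipeline) of a post-process edge-blend AA pass. The key point: every output pixel reads its neighbours, and under a checkerboard split like CFR those neighbours may have been rendered by the other GPU, so the pass can only run once the full frame has been composited.

```cpp
#include <algorithm>
#include <vector>

// Luminance-only framebuffer, kept minimal for illustration.
struct Frame { int w, h; std::vector<float> lum; };

// Toy edge-blend AA: where local contrast is high, blend the pixel with
// its 4 neighbours. Threshold is arbitrary; real AA (FXAA etc.) is smarter.
void edgeBlendAA(Frame& f) {
    const Frame src = f;                      // read from the unmodified frame
    for (int y = 1; y < f.h - 1; ++y)
        for (int x = 1; x < f.w - 1; ++x) {
            float c  = src.lum[y * f.w + x];
            float up = src.lum[(y - 1) * f.w + x];
            float dn = src.lum[(y + 1) * f.w + x];
            float lf = src.lum[y * f.w + x - 1];
            float rt = src.lum[y * f.w + x + 1];
            float contrast = std::max({c, up, dn, lf, rt})
                           - std::min({c, up, dn, lf, rt});
            if (contrast > 0.1f)              // "edge" detected
                f.lum[y * f.w + x] = (c + up + dn + lf + rt) / 5.0f;
        }
}
```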

You can't get away with shared memory like that. You are still going to need a sizable part of the assets accessible by both/all GPUs. Any memory far away from the GPU is evil, and even a fast interconnect like NVLink won't replace local memory. GPUs are very bandwidth-constrained, so sharing memory access through something like Zen 2's IO die is not likely to work on GPUs at this time. With a big HBM cache for each GPU, maybe, but that is effectively still each GPU having its own VRAM :)
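A quick back-of-the-envelope run shows why. Assuming ~448 GB/s for local GDDR6 and ~50 GB/s usable over an NVLink bridge (both assumed round numbers, not measurements), even a small fraction of remote accesses craters effective bandwidth:

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const double localGBs  = 448.0; // assumed local GDDR6 bandwidth
    const double remoteGBs = 50.0;  // assumed usable NVLink bandwidth
    // p = fraction of memory accesses that land on the other GPU's VRAM.
    for (double p : {0.0, 0.05, 0.10, 0.25}) {
        // Weighted harmonic mean: time per byte mixes local and remote costs.
        double effective = 1.0 / ((1.0 - p) / localGBs + p / remoteGBs);
        std::printf("remote fraction %2.0f%% -> ~%3.0f GB/s effective\n",
                    p * 100.0, effective);
    }
}
```

With these numbers, just 10% remote traffic roughly halves effective bandwidth, which is why each GPU still needs its working set in local VRAM.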

Chiplet design has been the end goal for a while, and all the GPU makers have been trying their hand at it. So far, unsuccessfully. As @Apocalypsee already noted, even tiled distribution of work is not new.
I know NVLink won't replace memory; it's merely the protocol for inter-die communication.


I am saying the IO die could handle memory interleaving between the two 6 GB VRAM pools and assign shared and dedicated memory and resources. It's the same sort of memory management already in use, but with the ability to share resources across multiple dies. That would also make them good shared workstation cards, allowing hardware management of users and resource allocation.
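Something like this is what I mean: a purely hypothetical sketch of an IO-die address map (granule size, region layout, and names are all invented for illustration), with fine-grained interleaving for shared ranges and whole ranges pinned to one die for dedicated data.

```cpp
#include <cstdint>

constexpr uint64_t kGranule     = 256;        // assumed interleave granule
constexpr uint64_t kSharedLimit = 6ull << 30; // first 6 GB: shared, interleaved
constexpr uint64_t kDie0Limit   = 9ull << 30; // 6-9 GB: dedicated to die 0

// Which die's local VRAM backs a given address (12 GB total, 6 GB per die).
int dieForAddress(uint64_t addr) {
    if (addr < kSharedLimit)
        return static_cast<int>((addr / kGranule) & 1); // alternate granules
    return addr < kDie0Limit ? 0 : 1;                   // 9-12 GB: die 1
}
```

The shared window gives both dies balanced access to common assets, while the dedicated ranges keep latency-sensitive data local to one die.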
 
Here's a crazy idea: why not work with M$/AMD to optimize DX12/Vulkan? Hell, Vulkan has an open-source SDK; it doesn't even need special cooperation with anyone.
Also, back when DX12 launched there was a lot of hype about how well it would perform with multi-GPU setups using async technologies (independent chips & manufacturers): https://wccftech.com/dx12-nvidia-amd-asynchronous-multigpu/
Seems like everyone forgot about it...
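For what it's worth, the plumbing already exists in core Vulkan 1.1 as device groups. A bare-bones sketch (minimal instance setup, error handling mostly omitted) that enumerates linked-GPU groups:

```cpp
#include <cstdio>
#include <vector>
#include <vulkan/vulkan.h>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;   // device groups are core in 1.1

    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(
        count, {VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES});
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups.data());

    for (const auto& g : groups)
        std::printf("device group with %u GPU(s)\n", g.physicalDeviceCount);

    vkDestroyInstance(instance, nullptr);
}
```

A multi-GPU group becomes one logical device by chaining VkDeviceGroupDeviceCreateInfo into vkCreateDevice, and work is then routed with per-command device masks. So the API support is there; what's missing is games actually using it.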

The devs did a really nice job with DX12 mGPU in Tomb Raider & Rise of the Tomb Raider (what a beautiful game); I haven't played the third one yet, though. I was quite impressed with how well it ran. Not even a hint of stuttering for me at 4K / 60 FPS.
 

Yup, but it's still up to game devs to support it. Most don't bother.
 

Ya, with the ones who don't bother, I likewise don't bother paying for their games. I enjoy rewarding devs when they do a nice job, though, and not only buy their games but usually recommend them and write good reviews on Steam & Metacritic to try to help them out.
 
What about RT and this method? SLI for classic rasterization is hit and miss, but maybe it turns out very compelling for RT.
 
Ya, with the ones who don't bother, I likewise don't bother with paying for their games.

As a dev, explicit mGPU is a major PITA to support. Some smaller devs may never be able to fully support it. It's not an answer to just boycott games that don't use it, as that will really limit you to maybe 10% of games, tops, for eternity.

Not a fun reality, I realize, but mGPU in DX12 was not an answer; it was more a cop-out, passing the issue to devs.
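To give non-devs a feel for why, here's a deliberately simplified sketch using hypothetical stand-in types (not any real API). With explicit mGPU the engine, not the driver, owns frame partitioning, resource replication, cross-GPU copies, and synchronization, on every render path, for every GPU count you claim to support.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in types, not a real API: just the shape of the work.
struct Gpu     { /* queues, heaps, per-GPU copies of most resources */ };
struct CmdList { void drawMyShare() {} void copyFromOtherGpu() {} };
struct Fence   { void waitAll() {} };

void renderFrame(std::vector<Gpu>& gpus) {
    std::vector<CmdList> lists(gpus.size());
    // 1. The engine must decide how to split the frame (AFR, tiles, passes...).
    for (std::size_t i = 0; i < gpus.size(); ++i)
        lists[i].drawMyShare();
    // 2. Any pass that reads another GPU's output (shadow maps, TAA history,
    //    SSAO) needs a cross-GPU transfer scheduled by hand (assuming 2 GPUs).
    lists[1].copyFromOtherGpu();
    // 3. ...plus explicit synchronization, tested per GPU count, per vendor,
    //    per driver. Multiply all of this by every render path you ship.
    Fence{}.waitAll();
}
```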
 
Btarun, why is the segment about CFR in RT deleted from the original post?
 

Oh, I still play them, don't worry; no missing out here. At the end of the day, IMO at least, it's a feature that is of great help to customers and should be supported in AAA titles. The big companies out there paying their execs millions can afford to implement mGPU support. I'll never forget how well Rise of the Tomb Raider ran and how gorgeous it looked at 4K / 60 FPS with everything cranked. I was happy to pay for that game and equally glad to write glowing reviews on both Steam & Metacritic for them.
 