
Avatar: Frontiers of Pandora Performance Benchmark

I bet none of you would even have been able to tell that this game used software RT if it wasn't stated explicitly lol.

And by the way you are wrong anyway, the game will use hardware RT if available.
I wouldn't, but you can tell with Lumen. It's very, very noisy, and the denoising hardly seems to work. I'm generally an advocate of RT, but software Lumen has drawbacks that sometimes make it look worse than raster. Epic is putting more focus on hardware RT in future versions of UE5.

Clearly this game does utilize something from the RT hardware, as the 4070 Ti is as fast as a 7900 XTX when it isn't being handicapped by its bandwidth.
 
Hardware RT is much more accurate and capable of far more than software tricks.

That's what I always thought, and that's why I'm wondering if there's a visual difference in this game on cards without hardware RT. I hope someone looks at it and shows some comparisons.
 
I bet none of you would even have been able to tell that this game used software RT if it wasn't stated explicitly lol.

And by the way you are wrong anyway, the game will use hardware RT if available.
You can tell it has that soft, "software Lumen" look.

At software quality levels.

The reality is that hardware RT is not preferred in the current environment of consoles and GPUs that aren't 40-series.
That doesn't mean it couldn't scale.

It's a lame hidden setting as well.

Also, you don't need a 40-series card.

A 30-series doesn't do it as well, but it does do it.

That's what I always thought, and that's why I'm wondering if there's a visual difference in this game on cards without hardware RT. I hope someone looks at it and shows some comparisons.

I've heard there isn't, mate.

I think the fact that you can't change the quality of the GI says it all.

Not sure why people are OK with AMD limiting scalability in sponsored titles just so their hardware isn't shown up.

They are bad for PC gaming in my opinion, and if they have their way, everything is going to be limited to fake SDF tricks.
 
You can clearly see the lighting looks fake, with limited bounces.

That's why it looks so flat at times.
 
Since they have the same amount of compute units, bus width, etc.
This doesn't matter. What matters is $/perf, and the 7800 XT is better than any RDNA2 GPU in this department.
AMD marketing has a habit of shooting themselves in the foot.
Totally agree.
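To make the $/perf point concrete, here is a minimal dollars-per-frame sketch; the prices and FPS figures are hypothetical placeholders, not numbers from the review.

```python
# Minimal dollars-per-frame comparison. The prices and FPS below are
# hypothetical placeholders, not figures from the review.
cards = {
    "7800 XT (example)":    {"price_usd": 500, "avg_fps": 60},
    "RDNA2 card (example)": {"price_usd": 550, "avg_fps": 55},
}

for name, c in cards.items():
    # Lower dollars per average FPS = better price/performance.
    print(f"{name}: ${c['price_usd'] / c['avg_fps']:.2f} per average FPS")
```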
 
Beyond how very beautiful the game appears to be, this point stood out. This game seems to be the first very clear indication of the need for more than 8 GB on video cards. (If someone has already brought this up, TL;DR.)
Should I tell him he didn't read the article, or does someone else want to tell him he couldn't be more wrong?
 
Even 12 GB will start to suffer, with both of the relevant consoles for PC having 16 GB VRAM buffers that will become the default size in about 1 to 2 years.
Consoles don't have 16 GB of VRAM; they have 16 GB of total unified system memory that is shared between your game, the OS, and the GPU.
 
This again... consoles don't have 16 GB of VRAM. Consoles have 16 GB of total unified system memory that is shared between the OS and the GPU.
I understand what you are saying, but a console's memory is more unified between the CPU and GPU. The memory (due to its speed) will be allocated to whichever processor is demanding the load. Keep in mind that it is GDDR6 SDRAM, not DDR RAM, so you essentially have the best of both worlds. I guess it is like ReBAR on AMD systems, but even then x86 still needs DDR memory for the CPU; there is no hurdle like that in these consoles. It probably explains how Sony optimized NVMe storage on the PS5 versus x86.
 
When only a handful of GPUs can run a game at 1440p ultra with 60 FPS on a high-end machine with a very fast CPU and RAM, please just test the game on medium as well. It's good to know the performance for my $2,000 PC, but how many gamers have that?

You need a 7800X3D + 7900 XT combo to play this smoothly at 1440p. How many gamers actually own that?

These benchmarks are useful for 0.01% of all gamers right now; the Steam hardware survey can easily confirm that.

I think you underestimate the positive press you would get by publishing the next game review at medium settings, since all other websites test ultra. Be a platform for gamers, not just for the richest gamers on the planet.

You can argue that based on these charts one can estimate how the game would run at lower settings, but in reality people with a 3060 will more likely just go to YouTube and watch benchmarks from untrustworthy sources.
 
I have tried this on both PS5 and PC.

On PC it initially seemed OK, but after a while of playing, the draw distance and texture pop-in are pretty bad (10 GB RTX 3080). On the PS5 it's beautiful, with much better draw distance.

As a result, I have decided to buy it on the PS5, even though it costs a little more.
 
I think the fact that the consoles can run RT without falling off a cliff should tell you how weak the RT is in this game.

Especially the RTGI quality.
 
I understand what you are saying, but a console's memory is more unified between the CPU and GPU. The memory (due to its speed) will be allocated to whichever processor is demanding the load. Keep in mind that it is GDDR6 SDRAM, not DDR RAM, so you essentially have the best of both worlds. I guess it is like ReBAR on AMD systems, but even then x86 still needs DDR memory for the CPU; there is no hurdle like that in these consoles. It probably explains how Sony optimized NVMe storage on the PS5 versus x86.
The unified memory is allocated specifically and is not shared dynamically, at least in the case of the Xbox Series X; Sony doesn't specify. 10 GB of that 16 GB is used as VRAM and the other 6 GB is for the system and software: 2.5 GB is reserved for the system, leaving 3.5 GB for software. In the Xbox's case, the 10 GB and the remaining 6 GB are actually on different buses with different bandwidths, and the system/software always uses the slower 6 GB. Sony's is all on one bus but is likely divided up in a similar way. They are not fully dynamic like Apple's unified memory system, and they have 10 GB of maximum VRAM. The PS5 actually has an additional 512 MB of DDR4 for the system as well.

And by "optimized NVMe storage on the PS5 versus x86", do you mean DirectStorage?
 
The unified memory is allocated specifically and is not shared dynamically, at least in the case of the Xbox Series X; Sony doesn't specify. 10 GB of that 16 GB is used as VRAM and the other 6 GB is for the system and software: 2.5 GB is reserved for the system, leaving 3.5 GB for software. In the Xbox's case, the 10 GB and the remaining 6 GB are actually on different buses with different bandwidths, and the system/software always uses the slower 6 GB. Sony's is all on one bus but is likely divided up in a similar way. They are not fully dynamic like Apple's unified memory system, and they have 10 GB of maximum VRAM. The PS5 actually has an additional 512 MB of DDR4 for the system as well.

And by "optimized NVMe storage on the PS5 versus x86", do you mean DirectStorage?
Thanks for expanding on the conversation. It is still using the same GDDR6 RAM. Is the RAM physically separated on the board? If it is wired into the CPU, GPU, and NVMe on the board, it is still unified to me, and therefore faster than x86 at that PCIe specification. I know what you mean, though, about it not being unified in the way I saw it before.

I was talking about the faster sequential speeds of the NVMe drive in the PS5. Apparently (story) Sony was the first to go over 7,000 MB/s.
 
What did I miss? I did skim through some parts of it.

The Snowdrop engine is optimized to use as much VRAM as possible and only evict assets from VRAM once that is getting full. That's why we're seeing these numbers during testing with the 24 GB RTX 4090. It makes a lot of sense, because unused VRAM doesn't do anything for you, so it's better to keep stuff on the GPU, once it's loaded. Our performance results show that there is no significant performance difference between RTX 4060 Ti 8 GB and 16 GB, which means that 8 GB of VRAM is perfectly fine, even at 4K. I've tested several cards with 8 GB and there is no stuttering or similar, just some objects coming in from a distance will have a little bit more texture pop-in, which is an acceptable compromise in my opinion.

Copied from the end of the article.
Basically it doesn't matter much in this game, because it will always fill up VRAM since it can / it was designed to, but it doesn't affect performance or general texture resolution in a negative way ("8 and 16 GB 4060 Ti being the same even at higher-res settings"). A rough sketch of that fill-then-evict behavior is below.
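As a rough illustration of the "keep assets resident until VRAM is full, then evict" behavior the article describes, here is a minimal sketch; the class, budget, and asset sizes are made up for illustration, not Snowdrop's actual code.

```python
from collections import OrderedDict

# Toy model of "fill VRAM, evict least-recently-used assets only when full".
# Budget and asset sizes are made-up numbers, not Snowdrop internals.
class VramCache:
    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.used_mb = 0
        self.assets = OrderedDict()  # asset_id -> size_mb, oldest first

    def load(self, asset_id, size_mb):
        if asset_id in self.assets:            # already resident: mark recently used
            self.assets.move_to_end(asset_id)
            return
        while self.used_mb + size_mb > self.budget_mb and self.assets:
            _, evicted_mb = self.assets.popitem(last=False)  # evict only under pressure
            self.used_mb -= evicted_mb
        self.assets[asset_id] = size_mb
        self.used_mb += size_mb

cache = VramCache(budget_mb=8 * 1024)         # e.g. an 8 GB card
for i in range(200):
    cache.load(f"texture_{i}", size_mb=64)    # VRAM stays near-full, never "wasted"
print(f"{cache.used_mb / 1024:.1f} GB resident")
```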
 
Thanks for expanding on the conversation. It is still using the same GDDR6 RAM. Is the RAM physically separated on the board? If it is wired into the CPU, GPU, and NVMe on the board, it is still unified to me, and therefore faster than x86 at that PCIe specification. I know what you mean, though, about it not being unified in the way I saw it before.

I was talking about the faster sequential speeds of the NVMe drive in the PS5. Apparently (story) Sony was the first to go over 7,000 MB/s.
The RAM is arranged as many chips surrounding the die, just like on a GPU. That makes sense, as that's basically what it is: an APU surrounded by GDDR6. With the Xbox, they are physically separated, as 10 GB of it is on a 320-bit bus and the remaining 6 GB is on a 192-bit bus, but looking at it with your eyes you wouldn't know that. In the PS5, all 16 GB is on the same 256-bit bus. So in the way I think you're asking, no, they're not separated; that's done in software/firmware mostly. And you're right that a unified memory system is much faster than your typical slotted DIMM solution on desktop. Since the RAM is so much closer, there is less latency and less signal loss, so you can run higher speeds and much wider buses, which also contributes hugely to speed. A regular stick of RAM is only 64 bits wide. Though thinking about it now, it's technically not unified, as the RAM is not physically on the die as a full SoC like smartphone chips and Apple's M series.

I see what you mean about the SSD now. The PS5 has a PCIe Gen4 SSD, and those started showing up in 2019 at about 5,000 MB/s, so it makes sense that Sony's custom-built one was the first to reach 7,000. 7,000 MB/s is actually the minimum for DirectStorage according to Microsoft, so that all adds up nicely!

Sorry for my original comment, by the way. I realized it had a bad tone, so I edited it, but it looks like you replied before I did.
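To illustrate why bus width matters so much here, a minimal back-of-the-envelope peak-bandwidth calculation; the 14 Gbps GDDR6 data rate is the PS5's published figure, and the desktop example assumes DDR4-3200 on a single 64-bit channel.

```python
# Rough peak-bandwidth arithmetic: bandwidth = (bus width in bytes) * data rate.
def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

# PS5: 256-bit GDDR6 at 14 Gbps per pin (published spec) -> 448 GB/s
print(peak_bandwidth_gbs(256, 14))    # 448.0

# One desktop DDR4-3200 DIMM: 64-bit channel at 3.2 GT/s -> 25.6 GB/s
print(peak_bandwidth_gbs(64, 3.2))    # 25.6
```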
 
That's why we're seeing these numbers during testing with the 24 GB RTX 4090.
I wasn't referring to that. I was referring to the numbers shown that clearly indicate that even in a more optimized form, 11 GB was still needed to run the game well.
 
I wasn't referring to that. I was referring to the numbers shown that clearly indicate that even in a more optimized form, 11 GB was still needed to run the game well.
Clearly the opposite, actually. Look at the performance numbers for the 4060 Ti 8 GB vs 16 GB, and also what @Sithaer quoted.
 
I've tested several cards with 8 GB and there is no stuttering or similar, just some objects coming in from a distance will have a little bit more texture pop-in, which is an acceptable compromise in my opinion.
Some people disagree. Some people, myself included, do not see that as acceptable.

Clearly the opposite, actually. Look at the performance numbers for the 4060 Ti 8 GB vs 16 GB, and also what @Sithaer quoted.
Wait, what? I look at the following and see a clear pattern...

Given your conclusion, while the game doesn't critically "require" more than 8 GB to function, it does use more when available and seems to use it to buffer assets in VRAM for upcoming use. That's an excellent optimization, given that buffering graphics data in VRAM is much more efficient than buffering it in system RAM; that's the whole point of having a large VRAM cache (a rough transfer-cost estimate is sketched below).

To me, this represents the point where the limits of 8 GB have been exceeded. And while it's not the first gaming example to exceed that limit, it is a clear one.
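As a rough illustration of why keeping assets resident in VRAM beats re-streaming them from system RAM, here is a back-of-the-envelope transfer-time estimate; the ~32 GB/s figure assumes a PCIe 4.0 x16 link, and the asset size is a made-up example.

```python
# Rough cost of pulling an asset across PCIe versus having it already in VRAM.
# ~32 GB/s assumes a PCIe 4.0 x16 link; the 256 MB asset size is illustrative.
PCIE4_X16_GBS = 32.0
asset_size_gb = 0.256

transfer_ms = asset_size_gb / PCIE4_X16_GBS * 1000
print(f"~{transfer_ms:.1f} ms to re-upload")   # ~8 ms, i.e. half a 60 FPS frame budget
# An asset already resident in VRAM costs none of that per-frame transfer time.
```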
 
Looks very pretty, and that VRAM usage... yikes. Not a game I would normally play, but on sale it might be worth a try. Also, keep in mind the FPS could be slightly gimped in this early review, as a 7800X3D would most likely give a few more FPS.
 
I was expecting some full-blown neon-like glow, like Cyberpunk's Night City, but it's kind of disappointing. I guess our tech is not ready for that yet, as rendering the entire forest glowing like that would be expensive.
 