
Godfall Benchmark Test & Performance Analysis

This benchmark doesn't look right: the RTX 2080 Ti is about 3 FPS ahead of the RTX 3080, and then the RTX 3090, a card that is roughly 10% faster than the 3080 elsewhere, shows a 40% performance leap over it?!

I'm pressing X on this one.
 
That's too bad; I was kinda interested in this game (no, I'm not a kid) because I like looter games like the aforementioned Borderlands.

But I'd rather not try this on my 4 GB 570 at ultrawide resolution. I guess I'll check it out sometime next year with a new GPU, once the game is discounted/patched, maybe.
 
I've seen screenshots, and IMO the graphics in this game are subpar and the performance optimizations just aren't there.

It's shiny and bright - I'll give it that.
 
The RTX 3090 has about a 20% advantage in SMs and all the accompanying specs, and about 23% in memory bandwidth.
It must be VRAM limitations at play here, because you just don't get better-than-linear scaling from simply adding more SMs...
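As a quick sanity check on those deltas, here is a rough sketch using the publicly listed SM counts and memory bandwidth figures for the two cards (spec-sheet values, not measurements):

```python
# Back-of-the-envelope check of the 3080 vs. 3090 spec deltas.
# SM counts and bandwidth are the publicly listed spec-sheet figures.
specs = {
    "RTX 3080": {"sms": 68, "bandwidth_gb_s": 760},  # 10 GB GDDR6X, 320-bit
    "RTX 3090": {"sms": 82, "bandwidth_gb_s": 936},  # 24 GB GDDR6X, 384-bit
}

def delta_pct(metric: str) -> float:
    a, b = specs["RTX 3080"][metric], specs["RTX 3090"][metric]
    return (b - a) / a * 100

print(f"SM delta:        {delta_pct('sms'):.0f}%")            # ~21%
print(f"Bandwidth delta: {delta_pct('bandwidth_gb_s'):.0f}%")  # ~23%
# Neither figure comes close to a ~40% gap, which is why VRAM capacity
# looks like the remaining suspect.
```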
 
Weird how the 3080-to-3090 delta is so high once a game uses over 8 GB... when the 3080 has 10 GB. I'm getting GTX 970 3.5 GB vibes here...

Could someone who has a 3080 test whether memory bandwidth and performance suddenly drop off once more than 8 GB of VRAM is in use? Either that, or the game is just too reliant on VRAM bandwidth.
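For anyone who wants to run that test, here is a minimal sketch of a VRAM logger to run alongside the in-game benchmark; you would line its output up against a frametime capture afterwards. It assumes an Nvidia card with nvidia-smi on the PATH, and the two-minute duration and vram_log.csv filename are just placeholders:

```python
# Log VRAM usage once per second to a CSV while the benchmark runs.
import csv
import subprocess
import time

DURATION_S = 120  # placeholder: roughly the length of one benchmark run

with open("vram_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "vram_used_mib"])
    start = time.time()
    while time.time() - start < DURATION_S:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        )
        writer.writerow([round(time.time() - start, 1), out.stdout.strip()])
        time.sleep(1)
```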
 
I think PCIe Gen 3 is the problem behind TPU's odd results, like the 3080 and 3070 landing behind the 2080 Ti at 1080p.
Nah, I don’t think we are at “end of times” for PCIe 3.0 yet.
 
Weird how the 3080-to-3090 delta is so high once a game uses over 8 GB... when the 3080 has 10 GB. I'm getting GTX 970 3.5 GB vibes here...

Could someone who has a 3080 test whether memory bandwidth and performance suddenly drop off once more than 8 GB of VRAM is in use? Either that, or the game is just too reliant on VRAM bandwidth.

The GTX 970 3.5 GB thing was way overblown by the vast majority of people who don't understand how this sort of stuff works.
 
Any idea why the 3080 -> 3090 delta is so high in this game? Maybe VRAM?

It seems a bit like the 3080 is either notoriously shit in this specific game or was forgotten when the driver was released. It even ends up below a 2080 Ti at 1080p; that just can't be right.

Or it's right and it's truly a shit GPU :D But thát shit? Impossibru!

Also... I will reiterate: 10 GB is already looking like a very tight balance, on the very first PS5 'native' launch title...

And in much the same vein, it seems the 2080 Ti is more of a 4K card than the 3070 will ever be... because of VRAM caps. It's Nvidia as Nvidia does: you never get the full deal when they kick things down the stack. I'm certainly going to wait and see where these 4K VRAM caps land for the near future before I jump on another 8 GB GPU.


BTW... it seems we can forget that '12 GB VRAM required' rumor / leak / Tuber nonsense / (insert whatever you like) for this game.

The GTX 970 3.5 GB thing was way overblown by the vast majority of people who don't understand how this sort of stuff works.

And yet Nvidia had to pay out and lost a case over it. What value you attribute to that is truly irrelevant; the complaints were real and so were the problems. The 970 didn't perform as well as a true 4 GB version of it should have. Driver trickery was needed to keep smart allocation within the first 3.5 GB (I remember Far Cry 3... holy shit, what a mess), so effectively you really did miss out on half a gig. In addition, the card was that much less useful in SLI, where frametime variance was quite a lot worse than on dual 980s, for example.

None of this is overblown; it's just being nuanced down to nothingness by those who feel whatever way they feel about it. But the reality doesn't change: you got lied to, and you turn a blind eye to it by saying 'it's not so bad'. Effectively that spells 'I'm fine with this' to a company. Not really the message a responsible consumer would want to convey, if you care even a little bit about your next GPU purchase.

And lo and behold... Pascal had fully symmetrical VRAM buses, and Turing was much the same. Ampere similarly 'gets around bandwidth inconsistency' between different VRAM capacities. This is no coincidence.
 
Very poor performance and mediocre graphics...

For example, the old Ryse: Son of Rome (2013) has better graphics and the same level of performance, but at 4K.
 

I think it looks and runs fine given the GPU you use; what did you expect? Free RT and 120 FPS at max settings on yesteryear's midrange?

Also, Ryse is as on-rails as it gets, so not really a good example, I'd say. Godfall isn't thát much different, but it still has many more assets in play.
 
In this game my GPU performs at roughly the 2070 Super / 1080 Ti level; look at the TPU benchmarks. But what does this have to do with RT? This game doesn't have RT yet. It will come later, after a patch. For now, reflections = SSR.
 

It caps out sooner, but for your GPU it seems to me performance is where it should be, even without RT.
 
The graphics level is the same, the performance level three times worse. OK, understood...

Well... look at the results of the other GPUs in the stack. Most of the way up to the 1080 Ti the usual order seems to be maintained, then Turing GPUs with 8 GB and more come out quite well (more L2 cache), and Ampere tops the chart (even more changes to memory handling, more suited to the new console generation). My analysis here is that the 3090 tops the chart so decisively (even at 1080p, with a lower VRAM requirement) because the current crop of GPUs is missing some trick for managing VRAM that the 3090 simply doesn't need, because it has so much of it.

If you then look at 1080p compared to 1440p, Vega and a crop of similar GPUs fall off quite hard. More strain on VRAM? Additionally, this would also explain why the 2080 Ti with 11 GB can reach past the 3080, which has 10 GB and overall seems a poor performer here.

Quite possibly, these results will change quite a bit with new drivers?
 
No matter how you spin it, VRAM seems like the only explanation for this unusual scaling.
 
And here I thought 4K gaming had arrived. Looks like the new cards can't keep up with the ever-declining quality of console ports. Give it a couple of years, and the 3080 will be good for 1080p only...
 
Do you plan to test this game in future CPU reviews? I was pretty interested because it recommends a 3600X, which I think is the highest of any game? But now, looking at these results, I don't think it would be that useful. It seems the GPU will always be the bottleneck.
It's (yet another) UE4 game, so probably not.
 
And here I thought 4K gaming had arrived. Looks like the new cards can't keep up with the ever-declining quality of console ports. Give it a couple of years, and the 3080 will be good for 1080p only...
Because 4K is an ever-moving goalpost. That's why in a few years you'll want the next model, the 4080, not the 3080.
 
What settings is this game running at in the benchmark?
Would I be able to get my 580 8 GB pushing 60 FPS at 1080p?
 
There is a page that shows the settings he ran the benchmark at. The 580 was tested and showed 30 FPS at 1080p. Did you read the article?
 
Yes.
I'm asking if lowering the settings will allow me to get 60.
 
The main "problem" I see with the Ampere architecture in games is that each sub-partition in an SM consists of two blocks: one block with only FP32 units, and a second one running a concurrent mix of INT32 and FP32 ops... and I'm assuming that without better shader compilation, games end up using only the first block (half of the total shader units). So in some cases we can see the 2080 Ti with its 4352 'proper' CUDA cores easily outperform the 3070 with its effective 2944 (5888/2). Godfall is a good example, as it uses tons of shader effects in its materials... it was probably cooked too fast during development, and we're seeing not-so-great quality of coding.
Applications like LuxMark, 3DMark, etc. are better optimized for GPU utilization, and there we can see a massive, almost theoretical performance boost (Turing vs. Ampere).
Nvidia is aware of all this, and that's why the MSRPs of the Ampere 'teraflops monsters' are no higher than those of Turing GPUs.
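To put rough numbers on that argument, here is an illustrative sketch of how effective FP32 throughput would fall as the shared datapath spends its issue slots on INT32 work (or sits idle); the percentages are a simplification of the description above, not measurements:

```python
# Illustrative model: each Ampere SM sub-partition has one FP32-only datapath
# and one datapath that issues either FP32 or INT32 on a given cycle.
# int_share is the fraction of the second datapath's issue slots spent on INT32.

def effective_fp32_ratio(int_share: float) -> float:
    """Effective FP32 throughput as a fraction of the headline all-FP32 rate."""
    fp32_only = 0.5                     # first datapath: always FP32
    mixed = 0.5 * (1.0 - int_share)     # second datapath: FP32 when not doing INT32
    return fp32_only + mixed

for int_share in (0.0, 0.3, 1.0):
    print(f"INT32 share {int_share:.0%} -> "
          f"{effective_fp32_ratio(int_share):.0%} of headline FP32 throughput")
# 0%   -> 100% (the marketing-TFLOPS case)
# 30%  -> 85%
# 100% -> 50%  (the 'only half the shader units' case described above)
```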
 
Welcome to the forums, best first post I've seen here in a long time :)

Shouldn't it be trivial in such cases to run the 2nd block in just one mode (INT or FP)?
 