
Immortals of Aveum Benchmark Test and Performance Analysis

Wow, you don't need the launcher to play an EA game on Steam. Is this a first? Great news! I might buy it! :) I actually love EA games but hate the EA launcher, ha.
 
Yeah, definitely. Any idea if it's 60 FPS locked again?

It's not. They already announced, or at least I read somewhere, that it's 120 FPS locked this time :)

Which is good enough for me; beats the heck out of 60.
 
Wow, you don't need the launcher to play an EA game on Steam. Is this a first? Great news! I might buy it! :) I actually love EA games but hate the EA launcher, ha.
Correct, it is a first. It also doesn't have EA's Denuvo, only the regular Steam Denuvo, which allows unlimited GPU changes.
 
Not unexpected for an Unreal Engine 5 game: there's a little bit of pop-in as you travel across the world...
I thought Nanite should take care of this issue with its native LOD scaling, or what do you mean by pop-in?
This, I believe, is the one area where any game (more noticeable in first-person view) is still lacking to this day.
Swapping LODs on objects instantly breaks any immersion the game might want to create. :rolleyes:
 
Meanwhile, another UE5 game with Lumen and Nanite, Fort Solis:
[screenshot attached]
 
I thought Nanite should take care of this issue with its native LOD scaling, or what do you mean by pop-in?
You don't have to use Nanite just because you're using UE5.
 
I think you have to look back at the original intent of the technology. FSR 1 was released to be simple to implement, so that any GPU, including older ones, would have some sort of upscaling tech and you wouldn't have to sacrifice resolution. Sure, it is known to be inferior to DLSS, but I don't see Nvidia doing a GPU-agnostic solution like Intel and AMD.
IIRC FSR was born on the consoles, which are underpowered for the 4K TVs they're almost always plugged into. FSR lets you render at whatever resolution is required to reach 30 or 60 FPS, and then the UI/text can be rendered crisp and clean at 4K to give that next-gen polish that keeps people upgrading. Like FreeSync, it was already something AMD had in the dev toolkits for console manufacturers, but Nvidia's DLSS forced AMD to spin it into a consumer feature to try and reach feature parity.
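To put rough numbers on that idea, here's a minimal sketch of the render-vs-output split (the scale divisors below match FSR 2's published quality presets, but the struct and function names are just made up for illustration):

Code:
#include <cstdio>

// Hypothetical helper: pick the internal render resolution for a given output
// resolution and FSR quality mode. The 3D scene is rendered at the lower
// resolution and upscaled, while UI/text are composited at the full output size.
struct Resolution { int width; int height; };

// Per-dimension scale divisors from FSR 2's published presets:
// Quality 1.5x, Balanced 1.7x, Performance 2.0x, Ultra Performance 3.0x.
Resolution RenderResolutionFor(Resolution output, float scaleDivisor)
{
    return { static_cast<int>(output.width  / scaleDivisor),
             static_cast<int>(output.height / scaleDivisor) };
}

int main()
{
    Resolution output = { 3840, 2160 };                    // the 4K TV the console drives
    Resolution scene  = RenderResolutionFor(output, 2.0f); // "Performance" mode

    std::printf("3D scene rendered at %dx%d, upscaled to %dx%d; UI drawn natively at %dx%d\n",
                scene.width, scene.height,
                output.width, output.height,
                output.width, output.height);
    return 0;
}

At 4K "Performance" that works out to a 1920x1080 internal render, which is why keeping the UI at native 4K matters so much for perceived sharpness.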
Given the 3 comparison images, the game is far from ugly on the "Low" preset (compared to max settings)...
but still, the numbers are pretty low :/
Apart from some differences in shadowing, there's practically no visible difference between Low and Ultra, right down to geometry, LOD, and textures. Some shadowing looks marginally higher resolution at Ultra, but it's high enough resolution at Low that I can't see why you'd want to go higher, looking at still screenshots.
 
Hate to break it to you, but your 3080 is about to hit 3 years old.
Technically, Nvidia officially released the GeForce RTX 3080 12GB on January 11, 2022. But of course the 10 GB version, and Ampere itself, is almost 3 years old; still only a one-generation-old high-end part, though.

I can't believe how steep the requirements are for so little visual return. Man, 2023 game launches have been a roller-coaster, that's for sure, with both ends of the spectrum on display.

A recent DF video showed FSR2 does wonders for the re-release of Red Dead
Wonders compared to using no temporal solution at all, and it's also FSR2's anti-aliasing only, applied to a native-resolution image. No wonder it looked good in that comparison.
 
This has more to do with the engine than the developer, I think...
Yep, those total budget values are the literal results of a synthetic benchmark built into UE4/5:

Code:
FSynthBenchmark (V0.92):
===============
Main Processor:
        ... 0.025383 s/Run 'RayIntersect'
        ... 0.027685 s/Run 'Fractal'

CompiledTarget_x_Bits: 64
UE_BUILD_SHIPPING: 0
UE_BUILD_TEST: 0
UE_BUILD_DEBUG: 0
TotalPhysicalGBRam: 32
NumberOfCores (physical): 16
NumberOfCores (logical): 32
CPU Perf Index 0: 100.9
CPU Perf Index 1: 103.3

Graphics:
Adapter Name: 'NVIDIA GeForce GTX 670'
(On Optimus the name might be wrong, memory should be ok)
Vendor Id: 0x10de
GPU Memory: 1991/0/2049 MB
      ... 4.450 GigaPix/s, Confidence=100% 'ALUHeavyNoise'
      ... 7.549 GigaPix/s, Confidence=100% 'TexHeavy'
      ... 3.702 GigaPix/s, Confidence=100% 'DepTexHeavy'
      ... 23.595 GigaPix/s, Confidence=89% 'FillOnly'
      ... 1.070 GigaPix/s, Confidence=100% 'Bandwidth'

GPU Perf Index 0: 96.7
GPU Perf Index 1: 101.4
GPU Perf Index 2: 96.2
GPU Perf Index 3: 92.7
GPU Perf Index 4: 99.8
CPUIndex: 100.9
GPUIndex: 96.7
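
For what it's worth, a perf index like that is basically measured throughput normalized against a fixed reference machine, with 100 meaning "as fast as the reference". A minimal sketch of the idea (UE's actual FSynthBenchmark code is more involved, and the reference numbers below are invented):

Code:
#include <cstdio>

// Hypothetical illustration of a synthetic perf index: run a few workloads,
// divide each measured score by the score of a reference machine on the same
// workload, and average the ratios as a percentage (100 = reference machine).
struct Subtest { const char* name; double measured; double reference; };

double PerfIndex(const Subtest* tests, int count)
{
    double sum = 0.0;
    for (int i = 0; i < count; ++i)
        sum += 100.0 * tests[i].measured / tests[i].reference;
    return sum / count; // simple average of the normalized scores
}

int main()
{
    // Measured values taken from the GTX 670 output above; the reference
    // values are made up purely for this example.
    Subtest gpuTests[] = {
        { "ALUHeavyNoise",  4.450,  4.60 },
        { "TexHeavy",       7.549,  7.45 },
        { "DepTexHeavy",    3.702,  3.85 },
        { "FillOnly",      23.595, 25.40 },
        { "Bandwidth",      1.070,  1.07 },
    };
    std::printf("GPU perf index: %.1f\n", PerfIndex(gpuTests, 5));
    return 0;
}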

The devs and publishers may choose to devote extra time to improve the technical aspects of their game and optimize the code, but that is highly unlikely given the present state of the industry. The majority of gamers want flashy graphics with "amazing new technologies", but they quickly get bored playing the same game and move on to another. Most of them will never finish that new game they just purchased, and many will never return to it. And plenty of gamers don't even care what they're playing as long as it's new/trendy/hyped.

Publishers simply capitalize on this attitude to maximize their profits, by pushing out new titles faster. Little time is put into testing and optimizing the final product, and performance deficits are masked by "amazing new technologies" like image scaling and frame generation. Unless the paradigm changes, we can expect all major titles utilizing UE5 to perform similarly IMO.
 
Pay for the game upfront and only play it with hardware that's coming out in two generations.

Whack job.
 
Frostbite is a huge POS that's extremely complicated to work with. UE, on the other hand, makes it REALLY easy to produce a game world and make it work. With UE5 you can now play with a lot of things in the editor in real time, while previously you had to bake lighting just to get a preview, and on other engines most of these things aren't even possible. There is a reason why everybody is using UE.
Easy doesn't necessarily mean better. Mainly for companies that want to increase profits at any cost.
 
Easy doesn't necessarily mean better. Mainly for companies that want to increase profits at any cost.
If you can't find people to hire because nobody likes your engine and they all prefer to go somewhere else, there will be no business left for you to run.
 
If you can't find people to hire because nobody likes your engine and they all prefer to go somewhere else, there will be no business left for you to run.
Honestly, there are more people out of a job than the other way around. If you're arguing that people with the talent and experience to optimize an engine at a low level, to extract every last drop of performance from the available hardware, turn out to be a very select group, then I would have to agree.

But I still wonder if the only solution is to create engines full of shortcuts and easy ways (which work, but not optimally) to do everything. Those were good times, when developers created their own engines from scratch... AAA games were heavy, but they delivered graphics in line with the hardware they required. :(
 
But I still wonder if the only solution is to create engines full of shortcuts and easy ways (which work, but not optimally) to do everything
Unreal Engine's source is publicly available, and everybody is free to submit patches (which happens all the time). There are 8,800 pull requests on their GitHub that have already been merged, and 1,977 open ones.
 
That's not really true. I know this is a PC forum, but if you're talking about "architecture" and outselling (in gaming), you'd need to count the consoles too.

Consoles use the older RDNA 2 tech, and RDNA 2 is not receiving over-inflated budgeting or performing irrationally high in UE5 in general - it kinda just does its thing the way it should, with Ampere-like performance. Even if it was optimized for consoles only, it wouldn't apply to PC.
 
You don't have to use Nanite just because you're using UE5.
Yes, but in this case devs are using it - @W1zzard mentions this in the intro:
Immortals of Aveum harnesses the power of Unreal Engine 5, integrating cutting-edge technologies like Lumen and Nanite to enhance its visual and gameplay experience. These advancements ensure that players are immersed in a world of unparalleled realism and detail. Additionally, the game employs DirectX 12 as its graphics API exclusively, but there is no support for ray tracing. To improve FPS rates you may enable NVIDIA DLSS, DLSS 3 Frame Generation or AMD Radeon FSR 2.

Also:
Nanite is one example: This micropolygon geometry system intelligently adjusts the level of detail of any in-game object depending on how close you are to it. So, if you have an object that’s far enough away that you couldn’t make out fine details on it even if they were there, Nanite will actually make the object physically less complex—on the fly!—so that the game doesn’t waste resources rendering it fully.

“In any other game,” says our Chief Technology Officer Mark Maratea, “you might see what looks like a big craggy wall, but it’s actually flat with a craggy texture and maybe some shader trickery. We don’t have to do that; we actually build an object with all of that detail, and Nanite determines whether that detail shows up based on your distance.”
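That quoted description boils down to distance-based detail reduction. A minimal conceptual sketch of the idea (not how Nanite is actually implemented; the real thing works on triangle clusters and screen-space error, and every name below is made up):

Code:
#include <cmath>
#include <cstdio>
#include <initializer_list>

// Conceptual sketch only: spend triangles only where they could actually be
// seen. Objects that project to fewer pixels on screen get fewer triangles.
struct Mesh { const char* name; int triangleCount; };

// Rough triangle budget for an object of a given size at a given distance.
int TrianglesWorthRendering(const Mesh& mesh, float objectRadius, float distance,
                            float verticalFovRadians, int screenHeightPixels)
{
    // Approximate projected height of the object in pixels.
    float projected = (objectRadius / (distance * std::tan(verticalFovRadians * 0.5f)))
                      * screenHeightPixels;
    // Cap detail at roughly one triangle per projected pixel of area,
    // a crude stand-in for "you couldn't see the extra detail anyway".
    int budget = static_cast<int>(projected * projected);
    return budget < mesh.triangleCount ? budget : mesh.triangleCount;
}

int main()
{
    Mesh craggyWall = { "craggy wall", 2000000 }; // fully detailed source mesh
    for (float distance : { 2.0f, 20.0f, 200.0f })
        std::printf("%s at %.0fm: render ~%d of %d triangles\n",
                    craggyWall.name, distance,
                    TrianglesWorthRendering(craggyWall, 3.0f, distance, 1.0f, 2160),
                    craggyWall.triangleCount);
    return 0;
}

The pop-in complaints earlier in the thread are essentially about the transitions between detail levels being visible; the selling point of Nanite is that the adjustment is supposed to be fine-grained enough that you don't notice it.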
 
Consoles use the older RDNA 2 tech, and RDNA 2 is not receiving over-inflated budgeting or performing irrationally high in UE5 in general - it kinda just does its thing the way it should, with Ampere-like performance. Even if it was optimized for consoles only, it wouldn't apply to PC.

I know you are mostly talking about the silly arbitrary numbers this engine spits out when telling you what your GPU is capable of, but RDNA 2 does generally outperform Ampere in this game. I would even say Ampere in general is not performing very well.
 
I know you are mostly talking about the silly arbitrary numbers this engine spits out when telling you what your GPU is capable of, but RDNA 2 does generally outperform Ampere in this game. I would even say Ampere in general is not performing very well.

But unlike Ada vs. RDNA 3, there generally wasn't a performance gulf between Ampere and RDNA 2 unless RT was directly involved. At lower resolutions, RDNA 2 was actually even a little faster, thanks to its huge amount of raster muscle plus the 128 MB cache doing its magic. The 4080 and the XTX trade blows just about the way the 3090 and 6900 XT did; I would consider them equal, each with its own strengths (as was the case then). The 4090 is just straight up better than both.
 
But unlike Ada vs. RDNA 3, there generally wasn't a performance gulf between Ampere and RDNA 2 unless RT was directly involved. At lower resolutions, RDNA 2 was actually even a little faster, thanks to its huge amount of raster muscle plus the 128 MB cache doing its magic. The 4080 and the XTX trade blows just about the way the 3090 and 6900 XT did; I would consider them equal, each with its own strengths (as was the case then). The 4090 is just straight up better than both.

I was more talking about this game specifically, with how a 6800 nearly matches a 3080 at 4K and beats it at 1440p, and a 6900 XT beats a 3090 Ti at 1440p and nearly matches it at 4K. That's something you don't normally see in the majority of games, especially 3080 vs. 6800.
 
Hate to break it to you, but your 3080 is about to hit 3 years old.
Right, but a 3080 is about the same as a 4070, and the main advantage the 4070 has, besides being more energy efficient, is frame generation. The 3080 is a powerful card that happens to not support the newer software enhancements, which I suspect is intentional. It had all the performance I personally needed, but it's locked out by design, again my suspicion.

I just upgraded from a 3080 to a 4090 (arguably the best 4000-series value besides maybe the 4070) because I saw the writing on the wall: if you don't have upscaling along with frame generation (fake frames), a combo I call "fakescaling", you're not going to have a good time going forward. Games are being made in UE5 without the optimization you find in Fortnite, which would have made the 3080 run great in Immortals of Upscaling, I mean Aveum.

So the only way the 3080 being 3 years old is a problem is that game designers, or publishers, or whoever is to blame, are leaning heavily on the latest Nvidia software enhancements so they can do less optimization and get the game out the door as fast as possible for maximum profit. So you're not wrong, but saying the problem is the 3080 being 3 years old isn't entirely right either. The problem is that the 3080 is locked out by Nvidia, combined with the greed of whoever is responsible for Immortals of Upscaling, I mean Aveum, being unoptimized.
 