
AMD Software Adrenalin 22.7.1 Released, Includes OpenGL Performance Boost and AI Noise-Suppression

In Minecraft you can't really expect any scaling in performance that would make sense. You can upgrade from an RTX 2080 to an RTX 3080 and see no improvement at all, even though that would be a big upgrade in other games. There's a couple of reasons for that and they boil down to Minecraft basically being an inefficient Java game (along with the additional inefficiencies of modpacks). The inefficiency of the engine means that it benefits more from processor IPC and that's what limits your framerate performance. CPU bottleneck, basically.
I'm well aware of the game having a very crude engine written in Java using the LWJGL library (I believe), and of how hard that language and its libraries make it to write efficient and reliable code. Game engines usually interface with the OS to read input events, use threading smartly and efficiently so rendering and IO don't interfere with each other, and hopefully reduce or avoid heap allocations and random memory accesses in performance-critical code, etc., all of which is hard or impossible in Java. If this were written efficiently in C using OpenGL, it should easily be able to push 2000 FPS with this level of geometry.

And I don't mean this as criticism of Minecraft as a game; I'm well aware it started as a hobby project that went viral. I'm saying this because I'm well aware of the technical limitations of this game, and you are right about it effectively facing a CPU bottleneck (of sorts). But having a vastly faster CPU (which is what I assume you mean by "IPC") will not completely eliminate these performance bottlenecks. In short, some highlights of why:
- Java results in layers of extra function calls, many of which cause cache misses of a kind a faster CPU can't do much about.
- Mismatches between Java's types and OpenGL's state-machine design result in loads of heap allocations, Java garbage collection, and the resulting memory fragmentation. None of these scale well with a faster CPU (see the sketch after this list).
- Inefficient use of OpenGL itself, which has nothing to do with Java. Even an infinitely fast CPU can't make up for inefficient batching of operations. The solution is obviously a better engine design.
So there are design bottlenecks too.
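As a concrete illustration of the allocation point, here is a minimal sketch in plain Java (not Minecraft's actual code; the buffer size and fill loop are placeholders): the naive path creates fresh garbage every frame for the collector to chase, while the reuse path refills one pre-allocated direct buffer that could be handed straight to OpenGL.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustrative sketch only -- not Minecraft's actual code.
public class VertexUploadSketch {

    // Naive: new heap allocation on every call -> GC pressure every frame.
    static float[] buildVerticesNaive(int quadCount) {
        float[] verts = new float[quadCount * 4 * 3]; // x, y, z per corner
        // ... fill in vertex positions ...
        return verts;
    }

    // Better: allocate native memory once, clear and refill it each frame.
    static final ByteBuffer REUSED =
            ByteBuffer.allocateDirect(1 << 20).order(ByteOrder.nativeOrder());

    static ByteBuffer buildVerticesReused(int quadCount) {
        REUSED.clear();
        for (int i = 0; i < quadCount * 4 * 3; i++) {
            REUSED.putFloat(0.0f); // real code would write actual positions
        }
        REUSED.flip();
        return REUSED;
    }

    public static void main(String[] args) {
        buildVerticesNaive(1024);
        buildVerticesReused(1024);
        System.out.println("done");
    }
}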

I'm not sure why the minimums are so crap, but those big framerate stutters only happen when you're moving around and rendering new chunks. Stay in the same place for any amount of time and the 0.1% lows go way up to over 200 fps and it's butter smooth. The Nvidia cards also suffer from the same thing but manage to keep framerates above 60. But yes, you can definitely feel that it runs worse on Radeon. I think there's still a lot of improvement to come.
Was your comparison of the RX 6600 and GTX 1060 done on the same or a comparable PC?
Because I find it interesting that the Nvidia card didn't experience the same level of slowdown. If your description is correct, it would mean the difference here is due to overhead on those API calls, not the game engine itself. As mentioned, I suspect it has to do with updates to vertex buffers, and I find it puzzling that their new driver implementation is so much worse in this regard.
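To make the vertex-buffer point concrete, here is a minimal LWJGL sketch (assumes a current GL context; the buffer handle and data are hypothetical, and this is not Minecraft's actual code) of two common ways a chunk mesh can be re-uploaded. How cheaply each path is handled is exactly the kind of thing that varies between driver implementations.

import org.lwjgl.opengl.GL15;
import java.nio.FloatBuffer;

final class ChunkUploadSketch {

    // (a) Update in place: one cheap call, but the driver may stall if the GPU
    //     is still reading the buffer's previous contents.
    static void updateInPlace(int vbo, FloatBuffer chunkData) {
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
        GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, chunkData);
    }

    // (b) Orphan the buffer: re-specify its storage so the driver can hand out
    //     fresh memory while the GPU finishes with the old allocation.
    static void updateWithOrphaning(int vbo, FloatBuffer chunkData) {
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
        GL15.glBufferData(GL15.GL_ARRAY_BUFFER,
                (long) chunkData.remaining() * Float.BYTES, GL15.GL_DYNAMIC_DRAW);
        GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, chunkData);
    }
}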

And scratch what I said about testing my old Radeon card. (I used it in a Haswell machine which broke.) It turns out they dropped support for my R7 260, even though it's 2nd gen GCN. (Thanks AMD!)
 
AMD's been behind in OpenGL for so long, and this is the second big OpenGL boost recently - great news

Edit: After first thinking I had screwed up my before/after testing in Unigine Heaven, I managed to find a screenshot (only of the first run? weird) in some obscure Heaven subfolder. Since I don't have the latter run in screenshot form (only the default .html), here's a table:
            Before    After update
Score:      3308      4738
Avg. fps:   131.3     188.1
Min. fps:   12.3      18.7
Max. fps:   268.1     421.8
Settings: 1440p fullscreen, OpenGL, Ultra quality, Extreme tessellation.

That is a very, very solid performance increase. Well done AMD - but also: about time.
dude, that's a massive change

How on Earth would a graphics driver break your keyboard, trackpad and USB functionality? That seems incredibly odd to me. Something seems off.
Laptops can be weird

Being that AMD has some unified drivers, it sounds like he somehow removed his chipset and USB Drivers
(DDU screwup, perhaps)

Quite a few peeps on Reddit lost HDR too, so it's not system specific. Out of interest, do you use an 8-bit or 10-bit monitor? And do you use Display Stream Compression for your resolution/framerate mix? I'm trying to figure out if it is a DSC-related bug (as my monitor uses DSC at 4K 144 Hz) or a bit-depth-related issue.
Someone in a thread here on TPU noticed something similar to that, where certain monitor combinations raised his VRAM clocks (the focus of the thread) but also removed certain colour options... like 165Hz would remove the 6 bit option.


That said, on my Nvidia cards - HDR drops me to 120Hz from 165Hz. You might have been using the software style HDR from MS, and now AMD is trying to push the hardware one - try lowering your refresh rate?

The improvement in my modded Minecraft 1.6.4 average FPS is huge! The chunk loading still sucks (which is why the 1% and 0.1% lows are so low), but LOOK at those average and maximum improvements!

RX 6600 - Old driver (can't remember which version):
18-05-2022, 17:29:35 javaw.exe benchmark completed, 10548 frames rendered in 52.157 s
Average framerate : 202.2 FPS
Minimum framerate : 171.2 FPS
Maximum framerate : 252.4 FPS
1% low framerate : 48.1 FPS
0.1% low framerate : 30.7 FPS

RX 6600 - New driver (22.7.1):
29-07-2022, 10:46:34 javaw.exe benchmark completed, 29361 frames rendered in 89.625 s
Average framerate : 327.5 FPS
Minimum framerate : 168.4 FPS
Maximum framerate : 589.4 FPS
1% low framerate : 6.2 FPS
0.1% low framerate : 3.2 FPS


It still can't compete with my GTX 1060 6GB in minimum values though......

GTX 1060 6GB:
17-05-2022, 21:38:06 javaw.exe benchmark completed, 46906 frames rendered in 137.562 s
Average framerate : 340.9 FPS
Minimum framerate : 254.3 FPS
Maximum framerate : 463.5 FPS
1% low framerate : 122.3 FPS
0.1% low framerate : 64.4 FPS

Even though Radeon performance is still not great, the improvement in average FPS is very noticeable and a lot of the framerate stutters are gone. It's actually enjoyable now rather than just "playable".
That's possibly the biggest FPS increase a driver has ever caused in the history of drivers
The 1% lows are concerning, but it's crazy you mention the stutter is gone when the lows indicate the opposite!

@W1zzard are any of the TPU test titles OpenGL?
If so, this might be one of those rare cases where it's worth re-benching due to a driver update
 
I'm well aware of the game having a very crude engine written in Java using the LWJGL library (I believe), and of how hard that language and its libraries make it to write efficient and reliable code. Game engines usually interface with the OS to read input events, use threading smartly and efficiently so rendering and IO don't interfere with each other, and hopefully reduce or avoid heap allocations and random memory accesses in performance-critical code, etc., all of which is hard or impossible in Java. If this were written efficiently in C using OpenGL, it should easily be able to push 2000 FPS with this level of geometry.

And I don't mean this as criticism of Minecraft as a game; I'm well aware it started as a hobby project that went viral. I'm saying this because I'm well aware of the technical limitations of this game, and you are right about it effectively facing a CPU bottleneck (of sorts). But having a vastly faster CPU (which is what I assume you mean by "IPC") will not completely eliminate these performance bottlenecks. In short, some highlights of why:
- Java results in layers of extra function calls, many of which cause cache misses of a kind a faster CPU can't do much about.
- Mismatches between Java's types and OpenGL's state-machine design result in loads of heap allocations, Java garbage collection, and the resulting memory fragmentation. None of these scale well with a faster CPU.
- Inefficient use of OpenGL itself, which has nothing to do with Java. Even an infinitely fast CPU can't make up for inefficient batching of operations. The solution is obviously a better engine design.
So there are design bottlenecks too.


Was your comparison of the RX 6600 and GTX 1060 done on the same or a comparable PC?
Because I find it interesting that the Nvidia card didn't experience the same level of slowdown. If your description is correct, it would mean the difference here is due to overhead on those API calls, not the game engine itself. As mentioned, I suspect it has to do with updates to vertex buffers, and I find it puzzling that their new driver implementation is so much worse in this regard.

And scratch what I said about testing my old Radeon card. (I used it in a Haswell machine which broke.) It turns out they dropped support for my R7 260, even though it's 2nd gen GCN. (Thanks AMD!)

Yeah, there have been fans of Minecraft from day one wanting it to be remade in its own game engine. Being based on Java makes it very accessible for the modding community, so I can understand why they didn't migrate it to something else; it just makes the in-game FPS kind of crappy when it shouldn't be. That's the trade-off, I guess.

The changes were just a graphics card upgrade, yeah. Otherwise identical machine. You know more than I do about API calls but sounds to me like you're right.

One interesting thing I found is that the 22.7.1 driver absolutely HATES the Optifine 1.6.4 implementation of anisotropic filtering. No joke, I was getting 20 fps average with it set to 16x! Not sure what's going on there, but it runs terribly. Disabling it makes everything good again.

That's possibly the biggest FPS increase a driver has ever caused in the history of drivers
The 1% lows are concerning, but it's crazy you mention the stutter is gone when the lows indicate the opposite!

It actually does feel much better, even though the numbers suggest it's worse in terms of 0.1% lows. Previously there were constant microstutters as if the game was struggling to maintain Vsync framerate, which seem to be gone now. The huge dips while rendering chunks are somewhat predictable and so they're actually not that big of an issue. Overall the experience is much better and it's one of those situations where the numbers only show half the story.
 
It actually does feel much better, even though the numbers suggest it's worse in terms of 0.1% lows. Previously there were constant microstutters as if the game was struggling to maintain Vsync framerate, which seem to be gone now. The huge dips while rendering chunks are somewhat predictable and so they're actually not that big of an issue. Overall the experience is much better and it's one of those situations where the numbers only show half the story.
You weren't getting the whole 'hitting Vsync refresh rate' latency spikes, were you?

In another thread I posted images from a YouTube video where they showed that 58 FPS with a frame cap could have a quarter of the input latency of running at 60 Hz with Vsync on and hitting 60 FPS.
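As a rough, illustrative calculation (the exact figures depend on the game and driver settings): at 60 Hz with Vsync on and a queue of 2-3 pre-rendered frames, an input can wait roughly 3 × 16.7 ms ≈ 50 ms before its frame reaches the screen, whereas a cap at 58 FPS, just below the refresh rate, keeps that queue empty, so latency stays near a single frame time of about 17 ms.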
 
Being that AMD has some unified drivers, it sounds like he somehow removed his chipset and USB Drivers
(DDU screwup, perhaps)
Yeah, nope, happened to me too. All USB devices just stopped; no power to the keyboard, mouse or USB HDDs either. I also had a black screen and had to hit the power button to reboot.
 
@W1zzard are any of the TPU test titles OpenGL?
Nope, pretty much nobody uses OpenGL outside of academia .. I am rebenching GPUs with newest drivers right now. No game changes other than F1 2021 -> F1 2022, but slightly higher CPU OC, slightly better memory timings
 
It's just a pity that the new drivers have reduced the VRAM OC on the RX 6700 XT from 2150 to 2120 MHz :/
Now granted, I am using an older profile I made, and I am also usually on beta drivers, so this was not a clean install either, but still.
I'm currently messing with getting my absolute max OC right now, so having Fast Timings enabled is kind of an experiment.
 
Yeah, there have been fans of Minecraft from day one wanting it to be remade in its own game engine. Being based on Java makes it very accessible for the modding community, so I can understand why they didn't migrate it to something else; it just makes the in-game FPS kind of crappy when it shouldn't be. That's the trade-off, I guess.
I do play it from time to time, and while the frame rate seems fine to me, it's the game glitches, freezes and variable input lag that annoy me (not fun in survival mode), but most of these probably have little to do with the graphics driver.

I honestly think the performance would be good enough if it was stable; no one really needs >200 FPS.

One interesting thing I found is that the 22.7.1 driver absolutely HATES the Optifine 1.6.4 implementation of anisotropic filtering. No joke, I was getting 20 fps average with it set to 16x! Not sure what's going on there, but it runs terribly. Disabling it makes everything good again.
That's really interesting, as anisotropic filtering is a hardware feature exposed through the API. It's normally just a property set on a texture and only slightly increases memory bandwidth and TMU usage, so it's usually a no-brainer to enable in any game.

Do you have any other OpenGL game where you can force AF in the driver and see if it behaves the same? If not, there is always Unigine Valley.
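For reference, enabling AF through OpenGL in Java/LWJGL is roughly this (a sketch; the texture ID is hypothetical and a GL context must already be current), which is why such a large FPS hit from it is surprising:

import org.lwjgl.opengl.EXTTextureFilterAnisotropic;
import org.lwjgl.opengl.GL11;

final class AnisoSketch {
    // AF is just a per-texture sampling parameter; e.g. maxAniso = 16.0f.
    static void enableAnisotropicFiltering(int textureId, float maxAniso) {
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
        GL11.glTexParameterf(GL11.GL_TEXTURE_2D,
                EXTTextureFilterAnisotropic.GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
    }
}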

Nope, pretty much nobody uses OpenGL outside of academia ..
Just a few years ago big studios like Valve and id used to support it, but most of their newer titles use Vulkan instead. As no recent big titles use it, it's probably not very relevant for your benchmarks, but that doesn't mean good OpenGL support is irrelevant for gamers, as most gamers probably have many OpenGL games in their collection. A few well-known titles include:
Doom (2016): also supports Vulkan
Ion Fury (2019)
Farming Simulator 22 (2021): not as default, also supports Vulkan
CS: GO (2014): only in Linux and OS X, also supports Vulkan
Rocket League (2015): Linux only, but may be dropped
Dota 2 (2013): dropped OpenGL in favor of Vulkan
Plus countless cheap "simulator" games and Unity games.

Many of these support either DirectX or Vulkan too, so we are basically left with indie games and some emulators on Windows supporting only OpenGL. At this point there are probably no demanding OpenGL games for Windows, yet stability and consistent performance will remain important to gamers. So in conclusion, it's probably not very relevant for your benchmarks, not unless you started to "review" drivers.

Unfortunately, pretty much any multi-API game will achieve this support through some kind of abstraction layer, usually resulting in sub-par performance for the "second class" APIs.
 
OpenGL support is irrelevant for gamers
OpenGL is easy to learn because it's very high-level, which also makes it great to teach graphics concepts. For games this high-level approach is bad because you can't do things your way, or they are slow. There's a huge number of extensions, some vendor-specific, that try to work around this, but usually it's just easier to use Vulkan/DX12/DX11. OpenGL ES is big on mobile devices though.
 
The internal stress test doesn't work on any settings. 6900 XT Waterforce.
 
A few well-known titles include:
Doom (2016): also supports Vulkan
Ion Fury (2019)
Farming Simulator 22 (2021): not as default, also supports Vulkan
CS: GO (2014): only in Linux and OS X, also supports Vulkan
Rocket League (2015): Linux only, but may be dropped
Dota 2 (2013): dropped OpenGL in favor of Vulkan
Plus countless cheap "simulator" games and Unity games.

Stardew Valley (a 2D game) just switched from DX9 to OpenGL earlier this year, likely for cross-platform compatibility, but I'm not sure. It did mess up compatibility with some older Intel iGPU computers though, Sandy Bridge being the big one. Ivy Bridge seems to work OK.
 
One interesting thing I found is that the 22.7.1 driver absolutely HATES the Optifine 1.6.4 implementation of anisotropic filtering. No joke, I was getting 20 fps average with it set to 16x! Not sure what's going on there, but it runs terribly. Disabling it makes everything good again.

Try Iris instead, it should work great. Especially if your primary aim is to use Optifine to load shaders, Iris is much better.

But yeah, there is a significant number of OpenGL games. They may not be the most visually stunning titles around, but the number of games using it has increased significantly as developers and gamers become increasingly interested in Linux gaming and the Steam Deck.
 
OpenGL support is irrelevant for gamers
That cut is more than a little misleading, considering what I actually said was:
As no recent big titles use it, it's probably not very relevant for your benchmarks, but that doesn't mean good OpenGL support is irrelevant for gamers, as most gamers probably have many OpenGL games in their collection.
When nearly all GPUs on the market are fast enough to run popular OpenGL games at a satisfying performance level, it doesn't make much sense to benchmark them. But having reliable and stable OpenGL support will still be important, as games like Minecraft remain immensely popular; with >200M copies sold, it's still one of the most popular games today. So solid OpenGL support is important to gamers.

OpenGL is easy to learn because it's very high-level, which also makes it great to teach graphics concepts. For games this high-level approach is bad because you can't do things your way, or they are slow. There's a huge number of extensions, some vendor-specific, that try to work around this, but usually it's just easier to use Vulkan/DX12/DX11.
I think you are confusing old OpenGL 1.x with modern OpenGL; while most of the legacy features are still there, no recent software uses them. Obsolete OpenGL might still be used in education though; it was last time I checked. OpenGL 4.x isn't higher level than DirectX 9-11, it's just much less bloated, cleaner and more performant. OpenGL 4.x isn't as intuitive as OpenGL 1.x to teach, though.
Many think that DirectX 12 and Vulkan are low-level APIs, but they're not, at least not in the sense a programmer would think of, like the low-level APIs the drivers use or consoles use (or used to have?). DirectX 12 and Vulkan give the application more control over many parameters that the driver used to handle, settings for buffers, allocations etc., which can be used to harness more performance, but they are still high-level APIs in the sense that they are completely abstracted from the GPU architecture. The driver still translates these APIs into the true low-level API of the GPU.
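To illustrate the legacy-vs-modern distinction, a small LWJGL sketch (assumes a current GL context, and that the shader program and VAO were created elsewhere): the 1.x immediate-mode style pushes every vertex through a separate driver call, while the modern style issues a single draw against data already sitting in a GPU buffer.

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL20;
import org.lwjgl.opengl.GL30;

final class GLStyleSketch {

    // Old OpenGL 1.x immediate mode: one driver call per vertex, deprecated since 3.x.
    static void drawTriangleLegacy() {
        GL11.glBegin(GL11.GL_TRIANGLES);
        GL11.glVertex2f(-0.5f, -0.5f);
        GL11.glVertex2f( 0.5f, -0.5f);
        GL11.glVertex2f( 0.0f,  0.5f);
        GL11.glEnd();
    }

    // Modern OpenGL: vertex data lives in a VAO/VBO, a shader program is bound,
    // and the whole thing is drawn with one call.
    static void drawTriangleModern(int shaderProgram, int vao) {
        GL20.glUseProgram(shaderProgram);
        GL30.glBindVertexArray(vao);
        GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, 3);
    }
}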

Stardew Valley (a 2D game) just switched from DX9 to OpenGL earlier this year, likely for cross-platform compatibility, but I'm not sure. It did mess up compatibility with some older Intel iGPU computers though, Sandy Bridge being the big one. Ivy Bridge seems to work OK.
Stardew Valley has supported OpenGL on Linux for years. I guess they just realized it was less maintenance to support only OpenGL, as it's more widely supported. Keeping all kinds of old testing hardware isn't simple either; at some point we must expect developers to make compromises.
 
Nope, pretty much nobody uses OpenGL outside of academia .. I am rebenching GPUs with newest drivers right now. No game changes other than F1 2021 -> F1 2022, but slightly higher CPU OC, slightly better memory timings
Good to know

A lot of workstation/rendering stuff is still OGL, but that's not really TPU's target market

Do you have any more info on what they changed to improve performance so drastically?
 
It is not about the difference in performance, but about the very fact of, and reason for, doing so. Oddly enough, until now the OC from 2000 MHz to 2150 MHz worked every time, and now it is 2120 MHz, and only once 2150 MHz :confused:
Again, you won't see any difference going from 2120 to 2150, and it sounds like you are complaining for the sake of complaining.
 
Again, you won't see any difference going from 2120 to 2150, and it sounds like you are complaining for the sake of complaining.
And as someone who's OCing his, it's still "dynamic" and usually only hits 2140 regardless... oh, and you can still set it to 2150, so I don't know where this is coming from...
 
And as someone who's OCing his, it's still "dynamic" and usually only hits 2140 regardless... oh, and you can still set it to 2150, so I don't know where this is coming from...
It seems to be unstable at 2150 for them now? It doesn't matter whatsoever though, given the complete absence of a performance difference between 2120 and 2150.
 
It seems to be unstable at 2150 for them now? It doesn't matter whatsoever though, given the complete absence of a performance difference between 2120 and 2150.
I agree in principle, but I'm monitoring it in real time and it just fluctuates between 2138 and 2140. As for any real-world performance gains, I'll admit that's something I haven't tested; I just maxed it for the sake of it. I have not experienced any kind of instability in my benching/stress testing.
Because I'm always changing drivers like socks (been on 2 today), I always run the same suite of tests on every driver.
 
Whenever I do a driver install I like to run a stress test in the AMD software. 22.6.1 was broken and caused nothing but issues for me. I installed this driver and actually updated everything; Windows was reset. I also changed the motherboard but used the same brand. With any previous driver I would see 255 W of power draw reported by AMD. With this new driver I see 210 W of power draw and the GPU temp does not go past 42 °C (waterblock), but the clock is around 2400 MHz. I just hope it will be stable. I played 2 hours of TWWH this morning and it was fine. I will say though that the logging software seems buggy.
 
 
They don't use pretty graphs so I got bored with a TL;DR - but they have a lot of specifics there on some performance regressions as well
 