
Why games don't use AI HW?

Joined
May 8, 2021
Messages
1,978 (1.32/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
I have been wondering for a while why nobody makes any games that utilize the graphics card's AI HW. DLSS and RT are all nice and stuff, but let's be honest, games have had good graphics for over a decade now, while in-game AI hasn't particularly evolved since the early 2000s, and current games often have worse bot AI than some games from as far back as 2001. A common explanation back then was CPU power, and admittedly CPU performance has stagnated a lot, with basically only 4 major jumps in performance: AMD K7 (with single-threaded IPC not moving forward very much until after the Core 2 chips, which were merely a bit faster than K8 cores), Core i (with stagnation until Broadwell), Skylake (this one lasted a long while, basically until Rocket Lake and AMD's Zen 2) and Alder Lake (another big leap, with no big improvements expected for some time). Anyway, my point is that with RTX cards nV talked a lot about AI and stuff, but we only got DLSS. I wonder why those cores couldn't be used to finally drastically improve game bot AI and stop using the CPU for that? It sounds intuitive, but there's probably some hardware limitation. And besides the main question: so far nVidia (and for that matter AMD and Intel) has been talking a lot about that AI stuff, but besides some boring things like fancy upscaling and RTX Voice, the AI capabilities of those cards seem to be barely utilized. Why is that?
 
My best guess would be that as people have cards from entry to enthusiast level, it would probably be hard to implement as a weaker card would result in more "stupid" AI?
 
My best guess would be that as people have cards from entry to enthusiast level, it would probably be hard to implement as a weaker card would result in more "stupid" AI?
That actually sounds like a cool flaw. Like you can actually see the brains slowly shut down. I would love to see that :D
 
That actually sounds like a cool flaw. Like you can actually see the brains slowly shut down. I would love to see that :D
"damn this is a hard part"
*grabs a GT 710*
*plays that part*
*puts the better card back*
"here we go again"

:laugh:
 
"damn this is a hard part"
*grabs a GT 710*
*plays that part*
*puts the better card back*
"here we go again"

:laugh:
It's like using a dual-core CPU so that it glitches games and you can see through walls XD.
 
I have been wondering for a while why nobody makes any games that utilize the graphics card's AI HW. DLSS and RT are all nice and stuff, but let's be honest, games have had good graphics for over a decade now, while in-game AI hasn't particularly evolved since the early 2000s, and current games often have worse bot AI than some games from as far back as 2001. A common explanation back then was CPU power, and admittedly CPU performance has stagnated a lot, with basically only 4 major jumps in performance: AMD K7 (with single-threaded IPC not moving forward very much until after the Core 2 chips, which were merely a bit faster than K8 cores), Core i (with stagnation until Broadwell), Skylake (this one lasted a long while, basically until Rocket Lake and AMD's Zen 2) and Alder Lake (another big leap, with no big improvements expected for some time). Anyway, my point is that with RTX cards nV talked a lot about AI and stuff, but we only got DLSS. I wonder why those cores couldn't be used to finally drastically improve game bot AI and stop using the CPU for that? It sounds intuitive, but there's probably some hardware limitation. And besides the main question: so far nVidia (and for that matter AMD and Intel) has been talking a lot about that AI stuff, but besides some boring things like fancy upscaling and RTX Voice, the AI capabilities of those cards seem to be barely utilized. Why is that?

There are probably several factors at play.

The most important is that this is an extremely limited market right now in terms of hardware support. Many devices don't have AI/machine learning cores, including all video game consoles and >85% of PC graphics hardware. Game developers and publishers are going to write software that can run on a wide variety of devices. For example, there are over 111 million Nintendo Switch units sold.

If a major part of the game relies on hardware that usually isn't there, that's not practical in terms of profitability and end user satisfaction.

Let's say you can program NPC behavior on ML cores. Do your NPCs get stupider on an RTX 2060 versus an RTX 4090? What about the RX 5700 XT with zero ML/AI cores? A notebook with an Intel Ice Lake CPU? Does the load get passed off to the raster cores? The CPU?

And then what happens on mobile (where most of today's gaming revenue is being made)? Are those NPCs better on the iPhone 14 Pro versus some $99 Huawei?
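
To make that fallback question concrete, here is a minimal sketch (assuming PyTorch; the model and names are hypothetical) of the decision a cross-hardware game would be stuck making at startup:

```python
# Minimal sketch, assuming PyTorch is available. The "neural policy" is a
# hypothetical stand-in for a learned NPC brain that would run on tensor cores;
# the behavior tree represents the classic CPU fallback.
import torch
import torch.nn as nn

def pick_npc_brain():
    if torch.cuda.is_available():
        # Tiny placeholder network in half precision, the kind of dense math
        # that ML cores are built to accelerate.
        policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8)).half().cuda()
        return ("neural_policy", policy)
    # No suitable GPU: fall back to scripted behavior on the CPU.
    return ("behavior_tree", None)

kind, _brain = pick_npc_brain()
print(f"NPC brain selected: {kind}")
```

The uncomfortable part is the second branch: whatever runs there has to feel like the same game, which is exactly the design problem being described.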

The second factor is probably that convenient tools don't exist yet for multi-platform development. Only now are we seeing new ML development tools like RTX Remix.

My guess is that it will come, but the market isn't ready because only a small number of devices would be able to perform the tasks in a useful manner.

I think the typical gamer understands that better graphical quality and quantitative performance comes with better and more expensive hardware. But the hardware tier shouldn't define qualitative gameplay aspects or storyline difficulty.

As Lenne says, it doesn't make sense to say, "Oh, I gotta stop here on my Nintendo Switch and save my game. I'll pick it up later when I'm sitting in front of my $4000 RTX 9090 Ti 64GB to deal with this next boss fight" or "the boss is dumb on my cellphone so I'll switch to that and play the pretty cutscenes on my high-end gaming PC."

Games like Fortnite, Minecraft, Apex Legends, CS:GO, World of Warcraft, etc. are popular because they run on a wide variety of devices with different capabilities. It may look better on one device but the gameplay is the same.

There is a way to level the playing field and use the AI cores more fully: put the game in the cloud and stream graphics to the client device. Put the AI cores and GPUs in a data center.

A lot of today's machine learning advances are happening in data centers, not on people's desktops. If you read the Solutions section of NVIDIA's website, you'll see big names (like Wal-mart) harnessing ML cores for analysis. It's not happening on someone's desktop graphics card, it's happening in the cloud.
 
Because good AIs are boring.

1. Good AIs retain the advantage and feel unbeatable.

2. Good AIs, if they ever lose the advantage, run away and refuse to fight (instead waiting until they regain the advantage before fighting).

Neither of these attributes is fun. What people actually want in an AI is a predictable opponent that charges in and keeps fighting even when it's at a disadvantage.
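
A toy sketch of that contrast (made-up names and values, nothing from a real game): the "optimal" policy disengages the moment it loses the advantage, while the "fun" one just keeps pressing.

```python
# Toy sketch: neither policy needs ML cores; both are a few lines of
# ordinary game logic.
def optimal_policy(my_hp: int, enemy_hp: int) -> str:
    # Only engage from a position of advantage; otherwise disengage and wait.
    return "attack" if my_hp > enemy_hp else "retreat_and_wait"

def fun_policy(my_hp: int, enemy_hp: int) -> str:
    # Predictably aggressive: ignores the matchup and fights until it drops.
    return "attack" if my_hp > 0 else "dead"

for hp in (100, 40, 10):
    print(hp, optimal_policy(hp, 60), fun_policy(hp, 60))
```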
 
You are mixing up the Tensor cores and RT cores on nVIDIA GPUs. The machine learning cores on GPUs are rather specialized and I am not sure if they could be used for improving game AI. Weak AI is something I also see as a major problem in games, especially if you are into strategy games. The solution to this was often adding multiplayer to games, so there is no need for a better AI. Maybe the craftsmanship to do good AIs was lost because of this. On the other hand, there was that experiment where - I believe it was Google - they were training an AI to play StarCraft 2. Or take the Forza racing games: I think the bots there are also "trained" using player data.
 
And then what happens on mobile (where most of today's gaming revenue is being made)? Are those NPCs better on the iPhone 14 Pro versus some $99 Huawei?
That's actually the best platform to test that out, as phone chipsets have had NPUs (neural processing units) for some years now, even before RTX cards.


A lot of today's machine learning advances are happening in data centers, not on people's desktops. If you read the Solutions section of NVIDIA's website, you'll see big names (like Wal-mart) harnessing ML cores for analysis. It's not happening on someone's desktop graphics card, it's happening in the cloud.
Yeah, I'm aware - also in smaller labs with workstation cards. However, nVidia has been very vocal about AI on RTX hardware since the RTX 2000 series, and those AI capabilities they talk about have almost no use case. DLSS is probably the only thing that came out of it, and it leaves me wondering "that's all it can do?".

Because good AIs are boring.

1. Good AIs retain the advantage and feel unbeatable.

2. Good AIs, if they ever lose the advantage, run away and refuse to fight (instead waiting until they regain the advantage before fighting).
I think that's incorrect. The easiest thing for a programmer to write is a 100% accurate bot; programming believable failures is much harder, as is adding a lot of nuance or even some "personalities" to different bots. And I think that believable, typically human flaws can make for very good bots in games. Also, some games, like racing sims, would benefit a lot from very good driver AI.
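
To illustrate that asymmetry with a rough sketch (all names and values are made up): the perfect bot is one line, while the believable one needs tunable noise, delay and a hint of personality.

```python
# Rough illustration: "perfect" is trivial, "believable" takes tuning.
import random

def perfect_aim(target_angle: float) -> float:
    return target_angle  # trivially 100% accurate

def humanlike_aim(target_angle: float, skill: float, calm: float) -> float:
    # skill in [0, 1]: lower skill -> larger aim error (in degrees)
    # calm in [0, 1]: nervous bots overcorrect more
    error = random.gauss(0.0, (1.0 - skill) * 5.0)
    overcorrection = (1.0 - calm) * 0.3 * error
    return target_angle + error + overcorrection

random.seed(1)
print(perfect_aim(42.0))
print(humanlike_aim(42.0, skill=0.6, calm=0.4))
```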
 
Last edited:
The thing is, AI has gotten vastly better and more in-depth over the years, but Rome wasn't built in a day and progress doesn't arrive overnight.
 
AI in the sense of machine learning and AI as it relates to NPC behavior are vastly different things. To train a program to act "intelligently" in the face of a diverse range of inputs is a project greater in scale than the game itself.
 
On the other hand, there was that experiment where - I believe it was Google - they were training an AI to play StarCraft 2. Or take the Forza racing games: I think the bots there are also "trained" using player data.
Oh god, if that was the case, then Microsoft completely blew it. Drivatars are really, really dumb and don't seem to "observe" anything from human drivers. Horizon 5 also has a dumber AI than Forza Motorsport 2, where the AI changed racing lines, some bots were more aggressive and others were calmer, and it was pre-programmed to make human-like mistakes such as accidentally oversteering a car and spinning out. Hell, you could even hire drivers based on their skills and personality. It had really good AI not only back then, but also by today's standards. Meanwhile, Horizon 5 very clearly doesn't. But to be fair, Motorsport 2 was also a more simulation-biased simcade game: it had different tire compounds, more advanced tire physics, actual dirt simulation (cars got dirtier after driving more laps), a very realistic tire wear rate and a very pronounced gradual loss of grip. Anyway, I will shut up, this thread is about AI and RTX hardware.

AI in the sense of machine learning and AI as it relates to NPC behavior are vastly different things. To train a program to act "intelligently" in the face of a diverse range of inputs is a project greater in scale than the game itself.
Well, we also had something called voxels, which allowed the game environment to be destroyed completely granularly. Sounds very cool, and it was cool, but it was very challenging to make it into a game, because games still rely on some sort of progression, goals and goal-meeting conditions, and when you can just destroy anything, it's difficult to keep players from breaking the game while also not making them feel overly restricted. Obviously it was super niche game technology, but it made it into some lesser-known games: Delta Force and Delta Force 2 managed to pull it off, though it's important to note that both were army simulators, so obviously no on-rails gameplay and somewhat more abstract goals too. And there is last year's Teardown, which so far is just an open-world destruction sandbox without a campaign. Either way, I think that better AI could be built into more niche games, or perhaps integrated into some old game, basically the equivalent of putting RT into Quake. Whatever it would be, I think it would be interesting purely on technical merit, rather than being an AAA-tier game with a campaign and shit.
 
Last edited:
ML cores have been on Apple Silicon (first the A-Series, starting with the A11 in 2017, now the M-Series) for a while, so it's easier for iOS/iPadOS/macOS developers to harness them.

Segmentation in the PC hardware industry is a clear barrier to swifter adoption. From a game industry standpoint, developers also have to work within the limitations of console hardware as well.

DLSS is the most obvious consumer facing ML usage.

Macs and iPhones have more examples, like face recognition, image reconstruction, etc. In a recent keynote, Apple demonstrated an aspect of their FaceTime teleconferencing which reconstructs the user's desk from an oblique shot.

For NVIDIA RTX cards, there are a few demos out there including NVIDIA Canvas and NVIDIA Broadcast. I don't know if OBS is using RTX Tensor cores for background replacement but if not, it's coming soon.

I know Intel is putting Xe cores into Raptor Lake; not sure if that also includes ML cores, but if it does, their CPUs will be a bit more like Apple's SoCs. I've read a few image upscaling comparisons between AMD's FSR, DLSS 2, DLSS 3, and XeSS, and notably Intel's solution does not trail DLSS by much.

There are some chess engines, like Leela, that use ML cores through CUDA or OpenCL back ends.

I don't know if there's an easy pathway to splitting ML jobs between a local client device (like a smartphone) and a more powerful cloud server for the heavy lifting. That wouldn't necessarily be raster operations but more behavioral stuff.
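
A very rough sketch of what that split could look like (the endpoint and payload below are purely hypothetical): ask a cloud service for the NPC's next action, and degrade to cheap local logic when the round trip is too slow.

```python
# Hypothetical client/cloud split for NPC decisions. The URL and response
# shape are made up; the fallback keeps the game playable offline.
import requests

NPC_BRAIN_URL = "https://example.invalid/npc/decide"  # placeholder service

def decide_action(npc_state: dict) -> str:
    try:
        resp = requests.post(NPC_BRAIN_URL, json=npc_state, timeout=0.05)
        resp.raise_for_status()
        return resp.json()["action"]  # heavy model ran in the data center
    except requests.RequestException:
        # Latency spike or outage: fall back to simple local behavior.
        return "attack" if npc_state.get("enemy_visible") else "patrol"

print(decide_action({"enemy_visible": True, "health": 80}))
```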

One important thing is these cores are specialized. You can't just say, "these are unused transistors, let's put them to work." The job needs to be something suitable for ML cores, and right now image upscaling is the lowest-hanging fruit. Frame generation seems to be the next step. Both of these take some of the load off of the 3D rasterization cores.
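
As a rough illustration of "suitable for ML cores" (assuming PyTorch and a CUDA-capable card): the workload that maps well onto them is dense, batched, low-precision matrix math, not the branchy per-NPC logic most game AI is written as.

```python
# Sketch of the kind of work tensor cores are built for: half-precision
# matrix multiplies. Assumes PyTorch; falls back gracefully without CUDA.
import torch

if torch.cuda.is_available():
    a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
    b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
    c = a @ b  # dense batched math: the shape of work ML cores accelerate
    print(c.shape)
else:
    print("No CUDA device: the same matmul would just run on the CPU.")
```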
 
Last edited:
Developers are just getting into it. It's a bit premature as the tech is pretty young for the gaming industry.
 
Developers are just getting into it. It's a bit premature as the tech is pretty young for the gaming industry.
Perhaps, but I just watched some parts of the GTC 2022 keynote, and in the enterprise space RTX GPUs can simulate the whole world's weather, a character can learn human-like movements in just 2 days, and so on. You know, that sounds insanely advanced, and yet the consumer GPU is like "best I can do is upscaling". It looks like those cores are basically useless for gaming. And well, phones have had NPUs for a while too, and all they do is apply blur to your face and call you pretty.
 
Part of the challenge of using ML in the gaming industry is how far to let it run.

Let's say some developer uses ML cores to manage enemy behavior, so enemies learn to become tougher opponents with more interactions. Let's say machine learning on the 4090 brings the enemy combatants to a superior, unconquerable level very quickly. At what point does it become too much? It's a game, people are supposed to have fun playing it.

How many gamers are going to pay for a title they can never hope to finish? You don't need ML cores to do that.
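
One conceivable guardrail (a rough sketch, not something any engine actually ships): let the model keep learning, but clamp the knobs that actually reach the player to a designer-chosen ceiling.

```python
# Hypothetical "fun ceiling": the learned bot can get arbitrarily good,
# but what the player faces is clamped to designer-chosen limits.
from dataclasses import dataclass

@dataclass
class BotTuning:
    reaction_ms: float  # how quickly the bot responds
    accuracy: float     # 0..1 hit probability

FUN_CEILING = BotTuning(reaction_ms=250.0, accuracy=0.55)

def clamp_to_fun(learned: BotTuning) -> BotTuning:
    return BotTuning(
        reaction_ms=max(learned.reaction_ms, FUN_CEILING.reaction_ms),
        accuracy=min(learned.accuracy, FUN_CEILING.accuracy),
    )

# A 4090-trained monster gets the same ceiling as everyone else.
print(clamp_to_fun(BotTuning(reaction_ms=30.0, accuracy=0.98)))
```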

It's one thing to use ML for a military combat simulator. It's something else to use it for a game.
 
I have been wondering for a while why nobody makes any games that utilize the graphics card's AI HW.
Time. A lot of AI applications require time for training and processing, even when you have a good algorithm in place. One of the biggest issues with the initial implementation of DLSS and its reliance on the tensor cores is that it only helps if you're below a certain frame rate, because of the fixed time cost of the computations being run on the tensor cores. AI tends to have the best results when you have time to spend on it, and there can be big trade-offs in the AI space when you trade accuracy for performance. All in all, it's probably just not worth it for games, particularly when these implementations tend to vary between nVidia, AMD, Apple, and Intel. Tensor/AI cores are just special processors that do a particular task really well; that doesn't mean their use is ubiquitous.
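
To put rough numbers on that fixed cost (the figures below are illustrative assumptions, not measurements): if upscaling adds a fixed couple of milliseconds of tensor work per frame while cutting raster time by some percentage, it only pays off once native frame times are long enough.

```python
# Back-of-the-envelope sketch with made-up numbers: a fixed 2.0 ms upscaling
# cost and a 40% raster saving only help below a certain native frame rate.
FIXED_UPSCALE_MS = 2.0
RASTER_SAVING = 0.40

def frame_time_with_upscaling(raster_ms: float) -> float:
    return raster_ms * (1.0 - RASTER_SAVING) + FIXED_UPSCALE_MS

for raster_ms in (4.0, 8.0, 16.7, 33.3):  # ~250, 125, 60, 30 fps native
    new_ms = frame_time_with_upscaling(raster_ms)
    verdict = "helps" if new_ms < raster_ms else "hurts"
    print(f"native {raster_ms:5.1f} ms -> upscaled {new_ms:5.1f} ms ({verdict})")
```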
 
It's one thing to use ML for a military combat simulator. It's something else to use it for a game.
Many military simulators are also games. I wouldn't put the Arma series close to the CoDs, and while Arma may not be exactly an actual army simulator (well, it is, just not government-level accurate), games like it may benefit.
 
Many military simulators are also games. I wouldn't put the Arma series close to the CoDs, and while Arma may not be exactly an actual army simulator (well, it is, just not government-level accurate), games like it may benefit.

Well, then someone can market it if there's a big enough audience to make it profitable and worthwhile.

There are always some people who really want some highly specialized niche attention to their personal fetish/interest/whatever. It's one thing to write a combat simulator for the world's military forces.

It's something else to write something that Jill Gamer wants to play with her friends for a couple of hours. It's not like these combat simulators are 15-20 minute sessions that you can pause so you can snack on another CBD edible.

The point of a combat simulator is to train someone for a real situation that might result in casualties and lead to telling some woman that her son died in the line of duty. The point of Apex Legends is to have fun.

I am not a good gamer. Most gamers are WAY more skilled than I am. I just want to have some fun, maybe for an hour or two. Getting thrown into a boss fight and seeing "You are dead" in twenty seconds isn't my idea of fun.

Surfers have a mantra: the best surfer is the one who is having the most fun in the water. You don't need Kelly Slater skills. Skateboarders say the same thing. If a game isn't fun, why would you go back and do it again?

In single player games, I play in Easy/Story mode. Sometimes I look at video walkthroughs. Sometimes I use cheat codes and trainers.

For a soldier, it's their job.
 
Last edited:
One important thing is these cores are specialized. You can't just say, "these are unused transistors, let's put them to work." The job needs to be something suitable for ML cores, and right now image upscaling is the lowest-hanging fruit. Frame generation seems to be the next step. Both of these take some of the load off of the 3D rasterization cores.
Actually, DLSS 3 frame generation takes the load off the game engine, not the GPU.
 
Well, then someone can market it if there's a big enough audience to make it profitable and worthwhile.

There are always some people who really want some highly specialized niche attention to their personal fetish/interest/whatever. It's one thing to write a combat simulator for the world's military forces.

It's something else to write something that Jill Gamer wants to play with her friends for a couple of hours. It's not like these combat simulators are 15-20 minute sessions that you can pause so you can snack on another CBD edible.
Obviously, but people buy Arma.
 
Obviously, but people buy Arma.

Like I said, there's always a small number of people who have such interests.

You are ignoring the reality that the game industry is motivated to recover its development costs. Most of Arma's revenue comes from military and similar customers, not Joe Gamer. Whoever writes Arma isn't a charity, and they're not writing it on their lunch breaks.

Respawn Entertainment isn't going to make money from the U.S. Army from Apex Legends cosmetics and passes either.

Do you want to buy a Baker's Pride commercial pizza oven, like the ones in pizzerias all across the USA? Sure, go to your local restaurant supply store and I'm sure they'll be happy to take your order. But Baker's Pride's primary customer base is professional restaurants and food service operations, not that kooky neighbor who built out his garage to put in a commercial-grade pizza oven.

It takes time and money to get programmers to write software. In the end, the efforts of a developer need to result in a product that pays the bills. If there's very little return for a particular feature that takes an enormous amount of time and money, is it worth it?

Heck, a lot of game developers don't even bother with DLSS. But with each passing month, more developers are seeing the value of putting some time and effort into it because of the possibility of a better product that might result in higher sales. If a game that could previously only be satisfactorily played on a --70 tier graphics card can now be played on a --60 tier card with image upscaling, that opens up a larger market.

Catering only to the high end just results in high prices. Would you like a game that only runs on overclocked 4090 cards and needs to be priced at $500 to pay for development costs?

Gaming is already headed down the path to being a plutocracy, where microtransactions are the main way for most gamers to achieve success because in-game achievements have a low ROI.

I think the gaming industry can use ML more once the hardware support is widely deployed. We're not there yet. NVIDIA Tensor cores are still pretty new and AMD still doesn't even have anything yet. Even though Apple has had ML cores in their silicon for a while, smartphone game developers are still motivated to think about other smartphone platforms. iPhone is what -- 20% of the world's installed smartphone base?

As Aquinus mentioned, ML cores are designed for specific workloads that aren't prevalent in most game titles.
 
Last edited:
The point of Apex Legends is to have fun.

I am not a good gamer. Most gamers are WAY more skilled than I am. I just want to have some fun, maybe for an hour or two. Getting thrown into a boss fight and seeing "You are dead" in twenty seconds isn't my idea of fun.
You’re in luck, as almost all of today’s games are tailored to that taste. And to shamelessly dig on the industry, they’ll also take a deep breath, change a few bits, and then (somehow quite successfully) sell the game all over again.

I’m in the camp where if I’m not just narrowly avoiding getting curb stomped by my wits and skill, then I’d be wondering why I’m wasting my time on a cake walk. I play less intense games too, certainly, but those are less intense because of the pacing (like those from Mimimi, Stoic, et al.).

Gamers like me get few real-time games catered to us. Dark Souls and co. are rare in the industry and appreciated.

I’d recommend watching that short vid to see where we ‘hardcore’ gamers are coming from.

Happy gaming to all and another plug for CircleToonsHD.
 
Last edited:
Because good AIs are boring.

1. Good AIs retain the advantage and feel unbeatable.
Sometimes when I'm playing Forza, I wish that all cars go straight to hell and let me win. :rockout:
 