Listen, all games need to listen

Right now, we have some of the best tools ever made to build audio for computer games.

The FMOD/Wwise duopoly, along with all the custom and older audio engines, lets audio designers around the world integrate audio with input, game mechanics, animation, interactivity and so on.

We have great tools. The question is: in the experience of playing the game, how much control does audio have? How central is audio to a core mechanic, to an experience?

The answer is: we need game engines to listen to players and to audio! How many games are listening? Why are NPCs in FPS games so impossibly deaf? Someone screaming like a pig cannot go unheard by someone a few stairs away. Why? We’ve all watched and listened to the making-of of Inside’s game audio. The biggest takeaway is that the game runs on top of the audio layer. That is, audio happens and the game waits for it or acts accordingly. Most games don’t. Most games will stream some music or soundscape and cut it like someone unplugged the speakers, just to load the next level. In 2018, with the amount of storage, memory speed and all those CPU threads, we have very, very few technical excuses. It’s all about decisions.
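To make the “deaf NPC” point concrete, here is a minimal sketch of what letting the game listen could look like: gameplay sounds are broadcast as events, attenuated by distance, and any AI listener whose perceived loudness clears its hearing threshold reacts. Every name here (`SoundEvent`, `Listener`, `broadcast`, the inverse-square falloff) is hypothetical illustration, not the API of FMOD, Wwise or any real engine.

```python
import math
from dataclasses import dataclass

@dataclass
class SoundEvent:
    position: tuple   # (x, y) world position of the emitter
    loudness: float   # loudness at the source, arbitrary units

@dataclass
class Listener:
    name: str
    position: tuple
    hearing_threshold: float  # minimum perceived loudness that triggers a reaction

def perceived_loudness(event: SoundEvent, listener: Listener) -> float:
    """Attenuate source loudness with a simple inverse-square distance falloff."""
    dx = event.position[0] - listener.position[0]
    dy = event.position[1] - listener.position[1]
    distance = math.hypot(dx, dy)
    return event.loudness / (1.0 + distance ** 2)

def broadcast(event: SoundEvent, listeners: list) -> list:
    """Return the listeners that actually hear the event."""
    return [l for l in listeners
            if perceived_loudness(event, l) >= l.hearing_threshold]

# The scream a few stairs away: the nearby guard hears it, the distant one doesn't.
scream = SoundEvent(position=(0.0, 0.0), loudness=100.0)
guards = [
    Listener("guard_upstairs", (2.0, 0.0), hearing_threshold=5.0),
    Listener("guard_far_away", (10.0, 0.0), hearing_threshold=5.0),
]
alerted = broadcast(scream, guards)
```

The point is less the falloff formula than the routing: the same events the mixer already receives are also fed to the AI, so “hearing” costs almost nothing extra.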

Those are decisions that need to be made at the game design and game engine level. It’s not about audio technology, generated or authored content. It’s about deciding that audio should be treated as an underlying layer orchestrating the rest. It is extremely efficient at enhancing a game and polishing it.
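One hedged sketch of what “the game runs on top of the audio layer” might mean in practice: instead of cutting the mix the moment a level load is requested, the game subscribes to a musical cue and only advances when the cue resolves. All names here (`AudioCue`, `on_finished`, `tick`, the beat clock) are invented for illustration, not taken from any real middleware.

```python
class AudioCue:
    """A playing sound that notifies subscribers when it finishes."""

    def __init__(self, name: str, duration_beats: int):
        self.name = name
        self.duration_beats = duration_beats
        self._on_finished = []

    def on_finished(self, callback):
        """Register a callback to run once the cue has fully played out."""
        self._on_finished.append(callback)

    def tick(self, beats_elapsed: int):
        """Advance the music clock; fire callbacks once the cue resolves."""
        self.duration_beats -= beats_elapsed
        if self.duration_beats <= 0 and self._on_finished:
            for callback in self._on_finished:
                callback()
            self._on_finished = []  # fire only once

class Game:
    def __init__(self):
        self.state = "playing"

    def load_next_level(self):
        self.state = "loading_next_level"

# The level transition waits for the outro stinger to resolve
# rather than unplugging the speakers mid-phrase.
game = Game()
outro = AudioCue("outro_stinger", duration_beats=4)
outro.on_finished(game.load_next_level)

for _ in range(4):
    outro.tick(1)  # one beat per frame, driven by the audio clock
```

The inversion is the whole idea: the audio layer owns the clock and emits events, and gameplay systems subscribe to it, rather than the other way around.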

Nintendo has been doing this for a long time with enormous success. Many games from the ’80s and ’90s have “smart” audio. Martin Stig Andersen gave a full session at GDC in 2016 on how Inside is a game that listens. It’s a bit embarrassing that we’re not making this a common practice across our industry, a standard for game audio.

Making games listen is crucial to our craft. To our polish. In a world of very expensive game development, audio could… *puts sunglasses on* change the game.
