
How Devs Can Enhance Games With Audio Systemization

The pursuit of high-fidelity, high-impact games means so much more than delivering the best graphics. Even the best art direction and visual rendering will fall flat if a game’s sound design is treated as an afterthought to its image.

As a technical sound designer, it’s my job to strengthen the link between top-tier visuals and smarter sound performance. Our ability to deliver greater audio variety and quality that amplifies player immersion depends on how audio is implemented on the back end and whether or not audio is systemized. 

The grander the scale of a game, the more important audio systemization becomes to making sight and sound all work their best magic for a rich and immersive front-end game experience. Here are some important ways that developers can embrace a systemized approach to audio implementation for creating better overall games.

Systemize Audio Early in Development

I cannot emphasize enough how crucial it is to plan your game’s audio systems before too much groundwork is laid. It’s not impossible to introduce or reconfigure these systems later, but hitting rewind on prior implementation throws significant hurdles at engineering teams, ultimately slowing the flow of development.

By bringing in technical sound designers early, we can get ahead by understanding the creative vision and then plan for strong audio systems at the game’s core. These early systemic decisions can even help determine the blend of primary assets that sound designers are asked to create. 

As development progresses, the presence of strong audio systems makes it easier to pivot should new approaches or applications become necessary. By establishing a solid framework from the start, teams handling your audio implementation can maintain consistency in their organization and oversight, especially if creative developments introduce the need to transition from one system to another.

Create Variety Without Amassing Assets

Audio systems serve as powerful frameworks for processing sounds in a way that gives players the impression of variety while keeping back-end asset libraries relatively lean. Instead of tasking sound designers with creating a separate sound for every niche variation, teams can trade the time sink of high asset counts for stronger effort on a smaller set of base sounds that are more impactful and unique.

In the upcoming card-based MOBA Wildcard, players command champions and creatures of all shapes and sizes, and as the team behind the game’s audio implementation, we sought a way to deliver suitable sonic variety while keeping things sounding cohesive on the battlefield. Instead of calling directly to a unique footstep asset for every creature, we systemized audio to call on a class of footstep sounds and modulated the base audio in-program for creature type, creature size, terrain type, etc. By also programming in a degree of randomization to how sounds get expressed, we prevented short loops from sounding repetitive or cyclical to players’ ears, keeping their heads in the game.
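To make the idea concrete, here is a minimal sketch of that kind of footstep system in Python. The asset names, size modifiers, and function names are hypothetical illustrations of the technique, not Wildcard's actual implementation: a small shared pool of base sounds is selected from and modulated per play.

```python
import random

# Hypothetical shared footstep pool: a few base assets per terrain type
# stand in for one unique recording per creature.
FOOTSTEP_POOL = {
    "dirt": ["step_dirt_01.wav", "step_dirt_02.wav", "step_dirt_03.wav"],
    "stone": ["step_stone_01.wav", "step_stone_02.wav"],
}

# Larger creatures get lower-pitched, louder steps (illustrative values).
SIZE_MODIFIERS = {
    "small": {"pitch": 1.3, "volume": 0.6},
    "medium": {"pitch": 1.0, "volume": 0.8},
    "large": {"pitch": 0.7, "volume": 1.0},
}

def make_footstep(creature_size: str, terrain: str, rng: random.Random) -> dict:
    """Pick a base asset and derive per-play modulation parameters."""
    base = rng.choice(FOOTSTEP_POOL[terrain])
    mods = SIZE_MODIFIERS[creature_size]
    # Small random offsets around the size modifier keep short loops
    # from sounding cyclical to the player's ear.
    return {
        "asset": base,
        "pitch": mods["pitch"] * rng.uniform(0.95, 1.05),
        "volume": mods["volume"] * rng.uniform(0.9, 1.0),
    }

rng = random.Random()
# Four consecutive steps of a large creature on dirt: two or three base
# assets plus modulation yield many distinct-sounding playbacks.
steps = [make_footstep("large", "dirt", rng) for _ in range(4)]
```

The design choice worth noting is that variety lives in the parameters, not the asset count: adding a new terrain or creature size means adding a dictionary entry, not commissioning a new batch of recordings.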

Newer tools like MetaSounds in Unreal Engine give technical sound designers whole new levels of control over how sounds are processed, in ways that previously required direct programmer support. With these technologies in hand, we can now go above and beyond base audio implementation without drawing heavily on programmers to execute our best audio vision.

Treat Audio As Essential Info

One principle that defines my approach to implementing game audio is ensuring sound works to the maximum benefit of the player. What is a game communicating to the player at any given moment, and can the player quickly understand what’s occurring — even if an activity is just off-screen? Does a sound make sense to the player without visual support — or does a sound inspire the need for a new visual element to work in tandem to better the gaming experience? 

In Wildcard, when teams of summons are duking it out across the arena, commanding players depend on sound to convey the action happening out of view, aiding their control without giving either side an unfair advantage.

One unique aspect of our Wildcard implementation is the incorporation of sportlike spectator POVs — embracing the greater community support around live players and their gameplay. As such, we had to consider how our audio works for both the viewers in the virtual stands as well as the competitors on the field. Audio systems helped us strengthen how sound conveys info to players and spectators alike by strengthening stylistic consistency, readability, and clarity of the overall audio mix from whichever POV one steps into. 

Without a systemic approach to audio, achieving the level of immersive complexity today’s games demand would be nearly impossible — so set your games off on the right foot with technical sound designers embedded early in development. Your players will thank you.
