“Adaptive music” is a term that has been buzzing in the game industry the past few years.
If you’ve played any one of the many AAA titles released recently, you’ve probably experienced it. And — if it was done well — you likely didn’t even notice it. So, what exactly is it?
Adaptive music is, when boiled down to its very essence, music that changes and transforms. Typically, it is mapped to the actions of the player — be it combat, movement, or interactions. Alternatively, it could be mapped to change with the environment (for instance, day/night cycles), or any number of other parameters. The goal is to provide a deeper level of immersion for the player by having a soundtrack that adapts to what is on screen.
The most common approach is to have multiple “layers” of a track that can be toggled on or off depending on what is happening in the game. Take, for instance, a first-person shooter. Imagine: you’re lurking through the woods, hunting a target. A soft, pulsing atmospheric track is playing in the background. As you approach your prey, the track increases in volume, with some light percussion fading in to give the music a stronger rhythmic pulse. Just then — you’re ambushed. Now a more intense track fades in on top of those two, complete with aggressive synths and booming drums. In this scenario, the music is mapping your every move, and transforming a traditionally static element of a game (the music) into an additional vessel of immersion.
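The layering approach described above can be sketched in a few lines of code. The sketch below is a minimal, engine-agnostic illustration: the layer names, the intensity thresholds, and the `set_intensity` mapping are all assumptions for the example, not a reference to any particular game or audio engine. The key idea is that layer volumes move gradually toward their targets, so layers fade in and out rather than snapping.

```python
# Minimal sketch of layered adaptive music: each layer has a target
# volume determined by a single "intensity" value, and volumes move
# toward their targets a little each frame so layers fade rather than snap.

class AdaptiveMusic:
    def __init__(self, fade_speed=0.5):
        # Hypothetical layer names; a real game would map these to audio stems.
        self.layers = {"ambient": 1.0, "percussion": 0.0, "combat": 0.0}
        self.targets = dict(self.layers)
        self.fade_speed = fade_speed  # volume units per second

    def set_intensity(self, intensity):
        """Map a 0..1 gameplay intensity to per-layer target volumes."""
        self.targets["ambient"] = 1.0                              # always present
        self.targets["percussion"] = 1.0 if intensity > 0.3 else 0.0
        self.targets["combat"] = 1.0 if intensity > 0.7 else 0.0

    def update(self, dt):
        """Advance each layer's volume toward its target; call once per frame."""
        for name, target in self.targets.items():
            current = self.layers[name]
            step = self.fade_speed * dt
            if current < target:
                self.layers[name] = min(current + step, target)
            else:
                self.layers[name] = max(current - step, target)

# The ambush from the example: intensity jumps, and over the next few
# frames the percussion and combat layers fade in on top of the ambience.
music = AdaptiveMusic()
music.set_intensity(0.8)
for _ in range(10):        # simulate ten frames at 0.25 s each
    music.update(0.25)
```

In a real engine the volumes would be applied to playing audio stems each frame; the structure (targets plus gradual interpolation) stays the same.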
I write about this for two reasons. First and foremost, adaptive audio is something that I am passionate about and have been working with for several years (starting with the flash game “Colony”). Second, because I believe many game developers (especially indie) are unfamiliar with this concept and/or hesitant to implement it. This is understandable — developers are developers, not composers. Implementation requires additional coding, or licensing of an audio engine with adaptive capabilities. And not all composers are comfortable — or even familiar — with the concept of adaptive music. It requires extra time, skill, and forethought to conceive and compose. These are real-world concerns, and ones that only the developer can contemplate and address.
Let me present you with a real-life, personal example of adaptive audio in action. I recently finished the soundtrack to Gemini Strike with long-time friend and collaborator Krin. This game features a very simple implementation of adaptive music. It has two layers — an “atmospheric” layer that plays when you are between battles (in the menu, buying items, etc.), and an orchestral layer that plays when you are in battle. The two were composed on top of each other, and you’ll often hear the game fade seamlessly between them. The results, however, are far more immersive than the standard “menu loop/battle loop” setup. The soundtrack never stops, but instead moves with the player — replacing traditional aural seams with a far more elegant solution.
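A two-layer crossfade like the one described can be sketched as follows. The equal-power curve used here is a common choice for crossfading two simultaneous stems (an assumption for this example, not necessarily what Gemini Strike uses), because it keeps the perceived loudness roughly constant through the fade instead of dipping in the middle.

```python
import math

def crossfade_volumes(t):
    """Equal-power crossfade at position t in [0, 1].

    Returns (atmospheric_volume, battle_volume): at t=0 only the
    atmospheric layer plays; at t=1 only the battle layer plays.
    """
    t = max(0.0, min(1.0, t))
    atmospheric = math.cos(t * math.pi / 2)
    battle = math.sin(t * math.pi / 2)
    return atmospheric, battle

# Midway through the fade both layers sit near 0.707, so the combined
# power (a^2 + b^2) stays at 1.0 for the whole transition.
a, b = crossfade_volumes(0.5)
```

Because the two layers were composed on top of each other, sweeping `t` from 0 to 1 when a battle starts (and back when it ends) is all the game logic needs.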
This isn’t meant to be a sales pitch for Gemini Strike, nor is it a sales pitch for my services (but feel free to contact me anyway!). Instead, I write this to address a much deeper topic. In a game industry that is rapidly changing, it is important to consider progression on all fronts — including music. Just as a bad score can ruin a great movie, a poorly implemented soundtrack can hurt a great game. When a player turns off the music in favor of their personal playlist, the soundtrack has failed. A soundtrack should be an essential part of the game — something that players miss when it is turned off. Adaptive music is a considerable step closer to achieving that goal, and something that every developer should consider.