This page is for general tips and tricks, focused mainly on game audio implementation.


Where applicable, you can download the associated reference material for free directly from


Limiting your sound instances

Context taken from my Unity / Wwise project:

While working on the audio implementation for my own personal project, I often came across the problem of duplicate sounds being triggered at run time.

This was happening most often when Wwise ‘events’ were being posted from C# scripts, with the main culprits being anything written for Update(), collision callbacks and sometimes the Animator.

The solution I found was quick to implement and is handled entirely in Wwise by tweaking a few settings; details follow below!

The screenshots above (Peach1 & Peach2) show a typical offender at line 43 of the script, and the corresponding event from Wwise (note that we will only be looking at the event ID: Sad_String_Quartet).
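The full script isn't reproduced here, so below is a minimal sketch of the kind of pattern that causes these duplicates; the class and flag names are hypothetical, and only the event ID Sad_String_Quartet is taken from the screenshots. Posting an event from Update() fires it on every frame the condition holds, not just once:

```csharp
using UnityEngine;

public class SadStringTrigger : MonoBehaviour
{
    public bool playerIsSad; // hypothetical game-state flag

    void Update()
    {
        if (playerIsSad)
        {
            // This runs every frame while playerIsSad is true,
            // stacking up duplicate instances of the same event.
            AkSoundEngine.PostEvent("Sad_String_Quartet", gameObject);
        }
    }
}
```

Rather than guarding every one of these call sites in script, the Wwise-side playback limit described below catches them all in one place.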

The above screenshot (Peach3) shows the Advanced Settings menu from Wwise. Look at the settings under Playback Limit; these are the ones directly relevant to the behavior we want (they are already set correctly here, so you can copy them as shown).

Using these settings, only 1 sound instance of this type will be allowed to play globally (meaning at any one time during run time).

For any new sound instance to play, the previous one must first have played through its entire length; otherwise the new instance is discarded.

It is important to note that these settings will need to be incremented to allow 2 sound instances if you are using looping sounds of any kind. This is in part because of the way Wwise queues up sounds for a smoother fade in and out.

As an added bonus: by effectively managing how Wwise handles its voices, you can quite easily improve the efficiency and robustness of the audio implementation as a whole.




Custom cue points in Wwise



The following assumes you already have a working knowledge of how to set up basic switches and transitions.

To give this some context, I have included a short example brief below:

You need to write and implement a village theme that dynamically changes based on the time of day. It is important that the music never drops out while the player is inside the ‘village area’ of the game, and that it remains a single piece of music with seamless loop points.

The finished test project outlined in the above brief can be downloaded and is fully playable. Follow this link to check it out. The password for the download is: villagetheme

In the screenshot above (apple1) there are a number of custom cue points that will be used for transitions at run time. Each cue point (added by right-clicking the timeline at the top of the Wwise sequencer and choosing ‘Add Custom Cue’) is assigned a tag corresponding to a music track.

Take a closer look at the music track labelled ‘village_theme_melody’: it has 3 potential sub-tracks. Of these sub-tracks, 2 have been assigned switches, in this case for ‘day’ and ‘night’.

When the ‘day’ and ‘night’ switches are called, the custom cue points determine where the transition from one sub-track to another should happen. These transitions always happen at the ‘Next Custom Cue’ in the timeline.
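For completeness, the switch itself would be set from script on the Unity side. The following is a minimal sketch assuming a switch group named "Time_Of_Day" with states "day" and "night" (the actual group and state names in the project may differ); `AkSoundEngine.SetSwitch` is the standard Wwise Unity call for this:

```csharp
using UnityEngine;

public class VillageTimeOfDay : MonoBehaviour
{
    // Call when the in-game clock crosses into night time.
    public void SetNight()
    {
        // Wwise evaluates the transition rules for this switch and
        // waits for the 'Next Custom Cue' before changing sub-track.
        AkSoundEngine.SetSwitch("Time_Of_Day", "night", gameObject);
    }

    // Call when the in-game clock crosses back into daytime.
    public void SetDay()
    {
        AkSoundEngine.SetSwitch("Time_Of_Day", "day", gameObject);
    }
}
```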

The screenshot above (apple2) is a closer look at the Transitions tab, where ‘village_theme_melody’ has been set up to work with the above-mentioned custom cue points. Notice that the ‘Fade-out’ and ‘Fade-in’ options are also being used alongside the cue points; this is important as it helps to smooth out the changes.

I haven’t shown the ‘Edit’ window for these ‘Fade-out’ and ‘Fade-in’ options because the exact values aren’t critical; simply set them up by ear until the transitions sound good.

The usefulness of custom cue points over other methods is that you can place them exactly where you want the changes to happen. In my opinion this is more musical than the other transition options, and it subsequently allows for greater depth in a dynamic music system.




Using the animator in Unity to post events in Wwise

The screenshot (fig1) is taken from a scene I built in Unity to try out various implementation ideas using the Animator.
The highlighted game object is watched by an ‘if’ statement in a script’s Update(), which evaluates to true once the 3 colored boxes are triggered in the correct order.
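The order check itself isn’t shown in the screenshot, but the logic can be sketched roughly as follows; the class, method and colour names are all hypothetical:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class PillarPuzzle : MonoBehaviour
{
    // Required trigger order for the 3 coloured boxes (names hypothetical).
    readonly List<string> requiredOrder = new List<string> { "Red", "Green", "Blue" };
    readonly List<string> triggeredSoFar = new List<string>();

    public bool Solved { get; private set; }

    // Called from each box's trigger collider, passing its colour name.
    public void OnBoxTriggered(string colour)
    {
        if (Solved) return;

        triggeredSoFar.Add(colour);

        // Any wrong entry resets the whole sequence.
        if (requiredOrder[triggeredSoFar.Count - 1] != colour)
        {
            triggeredSoFar.Clear();
            return;
        }

        // All 3 boxes hit in order: the 'if' check on Update()
        // reads this flag and kicks off the animation.
        if (triggeredSoFar.Count == requiredOrder.Count)
            Solved = true;
    }
}
```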


The screenshot (fig2), from Unity, shows the point (highlighted in red) at which the Unity Animator calls the method “PillarLowered”. This method then posts an event from script, telling AkSoundEngine (Wwise) which event(s) it should trigger.
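The handler invoked by the animation event might look something like this; a sketch only, with the class name hypothetical and just the event name “Pillar_Lowered” taken from the screenshots:

```csharp
using UnityEngine;

public class PillarAudio : MonoBehaviour
{
    // Invoked by the animation event at the frame highlighted in fig2.
    public void PillarLowered()
    {
        // Posts the Wwise event; everything else (stops, delays,
        // bus volume changes) is authored inside the event in Wwise.
        AkSoundEngine.PostEvent("Pillar_Lowered", gameObject);
    }
}
```

Keeping the script this thin is deliberate: the sequencing lives in the Wwise event, so a sound designer can change the behavior without touching code.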

The screenshot (fig3), from Wwise, shows the event chain “Pillar_Lowered”. There are multiple actions assigned to the event, each with its own programmed behavior. The end result of this simple execution order can be summed up as follows:

- Lowers the volume of the ambience mix bus, to which all audio assets of type '_ambience' are assigned.
- Stops the sfx of the pillar game object lowering, after a short time delay.
- Plays the sfx for the pillar impact, after a short time delay.

To see how this works in real time you can download the project here.



Random/generative music project




  • Based on melodies from the Sea of Thieves OST.

  • Uses instrumentation/arrangement ideas heavily influenced by the in-game music.




From a game audio perspective


The main starting point was to write music that would loop indefinitely, without getting boring...

After some research and much trial and error, I found that a few short sections, each with multiple variations, worked best!


I wrote some custom C# scripts that could automate section changes for each track; however, I ultimately decided to use Wwise to test this idea as an implementation within Unity.


Integration notes


  • Implementation was considered before writing the music.

  • Considered the use of Wwise for its greater flexibility.

  • Special attention was given to the starts and ends of each section (and its variants) for each track, for smoother transitions.

  • Extensively tested the most important ideas in the Unity game engine.


Part 1: Distance attenuation (tested in Unity & Wwise for track 1: 'Tav Sav' )


Ground floor, tavern

The above screenshot (banana1) shows the prototype setup I built for testing my 'tavern-esque' music in an indoor space. The game objects shown in the diagram (Portal, Emitter and Room) correspond to 3 commonly used Wwise scripts: AK Portal, AK Emitter and AK Room.


The screenshot below (banana2), taken from Wwise, shows the attenuation settings I used in conjunction with the above-mentioned scripts. The results were quick and acceptable after some light tweaking.

Top floor, tavern


I decided on a ray-casting-based script for handling the distance attenuation and occlusion upstairs. The screenshot below (banana3) shows the relevant script for this behavior. I have also included a screenshot (banana4) showing one of the more unusual RTPC assignments, to a Harmonizer plugin!

The settings for the DSP in the screenshot above (banana4) were chosen so that, as the 'Wet Level' increases, it thickens up frequencies in the low-mid range. Combining this effect with a low-pass filter and some volume attenuation gave me the sound I was looking for.
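As a rough sketch of the ray-casting approach (the real script is in banana3; the RTPC name "Upstairs_Occlusion" and the 0–100 range here are assumptions for illustration):

```csharp
using UnityEngine;

public class RaycastOcclusion : MonoBehaviour
{
    public Transform listener;   // usually the player / main camera
    public LayerMask occluders;  // geometry that should muffle the music

    void Update()
    {
        Vector3 toListener = listener.position - transform.position;

        // If the ray from emitter to listener hits geometry, treat the
        // source as occluded and drive the RTPC that the Harmonizer's
        // Wet Level (plus low-pass and volume) is mapped to in Wwise.
        bool occluded = Physics.Raycast(transform.position, toListener,
                                        toListener.magnitude, occluders);

        float occlusion = occluded ? 100f : 0f;
        AkSoundEngine.SetRTPCValue("Upstairs_Occlusion", occlusion, gameObject);
    }
}
```

In practice you would probably smooth the RTPC value over a few frames (or use the RTPC's interpolation settings in Wwise) so the filter doesn't snap audibly when the ray state flips.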


Overall, I felt the result was more interesting than using another portal, though it does require more resources.




Part 2: Random music (tested in Unity & Wwise, across all tracks)

FC Pirate Music Player


The download link above (coconut1) is for a simple music player I built in Unity and Wwise. It was used, across many iterations, for testing all of the tracks and the core implementation ideas for this project.


Over to Wwise: controlling the randomness

All audio files were housed in Wwise random containers or music segment containers (examples in coconut2), and I tweaked the randomization settings as required.


The next step was to edit out or remove potential clashes and to control the volume level of the mix bus. For this, I used a combination of side-chaining and the 'Polyspectral' Multi-Band Compressor.


Additionally, I removed some low end on the busier tracks, then added it back in using the 'Harmonizer' plugin. It is important to note that this was done to neaten up the low end at the group level, affecting the entire mix with one plugin instance.


For a closer look at the routing structure, there is a screenshot profiler readout (coconut3) for track 2: 'Battle Royale', shown below.

Final notes


While I enjoyed the dynamic effect of distance attenuation and spatialization for the tavern music, it was only really appropriate in scenarios where music was to be heard by the player as a live performance.


However, this variation in panning and volume was important for making the tavern music listenable over longer periods. I wanted this extra layer of dynamics for all the music I'd written!


My preferred solution was assigning LFO curves to voice volumes. This was done both on individual tracks and on the entire mix bus, before any compression was applied. Most of the volume changes were very subtle, but it was enough to create some dynamic variation that wasn't otherwise present in the raw audio files!


