WHICH ENGINE SHOULD I LEARN?

This post is intended for anyone who is considering starting out in game audio, or who is interested in an overview of the options available for making sound and music in games.

The software used to make game audio usually falls into two categories: the game engine/editor (e.g. Unity or Unreal) and the audio authoring solution (e.g. FMOD, Wwise, or Unreal's built-in audio tools). As we shall see, these two areas overlap somewhat, depending on the combination of software that a developer decides to use.

It may seem strange, but out-of-the-box game engines are a relatively new concept. In the late 90s Unity didn't exist and Unreal was a game first, before it became a publicly available engine. Game publishers and developers spent a huge amount of time and money on internally developed tools so that they could build and ship their games across different console platforms. Most of the biggest publishers in the world today still have dev studios that make their games using their own in-house tools, due to the sheer amount of legacy functionality that existing game brands are built on. There are other reasons too, not least of which is the deep level of system control and efficiency that a bespoke engine can provide, as it is tailor-made for the needs of a particular project. So, if you go and work for one of the major game publishers, the chances are that you will end up using in-house engines, including the audio tools required to author sound, music, and dialogue.

On the other hand, nearly all the small to mid-size developers you may work for will very likely be using Unreal or Unity. These out-of-the-box solutions have revolutionised game development, as they provide a set of tools that allows a team of any size, even an individual, to make and ship a game. No need for an expensive in-house tech team frantically writing and updating a bespoke custom game engine like in years gone by. Both engines enable users who are not programmers to build and manipulate game systems: Unity through a user-friendly layer built on the C# scripting language, and Unreal through its Blueprint visual scripting system. Of course, the programmers dive under the hood and work in C++ too, but they are their own breed of clever left-brained geniuses, and the scripting and visual scripting layers found in current game engines weren't really built for them. Instead they were built for artists, designers, and audio peeps. I will say now that if you have a background or interest in C# then you immediately stand in good stead to work in game audio! You certainly don't have to know it in order to work in audio, but it helps, as it gives you an instant understanding of how all of the elements come together when developing a game. If you see a role advertised for a 'technical sound designer' then it is extremely likely that knowledge of C# will be requested, perhaps even some C++. But creative individuals, fear not! A huge proportion of working in game audio involves making stuff, and there are plenty of aspects of the job where creativity and building wonderful things from scratch are more important than anything else.

So, where do FMOD and Wwise fit into all this, and what is this 'audio middleware' thing anyway? To cut a long story short, FMOD and Wwise came about because game engines were traditionally severely lacking on the audio features front. Things were terribly bare-bones around 20 years ago, and if a game developer was not using FMOD or Wwise they would often resort to building their own audio tools so that sound designers and musicians could do what they needed to do. Also, the major console platforms each had their own technical requirements and idiosyncrasies relating to audio. If you were working for a Sony studio, the chances are you would use in-house Sony tools designed for PlayStation. If you were working on an Xbox game, you would use Microsoft's audio tools, which were actually pretty good but only worked on Xbox. What if you were a studio shipping games simultaneously on Microsoft, Sony, and Nintendo consoles? This is where FMOD and Wwise came in: they offered the ability to compile and prepare audio files for each system while the actual audio content was managed under one main project file. More importantly from a creative point of view, they also offered ways to build sound effects and manipulate the manner in which they are played back in the game, to create interactive music playback systems, and to monitor and mix the game's sound levels in real time. Both created an interface that an audio designer would be more at home with, helping to bridge the gap between the DAW-based workflow that sound engineers and musicians were familiar with and game dev tools that were traditionally the domain of a programmer.
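To make the middleware idea a little more concrete, here is a minimal sketch of the core abstraction it provides: the game code only posts a named event, and the audio system decides which file actually plays and how. This is purely illustrative (written in Python for brevity) - the event name, clip names, and fields below are all made up, and real middleware like FMOD or Wwise exposes this through its own authoring tool and runtime API rather than a dictionary.

```python
import random

# Hypothetical stand-in for a middleware "project": each event name
# maps to a pool of variation clips plus simple playback rules.
AUDIO_PROJECT = {
    "footstep_grass": {
        "clips": ["fs_grass_01.wav", "fs_grass_02.wav", "fs_grass_03.wav"],
        "volume": 0.8,
        "pitch_range": (0.95, 1.05),  # small random pitch shift per play
    },
}

def post_event(event_name, rng=random):
    """Resolve a named event into a concrete playback instruction,
    picking a random variation the way middleware does at runtime."""
    entry = AUDIO_PROJECT[event_name]
    lo, hi = entry["pitch_range"]
    return {
        "clip": rng.choice(entry["clips"]),
        "volume": entry["volume"],
        "pitch": rng.uniform(lo, hi),
    }

# The game code never touches file names or platform formats directly;
# that separation is what lets one project ship on several consoles.
playback = post_event("footstep_grass")
print(playback["clip"], playback["volume"])
```

The key design point is the decoupling: a sound designer can add variations, retune volumes, or swap assets inside the audio project without anyone changing a line of game code.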

So, are they still relevant today, and which should you learn? Yes, they are both still relevant and widely used. Each has its pros and cons; one is not better than the other, they are just different. If you have the time, it would be worth learning both. Traditionally I would say that Wwise was a bit easier to get stuck into, simply because they have always offered good, comprehensive tutorials, but I believe FMOD is really improving in this regard and making the software more accessible. There are things I love and don't love about both of them... it's a bit like Unreal and Unity in a way: both are essentially doing the same thing, each in their own manner. Speaking of Unreal and Unity, one of the biggest problems with the concept of audio middleware is that it is just that - a layer that sits in the middle of another code base and has to be integrated as a plug-in inside the game engine itself. This can be problematic because there are inherent limits on how deeply the middleware's code base can meld with the engine's own, and there are some instances where the two systems cannot integrate as tightly as you may wish them to. This is where native audio solutions come into play...

Unity has always been weak as far as out-of-the-box audio features are concerned, and arguably still is. There are many third-party audio plug-ins that improve matters, but it's still a long way from being a fully fledged solution right off the bat. Unreal, on the other hand, is a different story. There has been a concerted effort from Epic over the last five or six years to improve the built-in audio functionality, and it is finally starting to show. There is a massive advantage in using a built-in audio system, provided it is stable and reliable of course! The main thing is a deep and complete integration with other game systems - visual effects, materials, geometry, and other code systems that affect run-time processing. You simply cannot achieve that level of synchronisation using middleware. The other thing to bear in mind is cost. Unreal's built-in audio features come free with the package, whereas middleware such as FMOD and Wwise needs to be licensed and paid for when you ship a game. On the downside, if you are not familiar with Unreal's Blueprint visual scripting system then you may struggle with it at times; it is not as intuitive to someone coming from a DAW-based background. Unreal's recently upgraded audio engine is also still relatively new and has not had as many games shipped with it as the middleware stalwarts Wwise and FMOD, which have both been around for a long time and are proven solutions.

So, lots to consider. I hope this overview has helped a little with what can be quite a daunting and confusing topic. There is no right or wrong way to learn this stuff. If I were starting out, I would choose either FMOD or Wwise as a starting point to grasp the key concepts of game audio, and then learn either Unity or Unreal 'on the job' as you travel along the path of making games. If you have any questions or feedback about any of this then feel free to hit me up via my contact page.
