Saturday, 5 April 2014

UDK Audio Tutorial 3: Kismet Introduction

Kismet: Definition - destiny, fate; an event (or a course of events) that will inevitably happen in the future.

Introduction: Especially for beginners in the game design business, Kismet is a very powerful tool for creating all aspects of a game. From opening doors, to spawning enemies, and even laying out complicated cut-scenes, it allows a developer to manipulate the world in a surprising number of ways. Of course, what I'll be covering here is more to do with audio implementation, and even then I'll only be scratching the surface.

A system in Kismet.
So what is Kismet? If you imagine all of the parts of a game level as puppets, Kismet is one of the places where the strings are pulled. It's the foundation of the ins and outs of the programming which makes a level interactive. It uses object-based programming, whereby small boxes are connected up with virtual wires, creating a workflow which is quicker and easier than line-by-line programming (UnrealScript in this case). Of course, it may not be nearly as powerful as coding, but it's certainly viable to create a game using only Kismet.

The principles of Kismet: To better understand Kismet as a whole, we have to look at the bigger picture - what does it actually do? What comes in and what goes out? How do you know if your system works correctly? All of these questions can be answered by taking a look at the flow of information.

You see, Kismet works in pulses. When a module is activated, it sends a pulse from its output, which is then transferred down to anything connected to it. When the pulse is received, that module is activated and performs whatever purpose it was designed for, be it playing a sound, adding a float figure, comparing values and so on. Of course, this is instantaneous, much like switching a light on.

The white outline represents a pulse coming from the 'Clock' module.
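To make the pulse idea concrete, here's a minimal sketch in Python (Kismet is a visual tool, so this is only a conceptual model, not UDK code): modules are objects, wires are references, and a pulse simply travels down every connection.

```python
# Conceptual model of Kismet pulses (not UDK code): when a module is
# activated, it performs its action and pulses everything wired to its 'Out'.

class Module:
    def __init__(self, name, action=None):
        self.name = name
        self.action = action          # what the module does when pulsed
        self.outputs = []             # modules wired to this module's 'Out' pin
        self.log = []

    def connect(self, other):
        self.outputs.append(other)

    def pulse(self):
        if self.action:
            self.log.append(self.action())
        for target in self.outputs:   # the pulse travels instantly down every wire
            target.pulse()

clock = Module("Clock")
play = Module("Play Sound", action=lambda: "played footstep cue")
clock.connect(play)
clock.pulse()
print(play.log)  # ['played footstep cue']
```

The module and cue names here are made up for illustration; the point is simply that activation flows from output to input, instantly, like the light switch above.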

Bringing up modules: Now we know how Kismet functions as a processing tool, we can bring up some modules to use. If you're unsure of how to bring up Kismet, it's easily accessible at any point - all you have to do is click the 'K' button at the top of the editor window (shown left).

If you've read my Soundcues post, you'll be very familiar with this environment: it's almost exactly the same. You've got an open space for your modules, a properties window at the bottom and modules are placed by right-clicking and choosing the appropriate one. A nice little addition in this case is the use of shortcuts to add 'staple' modules to your systems; I say staple because they're useful for quite a lot of things and will no doubt be used a lot.

The Kismet interface: Here, I've placed three modules: Delay, Gate and Play Sound.
You'll also notice there's a much larger selection of modules compared to a Soundcue. In fact, there are so many that they've had to be categorised (some several times) in order to keep them organised. If you're curious, it never hurts to add these modules and check them out - see what they do and whether you can make them work in some way. After all, knowledge is power, and I always look for new ways to solve old problems.

Staple modules: As I've said, there are a range of modules that you'll use a lot when designing systems, so I'll go over what they are, what they do and why they're so important.


Level Loaded - This module will be the sole trigger for many systems: it simply sends out a pulse once the level has completely loaded and you're in the game. Examples of use might be a music system which begins on starting the level, or a set of ambience loops implemented in Kismet (though a lot of ambience loops don't require Kismet).






Play Sound - In order for a soundcue to be triggered and manipulated fully, the best module (and the only one by default) is the Play Sound module. As you can see, you're able to play and stop your sound, as well as activate modules on the output based upon the playback of the sound. You're also able to show other parameters which are currently hidden, such as the volume level, which can be changed using the maths modules (more on those below).





Delay - A very simple module, this adds a defined delay time between being activated and sending the pulse out. These modules are great for setting off a sequence of systems from a single trigger, or you can connect the out to the in and create a loop, such as looping a non-looped sound or building a makeshift if-statement system.
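The out-to-in loop trick can be sketched like this (a conceptual Python model with simulated time, not UDK code): each trip through the Delay re-fires whatever is wired to it, so a non-looped sound is re-triggered on a fixed interval.

```python
# Conceptual sketch: a Delay whose 'Out' is wired back to its 'In',
# re-triggering a Play Sound every `delay_time` seconds of simulated time.

def delay_loop(delay_time, run_for, on_fire):
    t = 0.0
    while t + delay_time <= run_for:   # each pass = one trip through the Delay
        t += delay_time
        on_fire(t)                     # the pulse hits whatever is connected

fires = []
delay_loop(delay_time=2.0, run_for=7.0, on_fire=lambda t: fires.append(t))
print(fires)  # [2.0, 4.0, 6.0] -> the sound is re-triggered every 2 seconds
```

The 2-second interval and 7-second run are arbitrary numbers for the example; in Kismet the interval is just the Delay's duration property.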






Gate - As with many elements of audio production, the gate is as it sounds - a module that can either prevent or allow a signal to pass through it, depending on its open or closed state. You'll find yourself joining a lot of your systems together with these, because you can essentially pick and choose which systems to turn on and off. For example, if you have a piece of music playing for ambience and an enemy pops up, you would open a gate to allow the 'enemy' music to play, while closing the gate on the ambient music.
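The ambient/enemy music swap can be modelled in a few lines (again, a hedged Python sketch of the behaviour, not UDK code): a closed gate simply swallows any pulse it receives.

```python
# Conceptual sketch: a Gate passes a pulse only while open.

class Gate:
    def __init__(self, is_open=True):
        self.is_open = is_open
        self.passed = []

    def pulse(self, signal):
        if self.is_open:               # a closed gate swallows the pulse
            self.passed.append(signal)

ambient, combat = Gate(is_open=True), Gate(is_open=False)

# Enemy appears: swap which music system is allowed to trigger.
ambient.is_open, combat.is_open = False, True
for gate in (ambient, combat):
    gate.pulse("music tick")

print(ambient.passed, combat.passed)  # [] ['music tick']
```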







Switch - This module (as expected) allows you to switch the input to several different outputs. You can either have the output switch sequentially (i.e. once output 'Link 1' has been activated, the next signal will come out of 'Link 2' and so on), or you can lock the output to be changed externally (i.e. use an Integer figure attached to the 'Index' input, so that the output is set to this figure; 1 being 'Link 1', 2 being 'Link 2' and so on). You can also disable an output once it's been used, or enable the switch to loop (go back to the first output upon reaching the last).
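Here's the sequential, looping behaviour as a small Python model (illustrative only; Kismet's actual Switch has more options than this sketch shows):

```python
# Conceptual sketch: a Switch routing each incoming pulse to successive
# outputs, optionally wrapping back round to 'Link 1'.

class Switch:
    def __init__(self, links, looping=False):
        self.links = links
        self.index = 0
        self.looping = looping

    def pulse(self):
        if self.index >= len(self.links):
            return None                # every link used up and no looping
        out = self.links[self.index]
        self.index += 1
        if self.looping and self.index == len(self.links):
            self.index = 0             # wrap back to 'Link 1'
        return out

sw = Switch(["Link 1", "Link 2", "Link 3"], looping=True)
order = [sw.pulse() for _ in range(5)]
print(order)  # ['Link 1', 'Link 2', 'Link 3', 'Link 1', 'Link 2']
```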


Get Velocity - These next two 'Get' modules are very useful for systems that require information about the player. Each time Get Velocity is updated, a figure is created that represents how fast the associated object is travelling. When a looped delay is attached to this, you get an almost realtime 'speed', which I've used in the past to create footstep systems, as well as the whooshing sound you get when falling above a certain speed.
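The falling-whoosh idea boils down to comparing a polled speed against a threshold. A hedged sketch (the threshold and units are invented for illustration; UDK uses its own world units):

```python
# Conceptual sketch: a looped Delay polling Get Velocity, deriving a speed
# from the (x, y, z) velocity, and deciding whether the 'whoosh' should play.
import math

def speed(velocity):                   # velocity = (x, y, z) in units/sec
    return math.sqrt(sum(v * v for v in velocity))

WHOOSH_THRESHOLD = 800.0               # hypothetical trigger speed

samples = [(0, 0, -200), (0, 0, -600), (50, 0, -900)]   # three polled frames
whoosh = [speed(v) > WHOOSH_THRESHOLD for v in samples]
print(whoosh)  # [False, False, True] -> only the last sample is falling fast enough
```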



Get Location - Location gives you a three-figure displacement on the X, Y and Z axes. This can be converted into separate figures, which can then be used in systems, such as a sound which gradually gets louder the higher you are (which would otherwise be awkward using a simple sound source). The related Get Rotation is also useful for things such as a sound-based navigational system, or attaching a light source to the player to create a torch (used in my university project).
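The altitude-to-volume system is just a mapping from the Z figure into the 0-1 volume range. A sketch, with the floor and ceiling heights made up for the example:

```python
# Conceptual sketch: map the Z component of Get Location to a Play Sound
# volume, so the sound grows louder with altitude (ranges are hypothetical).

def altitude_volume(z, floor=0.0, ceiling=2000.0):
    t = (z - floor) / (ceiling - floor)
    return max(0.0, min(1.0, t))       # clamp into Play Sound's 0-1 volume range

vols = [altitude_volume(z) for z in (0, 1000, 3000)]
print(vols)  # [0.0, 0.5, 1.0]
```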

Modify Health - When your systems start to evolve and have consequences for the player, this simple module is great for giving and taking the player's health. You can either have a set rate at which health is reduced/gained, or this can be modified externally using maths systems.




References - When there are many modules and many systems, things can get messy. Before you know it, there are wires flying all over Kismet and it becomes increasingly hard to keep track of everything. Instead, we can use reference modules. Simply put, when you want a wire to connect two modules, you can instead connect the out to a reference module, and then place an event reference module at the in of the other module. The beauty of this is that you can then call up as many event reference modules as you like and they will all use the same signal as the initial reference. I would always recommend this setup for the 'Level Loaded' module, as some strange anomalies can occur when multiple Level Loaded modules are used, caused by minute time differences in initialising them.





Maths: In order to start creating more complex systems, there'll more than likely be a level of maths involved. This will not only give a better level of control over certain values, but it will also give you the ability to dynamically alter properties - that is, rather than having set values which can only be changed in development, we can connect them to objects in the 3D environment with mathematical systems to have numbers update automatically, during gameplay, on the fly! Breathe...

A system that integrates maths; adding and comparing floats in this example.
Ints and Floats - An important aspect of any maths system in Kismet is knowing the difference between figures and how you would use them. There are two different types that you can use - Ints and Floats. An Int node (short for Integer) allows only whole numbers: 1, 2, 3, 4 and so on. A Float node, on the other hand, allows for decimal places: 1.1, 2.45, 3.6048568 etc. Obviously, you can still work in whole numbers using a Float node, but it's only advisable if there's a possibility that you may need decimal numbers.

An Int on the left; Float on right.









Good uses of Ints might be:
  • Jumping to a set position on a switch module.
  • Setting up a timer that counts in whole seconds.
  • Pre-determining a number of enemies to spawn in a level.
Good uses of Floats might be:
  • Calculating the speed of a player.
  • Calculating the distance between 2 objects.
  • Working out a delay time to coincide with the tempo of some music.
  • Changing the volume of a play sound (value is between 0-1).
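Two of the Float uses above can be sketched directly (a hedged Python illustration; the functions are mine, not Kismet modules): deriving a Delay time from a musical tempo, and keeping a Play Sound volume inside its 0-1 range.

```python
# Conceptual sketch of two Float calculations: a beat-length Delay time
# from a tempo in BPM, and a clamped Play Sound volume.

def beat_delay(bpm, beats=1):
    return 60.0 / bpm * beats          # seconds per beat(s) - needs a Float

def clamp_volume(v):
    return max(0.0, min(1.0, v))       # Play Sound volume lives between 0 and 1

print(beat_delay(120))    # 0.5 -> half a second per beat at 120 BPM
print(clamp_volume(1.7))  # 1.0
```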

Add, Subtract, Multiply, Divide and Set Modules: There are a number of different ways to achieve calculations with math nodes, and these modules allow just that. I assume you're knowledgeable enough to work out additions and subtractions, so I won't go explaining what they do, but it's definitely worth noting how they work.

For each module, there are two node inputs and one node output. These are coloured depending on the kind of node, be it an Int or Float. In order to perform a calculation, you must have a node connected to each of these inputs and the output; the two inputs will be added/subtracted/divided/multiplied, with the output node showing the result. In an instance where you'd like a cumulative calculation (the number continues to add/subtract etc.), you can use two nodes and connect one of them to both an input and the output.

Add/Subtract/Multiply/Divide operate in the same way; Left: a normal addition; Right: A cumulative addition.
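The difference between the two wirings is easiest to see in code (a conceptual sketch; the numbers are arbitrary): a normal addition just displays a result, while a cumulative one feeds its own output back into an input on every pulse.

```python
# Conceptual sketch: left, a plain addition of two Float nodes; right,
# a cumulative addition where one node is wired to an input AND the output.

a, b = 2.5, 1.5
result = a + b                 # normal: the result node just shows 4.0

total = 0.0                    # this node feeds input A and receives the output
step = 0.25
for _ in range(4):             # four pulses into the Add module
    total = total + step       # each pulse: total = total + step

print(result, total)           # 4.0 1.0
```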
Comparisons: Almost directly connected (pardon the pun) to the math modules are comparisons. These are, in effect, your 'if', 'else' and 'when' statements in object form. Simply, they compare 2 values and open/close gates depending upon certain characteristics.

A Float comparison on the left, Int comparison on the right.
As you can see, there are three inputs - the 'signal' input on the left and two Float/Int inputs on the bottom. The outputs available are what make these modules great. Not only do you have this breadth of comparison, but you can use all of these outputs side by side, for whatever purpose you desire. So in essence, it's a multi-tasking 'if' statement!

A good use of comparison modules is to manage memory usage. If you were to have a system whereby a regular heartbeat fades up when the player's health becomes low, you wouldn't simply have the sound looped constantly and alter the volume; you would want to turn the system on only when it was needed. By extracting the player's health, you could compare it to a set level (say below 40% health) and use this to turn on the heartbeat system. Similarly, when the player's health reaches a comfortable level (above 40%), you can use the comparison to turn the system off.
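That heartbeat logic reduces to a comparison with two outputs, one opening the system and one closing it. A hedged Python sketch of the behaviour (the 40% figure is the example threshold from above):

```python
# Conceptual sketch: a comparison turning the heartbeat system on below
# 40% health and off again at or above it.

def heartbeat_state(health_pct, currently_on):
    if health_pct < 40 and not currently_on:
        return True                    # the '<' output opens the system
    if health_pct >= 40 and currently_on:
        return False                   # the '>=' output shuts it back down
    return currently_on

state = False
history = []
for hp in [80, 55, 38, 25, 47, 90]:    # polled health values over time
    state = heartbeat_state(hp, state)
    history.append(state)
print(history)  # [False, False, True, True, False, False]
```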

Linking Kismet to Objects in the Editor: The final step of using Kismet is being able to locate objects from your 3D environment and link them to your systems. This can work on several different levels, either through the Content Browser or through one of the viewports. Generally, you can get away with referencing objects by physically selecting them in a viewport, but some elements will require you to find them in the Content Browser, as you can't select or click on them.

The most basic example regarding sound is when you wish to turn a persistent ambience on or off (going in and out of a building, say). Of course, you need to make sure that the ambience is located in the 3D space as an ambientSoundSimpleToggleable. Once placed, simply select the item and head back to Kismet. Here, you can either right-click and add the reference as an object, which will create a standalone variable that can be attached to modules, or you can add a Play Sound module and add the sound into the properties window, which will allow you to control the sound through the module's inputs.

Left: The selected ambientSound in the 3D viewport. Right: In kismet, right-clicking will give you two options to add the item; New Object Var Using [object name], and New Event Using [object name].
Conclusion: Kismet is very powerful. Even though I concentrate my efforts on audio implementation, it's hard to use it for just that alone. If you want to create a basic game with cinematics, moving objects, pickups, AI and so on, it's all possible with Kismet. The more you use it, the easier it gets, and the beauty is that you can visually build up from a very basic system to incredibly complicated ones. If you have a game style you want to achieve, it's likely you can do it. If you want to manipulate and implement audio in wacky ways, the tools are there. Have a go; you may surprise yourself.

Alex.

Sunday, 1 September 2013

UDK Audio Tutorial 2: Soundcues

Introduction: In order to start creating and implementing game audio in your level, there are several elements of the Unreal Development Kit that need covering. Not least of these is the humble Soundcue; a file that references one or many sound files for designing smaller contained systems.

NB: I did say in my previous post that I'd be covering soundcues, Kismet, Matinee and more here, but on reflection, it seems that all of those together would be FAR too much for a single post. I'm therefore splitting them up, starting here. Oh to be naive!

Importing Audio into the Content Browser: Before I can detail soundcues, we'll need some audio to use in them. To do so, we simply need to import some audio files. While this might not seem straightforward at first glance, it's actually easier than you think.

A small note: If you're interested in best practices for audio editing sounds for games, see my previous post here.

To import audio, you'll need to be in the content browser.


From here, click on the 'Import' button in the bottom left of the window (shown right). A Windows Explorer dialogue will come up, in which you'll need to navigate to your audio file. Don't worry: once you've found the location once, it opens up in that same spot the next time. It's important to note here that only certain file types will be accepted. The easiest and most widely used format is '.wav'. As anyone in sound will know, this is the definitive uncompressed file type, whose size reflects its quality - especially in game audio, you need to keep on top of this. However, I'm digressing into memory constraints; on with the plot!

The next screen you'll see on choosing an audio file is the import window: this is where you'll choose the package in which the file will be saved, create an optional group and finally name your sound file. The package in this instance is a physical file that will be saved on the disk. The group is a folder inside this package, used for better organizing your files.


Naming Schemes: A very important rule to work by is a set naming scheme for your files, not just audio. The purpose is to help keep everything in the editor organised and give an at-a-glance view of file types (purpose of use that is, such as footsteps, brick wall textures etc). If you have an inconsistent naming scheme, files can get lost and (more importantly) work flow can be broken.

Personally, I favour the single-letter groups and underscoring scheme. For example, if I had a sound file for a single footstep in snow, I would start with 'A', as it's an audio file, then 'F' for footsteps, then 'snow' and optionally a number if there were variants. This would therefore read 'A_F_snow1'. Another example would be a light brown brick wall texture; this would read 'T_B_LightBrown'. Others may not use this scheme, but it's my personal preference.

After you've filled everything in and clicked OK, your audio file should show up in the content browser - success!


You'll notice that, once imported successfully, there are a few '*' characters dotted about the place. This means that a package has been edited in some form and requires saving. By looking at the bottom-left of the content browser (shown left), you'll see your new package, also with an asterisk. To save it, simply right-click and choose save, bringing up another Windows Explorer dialogue.

Naming and location of this file is as important as your internal naming scheme. What I like to do is create a new folder inside the one used for your level. This way, you'll have all your assets together. For naming the package, I used the same scheme as before: 'A_Footsteps' in this case. However, you can choose to save all audio in one package, or split it down even further - whichever you're more comfortable working with.

The Soundcue: One of the beautiful things about UDK is its level of detail for audio implementation, while also keeping things simple and easy to use. One such element of this is the Soundcue.

What is a Soundcue? - Simply put, a Soundcue is a file that refers to soundwave files in order to implement them in the engine. Moreover, it's able to manipulate the audio files, creating small systems in themselves which combine into a much more complex system as a whole.

Why do we need a Soundcue? - There are two main reasons to have these Soundcue files. First of all, they can help shorten the programming time drastically - using object-based design, you can quickly add modules and link them together, with realtime feedback. Secondly, Soundcues will no doubt be referred to throughout the level, especially for SFX such as doors opening or elevators. If the sound for these objects were designed and implemented individually for each occurrence and you changed your mind on how the sound works, it could take hours to go round fixing them all, let alone keeping track. With a Soundcue, however, you can change it once in the content browser and all references will be updated as well, saving valuable time.

What can a Soundcue achieve? - There are a surprising number of systems you can create with what seems to be such a simple tool. There's quite a selection of modules to choose from, which leaves a lot of room for creativity. At the end of the day though, it's up to you: you can use them simply for triggering a sound, all the way up to concatenating random dialogue to create a unique randomised sequence every play (more on this system later...).

Creating the Soundcue: This is a very similar process to importing audio into the content browser. However, rather than using the 'Import' button, we're simply going to right-click in the empty content browser space (next to where your audio file is) and select Soundcue from the menu. This brings up the same prompt as for importing files, asking for a package name, group and file name. I like to use the same package as the audio, but I choose to put cues in the parent directory (not grouped). This way, I can see all of them at a glance, rather than searching through each group in each package. Again, this is a personal preference; you may want to do otherwise.

Naming is of course important here as well. The simplest way I find for naming is adding 'Cue' to the end of the audio file name. For example if the audio file was 'A_S_BombExplosion', I would name the cue 'A_S_BombExplosionCue'. Similarly, if you have multiple audio files for the same cue, such as 'A_F_Snow[n]', I would replace the number [n] with 'Cue'.


Now your soundcue is named, placed and saved, you can go ahead and open it up. What you'll find is actually a blank canvas (shown below). The 'speaker' on the left is the eventual output of the soundcue. Everything leading up to it is what makes up the system.


To add your audio file into the sound cue, first open up the content browser and make sure your sound file is selected (yellow outline). Then go to the soundcue, right-click anywhere inside it and select the sound file at the top of the list. You'll be left with the file as a module, shown below:


However, we can't just leave it there, as the soundcue still doesn't have an output. We'll need to 'connect' the modules so that a flow of data occurs. Don't worry though, it's very easy: just drag from one of the small black blocks to another and there you have it - a functional soundcue! To make sure it works, press the right-hand play button (shown right), which will play exactly what would occur if the cue were triggered in the game. The other play button plays the soundcue from whatever module you've selected.


The Functions of a Soundcue: This is where it gets interesting. Not only is there the fact that you can achieve quite a lot with a soundcue, but because it works in a flow-chart-like manner, modules can be combined, giving us a huge potential for creative use.

I've used 90% of the available modules in my work, so they're all very useful. Best of all, they're so easy to use! Like I showed you: it's a case of connecting modules together and listening to the outcome. Of course, you can go into more detail with their properties. For example, the 'Attenuation' module has editable parameters for the min and max distances of attenuation.


The following provides a more detailed explanation of each module in the soundcue editor. However, as I mentioned, I haven't used them all so I'm going to detail the ones that I know well. If you do want to find out about the other 4 modules, feel free to go to the UDK website.

Attenuation: Apart from any sounds that need to be locked in pan and volume (such as first-person SFX or dialogue), most sounds will require attenuation. What this achieves is a basic distance-volume relationship - the further away you are from the source, the quieter the sound. Another tool within this module is spatialisation - also key for most sounds, barring the ones listed before. This is the act of sound panning based on perspective, which means that if the source is to your left, it will be panned left. Finally, we have the facility to apply an LPF (low-pass filter). If enabled, this will both attenuate volume and gently roll off the higher frequencies with distance.

Attenuation and Gain: Almost identical to the Attenuation module, this simply adds the ability to further control the gain of the source with regard to distance.

Concatenator: Very useful for dialogue, the concatenator is used to 'stitch' audio together to create the illusion of seamless audio playback. When combined with multiple randomised files, you can very quickly create unique SFX with very little memory usage.

Delay: This is as simple as it sounds - it adds a predetermined (or randomised depending on a min and max time) delay before the next module.

Distance Crossfade: Particularly for static ambience soundcues, the distance crossfade allows you to have a 'distant' sound looping until you get to a predetermined distance, whereby the sound fades into the 'close' version. Obviously, you would need to create these near and far versions of the audio, but it's a great time saver for a much more realistic outcome.
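Under the hood this is just a pair of distance-dependent volume weights. Here's a minimal sketch in the spirit of the module (the linear fade and the distances are my assumptions for illustration; the real module lets you shape the curves in its properties):

```python
# Conceptual sketch: near/far crossfade weights by distance, in the spirit
# of the Distance Crossfade module (fade distances are hypothetical).

def crossfade(distance, fade_start=500.0, fade_end=1500.0):
    t = (distance - fade_start) / (fade_end - fade_start)
    t = max(0.0, min(1.0, t))
    return 1.0 - t, t                  # (near volume, far volume)

for d in (0, 1000, 2000):
    print(d, crossfade(d))
# 0    -> (1.0, 0.0): only the 'near' loop is audible
# 1000 -> (0.5, 0.5): halfway through the fade
# 2000 -> (0.0, 1.0): only the 'far' loop remains
```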

Doppler: More for when a sound passes the player rather than the other way around, the doppler module adds the pitch-bend effect that occurs similarly to when a car drives by. This must be a recent addition because in all my previous work, I'd never seen it!

Looping: Exactly what it says on the tin - this module loops a sound. You must be cautious with this one though, as you have very basic control for stopping and starting the loop. This is best for a sound that will always loop, such as an ambience.

Mixer: Much like a mixing desk, the mixer allows you to change the volume of multiple sounds irrespective of the overall soundcue volume (controlled in the properties window). A very useful tool when creating SFX or loops with multiple layers.

Modulator: The key to making repetitive sounds seem non-repetitive is modulation. This is the act of randomly altering the volume and pitch in small variations to give the illusion that one sound file is actually multiple. Great for foley such as footsteps, as well as gunshots, explosions, doors opening and closing etc.
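In code terms, the module just draws a pitch and volume from small ranges on each play. A hedged sketch (the ranges are invented for the example; in the real module they're editable properties):

```python
# Conceptual sketch: a Modulator picking a random pitch and volume inside
# small ranges each time the cue plays, so one file sounds like many.
import random

def modulate(pitch_range=(0.95, 1.05), volume_range=(0.85, 1.0)):
    return (random.uniform(*pitch_range), random.uniform(*volume_range))

random.seed(7)                         # seeded only to make the sketch repeatable
pitch, volume = modulate()
assert 0.95 <= pitch <= 1.05 and 0.85 <= volume <= 1.0
print(round(pitch, 3), round(volume, 3))
```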

Continuous Modulator: This works in a similar fashion to the Attenuation module (which is technically also a continuous modulator), only attenuation uses distance and displacement as its parameters. An example of continuous modulation is a car engine. When the speed gets higher, the pitch goes up, and when the speed goes down, the pitch goes down. This happens continuously, as there is no need to 'trigger' the car going up or down in speed. Similarly, with speed changes, the volume might increase slightly when in first gear, then jump back down when up to gear two.

Oscillator: Oscillation is the constant alteration of values between a min and a max, usually in a predetermined fashion such as a sine wave or sawtooth. In this case, we can use the oscillator to alter the volume or pitch of a sound. An example might be a siren going up and down in pitch, or the waves at a beach going up and down in volume as they swell and subside.
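The siren example can be sketched as a sine sweep between a min and max pitch (a conceptual Python model; the range and period here are arbitrary):

```python
# Conceptual sketch: an Oscillator sweeping pitch with a sine wave between
# a min and a max, like a siren rising and falling.
import math

def oscillate(t, lo=0.8, hi=1.2, period=2.0):
    phase = math.sin(2 * math.pi * t / period)   # -1..1
    return lo + (hi - lo) * (phase + 1) / 2      # mapped into lo..hi

values = [round(oscillate(t), 2) for t in (0.0, 0.5, 1.0, 1.5)]
print(values)  # [1.0, 1.2, 1.0, 0.8]
```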

Random: The mother of all modules: Random is the most useful tool in my opinion. Its function is to randomly select from a group of sounds connected to it, either with or without repeating itself. Especially in game audio, you want to prevent repetition, and randomising files is the first step in doing this. Coupled with modulation, you can very quickly have professional-sounding results.
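The "without repeating itself" behaviour is the important bit, and it's tiny in code (a hedged sketch; the sound names are placeholders):

```python
# Conceptual sketch: random selection that never plays the same sound
# twice in a row, like the Random module with immediate repeats disallowed.
import random

def pick(sounds, last=None):
    choices = [s for s in sounds if s != last]   # exclude the previous pick
    return random.choice(choices)

random.seed(3)                         # seeded only so the sketch is repeatable
steps, last = [], None
for _ in range(6):
    last = pick(["step1", "step2", "step3", "step4"], last)
    steps.append(last)

assert all(a != b for a, b in zip(steps, steps[1:]))  # no immediate repeats
print(steps)
```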

Example Soundcues: The best way to show these modules in use is through examples. Below, I've put together a list of systems that I've used, how I built them and why I've made them as such.

Randomising footsteps - This is a staple system for any game that involves a character that walks. What you want to achieve is as little repetition as possible, unless designed otherwise (Japanese games tend to have repetitive footstep sounds as a stylistic choice). The reason we want to prevent it is that you'll be hearing these sounds a lot. No one wants to hear the same sound over and over for hours on end, so we do everything in our power to stop them breaking immersion.


Above, you'll see a system that I've mentioned a few times. Here, you'll see four sounds being referenced on the right, which are individual footstep sounds; that is, a single foot. These are fed into a Random module, which both plays them in no set order and makes sure a single sound isn't played twice successively. Finally, the output goes through a Modulator, which alters the pitch and volume to further prevent repetition.

We could take this one step further by including a scuff sound for when you catch your shoe on the floor rather than taking a full step. We could also split each footstep sound into two (the back of the shoe and the front of the shoe hitting the floor), which would leave almost no perceptible repetition.

Cross-fading a Waterfall - It's very easy to create a waterfall loop out of a single sound file and have it attenuate with distance. However, a sound develops the closer you get to it, and certain features (such as splashing on rocks) fade in that weren't apparent at a distance. To solve this, we can have two separate sound files looping, which are faded between the closer or farther you get.


What we have here are the two sound files (right), named accordingly for the 'far' sound and 'near' sound. These are first fed into the loop modules, making sure the files aren't interrupted. Finally, we have the Distance Crossfade module. Within the properties (right), we can adjust the distances intricately, so that the fade is to your preference.

You're also able to add more inputs to the module, meaning you could have five or six different sounds crossfading all within this soundcue. One use for this could be a weapon firing, which has several layers of the same sound that perform differently depending on the distance: close up, you would get a very bright, low-end burst; another layer with fewer low frequencies at a further distance; a third layer which rolls the top end off and adds a slight delay further again; and finally a layer which is particularly dull for a great distance.

Dialogue Randomisation and Concatenation - In preventing repetition, there are issues in all aspects of audio, including dialogue. When you have a lot of NPCs, the volume (pun not intended) of dialogue can balloon, and before you know it, almost half the game's memory can be taken up with it (just take a look at any Bethesda game post-Morrowind). The system below is a way of resolving part of this issue.


What we have are three sections of dialogue, with two of them randomised each time the soundcue is activated. First, we have a single line of dialogue, which feeds directly into the Concatenator. The second section uses a few different modules: there are three variations of the line to be used, of which one is picked at random. This is then fed into a Delay, which is used to give a natural pause between this and the first line. Finally, we have the same setup for the third line, only with two variations rather than three. To anyone who didn't know better, the outcome would sound like a single audio file.
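The three-slot structure can be sketched like so (a conceptual Python model; the lines of dialogue are invented placeholders, and real concatenation stitches audio rather than strings):

```python
# Conceptual sketch: a fixed opening line, then two randomised slots,
# stitched together in the spirit of the Concatenator.
import random

def build_line(seed=None):
    rng = random.Random(seed)
    first = "Halt!"                    # one fixed take
    second = rng.choice(["Who goes there?",
                         "State your business.",
                         "You lost?"])  # three variations, one picked at random
    third = rng.choice(["Move along.",
                        "Carry on."])   # two variations for the final slot
    return " ".join([first, second, third])  # back-to-back, it sounds like one file

line = build_line(seed=1)
print(line)
```

Each call with a different seed (or none) yields a different combination, which is exactly why the memory cost stays low: six possible lines from only six short files.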

More Next Time...: So those were three examples of the kinds of systems you can achieve with a soundcue. I hope it gave you an insight into what is only a small part of what makes up audio implementation in UDK. However, I'm not really finished yet. Next time, I'll be covering Kismet, which is a whole different world in itself. With it, I'll show how we can trigger and manipulate these soundcues. A very satisfying part of the process, I can tell you!

Conclusions: As I've just said, this is only a small part of the audio implementation that's available. I could go a lot further and certainly a lot deeper, but I could go on for far too long. If you're planning to get into this, have a good play with all the modules; some you won't be able to use properly until they're in-game, but implementing a rain sound from single raindrop sounds that are chosen randomly, concatenated and looped is a good challenge.

At any rate, enjoy having fun with this and I'll be back shortly with the fun of Kismet!

Alex.

Monday, 26 August 2013

UDK Audio Tutorial: Introduction + Basic Level Design

Introduction: For me, there is nothing more fun and rewarding than creating interactive audio. Not only are you making something sound good, but you're allowing others to take this and make it their own - something that other mediums of sound simply can't achieve. From recording, to editing, to mixing and finally building systems, the process is varied and satisfying. Here, I'll be going through everything you need to know about creating simple sound systems in UDK.

Before I get straight into it, I've detailed below a list of the kinds of things I'll be covering, from the basics on to more advanced subjects:
  1. Getting to know UDK: Basic Level Design - In this first post, I'll be covering the basics of the Unreal Development Kit and tools to create simple levels using the builder brush.
  2. Getting to know UDK: Soundcues, Kismet and the rest - In the content browser, we'll go into setting up internal sound systems that act independently, and why they're important for workflow. Using Kismet, I'll also implement some simple systems, while explaining some key modules that are useful for all aspects of interactive sound design.
  3. Setting the Scene - The most basic use of sound in games is ambience, or setting a 'sound background' for all other sounds to sit on. I'll explain how to create them and best practices for volume, frequency and content.
  4. Music - Another key element of a games sound is it's soundtrack. The way in which this is implemented varies from game to game, so I'll cover a few instances.
  5. Foley - The little details certainly create a bigger picture, especially in audio. I'll create some systems and discuss audio file creation, lengths and frequency to help make the most of your recordings while making it appropriate during gameplay.
  6. An Interactive Environment - Using everything detailed in previous posts, I'll create an interactive environment that I'll make available for download so you can interact with it too!
As I write these posts, they'll probably evolve and more things will be covered than first planned, so expect to see more than just 6 posts. I could do 6 posts alone on foley!

Getting to know UDK

Before you can create your soundscapes, it's good to get to know your tools. One of the things I love about UDK is that it's relatively simple to create block worlds quickly, with playable results, without a huge amount of prior training. You can also build systems that would otherwise entail coding, instead using object-based design in 'Kismet' (which I'll be using for our sound systems). All in all, it's very user-friendly and completely free to design games in, so it's a sure-fire choice for any beginner, whatever aspect of game design you'd like to pursue.

Installing UDK: In order to download and install UDK, simply go to the UDK Download Page and get the July beta (I'll be using July throughout these tutorials as it was the latest at the time of writing). The process of installing is as you might expect; the only questionable section is the Perforce installation. You can go ahead with this if you choose, but I won't be using it in these tutorials. For more information on Perforce, check out their website.

Opening UDK and Creating a New Level: When you first open UDK, you'll be greeted with 3 windows:

What you'll be greeted with when UDK loads up.

The main window at the back is used for the level design and physical placement of items. The second window from the front is the Content Browser: this contains all of your assets, such as meshes (items that can be placed in the world), textures, animations and sounds. Finally, in the foreground, we have the start-up window. This gives you quick access to the creation of new levels, opening existing levels, video tutorials and more.

The other important window that we'll be using quite a lot is the Kismet window. As discussed, this is used for creating reasonably complicated systems using object-oriented programming. An example of this is shown below:

An example of a kismet system. This particular one reduces your health by 5 every second if you're within a set radius of an object.
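That screenshot's logic is simple enough to express as plain code. The sketch below is only an illustration of the flow (a one-second pulse, a radius check, a health subtraction); the radius and positions are made-up numbers, not UDK API values:

```python
import math

DAMAGE_RADIUS = 500.0   # illustrative radius, in arbitrary world units
DAMAGE_PER_TICK = 5     # health lost per one-second pulse

def tick(player_pos, object_pos, health):
    """One 'pulse' of the Kismet clock: if the player is inside the
    radius around the object, subtract 5 health; otherwise do nothing."""
    if math.dist(player_pos, object_pos) <= DAMAGE_RADIUS:
        return health - DAMAGE_PER_TICK
    return health

health = 100
for second in range(3):  # three one-second pulses with the player inside the radius
    health = tick((100.0, 0.0, 0.0), (0.0, 0.0, 0.0), health)
print(health)  # 85: 3 pulses x 5 damage
```

In Kismet the same shape appears as a Delay (the clock), a distance comparison, and a Modify Health action wired in sequence, which is why thinking of modules as pulse-driven functions makes systems easier to debug.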

So why cover level design tips in an audio tutorial? The thing that sets game audio apart from creating a song or designing sound for film is that, without an environment to work with, you can't really create the systems necessary. Now you may say that without film you can't create film sound, which is a valid point. But it's so easy to create a simple game level that it's not worth skipping. When I started my interactive audio module at university, the first thing we were taught was basic level design, as it gave us much more freedom and flexibility over the systems we designed. The other advantage of knowing these techniques is the breadth of tools they introduce you to, which gives a much broader picture of game design as a whole. If you're already adept at level design, you can of course skip this post and go straight on to the next one, detailing Kismet and other elements.

For this tutorial, we're going to start by creating a new map. You'll be greeted with a choice of new maps, 4 of which include a basic floor and static-mesh block with a different time of day for each, with the last being a blank canvas. To make things easier (as I'm not covering much level design), we'll just go straight into a pre-built one.

Here, you can choose a pre-made level, or start with nothing.

The first thing you'll see is the 3D view. This is used to view a close representation of what your final level will look like, including any invisible volumes and nodes. You can hide and show a plethora of different features so you can find exactly what you need.

The main editor window, with a pre-made level loaded.

However, to really help with workflow, we're going to show the other views. Clicking the button in the top right corner (shown below) reveals 3 other views, which provide texture-less perspectives from the top, front and left side of your level.

With this view, you can see a 3D adaptation of the level, as well as 3 perspectives with texture-less models.

Creating a Simple Block Level: The best tool for creating a level is the builder brush. This is used to create blocks of varying sizes and shapes, and can be manipulated in various ways. For our level, the builder brush currently sits on the cube in the middle. Clicking the Go To Builder Brush button (shown right, right button) will take you to it (no matter where it is in the level) and will automatically select it.

The Builder Brush can be changed into many shapes. You can see these on the middle-left (turquoise coloured buttons).
Here, I've simply moved the builder brush out of the cube. As you can see, it's a texture-less entity which gives an outline of the shape it will create; you'll also see some controls in the centre. For everything you select in the editor, a set of controls will appear to allow manipulation, of which there are 3 kinds - displacement, angular and scaling. A fourth is available, which allows you to scale an object separately in the x, y and z axes.


Have a mess around with these tools to see what kinds of cuboids you can get. There are also preset shapes you can use, detailed in buttons on the left-side toolbar, such as spheres, cylinders and stairs. Once you have a shape you'd like, you can then create a static object. The quickest way to do this is by pressing Ctrl + A. As you'll see, the area is filled with a blue and white checkered block.

The builder brush gives the created object this texture by default, stretching it across each plane. You can, however, change these textures and how they're applied in the texture properties window.

You can now move the builder brush around and create more blocks the same way. Another great feature is the ability to cut out holes in these blocks. In the below screenshot, you'll see I've simply changed the angle of the builder brush and 'subtracted' this area from the one created previously. You can achieve this by pressing Ctrl + S.


I use this feature a lot for creating doorways and windows, but as you might imagine, it can be used very creatively even for such a basic element of the engine. It's a very quick and easy way of drafting a level, which will aid in the design and implementation of systems. Below is an example of something we can use for a building with exterior and interior walls, an entrance and an exit.

Of course, you can intersect these blocks if you wish. This makes it easier to prevent unwanted light leakage or to stop the player from becoming stuck in gaps.

When you're happy with the layout of your blocks and want to have a play-test, all you have to do is press the green play button at the top of the editor window and you're in. You can run around and view what is the start of your creation!

This particular screen has had a full lighting refresh - a rendering process which you can run from the 'Build' menu. By doing so, the engine calculates how the light bounces and reflects, giving a much more realistic look to your level.

More uses of the Builder Brush: It's great fun to create these levels with the builder brush. However, it's much more powerful than this, and you'll find yourself using it for much more than physically building blocks. A lot of the use will be for volumes: trigger volumes, water volumes, reverb volumes and many more. Each of these has a distinct function to help you design systems and gameplay. Trigger volumes in particular will be used more than most, for setting off the sound systems I'll detail in coming posts.

This is what a trigger volume looks like. In a later post, I'll be hooking this up to a system in kismet as the start point, or trigger as such.

Static Meshes from the Content Browser: Another way of 'furnishing' your level is with static meshes: complicated, textured 3D models - anything from a door, window, table or engine to a car; basically any object that isn't created with the Builder Brush. They're stored in the content browser, and can be found using a filter tick-box at the top. To put one in your level, you can either select the object (which will stay selected even if you close the content browser) and right-click in the level to add it, or simply drag it in from the content browser.

In the content browser, clicking on the 'static meshes' tick box reveals all the static mesh files available.

Here, I've added 3 of the same static mesh; barrels in this instance. The medium-sized one is the original, with the other 2 scaled appropriately and angled slightly.

Conclusions: So that just about covers basic level design without getting too far into more complicated systems. I will carry on with level design slightly in the next post, covering Matinee (the tool used to animate objects) and integrating it into Kismet. Mostly though, I'll be going over Kismet's many different modules and how I've used them in the past to overcome certain challenges (namely footsteps!).

Please don't forget to contact me if you have any questions or would like to add anything; I very much appreciate your feedback!

Alex.

Monday, 19 August 2013

Editing and Mixing Sound: Frequency, Volume and Content

From a large action sequence to a subtle conversation, a huge amount of thinking and planning goes into the sound design to create an engaging, believable and pleasant listening experience. Not only does pacing need to be accounted for, as well as which sounds are needed, but considerations for the frequency and volume are paramount to getting a good mix.

Let's take the two examples mentioned above: an action sequence and a subtle conversation. These are quite different in terms of content and frequency, and must be planned thoroughly as such. I'll break each down into relevant components so they're easier to account for.

The Action Sequence:



Set the scene: Let's take a place of action similar to one of the fight sequences in 'Inception'; it's raining, there are many cars driving around with 3 or 4 chasing the main characters in their car. Of course, these cars are crashing into other cars and objects, while the 'henchmen' all have guns of varying power, attempting to kill our protagonists.

I must first stress something - this is only one of many ways to go about designing, composing and implementing the sound for a scene, regardless of it being an action sequence. It's really up to the director and the way he wants to take the action, or the Lead Sound Designer who may have a particular vision or style. A lot of other films might have had fast paced music over the top of this, but Richard King (the Sound Designer behind Inception) went with a more dynamic approach; bringing forth a much higher impact than if designed otherwise.

Break down into layers:
  • Ambience - Throughout the scene, rain is falling fairly heavily. However, you'll notice the sound is particularly low in volume. The reason for this is to help enhance the dynamic range, which (as stated previously) is necessary for the high-impact sounds. In fact, the ambience will likely be taken out at sections where the sound gets busy (crashing, gun fire, louder sounds in general). The same goes for the section where the train appears. Just before this occurs, you'll notice the dynamic range is heightened by leaving only the ambience in at a low volume before the train crashes into the initial cars.
  • Dialogue - Although there isn't much in this section, the dialogue is seemingly set back in the mix, which is another device to increase the impact that crashing and gun fire has.
  • Foley - This is a part of the sound that takes some picking and choosing. Much like a camera can focus on one part of an image, sound (when designed effectively) will do the same. If there is a man walking in the back of a shot out of focus, you generally wouldn't place footsteps on him. Similarly, if 5 or 6 people are shooting guns but the camera is focused on a particular person, the sound would be mixed towards that person, with their gun loudest and possibly some of the other guns left out altogether.
  • SFX (explosions, crashes) - In this scene, explosions, crashes and gun fire certainly take precedence. My way of thinking about a mix is to concentrate on the loudest and most prominent sounds, and then add other sounds around them. This way, you get a sense of perspective and shouldn't make too many mistakes in the way of bringing down the dynamic range.
Frequency Content: There is quite a lot going on here. One job is getting the sounds in, synced and mixed in volume, but making sure they don't clash across the frequency range is hugely important.
Let's consider the ambience first of all. This sets the initial scene, much like a backdrop at a theatre will do. However, you wouldn't have any bright objects or 'loud' images on this, as it would take the attention away from actors and props on stage. So in terms of frequency, you need to leave quite a lot of room for other sounds to happen, such as dialogue, foley and SFX. Therefore, you'll want a low-to-mid range frequency band - this leaves room for low frequency explosions, mid-to-high dialogue and the high-frequency chinks of bullets landing on the tarmac. In short, you'll want the ambience to accommodate all the other sounds; surround them, fill in the gaps.
Dialogue and foley will have a similar frequency range, but different from other scenes or mediums. These would usually take up more of the lower end, taking advantage of the proximity effect in Dialogue or intimacy in foley. However, for this type of scene, any lower frequencies would be EQ'd out (or recorded as such) to leave space for explosions and gun fire.
I've said it enough already, so let's finally get to SFX. These will take up the largest range of frequencies, as the breadth of SFX is quite large in this instance. Explosions, for example, will take up the majority of the lower end, with car crashes similarly lower but with higher crunching sounds layered over. Car engines would usually take up the lower end, but with so many short bursts of lower frequencies, they've designed them to take up more of the mid range. Gun fire will depend on the size of the weapon: hand guns have shorter low-to-mid bursts, with automatic weapons taking up more of the low end.
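One way to sanity-check a plan like this is simply to write the bands down and look for holes. The Hz figures below are rough illustrations of the ranges described above, not measured values from the film:

```python
# Rough, illustrative frequency bands (Hz) for the action-sequence layers.
MIX_PLAN = {
    "explosions":    (30, 250),     # dominate the low end
    "ambience":      (100, 2000),   # low-to-mid bed surrounding the other sounds
    "dialogue":      (300, 4000),   # mid-to-high, lows EQ'd out
    "bullet_chinks": (2000, 12000), # high-frequency detail on the tarmac
}

def spectrum_covered(plan, low=30, high=12000, step=10):
    """Check every `step`-Hz point between `low` and `high` falls inside
    at least one layer's band, i.e. the mix has no obvious gaps."""
    for f in range(low, high + 1, step):
        if not any(lo <= f <= hi for (lo, hi) in plan.values()):
            return False
    return True

print(spectrum_covered(MIX_PLAN))  # True: the layers tile the spectrum
```

A band plan like this is only a starting point - in practice the bands shift moment to moment, as the Fluctuations section below describes - but it makes clashes and gaps visible before you start EQ'ing.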

Fluctuations: As with a film overall, a scene can have fluctuations in frequency content to 'fill out' any gaps. For example, I said earlier that cars take up more of the mid range here, but for some shorter sections the frequency tends towards the lower-mid. They can get away with this because of the many cuts and perspective changes image-wise.

The Subtle Conversation:



Set the scene: As a big fan of the show, I suppose it wouldn't go amiss to use a scene from The (American) Office as an example of subtle conversation - there's certainly a good amount of it throughout the show's entire catalogue of episodes, ranging from whispered chats to shouting battles.

Break down into layers:
  • Ambience - In this instance, ambience takes up quite a lot of room in the mix. In a way, the ambience in The Office is very much a character in itself. That is, it sets the scene and changes dramatically depending upon the circumstances. Most of the time, it's made up of fans and air conditioning, layered with SFX (detailed below). As there isn't much going on in the way of needing a huge dynamic range, the volume is brought up right behind the dialogue, and fills out the remaining frequency range (more on this later).
  • SFX (Copier, Phones, Doors) - These SFX are placed strategically and help further set the scene, creating a 3D space around what you see. In fact, this is a great example of sound continuity. When the camera is facing one way, there will always be something going on around and behind it. These SFX are placed almost like jigsaw pieces to complete that picture. See someone walk off camera towards a door? The designer will carry on the footsteps and use an opening-door sound to signal and bring closure to that movement. Back to the point at hand though - the levels tend to depend on how far the objects are from the camera, which gives a sense of depth to the room.
  • Dialogue - For intimate conversations, dialogue takes more priority than any other sound. Here, the volume tends to stay consistent, regardless of speech type such as whispering or shouting. When the camera is taking a close-up, the proximity effect is utilised to give a sense of intimacy through sound.
  • Foley - In some cases, the foley can take as much precedence as the dialogue in terms of volume and content. The Office, as you might imagine, involves a lot of phone, keyboard and paper handling. These sounds are therefore quite high in the mix and are used as focus tools. For example, some scenes don't have a lot of dialogue, and use the foley to tell a miniature story. This can be anything from someone pretending to read a magazine while spying to Kevin eating a cupcake.
Frequency Content: For The Office, there are some distinct differences in the frequency ranges of each component, but they fill the same gaps for the most part. Of course, the only change here is that there aren't any cars crashing or guns firing, so there's little need to take up that lower frequency range of 60Hz and lower (unless someone punches a wall...).
Ambience now takes up a much larger part of the frequency bandwidth, filling the lows and low-mids that would otherwise be used by explosions and gun fire. Dialogue varies widely now, with shouting from afar being similar to the action sequence (mid range), while close-up shots use the proximity effect, utilising the low-mids and often lower. Keyboards would otherwise have a very 'light' sound, taking up the mid-to-high frequencies, but here the sound involves some lower-mids too. Clearly, with a show like this, they can use more varied frequencies for sounds that would otherwise not use them.

Fluctuations: Changes in sound are as important as changes in the image for a 'Mockumentary' style show such as The Office. With so many swiftly changing shots, quick camera movements and single-shot takes, it's important that the sound stays consistent and varies accordingly for the scene. Particularly with the one-shot takes, where the camera is walked through the office, sound has to change enough to offer an audible picture of the environment it's moving through. A good exercise is to find one of these sequences and watch it without the image - you'll notice a lot happening which would otherwise seem normal with the image; a consideration necessary for the Sound Designer when mixing everything. In fact, you could do this yourself at your own office or workplace: even there, you'll find fluctuations in volume and frequency content.

Conclusions:
From looking at these two examples, we can quickly see some correlations in the needs of a mix for film, TV or any form of soundscape: regardless of the style, filling out the volume, content and frequency range appropriately is vital. You could argue the same for a song without a bass guitar; instead, the guitar would need to be 'chunkier' and the kick drum would need a more open, deep sound to compensate.

These are thoughts you need to consider when designing the sound for your own project - is the frequency range, volume and content fulfilling the mix? Ask yourself this through the entire process: you won't want to be destructive in editing or mixing and find the decision was made in error.

I hope you've enjoyed this post! Please leave a comment if you have any thoughts to add, or better still, let me know of any personal experiences you've had with this kind of thinking.

Next Time: UDK Game Audio! - I'm finally going to dive into my love of interactive sound. This first post in the series of many will briefly touch on best practices of audio before getting to the importing stage and basic implementation of sound in a 3D game world.

Alex.