Article by Alyx Jones & Lorraine Ansell
Develop:Brighton 2018 has closed its doors for another year after a wonderful three days packed with talks from a range of disciplines around game development, including programming, art/design, business/marketing and, of course, our favourite: the audio track. Set in the Hilton along the seafront, there’s an expo full of companies showing off their latest work and looking to hire new talent, as well as an indie bootcamp track for those earlier in their careers. Every year, thousands from the game development community gather in Brighton at Develop to learn from each other, share experiences, network and do business (as well as drink, party, play games, eat cake and sing Marioke in the evenings!). It’s not cheap to attend, but reasonable indie rates are offered, and there is so much going on over the three days that even an expo pass will help you network, as well as letting you take part in the game jam.
The Audio Track runs on the Thursday of the conference, for anyone involved in sound or music for video games: “Nothing ruins a good game faster than annoying music or inferior sound effects. Audio professionals can expect to be inspired by the latest techniques and trends, plus hear from peers and experts who will share their own experiences and knowledge on all things sound related.”
Audio Day started with John Broomhall welcoming everyone to the Hilton. John has helped organise the audio track at Develop for years, and is also well known for his work as a composer on titles such as X-Com and Forza. We’ve compiled everything we heard over the day and picked out our favourite moments from each session:
Directing Actors: The Do’s, the Don’ts and How to Get the Best Performance
The first talk of the day in the Audio Track was by legendary voice actor Stephane Cornicard, who has appeared in Lego Marvel, Total War, Dark Souls, Fable and, my personal favourite, the Bond franchise. He also directs voices, and he shared a handy checklist for game developers and audio folk when it comes to working with actors.
Actors are a lovely bunch; they genuinely enjoy voicing your stories. Giving them sides with full information about the character (though not necessarily about the whole game iteration), along with an in-action image, is extremely helpful. This helps them place the physicality, and as the physicality informs the voice, your words come to life. Then talk to the actor about the character and the lines, including the environment (outside/inside/up high/down low/talking to a person right next to them/talking to a person at the other end of a lake) and especially the adverbs needed. How do you want them to feel? Not do: stage directions are helpful, but what are they supposed to be feeling in each scene? Empathy is a key acting skill, and it helps the actor place themselves in the character’s shoes. By directing in terms of emotional states, actors can capture a character for you much more easily. Avoid line reads: though they can save time, they can produce a parrot-fashion repeat of a voice that may not be the character you are after.
Run Fast, Sound Great: Keys To Successful Voice Management Systems
Will Augar and Jon Ashby (Frontier Developments) brought us some wonderful “Dinosaur Recipes” from Jurassic World to enjoy as lunch approached (no dinosaurs were harmed). They talked us through how using sets of rules that didn’t depend on each other allowed Wwise to work out how important each dinosaur was in the mix. Rather than rating importance with numbers, they used phrases that were easier to understand. This method let them find, for example, the four most important dinosaurs the player should hear, and the top vehicle that should be heard. Their system would only tell Wwise to play the most important sounds based on their rules, e.g. the size of a dinosaur or its proximity. Because the rules weren’t dependent on each other, they were also easy to change.
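The independent-rule idea can be sketched in a few lines. This is a hypothetical illustration, not Frontier’s actual system: each rule scores an emitter on its own, the scores are summed, and only the top-ranked emitters are flagged for the audio engine to play. All names and weightings here are assumptions.

```python
def size_rule(dino):
    # Larger dinosaurs matter more in the mix.
    return {"small": 1.0, "medium": 2.0, "large": 3.0}[dino["size"]]

def proximity_rule(dino):
    # Closer dinosaurs matter more; invert distance (clamped to avoid /0).
    return 1.0 / max(dino["distance"], 1.0)

# Rules are independent: adding, removing or changing one
# never requires touching the others.
RULES = [size_rule, proximity_rule]

def audible_dinosaurs(dinosaurs, limit=4):
    """Return the `limit` most important dinosaurs to send to the engine."""
    scored = sorted(dinosaurs,
                    key=lambda d: sum(rule(d) for rule in RULES),
                    reverse=True)
    return scored[:limit]
```

Because each rule is a standalone function, swapping “proximity” for, say, “on screen or not” is a one-line change, which matches the flexibility the speakers described.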
Using Audio to Support the Story and Maintain Immersion in Batman: Arkham VR
Andrew Quinn from Rocksteady took the podium to show us how he worked with audio within the Batman: Arkham VR experience. With virtual reality, one of the difficulties is transitioning from one scene to the next, without it being jarring for the player, who is suddenly in a new environment. Andrew showed some early work from development where the screen went black and then we appeared in the next scene, with some voice narrative in between. They progressed to blending the audio between both environments while the screen faded to black and the new scene loaded. The sound design allowed the new scene to be set, sonically first, and then slowly faded up, to make it a more natural transition. Sounds of rain or cityscapes bridged the gap while we moved to a new location. We were also shown an example from the game where Batman can fast forward, stop and rewind time, to analyse Nightwing’s combat. There is also an easter egg in here, of ringing tones from a piece of music you may recognise!
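The transition technique described above, establishing the new scene sonically before it is shown, can be sketched as a pair of overlapping gain curves. This is a minimal illustration of the general idea, with hypothetical names and timings, not Rocksteady’s implementation:

```python
def transition_gains(t, fade_out_end=0.4, fade_in_start=0.2):
    """Ambience gains for the old and new scenes at normalised time t in [0, 1].

    The new scene's bed starts rising (at fade_in_start) before the old
    one has fully faded (at fade_out_end), so sound bridges the black
    screen while the next location loads."""
    old = max(0.0, 1.0 - t / fade_out_end)
    new = min(1.0, max(0.0, (t - fade_in_start) / (1.0 - fade_in_start)))
    return old, new
```

Because the two ramps overlap, there is never a moment of total silence, which is what makes the cut feel like travelling to a new place rather than a hard scene swap.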
Challenging Videogame Music Tropes
This talk combined the work of Olivier Deriviere (Remember Me, Get Even) and James Hannigan (Command & Conquer, Harry Potter, The Lord of the Rings) to look at the nature of video game music, how best to work with developers, and how greater cohesion between music and gameplay, as more than a background element, leads to deeper and more meaningful experiences for the player. Olivier talked about the importance of your relationship with the developer, and how it should be cherished, as this is what allows you to reach greater levels of creativity and ultimately better results. He showed us an example from Get Even where many musical elements were placed within the environment, such as knocking on doors in time with the rhythm, and where what the voices were saying/singing changed depending on our actions. This tight integration of the music into the game led James to comment that:
The most celebrated music is linear, so the more you embed the music, the less separable it is from the experience – James Hannigan
But he also added that composers perhaps have to put in a lot of extra work to create a soundtrack from their game music if they integrate it this deeply. Games shouldn’t be treated like films, though; they aren’t a passive experience. James questioned, “Why would you treat players like a passive film audience, and throw canned music at them?”. Olivier added that a games composer should “understand at least what it is to be a gamer”. Game music is not just for fun; it’s meaningful and far more than just an illustration.
Audio Design Masterclass: Creating Force Power Sounds For Star Wars Battlefront II
Philip Eriksson (EA DICE) is an audio designer who worked on Star Wars: Battlefront II. For this game, the team had access to a big library of sounds from the Star Wars franchise, including films, games and TV animations. Philip started by listening to each character’s original sound design, how it was made up, and how he could translate this to their “special move” (every hero in Battlefront II has a signature move). It was important to make a clear distinction between heroes and villains. There were some interesting sources for the characters, including a baby alligator combined with a slinky, and a kitten in Kylo Ren’s audio make-up. The in-game results were really impressive, and certainly felt like part of the Star Wars universe.
Horizon Zero Dawn – Dealing With Scope and Scale in a Brand New Open World
Next, Bastian Seelbach talked about how the studio managed the shift into the big open world that Horizon Zero Dawn became. The scope of the game was huge, and the narrative matches it. Horizon is a game full of story, lore and side quests, but the biggest problem this presented was that they ended up needing audio for about three hours of cinematics (longer than the average feature-length film!). Normally they would do all the audio work for cinematics separately from the in-game audio system, but they needed a way to save time over the project, so they decided to re-use the in-game system. It worked really well; in fact, when cinematics sound obviously different from the gameplay, it can sometimes pull you out of the story a little.
How to Maximise the Story-Telling Power of Sound in Your Game
The penultimate session was led by Shannon Potter (Formosa Interactive), who has extensive experience creating foley and sound design for incredible narrative titles such as The Last of Us and Uncharted. Shannon talked about foley, how working with snow was particularly difficult (she had to get A LOT), and how foley is often underscoped in projects. Live spotting in foley sessions isn’t ideal for Shannon, as it takes artists out of the creativity. She also said she loves creating all the less noticeable sound design, and likes to “geek out on smaller sound effects that sit there, aren’t noticeable, because they feel right“. In terms of storytelling and building tension, one of the tensest parts of The Last of Us involves deformed people called “Clickers”, who make a throaty, demonic sound and click to locate the objects around them. The reverb of the clicks arrives before they do, and this makes the experience really terrifying. When the player is partly in the dark in the gameplay, the audio has much more of an impact.
Open Mic 2018
The final session of the audio day is always a general panel discussion, this time with Kenny Young (Little Big Planet), Nick Arundel (Batman), Shannon Potter (The Last of Us, Uncharted), Stephen Baysted (Project Cars) and Bastian Seelbach (Horizon Zero Dawn). There was general discussion about recent games that set the bar for audio and music, with Cuphead the favourite, before the panel moved on to advice for composers. Nick said that you should always question every creative decision you make, such as “Why is that second chord there? Think through what you’re going to do next”, and that you should always try to finish your work rather than have it taken off you: “Finish it. If you don’t finish it then you have two problems: it’s rubbish and it’s unfinished“. An audience member asked about imposter syndrome and how to cope with it; it turned out pretty much the entire audience felt they struggled with it. Shannon said, “If things feel s**t, you might need to eat, or go home, and then ask collaborators what they think, or move onto something simple that you can do and then come back to it“. The panel also talked about how it was increasingly difficult for young people to enter the industry, with fewer internships or junior positions and a higher bar for entry. Bastian said that “audio teams haven’t upscaled in the same way as other departments”, and Shannon added that there “are just not enough internships available“. Towards the end, discussion turned to the low number of women in the industry, with Nick asking for practical advice on how to get more women employed. Paul Lipson replied that he had a pretty even split at Formosa Interactive. Some panellists and audience members found that their job postings didn’t even attract female applicants. Jessica Saunders (Salix Games) added that the moment you get harassed or groped, and you freak out, it’s then impossible to report it without risking career suicide.
Develop 2018 was really awesome, and the balance of talks between VO, sound design, implementation and music was much better than in previous years. There was also a good range of speakers from different backgrounds and disciplines. The game audio community never ceases to amaze with its passion, drive and expertise, from technology to creativity.
We’re also running a Patreon campaign to make sure we can keep bringing you regular, high-quality content, if you’re feeling generous! Even just sharing helps, thank you!
The Sound Architect