
Quantum Break Interview with Richard Lapington

Deniz Zlobin has very kindly shared with us his interview with Richard Lapington (Original HERE in Russian), who at the time of the interview was Remedy Entertainment’s Audio Lead (now Narrative Lead), about the audio for Quantum Break! Read the full interview below:

 

How did you start working in the games industry, and how did you end up at Remedy?

I’ve got a background in music. I have a degree in jazz music, a diploma in audio engineering and a master’s in new media. When I was studying in Amsterdam to be a sound engineer, after my failed music career (not really failed, I just changed my mind), I became very interested in programming. I started looking for ways to use my audio skills and programming in a job environment, and I realised I could do this in games. Like many of my generation, I grew up with games, writing simple programs on a ZX Spectrum when I was a kid. So after I left Amsterdam I came to Finland and started looking at how to become part of the games industry. I realised that I needed a portfolio, programming experience and a whole host of stuff I didn’t know, so I went to study ‘New Media’ at the University of Art and Design (Aalto University nowadays). There I focused my entire time on learning everything I could to get into the games industry. The first thing I managed to do was get myself making Flash games for a project called ‘The Tulse Luper Journey’. It was run by Peter Greenaway, the famous English film director. That was my first game project, and I worked on it for 2 years while studying for my master’s. Then, just before I graduated, I was hired by Midway, and that was my first AAA games industry job.

That was back in 2006. The game was called Wheelman, and my work on it lasted around 2 years. Then Midway went bankrupt, so I moved back to Finland to work for Bugbear. I did a couple of canned projects before starting work on Ridge Racer Unbounded. Soon after that I was approached by Remedy, and I’ve been working for them for around 6 years. At Remedy I’ve worked on Death Rally HD, Alan Wake’s American Nightmare and Quantum Break – which was in production for about 4 and a half years.

 

Tell me about Remedy’s audio department.

There is myself; Ville Sorsa – senior sound designer; Josh Stubbs, who is a dialogue and tools specialist (he does sound design as well); Martin Bussy-Paris, who specialises more in integration and the technical side; and Lucas Pierrot, who is the newest member – and a really good sound designer. That’s my direct team. Then we have an audio programmer, Perttu Lindroos, and composer Petri Alanko, who often does music for us. Petri works freelance, but he is always hanging out in the office, so I consider him part of the Remedy audio team. For Quantum Break, when we were getting to grips with the game, he worked from the Remedy offices, which helped a lot.

The Quantum Break team was different. It was massive: we had 6 people in our core audio team, plus 3 people in London working for us. And then on top of that we had 2 guys in Seattle. Plus all the cinematics were done in Canada by a company called Game On, and Petri Alanko was our composer, of course.

 

How was the communication arranged between you and other teams?

It was quite complicated. Lots of Skype calls, lots of emails. But in hindsight I think we were quite clever. We had people in different time zones, so not everyone was available at the same time. We had quite a few guidelines. Because everybody was working in the same Wwise projects, we had very strict rules about what people could do in a project and how things were named. That was very important, particularly in Quantum Break, where we had this concept of ‘stutter world’ and ‘non-stutter world’. There was a whole ruleset for how the hierarchy was built in the Wwise project, what plugins we could use and where, and also, for example, how switches and game states were used – it was all defined from the beginning. Very strict guidelines helped a lot. Basically, we tried to make everything as foolproof as possible. We tried to make sure everyone worked in their own little bubble, so we didn’t cross each other’s work. It worked, but it was a bit chaotic at times.

 

Did you do the trailers in house or were they outsourced?

It depends on the trailer. The first trailer was done 50% in house, 50% outsourced. Then we did the Gamescom 2014 presentation – it was quite a long one, and that was actually in-game sound. I mean, it was played live on stage. That was an extremely risky thing to do, but for some reason we decided to do it. And then there was the Gamescom 2015 trailer, which mainly had in-game sounds with some outsourced sound design. Just to clarify, when we usually do game trailers, we capture the game itself without music, then we compose new music on top. On top of this we sometimes add additional sound effects or VO as well. It really depends on the trailer and what we feel the trailer needs. So to answer the question… it’s always a mix.

 

How did you come up with sounds for Jack’s time abilities? There are almost no references for that in nature. The most obvious choice would be to filter sounds as if they were underwater and “slowed down”, but your time stutter doesn’t sound like that.

It took a few years to get it to feel right. Originally we were looking at references from films. The first reference we used was from the film Constantine with Keanu Reeves. It has a few sequences where time freezes, and we actually stole a lot from that. Particularly those high-frequency glassy sounds – our original ‘stutter’ design had lots of that.

When we originally started designing stutters, there were a few things we wanted to avoid. One of them was this underwater thing, even though going underwater was used frequently as an example of the ‘different world’ we were trying to create. At that time the game’s visual world looked quite blue in stutter, so when we added the ’underwater filter’ to the sound it felt like you were actually under the water. To avoid that, we started experimenting with high frequencies. But when we started putting it into the game, all the high-frequency audio started to feel fatiguing and a bit annoying. Additionally, the game direction and game design were changing all the time, and our high-frequency sound design wasn’t cutting it with the new, more aggressive, violent game. So we had to change direction and ended up where we are now.

A bigger challenge was trying to describe what feeling we wanted from a stutter, from Jack’s time abilities. “Violence” was one of our keywords. “Unpredictability” was another. We knew everything had to feel over the top, “hyper-real”. However, the stutter and time sounds we wanted ended up being more about what we didn’t want. For example, we didn’t want reverb, i.e. no locatable sounds in the stutter world, so you should never be able to feel the space. When you were in the ‘normal’ world, we actually tried to emphasise the reverb to make the space feel really full, to create a contrast.

Quantum Break Stutter Non Stutter Comparison from Richard Lapington on Vimeo.

During production we had a long conversation about what happens to air molecules when time freezes. Jack can move, he can vocalise and he can hear his footsteps, but why is that happening? It’s because when he speaks he is forcing the air, but it’s not going everywhere and coming back, it just goes directly to you. So it feels very different. If something moves in a stutter, it forces the sound out all the time. It’s not natural, it’s breaking this “frozen-ness”, but it can’t bounce around freely.

 

Quantum Break has lots of really complex animations, and the sounds felt really tied to them. How did you communicate with the animation team?

Main character movement sounds were relatively simple. We have an automatic footstep tracking system that, like in any other game, gathers data. We can detect floor material types, the player’s speed and velocity, what weapon he is carrying, and other things. Basically, we trigger every character sound from the footsteps; we just have many parameters going into the system.
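As a rough illustration of that idea – not Remedy’s actual engine code, and the handles and names below (on_footstep, audio, set_switch, set_parameter, post_event) are hypothetical – every footstep can carry the relevant game state into the audio system as switches and parameters:

```python
# Hypothetical sketch: every character sound is triggered from the footstep,
# with the surrounding game data attached as switches and parameters.
def on_footstep(character, audio):
    audio.set_switch("surface", character.floor_material(), character.id)  # e.g. "concrete"
    audio.set_switch("weapon", character.weapon_type(), character.id)      # e.g. "shotgun"
    audio.set_parameter("speed", character.speed(), character.id)          # metres per second
    audio.post_event("play_footstep", character.id)
```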

Then there are custom in-game animations, which are special animations that take away the player’s control for a while, like ducking under something. You aren’t actually in control of that: you push the stick forward and the game plays an animation. When the animation starts, it just plays a ready-made synced sound.

For objects, there were several different types we dealt with. We have physics-controlled objects, which are not really animated at all – they run through our physics system. For different animated breaking things we just create specific assets, and for big animations, where each part is animated, we sync different sounds with different events. We have a tool called the Timeline Editor, where we can sync sound directly to an animated scene. First we create a game capture video with sync points, then we design sounds for that animation and render them to the sync points. The Timeline Editor follows the audio clock, so the animation always stays in sync with the sound.
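The essence of that sync trick is choosing which clock is the master. A minimal sketch of the idea, with hypothetical names rather than Remedy’s actual Timeline Editor API: the animation frame is derived from the audio playback position every tick, so the picture cannot drift away from the sound.

```python
# Hypothetical sketch: the audio voice acts as the master clock for the scene.
def update_synced_scene(audio_voice, animation, frame_rate=30.0):
    t = audio_voice.playback_position_seconds()  # how much audio has actually played
    animation.set_frame(t * frame_rate)          # slave the animation to that time
```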

 

So is that how all of those things which go backwards and forwards were done? The train crash scene, for example?

No, that’s something different. For the random back-and-forth things we have our own granular synthesis plugin for Wwise. We design a sound that fits the motion of, for example, the train crash scene. Then we track the position of the animation and play it through the grain synth. That’s the main sound you hear. Then we add one-shot sounds for certain key points. Like when the train comes through the wall and hits a pillar, we add a one-shot sound for the wall and the pillar. Each time it hits the pillar we know how fast it’s moving, and we play a different sound depending on the speed. So it’s a combination of granular synthesis and a matrix of one-shot samples.
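A sketch of how such a setup could look, purely illustrative – the grain_synth and audio handles and the impact table below are hypothetical, not the Wwise plugin’s real interface:

```python
# Hypothetical sketch: a continuous granular layer driven by the animation,
# plus a matrix of one-shot samples keyed on what was hit and how fast.
IMPACT_BANK = {
    ("pillar", "slow"): "train_hit_pillar_soft",
    ("pillar", "fast"): "train_hit_pillar_hard",
    ("wall",   "slow"): "train_hit_wall_soft",
    ("wall",   "fast"): "train_hit_wall_hard",
}

def update_train_crash(anim, grain_synth, audio):
    # Continuous layer: scrub the grain synth with the animation position.
    grain_synth.set_position(anim.normalised_time())  # 0..1 through the crash
    grain_synth.set_rate(anim.playback_speed())       # negative when rewinding

    # One-shot layer: pick an impact sample based on target and speed.
    for hit in anim.new_impacts():
        bucket = "fast" if hit.speed > 5.0 else "slow"
        audio.post_event(IMPACT_BANK[(hit.target, bucket)])
```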

 

Granular synthesis is cool. What other custom tools did you have?

We also created a plugin called Q-analyzer, which is one of the coolest things in Quantum Break. Basically it is a plugin which analyses the audio signal in real time and sends this data to the game. We can then use this data for animation or visual effects. For example in the stutter, when the world goes “wavy”, it is the sound that drives the wave. We use different kinds of data like fast RMS, slow RMS, spectral centroid and the total width of the frequency band to drive different parts of the VFX.
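For readers unfamiliar with those measurements, here is a minimal, self-contained sketch of how values like fast/slow RMS and spectral centroid can be computed per audio block (plain NumPy, an illustration of the analysis named above rather than the actual Q-analyzer code):

```python
import numpy as np

def analyse_block(samples, sample_rate, prev_slow_rms, slow_coeff=0.99):
    """samples: 1-D NumPy float array for one audio block.
    Returns (fast_rms, slow_rms, spectral_centroid)."""
    fast_rms = np.sqrt(np.mean(samples ** 2))                            # energy of this block
    slow_rms = slow_coeff * prev_slow_rms + (1 - slow_coeff) * fast_rms  # smoothed energy
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)     # "brightness" in Hz
    return fast_rms, slow_rms, centroid
```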

Quantum Break VFX Proto from Richard Lapington on Vimeo.

 

That’s not a very typical way of doing these things though, right?

No, it’s not, but there are two reasons why it’s good. Audio is very complex data, so we can easily create variations by changing volume and pitch, or by designing lots of asset variations as well. When we send this data to the VFX department it’s really good for them, because they don’t have to create all those variations by hand. The other reason is that the audio-visual sync is always perfect.

 

You had an interesting experience of working with Hollywood actors. How is this different from working with “usual” voice actors?

The biggest difference is that Hollywood actors are very busy and they cost a lot of money. Because of that we had a relatively short time to work with them. So we had to be quite organised and get what we needed while they were available to us. The process itself wasn’t very different. These actors were really good at their jobs, so the sessions were in the end pretty smooth. If anything it’s just a bit more stressful because it costs more and there is less room for error, but in the end it all went really well.

 

Tell me about dynamic music, such as in the battle scenes and time machine corridor.

I’ve done quite a few talks about that. I think it is one of the biggest wins we had. Particularly in the combat, how “on the button” the music was: you press the button and the music changes. It is very empowering, it feels really cool. And it’s funny that when Quantum Break was released some people were saying that they didn’t notice the audio effect until they started analysing it afterwards. So we somehow managed to create a very obvious effect no one noticed.

Quantum Break Combat With Music Soloed from Richard Lapington on Vimeo.

We were specifically thinking: music is an art form in time, and QB is a game about time, so we want the music to reflect the game. That was a very conscious decision: we can break this music and use it to demonstrate time manipulation in the game. The music is composed in a very specific way to fit our system. The combat music is not written in 4/4, it is very polyrhythmic. If it was in 4/4, the player would expect some kind of rhythmic resolution. Also there is no melody as such in the dynamic sections, so there is no harmonic resolution, and we can ‘break’ the music quite easily without it feeling wrong. The music is split into 2 stems: percussion and tonal elements. We can manipulate them differently. When the player presses a button we drop the rhythmic element, because we want to break the timing, and we time-stretch or manipulate the tonal parts in real time using different effects. We had a special way of applying effects for each time power, plus there’s also some randomisation to spice things up. What’s cool about that system is that it always sounds different, depending on where you are in the music. It sounds kind of composed. It’s not completely generative, but it is manipulated very heavily. On top of all that, the music has all the usual things as well – we have a combat intensity system and stingers for enemy kills and other events.
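A very rough sketch of that stem logic, with hypothetical names (the music handle, effect names and power names are illustrative, not the shipped system): on a time-power press the percussion stem is dropped and the tonal stem is run through a power-specific effect, with a little randomisation.

```python
import random

# Hypothetical mapping of time powers to tonal-stem effects.
EFFECTS_PER_POWER = {
    "time_stop":  ["grain_freeze", "spectral_smear"],
    "time_dodge": ["tape_stop", "pitch_dive"],
    "time_rush":  ["stutter_gate", "pitch_rise"],
}

def on_time_power_start(power, music):
    music.set_stem_volume("percussion", 0.0)           # drop the rhythmic element
    effect = random.choice(EFFECTS_PER_POWER[power])   # randomisation to spice things up
    music.apply_effect("tonal", effect)                # manipulate the tonal stem

def on_time_power_end(music):
    music.clear_effects("tonal")
    music.set_stem_volume("percussion", 1.0)
```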

The other interesting music system was for the time travel sequence (when Jack is using the time machine) – it was a lot of hard work. For the sound effects we are using the granular synth. We know where Jack is and we know which direction he is going – forwards or backwards in time. So we have multiple granular synthesis plugins running forwards or backwards, depending on what the player is doing. For the music it is just a clever system of layering. We have music playing forwards and music playing backwards which cross-fade, but we know which part of the song you are in at the moment. It was just a lot of work and experimentation. We spent lots of time just chopping up pieces of the music and making it all fit together in the right way. It’s all smoke and mirrors, there is no crazy system behind it. Just a good idea combined with lots of work.
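The layering can be pictured like this (hypothetical names again, a sketch of the idea rather than the shipped code): a forward copy and a reversed copy of the cue play together, and the direction Jack is travelling in time decides which one you hear.

```python
# Hypothetical sketch: crossfade between a forward and a reversed music layer.
def update_time_corridor(direction, music):
    # direction: +1.0 when moving forward in time, -1.0 when moving backward,
    # values in between while turning around.
    music.set_layer_volume("song_forward", max(0.0, direction))
    music.set_layer_volume("song_backward", max(0.0, -direction))
```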

Quantum Break Time Travel from Richard Lapington on Vimeo.

Do you think that sound designers now need to specialise in something, or is it better to be a generalist?

It depends on what you want to do and where the work is. I think it is very important to know everything you can, even if you are a specialist – it informs how your speciality fits with everything else. I’ve done a lot of interviews with sound designers, and I’m always interested in their general sound knowledge. I think if you go into the game sound field, you need to know a lot about sound in general. If you are just starting, it’s better to be a generalist. Lots of people who have not worked in big companies like Remedy don’t actually know what the sound designer job is (sorry if that sounds a bit patronising). Only 30-40% of it is actually designing sounds. The rest of the time you are chasing up environment artists for stuff, or you work with animators, game designers and programmers on different features. I think what I’m trying to say is that when you start out you need to know not only how audio works but also how games work and how audio can serve the bigger picture.

I think specialisation comes after. Once you’ve been in a company for a while, you tend to specialise in something. But unless you are looking for a very specific job and have lots of experience in that field, I wouldn’t specialise too much.

 

What are the most valuable skills you look for?

You might be the best sound designer in the world, but if you can’t work in a team, that’s not gonna work out. It’s a really complicated set of skills you need to really impress. And above everything you need to be a nice person. How will you approach people, how will you fit Remedy’s vibe or my team’s vibe? Remember that you are gonna work with the same people day to day, often in very stressful situations.

At Remedy we look for, obviously, sound design skills. And since we are a story-driven company, I’m particularly interested in how people interpret the story – in some originality there. If I look at a showreel I often pay attention to how they use music, how the dialogue is mixed, what their focus points are. Does it have a nice dynamic curve, are they following a story arc with the sound design, are they driving to a peak in the cutscene? It’s the same when I look for composers: how they relate their music to the scene and the story. Originality – are they doing things that are slightly different, which can add something to my team.

It also depends on the position, but to summarise, we are looking for originality and awareness of narrative. To be honest, knowing Wwise and FMOD is not first on my list. Even though they are very important, they are just tools and anyone can learn them. I’m much more interested in the creativity, in the spark and drive the person has.

 

So you are not looking for a very specific set of technologies as some companies do?

No. It’s partly because we use our own engine, which you have to learn anyway. Maybe if we used Unity or Unreal, that would be a requirement. It is always a benefit to know FMOD, Wwise or something similar. But if we really like a person, we are not going to turn them down just because they don’t know Wwise.

 

What about programming?

At Remedy sound designers have to know some. They don’t need to know C++ or anything like that, but it’s good if they have some experience with things like Python or JavaScript. At the very least they are required to know about variables, functions and arrays – the basics. You don’t need to be an amazing programmer, but we expect people to know how to add a couple of sounds with scripts. That’s really important for us, and it is something we always ask about when we hire.
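The kind of small scripting task he describes might look something like this in Python (register_sound is a hypothetical helper, shown only to illustrate the level of programming expected): loop over a folder of recordings and hook each one up instead of adding them by hand.

```python
from pathlib import Path

def register_footsteps(folder, register_sound):
    # e.g. "concrete_01.wav" -> surface "concrete"
    for wav in sorted(Path(folder).glob("*.wav")):
        surface = wav.stem.split("_")[0]
        register_sound(event="play_footstep", switch=surface, file=str(wav))
```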

In general it very much depends on what work you want to do. If you want to work in-house, it is valuable; if you are an outsourcer, it is not that important.

Quantum Break Time Is Power from Richard Lapington on Vimeo.

What is the worst thing a sound designer can do in the job interview?

Ummm, show up drunk?… which I have seen! Not for a sound designer position though. I’ve done a Skype interview with someone who was drunk, and he was so drunk he couldn’t even sit on the chair. That’s not something you should do if you want a job.

 

What is happening with this profession – aren’t there enough jobs for everybody?

I work in a AAA company, and my view is really limited to that. I don’t know how it is for mobile or smaller games development. VR could be a thing. The requirements for audio in VR are much higher because of the immersion, so that might be ‘the’ future opportunity. So if someone wants to focus on that it might be a good idea, but VR could also be a fad. AAA is a very interesting place at the moment in terms of sound or any other department. The faster the consoles get, the bigger games become. The expectations on quality and on the number of assets we need are gonna go up and up. But there are also massive advances in technology, particularly in things like machine learning. If you look at animation, they are moving much more to machine learning instead of doing everything by hand. All of our facial animation now comes from neural networks. We record an animation data set, but we don’t do any physical animation tracking anymore, we just let the machine do it. I think it’s gonna be the same in audio quite soon.

 

That’s a hot topic nowadays. Aren’t you scared machines are going to replace us?

Yes, they are going to replace us. I mean, in the actual asset creation itself. Assets are not going to be designed by a human being in 10 years. We might even be machine-learning the entire game. But the thing with machine learning is that you still need to teach the machine, and that’s where the sound designer’s role is – you need to make sure the machine is making the right stuff.

Procedural audio is also going to be a big thing. For instance: Max Payne, Max Payne 2 and Alan Wake had just over 5,000 lines of dialogue in total. Quantum Break has 11,306. So three games from the last 15 years had half the assets of the one game we released now. Our current project, Crossfire 2, we predict will have the same amount or more for dialogue alone. Our expectations of immersion are so much higher that we need more content. We can’t keep going like that, it’s too expensive. So at some point we’re gonna have to pull that switch and say: this amount of assets is going to be procedurally generated, and it’s just the critical assets we’ll deal with ourselves.

 

What should we learn today to get the job tomorrow?

I really don’t know. Sound design is always gonna be a thing. I think we are at the start of a tipping point, so there is no single thing you can learn now. It is more about forging your own future and being very brave while following your own path – that is probably the best thing you can do. We are going to be at the stage soon where we can create, for example, music procedurally, and the only place humans will have there is to do something different from the machine. Something very unique and personal to you – that is the only way.

LINKS

Quantum Break

Official

Facebook

Twitter

Remedy Entertainment

Official

Facebook

Twitter

Richard Lapington

Vimeo

Twitter

LinkedIn


Original Interview Here (Russian): https://igrozvuk.com/richard-lapington-interview/

We hope you enjoyed the interview; feel free to check out more of these on the Interviews page. Also, don’t forget to sign up to our Monthly Newsletter to make sure you don’t miss anything!

We appreciate all the support!

The Sound Architect

