6.12 Point of Audition: Objects in Ears May Be Closer Than They Appear
6.13 Summary and Further Mixing Exercises
Reading and Listening Guide
7 Surround and Spatial Sound
7.1 Human Sound Localization
7.2 Binaural Audio
7.3 In-Head Localization
7.4 Surround Sound
7.5 Ambisonics and Object-Based Audio
7.6 Spatial Sound
7.7 Sound Propagation
Reading and Listening Guide
8 Sound and Meaning
8.2 Sonic Archetypes, Stereotypes, and Generalizations
8.3 Basic Semiotic Theory
8.4 Phenomenology, Embodied Cognition, and Intersensory Integration
Reading and Listening Guide
9 Sound for Story
9.1 Functions of Sound in Audio Story
9.2 The Mix
9.3 Audio Research
9.4 Audio Story Analysis
9.5 Spotting a Script
9.6 Cue Sheets
9.7 The Asset List
Reading and Listening Guide
10 Conclusions and Wrap-Up
This book would not have been possible without the many students who contributed to my lectures in sound design at the University of Waterloo and guest lectures around the world. I am grateful for their input on learning methods and the discussions we had about sound design. I also must acknowledge the input of those who contributed to lectures and workshops at academic and industry conferences and organizations where the teaching of sound was often at the forefront.
Some of the ideas presented here are based on work I have produced with coauthors, colleagues, and supervisors over the years, and I must also acknowledge their intellectual contributions to my research in sound: Bill Kapralos, Philip Tagg, Paul Théberge, Ruth Dockwray, Alexander Hodge, Michael Dixon, and Kevin Harrigan: thank you!
The accompanying photos and videos found on the studyingsound.org website were taken with Andrew Smith and Charlotte Baker, who also took the photos for this book. They also contributed most of the audio examples online, and Charlotte built out the website.
The funding for some of the experiments I have undertaken over the years that contributed to my thinking about sound was provided primarily by the Social Sciences and Humanities Research Council of Canada. I am very grateful for their continued support.
The composer John Cage tells the following story in his book Silence: after he played the same sound on a loop nonstop for fifteen minutes to a class of students, a woman got up and ran screaming from the room, “Take it off, I can’t bear it any longer!” Cage turned the sound off, only for another student to ask, “Why’d you take it off? It was just starting to get interesting” (2013, 93). Throughout this book, I hope that you will learn to “find the interesting” in sound. I aim to take you on a journey from being the person who might run out of the room screaming in annoyance, to being someone who is very comfortable thinking and talking about sound, who can focus on sound and learn to listen to different aspects of sound. A person who finds that the more they listen, the more interesting their world becomes.
Other books about sound design are available, but in my experience, sound design tends to be taught as sound for moving image (that is, sound for film/television, animation, theater, or games). The reader is left with no time to cultivate an appreciation of just sound, or to develop a language and rhetoric of sound on its own, to explore the potential that lies in sound as a medium and as a rhetorical device. The complexity of sound on its own is often rushed through in order to get to the technical aspects of sound for moving image. Part of this oversight is the fault of an educational system that focuses on other aspects of the production of media, and the entire world of sound is often forced into a thirty-hour course over one semester. The result is that sound designers are always enslaved to the image, creating sound for that purpose, rather than developing their skills in actually designing sound. I’m not suggesting that sound design for moving image doesn’t have a purpose: clearly, it has a very specific purpose. I’ve spent years studying the relationship that sound has to image, and image to sound, and have at this point published seven books and countless research papers on that very subject. However, sound for moving image has often assumed the role of a subordinate: sound is there, we are told, to support the dominant image. The eye rules supreme in our ocular-centric Western culture. Is it any surprise that image dominates sound design practice and education, too? Of course, most sound design jobs are in film or games, so it’s understandable that sound design programs focus on sound for moving image, but training only in sound-for-image misses out on all the possibilities that can be created by sound design as “just” sound design.
An interview I conducted with a video game sound designer, Adele Cutting, made me think there may be a better way to teach sound design in schools, by focusing on sound before moving on to sound for picture. Cutting had been hired to design the sound for an audio-only video game: that is, a game designed with little to no visual component. She explained some of the differences when there’s no image to design to:
I worked on Audio Defence: Zombie Arena (Somethin’ Else 2014), the audio-only game, the zombie shooter, and that’s like the Holy Grail for a sound designer, isn’t it? An audio-only game! It was a short turnaround—like four weeks—to do all the sounds. And it took me a good couple of days—probably three days—which is a lot of time when you’ve only got four weeks, to get my head around it. Because all these tricks that I did [with visuals]: Say, you were making a giant sound, you learn every time all these tricks to make it weighty and heavy. But when there’s no giant’s foot falling [visually], it didn’t work. I really had to get my head around it. That there was no visual clue to hang on with, because I’m always talking about how audio fills in, how audio is the glue that holds everything together, and we fix things. We make things look better when the animator hasn’t had time to do this, so we’ll put a sound in, so nobody notices. We’re always fixing things, and if things are far too slow, you can add audio and it speeds it up. You can add audio and make it go slower, but all of a sudden, [without visuals] it’s just you. I found that game at the start very, very difficult because you have to be so focused. There can’t be any fat on your sounds. It’s just got to be the one thing that you need to hear, and you can’t mix in [with visuals]. . . . I found myself chucking a lot of things out with the sound, to get the focus on it. . . . I felt it was so important that if there was only one sound going to be playing, or if you could only focus on one thing at once, it had to be the right thing. (quoted in Collins 2016, 119)
I am proposing that sound design, as a practice, may be better approached as an art form that stands alone from image, prior to learning about the complex things that happen when we put sound and image together. In other words, before we learn to put sound to image (looking and listening), sound designers are better served learning to just listen. I’ve designed this book based on my own teaching of sound design for about fifteen years now at several universities and in industry presentations and workshops, with the aim of helping others to structure a course in sound design beyond image. In an ideal world, students would then go on to learn sound for moving image in another course, and sound for interactive media in yet another course. We don’t often get the luxury of teaching multiple courses on sound, however. Anyone studying visual production would get all kinds of courses in drawing, illustration, painting, printmaking, typography, digital arts, graphic design, and so on; sound designers rarely get that same kind of scaffolded and multifaceted approach to learning.
This book is about sound design as “just” sound design. I bring in examples from other media, but the many exercises I include are meant to focus the student of sound on just that—sound. But what does it mean to design sound? We hear the term “sound designer” applied to film or video games, but what exactly does a sound designer do? In fact, although the term is fitting, it was an almost accidental title. In the Hollywood movie system, a sound editor was (and still is) the person responsible for creating and selecting sounds for film (by substituting, eliminating, and adding to the original live recording or creating the sounds in postproduction). The term sound designer was first used to describe the work of Walter Murch. Director Francis Ford Coppola recalls:
We wanted to credit Walter for his incredible contribution—not only for The Rain People, but for all the films he was doing. But because he wasn’t in the union, the union forbade him getting the credit as sound editor—so Walter said, “Well, since they won’t give me that, will they let me be called sound designer”? We said, We’ll try it—you can be the sound designer. . . . I always thought it was ironic that “Sound Designer” became this Tiffany title, yet it was created for that reason. We did it to dodge the union constriction. (quoted in Ondaatje 2002, 53)1
Although the term sound design is most commonly associated with film and more recently video games, it is also applied to radio, theater, product design, and more. Traditionally, the goal of product sound design has been to reduce or remove sound, by engineering products that absorb (incorporating foam, perforations, etc.), block, or enclose the sound. Today, however, a growing awareness of the important role that sound can play in products is redefining the role of a product sound designer. Product sound design now has many of the same concerns as film and game sound design, that is, driving our emotions rather than strictly conveying information.
Increasingly, sound designers are finding a role in the growing audio-based media world of podcasts, smart speakers, and audiobooks. We can also add to sound design the growing field of sound art, in which artists use sound to convey their thoughts and feelings and express themselves, much as they have done for millennia using visuals. Artists are no longer confined to canvas; they can create multimedia works that incorporate sound or make sound the primary focus of their work.
Sound design can take place at the level of a single discrete sound or at the level of an entire soundscape. Tomlinson Holman, the inventor of the THX sound format, provides a succinct definition of sound design that will suit our purposes: “getting the right sound in the right place at the right time with the equipment available” (2002, 26). Of course, describing what is the right sound is a more complicated process that requires further exploration. Sound designers must work within the constraints of context, in addition to budgetary and technical constraints. But there are additional elements that must be satisfied: the aesthetic choices made will affect the overall reception of the work or product. Is it pleasing? Is it annoying? Designers must make choices about sounds based on the ways in which they want the audience to (consciously or unconsciously) interpret the sound.
Designers design sounds by:
(1) Choosing recorded sounds which, by their selection, context or combination, create something new. For instance, how sounds are juxtaposed—or the situational context in which they are used—influences their perception. This task may also include using unusual materials or getting unusual sounds out of everyday objects. Selecting or recording the right sound is an important design decision. Gregg Barbanell has talked about how he uses everyday objects for the sounds of gruesome bone-breaking in The Walking Dead TV series (2010–): “For ‘breaking bones,’ big, full stalks of celery are employed—not merely individual stalks, mind you, but huge bunches capable of producing layered, complex snaps. They give you this huge, sinewy stringy sound. . . . It’s very effective” (quoted in Eddy 2015).
(2) Layering or combining several sounds, or splicing sounds together, to create a new sound. For example, Ben Burtt describes the creation of the lightsaber sound for Star Wars (1977):
I was a projectionist, and we had a projection booth with some very, very old simplex projectors in them. They had an interlock motor which connected them to the system when they just sat there and idled and made a wonderful humming sound. It would slowly change in pitch, and it would beat against another motor—there were two motors—and they would harmonize with each other. It was kind of that inspiration, the sound was the inspiration for the lightsaber and I went and recorded that sound, but it wasn’t quite enough. It was just a humming sound, what was missing was a buzzy sort of sparkling sound, the scintillating which I was looking for, and I found it one day by accident. I was carrying a microphone across the room between recording something over here and I walked over here when the microphone passed a television set, which was on the floor, which was on at the time without the sound turned up, but the microphone passed right behind the picture tube and as it did, this particular produced an unusual hum. It picked up a transmission from the television set and a signal was induced into its sound reproducing mechanism, and that was a great buzz, actually. So I took that buzz and recorded it and combined it with the projector motor sound and that fifty-fifty kind of combination of those two sounds became the basic lightsaber tone, which was then, once we had established this tone of the lightsaber of course you had to get the sense of the lightsaber moving because characters would carry it around. They would whip it through the air. They would thrust and slash at each other in fights. And to achieve this additional sense of movement I played the sound over a speaker in a room. Just the humming sound, the humming and the buzzing combined as an endless sound, and then took another microphone and waved it in the air next to that speaker so that it would come close to the speaker and go away and you could whip it by. And what happens when you do that by recording with a moving microphone is you get a Doppler shift. You get a pitch shift in the sound and therefore you can produce a very authentic facsimile of a moving sound. And therefore give the lightsaber a sense of movement and it worked well on the screen at that point. (Burtt 1993)
(3) Altering sounds through analog or digital signal processing, such as morphing sounds together (as in ring modulation); time domain effects (phasing, flanging); compression and limiting; reverberation and echo; and so on. For instance, the sound of the disc flying through the air in Tron (1982) was “a combination of a monkey scream backwards processed through a flanger and it was also another one of those weird synthesizer effects that I was able to create through the modulator, and also I took a big wire cable spin and that was the whooshing element. . . . I turned it [the monkey scream] backwards and you couldn’t recognize that it was a monkey scream really” (Petrosky n.d.). A minimal code sketch of ring modulation appears after this list.
(4) Synthesizing a sound, or creating a sound based on granular aspects that are recombined from other sounds. For instance, consider the THX sonic logo, known as “Deep Note,” created by Andy Moorer:
I set up some synthesis programs for the ASP [synthesizer] that made it behave like a huge digital music synthesizer. I used the waveform from a digitized cello tone as the basis waveform for the oscillators. I recall that it had 12 harmonics. I could get about 30 oscillators running in real-time on the device. Then I wrote the “score” for the piece. The “score” consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but is the sequence of parameters that drives the oscillators on the ASP. That 20,000 lines of code produce about 250,000 lines of statements of the form “set frequency of oscillator X to Y Hertz.” . . . The sound was produced entirely in real-time on the ASP. (Whitwell 2005)
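To make one of the alteration techniques from item (3) concrete, here is a minimal sketch of ring modulation in Python: multiplying a source sound by a carrier sine wave to produce new sum and difference frequencies. It assumes the numpy and sounddevice packages are installed (neither is part of this book’s toolkit), and it uses a synthesized tone as a stand-in for a real recording; it is only an illustration of the idea, not a production tool.

```python
# A sketch of ring modulation (and simple reversal), assuming numpy and
# sounddevice are installed: pip install numpy sounddevice
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100          # samples per second
DURATION = 2.0               # seconds
t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# Stand-in "source" sound: a 440 Hz tone. In practice this would be a
# recording loaded from disk (a voice, an animal cry, a motor hum).
source = 0.3 * np.sin(2 * np.pi * 440 * t)

# Ring modulation: multiply the source by a carrier sine. The output contains
# the sum and difference frequencies (here 440 +/- 110 Hz), which gives ring
# modulation its metallic, bell-like character.
carrier = np.sin(2 * np.pi * 110 * t)
ring_modulated = source * carrier

# Playing a sound backwards, as in the Tron example, is just reversing samples.
reversed_sound = ring_modulated[::-1]

# Play the original, the ring-modulated version, and the reversed version.
sd.play(np.concatenate([source, ring_modulated, reversed_sound]), SAMPLE_RATE)
sd.wait()
```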
This book focuses on the first three of these four means to design sound. The programming and use of synthesizers to create sounds is a fascinating topic, but it requires at least a book of its own as well as more advanced skills. Likewise, interactive sound also requires a separate book to understand the complexities and software involved.
The aim of this book is to provide a set of material that, with each chapter, builds on previous work that you have learned and put into practice. I have interwoven theory and suggested further reading and listening materials throughout, with the hopes that you will take it upon yourself to improve your skills by exploring the many resources available to help you to learn about sound. I have suggested exercises to help you put the theory into practice, and while you may not want to complete all of these exercises, I believe that the more you undertake, the better you will become. Most of the exercises you can do on your own, so there is no need to be enrolled in a class to do these exercises, but a handful of exercises are better experienced with the participation of a partner or class.
In my experience, many introductory books on sound can get very technical with lots of equations and physics, which might put off a beginner coming at the field from an artistic background. It’s my goal to focus on the creative side of sound design, and give you just enough of a technical foundation to get you started so you can put your creativity to work. Learning more about the technical side is an important step in a professional sound designer’s training, but in my opinion that can happen after you begin to feel comfortable with the terminology and tools available.
I use Audacity as the software sound editor for the examples that demonstrate the techniques in this book. The reason for this choice is simple: it’s free. Audacity has its limitations, and if you’re serious about sound design you’ll find yourself outgrowing it quickly, but if you’re just dipping your toes into the waters of sound design, it’s a great cross-platform tool or complement to other tools in your digital audio workstation (known as a DAW). It’s important to note that Audacity is designed as a sound editor, rather than a multitrack editor. It’s great for editing individual sounds, but as we’ll see, the software becomes more problematic when dealing with mixing multiple tracks. The exercises can be undertaken in any other audio software you are comfortable with, like Audition, Logic, ProTools, or Reaper.
As well, there is a companion website to this book at studyingsound.org that provides examples, tutorials, some of the reading material, links, videos, and other resources that you can consult as you travel on your sound design journey. I’d love to hear about your successes with the exercises, and if you’d like to share your work, be sure to keep in touch!
The book is arranged in a scaffolded fashion: ideally you should follow sequentially, so that you can build on your skills as you go. In chapter 1 we start with learning to listen, and begin to think about sound in a new way to train those ears to do what they were born to do. In chapter 2 we’ll begin to develop a language to talk about sound as an acoustic phenomenon, and learn the basics of digital audio. Then we’ll turn our attention to recording and learn the basics of microphones in chapter 3. Of course, sounds don’t exist in a vacuum, and the space in which sounds occur is important—and the focus of chapter 4. We’ll begin to explore digital audio effects that mimic spatial effects, and then take a deeper dive into some of the other types of audio effects in chapter 5. Chapter 6 puts those technical skills together in exploring the theory and practice of mixing. In chapter 7 we’ll explore an overview of spatial sound, or “3D audio.” Chapters 8 and 9 take a different tack, and present some of the useful theories for understanding sound design and putting those into practice in creating sound for story.
To get started, we need to set up our listening space and our software. Try to find somewhere quiet to work in. While an increasing number of sound designers use home studios, it’s important to minimize external noises in any audio workspace. If you have the option, choose a bigger room, which will generally sound better than a smaller room, but it’s important that this room is as far from neighbors or external noise as possible—a basement is generally the quietest. Ideally, you can use some basic acoustic treatment to help reduce reverberation: foam and/or acoustic tiles can be placed strategically around the room. A blanket can be hung over a window to cut down some of the noise (and the window should be closed). A quick search online for “home studio on a budget” can provide some useful tips for your particular space and link to items you can purchase in your country to improve your sound experience.
If you are on a budget and don’t have a private space to work in, a good pair of headphones is an important investment to listen to your work, and headphones are suitable for someone just starting out. Eventually you will want to purchase a good pair of studio monitors, but for this book headphones will suffice. You want to get a pair of over-ear or on-ear headphones, rather than earbuds. This is for both the sake of comfort as well as audio quality: the smaller the transducer, the less ability the headphones have to reproduce lower frequencies. Although wired headphones will give you better quality, wireless Bluetooth technology is improving. Wireless technology can be subject to electrical interference, however—I used to pick up the dispatch from the local fire station on mine! I keep mine plugged in when I work. If you can afford to purchase new headphones, do some online research as to the current best options for your budget.
As mentioned above, we are going to be using Audacity, which can be downloaded from the project’s website, https://www.audacityteam.org. In addition to the basic software, throughout the book you’ll need to install some extra plugin modules. The instructions for downloading and installing plugins, and links to the plugins available, can be found on Audacity’s website as well as studyingsound.org.
We will also need some basic recording equipment for the exercises. I recommend using an external microphone with your recorder (particularly if you are using your phone as your recording device). You can always purchase more expensive equipment later, but a basic kit is much cheaper than it used to be. A simple handheld recorder like the Zoom H4N or Tascam DR-40 is under $300. You want to get a recorder that has at least one XLR input and one line input, so that you can plug in an external microphone (professional microphones typically use XLR). These handheld recorders also usually come with built-in stereo microphones, but you’ll find yourself using the external inputs more than the built-in mics. You can start with the built-in microphones, and, if you can afford it, add an external microphone—a shotgun microphone is a common first microphone to purchase for your sound design kit. As with much technology, the price of microphones has been dropping, and today the difference between a cheaper microphone and the high-end professional microphones can be minor in some cases. Certainly, while you are starting out and training your ears, the high-end microphones are not necessary. As your listening ability improves (and your playback technology also improves), you will notice the difference between the cheaper and the more professional microphones. Most sound designers have a selection of microphones for different purposes, but we’ll come back to that later.
1 In fact, Coppola misremembers, as Murch is credited as creating the “sound montage” in The Rain People (1969), and was first given the title of sound designer for his work on Apocalypse Now (1979).
1 Hearing and Listening
What is the difference between hearing and listening? Lift your eyes from this page and look straight ahead for a moment: notice that we see many more things than what we are looking at directly. We can focus on an object, but the eyes—and the brain—take in a lot more of our surroundings than just what we are focusing on, whether or not we are aware of it. I’m looking at my computer monitor, but I see my speaker monitors behind it, and the posters on the wall behind them, and my shelves off to the side, and a second desk off to my left where another computer sits. My dog is curled up in the corner. There is a black rug under my chair, and I see my arms moving, and all kinds of small details that aren’t part of what I’m looking at. I see far more than I usually observe. A similar perceptual phenomenon happens with hearing. We are surrounded by sounds, and most of the time we are in a passive hearing mode, actively listening only when we are talking with someone (and actually paying attention to what they say!), in a potentially dangerous situation like crossing the road, or listening for a responding “beep” to a text message. Even the music we listen to is often just on in the background, a wallpaper of noise. Most of the time, sounds are just there, all around us. We listen with ears half open, not consciously paying attention to sound unless it’s something that we are actively focusing on. We hear without listening, just as we see without looking.
Can we train ourselves to listen? How can we become better listeners? Like anything else we learn, what we need is practice, and we start our journey into sound design by becoming more aware of the sounds around us. We can learn, over time, to spend less time with ears half open and more time actively listening. Like a photographer walking around and mentally framing shots while scanning the landscape, we can learn to be aware of, and to think about, the sounds around us. Becoming a listener doesn’t happen overnight, but with time and patience and practice, you will find yourself noticing more and more of the sounds around you. You’ll find yourself hearing sounds that others haven’t noticed, and you’ll hear sounds that you never noticed before, and, sometimes, sounds you wish you hadn’t noticed! Unfortunately, once you’ve opened your ears, the world becomes a very noisy place.
This chapter will introduce hearing and listening and begin to provide a language to think about and talk about sound. Listening is work that should be practiced and referred to again and again, until it becomes second nature. There are many exercises here to get you thinking about the sounds you’re hearing, and training you to listen to them instead of just hear them. Training your ears is just like training your muscles in the gym: you can’t transform yourself overnight. You have to keep going back and working at it, and it must be sustained or you’ll find yourself losing your gains.
Exercise 1.1 Quiet Time
We’ve probably all tried sneaking into our house at night: every sound we made seemed suddenly amplified. Trying to be quiet is a great way to focus on actively listening. Try standing up from your seat without making any sound. Try it again with eyes closed. Listen to the sounds. How was the process of listening to your own sounds different from the way that you normally hear sound? (adapted from Schafer 1992)
1.1 Talking and Writing about Sound
Throughout this book you will find exercises and suggestions to get you to think about, experience, and practice sound in new ways. Keeping a notebook to write down your thoughts will help you to formulate your own ideas about sound and track your progress. You might also take a few moments to compare your own thoughts with those of friends, colleagues, or classmates as you follow along, or check the companion website (studyingsound.org) for another perspective.
Purchase a new journal for your sound practice. It helps if it’s pocket-sized. You might wonder why I suggest a paper notebook and not your computer or phone. In theory, you could use a portable computer (laptop, phone, or tablet), but you’ll find a pocket notebook will be handy to keep with you on a walk where you may not want to bring a computer (for instance, out in the rain). A phone isn’t as effective to take notes on because the act of typing on a touchscreen requires you to focus visually on the phone and concentrate on that rather than the sound, which can interfere with the practice. You may also want to use your phone for other aspects of the exercises in the following chapters, as a pocket recorder, for instance, or to check frequencies or the volume of sounds you are hearing.
Once a day, practice sitting still for five minutes and writing down what you hear. You can sit in a different place, or sit in the same place. Sit at different times of day, and in different moods, or the same time and place and mood. What matters isn’t so much what sounds you hear as the practice of actively attending to, concentrating on, and thinking about those sounds. You need to do this daily, rather than trying to pack in a week’s worth all at once, because you need to start training yourself to listen, and this takes time. If you’re serious about sound, listening is the most important skill you can have.
In addition to this daily exercise, keep writing down your thoughts about the other exercises and any related readings or news media you come across, and note any interesting sounds you hear in real life, in movies, or in other media, so you can reflect on your learning, refer back to it in a few months, and see your progress. You may also come up with some great sound design ideas that you don’t want to forget, and your sound journal is a great place to jot these down as you go.
To be a sound designer, we need a language to talk about sound. Language is one of the tricky aspects of dealing with sound. And even after we’ve grasped the language, chances are we’re going to have to talk about it with someone who hasn’t yet learned that language! As children, we’re taught a lot about visuals. We learn about shape and color and texture, and we learn the language to talk about these. If I asked you to draw a circle with a diameter of five centimeters and fill it in with a smooth, lime green color, you could probably come up with something very similar to what I have in my mind. But how do we talk about sound? Sound is time based, which makes it more difficult, and it’s never the same twice. Even if we use an electronic reproduction of a sound, we don’t hear it the same way twice, and the environment in which it’s played is also always shifting and plays a role in our hearing. More importantly, we’re also usually not taught a language to describe sound unless we are referring to musical sound, which has its own specialized language and doesn’t actually refer to the sound of the notes played, only the notes themselves.
Exercise 1.2 Describing Sound
Undertake this exercise every day, and we’ll build on it as we go: Take five minutes and sit quietly, writing down all of the sounds that you hear. The first time you try this exercise, you might come up with a list a little like this one, of the sounds occurring as I type this out:
• Music in the background
• A car driving by
• My fingers typing on the keyboard
• Breathing of my dog next to me
• A scraping sound of someone shoveling snow outside
• The backup beepers on a truck at the construction site
• My own breathing
• Whirr of the heating duct
• Hum of the overhead light
This list is a good start, and we’re training our ears based on how we’ve been taught to listen in the past; but let’s dig a little deeper here.
1.1.1 Sounds and Their Causes
There are two ways I’ve described the sounds I heard in my listening exercise 1.2. The first is in terms of their cause—in other words, the thing that is causing, or making, the sound: for instance, “a car driving by.” The problem with such a description when used to describe sound is that it only tells you what sound I’m hearing if you know what type of car is being driven (a truck sounds different from a Porsche), what the weather conditions are (tires in rain sound different from tires on dry road), what time of day it is (a car in the middle of the night will appear to sound louder), what the speed of the car is, what gear it is in, what the mechanical condition of the car is (is there a hole in the exhaust?), what kind of tires it has (winter tires make a different sound from summer tires), and more. Without all of this detail, we might conjure up a generic concept of “car-ness,” but it’s not a very accurate descriptor of what I heard. How the car is moving is also an important indicator of what is happening: is the driver squealing the tires with bass thumping out the windows, or creeping past very slowly and eerily, suggesting some form of surveillance or stalking? These are two very different sounds! We have to agree on what my description of the car means to even begin to guess all of the associations with the sound of a car driving by.
Let’s look at another from my list: “my fingers typing on the keyboard.” We’ve all typed on a keyboard, but keyboards have very different sounds, and the speed of typing depends on the skill of the typist. The volume of the typing might depend on whether the person is frustrated or angry. The tempo may be altered if they are stopping and thinking about what they are typing, or if they know what they are going to type in advance. An Apple keyboard with its low-lying keys sounds very different from a cheap PC keyboard. I have one key that sticks and requires me to hit it harder. So again, “typing on a keyboard” is not really an accurate description of what “typing” sounds like, only the cause behind the sound. The first thing we can learn as sound designers is to be more descriptive in our journals. Moving forward, as you practice listening, get as descriptive as possible for each sound. This requires us to really concentrate on the many attributes that go into the sound, rather than just the cause behind the sound. Concentrate and think about the sounds you hear and imagine trying to describe them in a way that someone could use to reproduce the sound.
1.1.2 Onomatopoeia
The second type of description I used in exercise 1.2 relates to onomatopoeia: a word formed in imitation of the sound it describes. I’ve used the “whirr” of the heating and “hum” of the light. We use these types of descriptions with animals a lot: a dog’s “bow wow,” for instance. But did you know that onomatopoeia is dependent on language and culture? A dog says “av-av” in Serbian and “hong-hong” in Thai. So much for using words to describe sounds! If you’re a gamer or anime fan, you’ve probably heard the phrase “doki-doki”: this is the Japanese term for the heart beating quickly. It’s not just a literal sound, but also carries the meaning that one is in love, and their heart is racing. Japanese actually separates onomatopoeia into three categories, and the language has about 1,200 onomatopoeic words, compared to only about 400 in English (Kincaid 2016).
Exercise 1.3 Gerald McBoing-Boing
Dr. Seuss created a character called Gerald McBoing-Boing (TV series, 1956) who talked in onomatopoeia sound effects: “When Gerald started talking, you know what he said? He didn’t speak words—he went boing boing instead!” The animation uses sound effects, but the book relies on onomatopoeia to describe the sounds. How many onomatopoeia words can you describe off the top of your head? How much can you communicate with just onomatopoeia? Try to write an entire day’s journal entry just using onomatopoeia (hint: you’re going to have to make up some new examples of onomatopoeia).
1.1.3 The Importance of How We Think and Talk about Sound
How can we describe sounds in a way that everyone understands? To do this, we need to learn more about acoustics and use a more precise technical language for sound. There is so much more to sound than what was in my list in exercise 1.2. If you look at my list, you’ll see I didn’t describe where sounds were occurring in space in relation to my position: was the music in front or behind me? Did the car drive by me on the right or the left? How far away was the construction site? The placement of sound in relation to our own bodies also affects the way we perceive sound. We’ll be tackling a language to describe sound and focusing our ears on all of these issues in the coming chapters. Gradually, as we progress through our journey, we will learn to fine-tune our descriptions of what we hear. For now, start to think about how descriptive you can get about the sounds you hear on your daily listening practice. Try to capture as much information about the sounds as possible. The more descriptive you get, the more you’ll find you need to really focus on the sound itself and not just the cause of the sound.
How we think about and talk about sound influences how we use sound in our creative processes. In sound libraries—collections of sound effects recorded for our use as sound designers—sound effects are often categorized based on what caused the sound: “airplane” sounds, for instance, or “bird sounds.” But sounds could be categorized based on what we might use them for: “scary sounds” (hawks or crow sounds are often used in horror), or “morning” sounds (the rooster or the dawn chorus). When we design sounds for media, we often use sounds that are not tied to their actual causality. For instance, we use the snap of frozen celery sticks for the breaking of a bone. Who would think to look for “vegetable sounds” in a sound effects library for their horror film unless they were aware of these uses?
In other words, how we describe sounds to ourselves and to others can influence the creative uses of those sounds. It’s important, then, to think “outside the box” in our descriptions and categories, and to move beyond causality into other aspects of sound. To do that, we need to practice listening, and we need to learn a new language for talking about sound.
Exercise 1.4 Categorizing Sound
Take a list you created in one of your daily listening exercises, and think of the ways in which you might categorize these sounds. For instance, you might divide the list into pairs of opposing elements (loud or quiet, natural or mechanical, constant or intermittent, near or far).
What other categories can you come up with to group your sounds? What do the categories tell you about the types of sound that you hear, and the ways that you think about sound?
Exercise 1.5 The Sound Walk
Sound walks are simply walking while paying attention to sound, rather than sitting in one place, so that we can experience several different places and listen to the changes. On a sound walk, we are quiet and listen with attention to all of the many sounds that we normally ignore. We can do sound walks alone or with a partner who guides us, blindfolded, around the walk. The sound theorist Hildegard Westerkamp (2007) suggests first starting with listening to your own body while moving. Listen to your footsteps and how they change on different surfaces. Make a sound by clapping your hands or whistling. Try the sound in different rooms. How does it change? Once you’ve practiced listening to yourself, pay attention to the environment. Do you hear other people? Can you detect rhythms? What are the loudest and quietest sounds that you hear? Focus on a sound and walk toward it. Notice how it changes with proximity. Move indoors. How does sound change in different environments? How did your listening ability change when you couldn’t see?
Exercise 1.6 Destined to Repeat
“In Zen they say: if something is boring after two minutes, try it for four. If still boring, try it for eight, sixteen, thirty-two, and so on. Eventually one discovers that it’s not boring at all but very interesting” (Cage 2013, 94). Find a sound that at first might seem boring, but after repeated listening becomes much more interesting. How does the sound (appear to) change over time? Describe it!
Exercise 1.7 Listening for the First Time
Listen attentively to something that you typically hear but never listen to, such as the full cycle of a dishwasher, washer, or dryer. What did you hear that you never noticed before? How difficult was it to pay attention for such a long length of time? Did you mentally add beats, or musical notes, or anything to force a structure or pattern onto it? How long were you able to listen before your mind started wandering? Can you train yourself to listen for longer? Repeat this exercise after you’ve finished the book, and compare notes with your first listen.
Exercise 1.8 Soundmarks
R. Murray Schafer, one of the first acoustic ecologists, writes, “Just as every community has landmarks which make it special and give it character, every community will also have original soundmarks. A soundmark is a unique sound, possessing qualities that make it special to a community” (1992, 123). Examples might be a local public clock, foghorns, trains, and so on. Find and describe the soundmarks in your community—either your home, your neighborhood, or the entire city.
Exercise 1.9 Sonic Fingerprint
What sounds are personal to you that others might be able to identify you by? For instance, my dog used to be able to identify my car from all the others that went by our busy street and would run to the window when he heard it. One exercise I try in my classes is to have four students come up to the class with their sets of keys. Facing the front, another student stands behind them and subtly shakes their keys. Can the students recognize which set of keys is theirs by the sound alone? I find the majority of the time they can guess their own keys, even though they’ve never consciously paid attention to the sound before. Think about your own personal sonic fingerprint(s): perhaps it’s your car, an unusual walk, or your keys, and come up with a list of sonic ways that someone close to you might be able to identify you. (adapted from Schafer 1992)
Exercise 1.10 Sound Timer Reminder
Send yourself a little reminder to stop and listen. We can get distracted pretty easily and forget to pay attention to what is around us. You can get a timer for your phone or watch, and set it to go off a few times a day. When it does, take sixty seconds out to focus on and listen to the sounds of wherever you are. Listen to how basic sounds change depending on the environment—your footsteps change based on the temperature outside, what you’re walking on, what mood you’re in, what the weather is, what other sounds are around you, where you are, and so on. Pick a sound to focus on, like footsteps, clicking your fingers, or your breathing, and write down how that sound changes throughout the day.
1.2 The Ear and the Brain: How We Hear
While we focus on listening practice, it’s worth understanding what is happening on the biological side of hearing. In a sense, we hear with our whole bodies and not just with our ears. Our bodies have resonant cavities in them in which sound vibrates: our lungs, our bones, and even our eyeballs resonate with different frequencies. Scientists have measured the resonance of the human body as a whole at between 5 and 16 hertz (Hz, a measure of vibrations per second that we’ll come back to in chapter 2) (Kitazaki and Griffin 1998). Different parts of our body vibrate at different rates, though, with our head vibrating at between 20 and 40 Hz.
The human eyeball typically resonates at about 19 Hz, which is below the normal threshold of hearing (meaning we can’t hear a sound at 19 Hz). In the 1980s, a scientist named Vic Tandy was working in a “haunted” medical lab that many people found left them feeling uneasy. One day he brought in his fencing sword and noticed it vibrating. He discovered that the sword vibrated at about 19 Hz, and traced the vibration to a fan in the building. Shutting the fan down shut down all the reports of ghosts. Tandy later tested the theory in a fourteenth-century “haunted” pub cellar and found the same frequency (see Jasen 2016). Could it be that what we call ghosts are just cases of our own eyeballs resonating? More recent work has found that the roar of a tiger is 18 Hz, and could be used to disorientate and paralyze prey in advance of an attack by resonating their eyeballs (American Institute of Physics 2000). Different frequencies of sounds, in other words, affect our physical body in different ways.
In addition to sensing sound through our bodies as a whole, a common means of hearing is through bone conduction, and hearing-impaired individuals can have some hearing sense through this method. It’s been reported that the composer Ludwig van Beethoven used bone conduction to hear after he went deaf, by using his jawbone: clenching a wooden rod in his teeth and resting it on the piano, he could sense the vibrations through his jaw (Larkin 1971). Bone conduction bypasses the eardrum, and vibrates the inner ear directly through the bones of the skull. Bone conduction headphones sit on the bone in front of (or behind) our ears, and are used by the military because they don’t cover the ear canal, so they can be used to supplement regular hearing for communication. In this way, we can hear everything going on around us with our ears, and any communication through the headphones. Apple was recently granted a patent for a method to incorporate bone conduction technology into their own headphones, so it’s likely bone conduction is going to become more commonplace in the future (Dusan et al. 2013).
Exercise 1.11 Bone Conduction Headphones
If you’re particularly interested in bone conduction, or want a set of headphones you can wear while also listening to the world around you (while jogging, for instance), you can purchase some bone conduction headphones for a reasonable price. If you have access to a pair, write down your experience of bone conduction listening to music or sound in your journal. How does the sound through bone conduction differ from regular headphone listening? What aspects of the sound are emphasized? Do you hear more or less through the bone?
Exercise 1.12 Bone Conduction with a Dowel
Here we will repeat Beethoven’s technique. Get some wooden dowel (3 mm, or about 1/8″, in diameter is enough, and about 40–50 cm, or 16–20 inches, long) from your local hardware store. Put earplugs in your ears or use your fingers to block your ears. Put one end of the dowel in your teeth and bite down. Put the other end on a speaker, piano, guitar, or other vibrating surface. How does this alter what you hear? What aspects of sound do you miss out on?
Exercise 1.13 Bone Amplifier
In this experiment we will build a jaw-bone conducting amplifier (adapted from Oakland Toy Lab, n.d.).
• Two wires, about 30 cm (12″) each
• Wooden dowel, about 6 mm (1/4″) in diameter and about 10–15 cm (4–6″) long
• 3.5 mm audio plug, or “mini jack” (male), for soldering—you may have to purchase these with a cap that you should remove
• DC motor, 1.5–3 V, 15,000 RPM
• Soldering iron and solder
• Drill with 1/16″ bit
1. Strip about 2 cm (3/4″) of insulation from each end of the wires.
2. Solder one end of each wire to one of the tabs on the motor.
3. Solder the other two ends onto the tabs on the jack.
4. Drill a hole in one end of the dowel with the drill bit.
5. Push the end of the motor into the end of the dowel with the hole in it. You may have to wiggle it or put some pressure on it to get it to sit firmly in the hole.
6. Plug it into your computer, stereo, or phone’s headphone port and bite down on the dowel. You’ll need to turn the volume right up, particularly if you’re using it with your phone. Keep your fingers free from the dowel so it can vibrate correctly.
7. Put in some earplugs or plug your ears with your fingers and listen!
1.2.1 The Outer Ear
While bone conduction is interesting, most of our hearing takes place through our ears. The outer, fleshy part of the ear is known as the pinna (plural pinnae). Another name for this visible part of the ear is the auricle. The pinna funnels sound toward our ear canal. If our ears were cut off, we could still hear, but it would be much more difficult, particularly in localizing (finding the direction of) sounds. With the pinna funneling sound into the canal, we can have a greater sense of our auditory environment and directionality. High frequencies reflect off the pinna in ways that differ according to the angle of the sound. Because we all have differently shaped ears, we hear sound slightly differently. In fact, our pinnae are so unique that earprint identification can be used in forensics like fingerprints (see, e.g., Meijerman, Thean, and Maat 2005).
Approximately 2 to 3 cm inside our ear holes—the auditory canal—is the eardrum, also called the tympanic membrane (tympani are kettle drums used in the orchestra). Unlike the skin of a drum, though, the tympanic membrane of our ear is a very delicate, thin membrane, approximately 0.1 mm thick. It can be easily pierced, which is why sticking anything into our ear canal—like cotton buds—is dangerous. The tympanic membrane is so sensitive that it can even be pierced by very loud sounds or pressure changes as when scuba diving or flying in an airplane. The many nerve fibers in the membrane make the eardrum very sensitive to pain. The tympanic membrane vibrates with the different sounds that enter the ear canal and transmits those vibrations through to the bones of the middle ear where they are amplified for hearing.
1.2.2 The Middle Ear
The middle ear consists of the space between the tympanic membrane and the oval window. This hollow space of the middle ear is known as the tympanic cavity, and is surrounded by the tympanic bone, which can function as a bone conductor. The tympanic cavity works as an amplifier that takes the vibrations from the tympanic membrane and transmits them to the inner ear via three tiny bones called the ossicles: the hammer (the malleus), the anvil (the incus), and the stirrup (the stapes). The malleus and incus developed through evolution from the upper and lower jaw bones of reptiles, a development that has been traced in the fossil record. Our frequency range and sensitivity are determined by the shape and arrangement of these bones, which is why some mammals can hear ranges of sounds that humans cannot. The last of the three bones, the stapes, sits inside a membrane-covered opening in the bony separation between the middle and inner ear, known as the oval window.
The tympanic cavity is connected to the nasal cavity by the eustachian tube, which allows us to equalize pressure in our ears. We can manually adjust the pressure (for instance if we are scuba diving) using what is known as the Valsalva maneuver, in which we pinch our nose closed and then blow out gently. Blowing too hard can damage the ear, so this must be done carefully.
1.2.3 The Inner Ear
When the stapes vibrates, it moves the fluids in the inner ear. Unlike other areas of the ear, the inner ear is filled with fluid and is responsible for both sound and balance. The inner ear contains the three semicircular canals—ring-like structures that are responsible for determining our sense of balance. As fluid, called endolymph, moves around the canals with the position of our head, sensors are triggered that allow our brain to determine our head position. Two vestibular sacs in the inner ear—the saccule and utricle—provide information about linear acceleration and gravity.
The other area of the inner ear, more important for sound, is the cochlea, a snail-shaped organ consisting of many tiny hair-like structures known as cilia. The entire length of the cochlea is lined with these cilia, and these are each attuned to different frequencies. As a sound wave moves in the cochlea, different frequencies will trigger different cilia by bending them slightly, sending electrical signals via the auditory nerve to our brain to tell us which frequencies were heard. There are many thousands of these cilia (between 12,000 and 24,000) gathering sound waves and sending impulses to our brain.
A large part of the cochlea is dedicated to the middle frequencies, with a peak range of 3,500–4,000 Hz. Most of what we hear in our world—including music and speech—is in this range of our hearing. In fact, when the gramophone was invented, most records (78 RPM shellac discs) until the mid-1940s had a top range of about 4,200 Hz (see Browne and Browne 2001). Even though we can hear much higher frequencies, most of our hearing takes place below about 8,000 Hz, and we are particularly sensitive to the speech range.
1.2.4 Hearing Development
The hearing organs start to grow in a fetus at just three weeks of pregnancy, and by week eighteen a baby will begin to hear sound. Soon after, the baby will begin to respond to voices or other noises it hears. Because there is a barrier between the baby and the world, the volume is muffled to about half of what we would hear outside the womb. But the baby can also hear sounds in the mother’s body—the grumbling of the intestines, the heartbeat, and so on—and these sound much louder to the baby than they would to someone outside the body. Before a baby is even born, it can recognize the sound of its mother’s voice. You may have heard of attempts to increase a baby’s intelligence by playing music to it while in the womb, but there is no evidence that this works. What babies do learn, though, is the rhythm and cadence of what will become their native language—they can tell the difference between English and French, for instance, and can recognize the rhythm and pattern of stories that have been read to them in the womb after they are born.
Exercise 1.14 Our First Sounds
Write in your journal what it must be like for a baby hearing sound from the womb. What sounds would they not be able to hear because of the muffled barrier of the womb? What sounds would they hear more loudly because of where they are?
Exercise 1.15 Hearing, not Listening
We hear constantly, even in our sleep, to the point where sounds can shape our dreams. Can you recall any dreams you’ve had where an external sound entered your dream? I know I’ve heard my phone ring in my sleep, then gotten up and discovered it hadn’t rung after all. Try setting a timer on your computer or phone to play a sound quietly just before you wake up (before your alarm clock, if you set one), and see if it gets incorporated into your dreams.
1.3 Human Hearing Ability
Humans generally have a hearing frequency range of about 20 Hz to about 20,000 Hz (20 kilohertz, or kHz)—twenty vibrations per second up to twenty thousand vibrations per second. As we age we lose the higher frequencies, with this deterioration beginning at about age eighteen. Most people over about the age of thirty have already lost the top few thousand hertz of that range. Fortunately, there isn’t much in that range that humans need to hear, so you likely will not notice. Currently there is nothing we can do to combat this age-related hearing loss. Some people have used this type of hearing loss to their advantage: the “mosquito” ringtone for phones is at a frequency of about 17 kHz, and is designed for young people to use (in classrooms, for instance) without the knowledge of older people, who won’t be able to hear it. Older people have also used this to their advantage with “mosquito alarms,” which are played outside some convenience stores to deter teenagers from hanging around.
Sound above our hearing threshold is called ultrasound. You’re probably familiar with dog whistles: dogs can hear tones above our own hearing range, and most dog whistles are about 22 kHz. But even dog hearing is unimpressive compared to some other creatures: bats echolocate at frequencies of up to about 200 kHz. The wax moth can hear sounds as high as 300 kHz. On the other hand, some creatures can hear frequencies well below our hearing threshold, called infrasound—humpback whales have been recorded singing as low as 3 Hz, and the mantis shrimp, which can make sounds as high as 100 kHz, is also capable of sounds as low as 1 Hz.
Exercise 1.16 Imagining the Hearing of Others
We now know that some animals hear sounds we can’t hear. But what’s even more remarkable is the way that some animals hear. Some fish have cilia along a line on their sides, known as the “lateral line,” so their whole body responds to sound waves. One type of squid, the longfin inshore squid, changes color based on sound—its chromatophores respond to changes in the environment, including sound. This exercise is a practice in creativity: imagine your sonic environment from the perspective of another creature, and write down what your listening environment sounds like.
Exercise 1.17 Testing Your Hearing Range
You can test your hearing using a tone generator, which you can find at studyingsound.org. Use headphones. Set the volume of your computer to a comfortable level. Start in the middle range, which, as you learned when discussing the cochlea, is not the technical middle but the range where our hearing ability peaks, at about 3,500 Hz. Reduce the frequency to the point where you can no longer hear the sound. Record the lowest frequency that you can hear. Note that the low sounds may drop off because your headphones or computer can’t reproduce those frequencies, not because your hearing is damaged. Now try going up in the other direction. What is the highest frequency that you can hear? A professional audiologist will test your frequency range for speech, but rarely tests above or below speech levels (in my experience, an audiologist tested only 200 to 8,000 Hz). You may need to use a subwoofer or studio speakers (monitors) to get a more accurate representation of your low-frequency threshold.
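If you would rather generate the test tones yourself than use an online tone generator, here is a minimal sketch in Python. It assumes the numpy and sounddevice packages are installed (they are not part of this book’s toolkit), and the frequency steps are only illustrative; start at a low, comfortable volume, especially before the high-frequency tones.

```python
# A simple stepped tone generator for the hearing-range exercise, assuming
# numpy and sounddevice are installed: pip install numpy sounddevice
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100  # samples per second

def play_tone(frequency_hz, duration_s=2.0, amplitude=0.2):
    """Play a pure sine tone at the given frequency."""
    t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    sd.play(amplitude * np.sin(2 * np.pi * frequency_hz * t), SAMPLE_RATE)
    sd.wait()  # block until playback finishes

# Step down from the sensitive middle range toward the low end, then up toward
# the high end, and note where the tones become inaudible.
for freq in [3500, 1000, 250, 60, 30, 20]:
    print(f"Playing {freq} Hz")
    play_tone(freq)
for freq in [4000, 8000, 12000, 16000, 18000, 20000]:
    print(f"Playing {freq} Hz")
    play_tone(freq)
```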
Exercise 1.18 The Cocktail Party Effect
Most hearing tests will play multiple sounds at once to see how well you can differentiate speech from background sounds. The cocktail party effect is the brain's ability to focus on and pick out one sound in a noisy environment, like trying to listen to someone talking to you at a busy party. How loud can background sounds get before you can no longer hear what is being said? Difficulty with this kind of speech differentiation is often the first thing people notice when they have hearing loss.
When it comes to loudness, humans can hear sounds between about 0 and 140 decibels (dB). The decibel scale is referenced to the quietest sound the average ear can detect (0 dB) rather than being an absolute measure of loudness, so sounds below 0 dB do exist; we just can't usually hear them with our ears (we'll come back to decibels in the next chapter). Sounds above 140 dB also reach our ears, but at that level sound is painful, causes permanent hearing damage, and will likely rupture the eardrum, so it's not a threshold we want to cross.
We begin to damage our hearing at about 80 dB if we're exposed to the sound for many hours, as in some workplaces, and the damage builds up over time; the European Union safety cutoff for workplace noise is 80 dB. At 90 dB it takes less time for damage to occur, though short periods at 90 dB are usually safe. At 115 dB (which is quieter than many rock concerts!), even a very brief exposure will cause irreversible damage. The cilia in our ears do not regenerate, so once they are damaged, the loss is permanent. Although we can't help age-related hearing loss, we can control noise-induced hearing loss.
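To get a feel for how quickly safe listening time shrinks as the level rises, occupational guidelines such as NIOSH's recommended exposure limit use 85 dB(A) over eight hours as a reference point, with a 3 dB "exchange rate": every 3 dB increase halves the permissible daily exposure. The short sketch below applies that rule of thumb; it is a simplification for illustration only (regulations vary by jurisdiction), not medical or legal advice.

# Rough permissible-exposure calculator using a NIOSH-style 3 dB exchange rate:
# 8 hours at 85 dB(A) is the reference; each +3 dB halves the allowed time.
REFERENCE_LEVEL_DB = 85.0
REFERENCE_HOURS = 8.0
EXCHANGE_RATE_DB = 3.0

def permissible_hours(level_db: float) -> float:
    """Approximate allowed daily exposure (in hours) at a given A-weighted level."""
    return REFERENCE_HOURS / (2 ** ((level_db - REFERENCE_LEVEL_DB) / EXCHANGE_RATE_DB))

for level in [85, 90, 95, 100, 110, 115]:
    minutes = permissible_hours(level) * 60
    print(f"{level} dB: about {minutes:.1f} minutes per day")

Running it shows, for example, that 100 dB allows only about fifteen minutes a day, and 115 dB less than half a minute.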
Table 1.1 Approximation of sound loudness (typical approximate levels in dB)
~180 dB: Rocket launch (measured on the platform)
~140 dB: Gunshot (at close range)
~120 dB: Plane taking off
~115 dB: Loud concert, yelling at maximum volume, siren
~100 dB: Pneumatic drill, jackhammer
~90 dB: Subway train, power mower
~85 dB: Bass drum, legal limit for industrial noise in many places, motorcycle, loud club
~80 dB: Busy restaurant, EU limit for noise exposure in workplaces without hearing protection
~70 dB: Hairdryer, alarm clock, traffic
~60 dB: Busy street, talking loudly
~40 dB: Mosquito near you
~30 dB: Quiet room, recording studio background level
~10 dB: Leaf falling on the ground
1.3.1 Equal Loudness
Different frequencies have different perceived volumes, because human hearing sensitivity varies with frequency. We are less sensitive to lower frequencies, so low-frequency sounds are often given a boost by built-in equalizers in our stereo systems to make the frequency balance appear even. To illustrate this sensitivity, we can use what are called the Fletcher–Munson curves. These equal-loudness contours plot the sound pressure level (dB SPL), across the audible frequency spectrum, at which pure sine tones are perceived as equally loud.
To read these diagrams, first look along the bottom of the chart: these are the frequencies, and note that they are not evenly spaced. This chart shows frequencies from 20 Hz to about 15 kHz. On the left, running up the chart, are the decibel levels. As stated above, we hear different frequencies at different perceived volumes: a sound at 100 Hz needs a level of nearly 40 dB before we can hear it, whereas at 1,000 Hz, where we are more sensitive, we can hear a sound at 0 dB, and in the range we are most sensitive to (about 3,000 Hz) we can actually hear below 0 dB in optimal conditions (though it's unlikely you could hear those levels anywhere but in a specially designed studio, and only if you have excellent hearing).
In simple terms, our ears are not very good at hearing the lower frequencies compared to the higher ones. As loudness increases (the higher lines on the graph), the contours for the lower frequencies flatten out, meaning that at higher sound levels the ear becomes relatively more sensitive to (better at hearing) low frequencies. Above about 6,000 Hz the ear becomes less sensitive again. This is one reason we tend to "crank up" music: the added bass we hear at higher volumes makes the music feel richer, since we perceive those bass frequencies more effectively as the level goes up.
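This frequency-dependent sensitivity is also why measured noise levels are usually "A-weighted" (written dB(A) or dBA): the A-weighting curve, loosely derived from an equal-loudness contour, discounts low and very high frequencies roughly the way the ear does at moderate listening levels. As a sketch, the standard A-weighting formula can be written in a few lines of Python (the test frequencies below are arbitrary examples):

import math

def a_weighting_db(freq_hz: float) -> float:
    """A-weighting gain in dB at a given frequency (IEC 61672 formula).
    Roughly mirrors the ear's reduced sensitivity to low frequencies;
    approximately 0 dB at 1 kHz by definition."""
    f2 = freq_hz ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00

for f in [20, 100, 1000, 3000, 10000, 16000]:
    print(f"{f:>6} Hz: {a_weighting_db(f):+6.1f} dB")

At 100 Hz the weighting is roughly -19 dB, which is consistent with the contours: a low tone has to be physically much louder to be perceived as equal in loudness to a 1 kHz tone.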
Exercise 1.19 Remanence
Remanence “is the continuation of a sound that is no longer heard” (Augoyard 2009, 87), like a musical earworm: the sound gives the impression of remaining after it’s no longer there. Keep your notebook handy and track any remanence you experience in a day. Are there any common traits among the sounds that lead to remanence for you?
Exercise 1.20 Sudden Silence
Turn the power off where you live. How many sounds were there in the background that you hadn’t noticed before? When you turn the power back on, how many new sounds are added back into your daily environment that your brain had learned to tune out?
Exercise 1.21 A Day without Sound
For this exercise you will need some equipment: at a bare minimum, a set of very good ear plugs. Ideally, you will use earplugs and then wear safety ear muffs over those. Remove sound from your life for one day (or half a day will suffice). Be sure you are going to be safe by staying with a friend or staying at home. Write a journal entry of your time without sound. Once you’ve spent a few hours without sound, how does returning to sound change the way you hear sound? What new sounds do you hear that you hadn’t noticed before?
Exercise 1.22 Listening to Auditory Streams
In any soundscape, there are usually different things making sounds. We can think of these like different instruments in an orchestra. Find a busy soundscape, and spend a minute listening to each separate stream, or auditory source, focusing on the individual sounds and then on the whole. How many separate streams can you hear? What is the busiest place you’ve found in your sound walks? What is the least busy?
Exercise 1.23 Listening to Media
Once you have had some practice listening to a variety of natural environments, try comparing that with listening to a film or video game. If you’re alone, it’s easiest to do this exercise with a film, but if you have a friend with you who can play a game while your attention is on the sound, you can do it that way, too. Pick a film that you know well and have watched already at least once. Turn your back to the screen and just listen to the film. What do you hear that you didn’t notice before? What sounds don’t resemble the real world you’ve been listening to, and why?
1.4 Protecting Your Hearing
Probably the greatest damage to your ears is going to come from loud sounds, whether from your iPod, from long-term exposure to a noisy workplace (everything from nightclubs and rock concerts to landscaping with power tools), or from a sudden loud sound. Fireworks at close range (150 dB), gunshots (140 dB), race car engines (140 dB), and industrial machines are the biggest culprits in urban life, but natural events can also cause great damage: thunder at close range is about 120 dB, and earthquakes have reached at least 250 dB. It's been estimated that Krakatoa was about 180 dB and ruptured the eardrums of people forty miles away. The Tunguska meteor explosion in Russia in 1908, the loudest known sound, was about 300 dB. Even the blue whale sings at nearly 200 dB! In other words, there are some things we can't control that we may be exposed to in our lifetimes, but most of the time we can control our exposure to noise by using earplugs (which usually reduce sounds by about 20 to 30 dB), wearing ear protectors (a good pair will reduce sounds by about 30 to 40 dB), and not playing our music too loudly.
The canal from the outer ear to the tympanic membrane contains ear wax, called cerumen, which may be wet and waxy or dry, depending on your genetics. The wax protects the ear from dust, microorganisms, and foreign material. Take great caution before cleaning out wax with a cotton bud or other object: it is better to wipe away any wax that has already exited the ear canal and not put anything into your ears to remove the wax yourself. Not only do you run the risk of accidentally perforating your eardrum, but you can end up pushing the wax deeper inside and impacting it in the canal, where it will reduce your hearing and must be removed by a doctor with a special instrument. Don't use ear candles, tinctures, or medications to clean the wax unless you are under the care of a physician.
Cold weather can also cause damage to your hearing over time, so it’s worth wearing a hat or earmuffs in the winter if you live in a northern climate. This damage is known as surfer’s ear, a form of exostosis. It’s not going to happen overnight, but over time the tympanic bone will thicken and develop new bony growths in an attempt to protect the inner ear from the cold. The thickened bone can actually trap water in your ear and lead to infections. If you spend a lot of time in cold water or outside in the cold, be sure to invest in something to keep your ears warm.
Tinnitus, often described as a ringing in the ear but which can also present as a hiss, a grinding sound, or other auditory phenomena, is frequently the first sign of hearing damage. Hearing damage can be caused by a number of factors: disease, injury, exposure to noise, stress, and medications can all affect hearing ability. Some common over-the-counter and prescription medications, such as acetaminophen, narcotics, antidepressants, and anticancer drugs, can cause temporary or permanent damage to hearing, called ototoxicity. If you're serious about a career in sound, or simply want to protect your ears, it's important to discuss ototoxicity with your doctor and pharmacist whenever you are taking a new medication. Not all doctors are aware of the ototoxic effects of some medications, and you may not notice until it's too late, so do your own research. If you are taking medication and develop tinnitus, see a professional right away to discuss whether the cause could be your medication. Hearing loss will not only diminish your ability to work as a sound designer and to listen to music; studies have shown it can also contribute to dementia and depression, so it's worth caring for your ears!
1.5 Headphones Guide
Actively listening to sound is also referred to in sound design terms as monitoring, since you are monitoring what is occurring. Unless you have a home studio set up, headphones are the best way to monitor sound. A frequency response chart, which resembles the Fletcher–Munson graph, is usually included on the box or leaflet that comes with a set of headphones. A flat response is ideal for monitoring, but as long as you know where the peaks and valleys are on your headphones, you can make adjustments. For example, if you know your headphones have a bump at about 500 Hz, you can keep that in mind when you listen to, mix, or master files (more about that later).
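As an illustration of that kind of compensation, the sketch below builds a simple peaking EQ using the widely circulated Audio EQ Cookbook biquad formulas and applies a gentle cut around a known bump. The 500 Hz center frequency, the -3 dB of gain, and the Q of 1 are only example values, not a recommendation for any particular pair of headphones.

import math

def peaking_eq_coeffs(sample_rate, center_hz, gain_db, q):
    """Biquad coefficients for a peaking EQ (Audio EQ Cookbook formulas)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def apply_biquad(samples, b, a):
    """Filter a sequence of samples with the normalized difference equation."""
    b0, b1, b2 = (c / a[0] for c in b)
    a1, a2 = a[1] / a[0], a[2] / a[0]
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# Example: a -3 dB cut around 500 Hz to offset a known bump in the headphones.
b, a = peaking_eq_coeffs(sample_rate=48000, center_hz=500, gain_db=-3.0, q=1.0)
# filtered = apply_biquad(list_of_samples, b, a)

In practice you would do this with the EQ in your audio editor or DAW rather than by hand; the point is simply that a known, stable coloration can be measured and offset.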
There are several types of headphones to be aware of:
In-ear/earbuds: In-ear headphones, the earbuds that fit into your ear canal and come with your phone, iPod, and the like, are cheap, portable, and convenient for jogging. They are often good for noise isolation, filtering out a lot of background noise, but they are not good for monitoring, as they have little response in the lower frequencies. It's better not to use these except for convenience.
On-ear: On-ear headphones are usually lighter, smaller, and cheaper than over-ear models (see below), since they are designed to sit on, not over, the ear. Both on-ear and over-ear headphones come in closed-back and open-back styles. Closed-back (also called closed-ear) headphones are sealed off from external sound, so you don't hear a lot from outside the headphones and not much sound leaks out; the sound is clear, so a singer listening to a backing track while they sing might use this type. However, closed-back models can be heavy and less comfortable for long periods of time, and cheaper ones tend to have a boost in the high-frequency range, which can become tiring to listen to. Open-back (also called open-ear) headphones are, as the name suggests, more open to outside noise, and there is more leakage to the outside world, but they are more comfortable for long periods and are usually used in a studio for mixing and monitoring, where there isn't a concern about outside noise. These tend to be more expensive, as they have more expensive components, and are generally the best quality overall. Bluetooth models allow for cord-free listening, but interference can still occur, so it's recommended that you plug in while monitoring.
Over-ear: A decent pair of over-ear (circumaural) headphones, sometimes called studio cans, will serve you best for long periods of monitoring. Usually they have a single cable attached to the headset, typically on the left ear's side, to keep your right hand free of the cable while working (sorry, lefties). They also come in closed-back and open-back styles, and the closed-back models are good at isolating sound.
Noise-canceling: Noise-canceling headphones are convenient for travel, when you have to block out steady noise like the hum of an airplane, but they are not designed for monitoring or critical music listening. We'll discuss how these work in the next chapter.
Reading and Listening Guide
Each chapter introduces some reading and listening suggestions. Take the time to read, listen, and answer the questions, and add your thoughts about the readings and listening to your journal.
Michel Chion, the “Three Listening Modes” from Audio-Vision (1994)
Film sound theorist Michel Chion describes three ways of listening to sound. The most common is identifying the cause of the sound, which we saw above in our own listening practice; he calls this causal listening (not to be confused with casual listening!). As we discussed, we usually talk of sounds in terms of their cause or source: a car motor, a bird, and so on. The second is semantic listening: this is what we do when we listen to people talking. The sounds are part of a linguistic code, and we listen to the code as much as or more than the sound itself. The third, reduced listening, draws on the musique concrète composer Pierre Schaeffer and focuses on the traits of the sound itself. To describe sounds in a reduced way we need a language for talking about sound, which we'll cover in the next chapter: we can focus on the texture, qualities, or timbre of a sound. We must also listen to a sound many times in order to separate its acoustic properties from its cause. These three listening modes, however, fail to capture many of the other ways in which we may listen (and Chion acknowledges this). What other ways of listening can you imagine? Think about your listening journal and the ways you're practicing listening. Is listening to music different from listening to sound? Why or why not?
Pauline Oliveros, Deep Listening: A Composer’s Guide to Sound Practice (2005)
Oliveros’s book offers a different type of listening practice, one that grew out of her years of studying Zen and meditation. Oliveros reflects on her retreats and workshops and presents several deep listening exercises influenced by meditative practice. Perhaps most useful, in my opinion, is her slow walk, a meditation walk in which one attempts to walk as slowly as possible while listening. She tells us to “walk so silently that the bottoms of your feet become ears.”
R. Murray Schafer, “I Have Never Seen a Sound” (2009)
Acoustic ecologist R. Murray Schafer explains his journey into studying the soundscape (analogous to landscape) of an environment. What sounds have been introduced to the soundscape during your lifetime? What would the place where you live have sounded like one hundred years ago? Five hundred years ago? What are the politics and power structures in the sounds of your environment?
There are many collections of soundscapes available, with a Spotify playlist linked on the studyingsound.org website. What do soundscapes tell you about the places you’re listening to? What are the key sounds that differentiate them? Should we record our present soundscapes? Why or why not?