Sound

Sound has always played a part in cinema; in the "silent" era, most audiences would experience live musical accompaniment, even if it was only a piano. The sound era begins in 1927 with the release of The Jazz Singer, which featured synchronized sound during portions of the film. Please note that film historians point to films that experimented with sound before 1927, but The Jazz Singer marks the introduction of sound film to a mass audience due to the movie's massive popularity.

Once sound formally entered the filmmaker's toolkit, the art of cinema took a momentary step backward. Before the introduction of sound, filmmakers had developed fluid camera movements. With the need to hide microphones on the set to capture dialogue and sound effects, the camera became stationary, and the action of characters was limited by the range of those microphones. Characters would enter a space and cluster around a hidden microphone; film was in danger of becoming static.

Of course, quite quickly, filmmakers of the early 1930s discovered how to work with sound. Directors such as Rouben Mamoulian and Ernst Lubitsch realized that they did not have to use the sound recorded on set; they could add sound, including having the actors dub their lines in the post-production phase. Frustrated with not getting good audio on set, Dorothy Arzner borrowed a fishing pole and created the first boom mic, thus liberating the camera once again.

According to most film scholars, film sound falls into three basic categories: dialogue, music, and sound effects. All three of these categories can be further described as being diegetic or nondiegetic, onscreen or offscreen, simultaneous or nonsimultaneous, synchronous or nonsynchronous, and external or internal.

diegetic vs. nondiegetic

Diegetic sound refers to sound that comes from the world of the narrative/story while nondiegetic sound is from outside of that world. Take a look at the following clip from Tom Tykwer's Run Lola Run (sorry for the dubbing; I need to reload this clip in the original German). In this clip, Lola is trying to get to the grocery store in order to stop her boyfriend Manni from robbing it:

http://www.criticalcommons.org/Members/pfgonder/clips/run-lola-run-crosscutting-and-split-screen/view

The sound of the siren is diegetic; even if it were offscreen and we did not see the ambulance, it would still be diegetic since it is part of the world of the film. We also hear a voice over of what Lola is thinking (we will define voice over below); this voice over is also diegetic since it is internal, i.e. coming from the thoughts of Lola, a character in the film. The techno score or background music, however, is nondiegetic. The characters cannot hear it, and it does not originate from the world of the film but from outside of that world.

To determine if sound is diegetic or nondiegetic, ask if the sound originates from the world of the film. It does not matter if the sound is offscreen or a voice over. It does not matter if the sound comes from an earlier or later part of the narrative (for example, a character remembers what her dead husband said, and we hear it in a voice over). If it comes from the world of the film, it is diegetic. In the case of a voice or sound over, the characters seen on screen may not be able to hear the sound, but it could still be diegetic. For example, in the clip above, if the film cut to Manni while we hear Lola's voice over/her thoughts, the sound would still be diegetic.

Here is a video that distinguishes between diegetic and nondiegetic sound; it also includes an example of what we will call an internal voice or sound over:

Filmmakers may blur the line between diegetic and nondiegetic, creating semi-diegetic sound. Here is an example of semi-diegetic sound from P.T. Anderson's Magnolia. The music begins as diegetic source music (diegetic music that has a source within the scene), played by one character in her apartment, but the camera then shifts to other characters who inexplicably hear the same music, even though it is coming from the first character's stereo:

http://www.criticalcommons.org/Members/pfgonder/clips/montage-and-sound-in-magnolia/view

This clip is also a great example of a Russian-style montage sequence. Note that it does not move the narrative forward but connects the characters through their shared loneliness; it is thus an example of tonal montage.

Musicals use semi-diegetic sound much of the time. In what is called the integrated musical, people walking down the street will suddenly burst into song, accompanied by what would normally be considered nondiegetic background music. This is in contrast to the backstage musical, in which the singing and music are explained by having the characters be performers and the musical numbers their performances on stage (thus making them fully diegetic). A Star is Born (2018) would be an example of a backstage musical.

onscreen vs. offscreen sound

Onscreen and offscreen sound refers to diegetic sound that originates onscreen (we see the source of the sound) or offscreen (we do not see the source, but it is present in the scene). For example, the sound of a car passing by will be first offscreen, then onscreen, and then again offscreen. Some sources use the term “asynchronous” for offscreen sound, but I find students confuse it with the terms below.

external vs. internal sound

Internal sound is sound that comes from within the mind of a given character (for example, Lola’s voice over in the clip above). Most sound in film is external.

For the sound quiz and Exam 2, you do not need to specify when you use diegetic, onscreen, simultaneous, synchronous, or external sound, since these terms describe the vast majority of sound. You would end up using them over and over. Instead, look for moments when you can use the other term in the pair, which tend to be less common. In any case, these distinctions will help us understand all of the other terms on this list.

The work of recording, mixing, and creating sound for cinema can be divided into two phases: production and postproduction. During production, the most important person is the production sound mixer, who oversees the recording of all sound on set and the mixing of these sounds into a rough mix for the director. Sound mixers work with their assistants as well as individuals such as boom operators. On set, the sound crew will use a variety of microphones, including boom mics (on a pole held above the action), shotgun mics (highly directional mics that can capture audio from a distance), and radio mics (which transmit audio wirelessly).

Much of the work involving sound takes place in postproduction, after all of the footage has been shot. The sound designer oversees the process of creating the final audio track for the film. They will work with the Foley artist (see definition below), the dialogue editor (in charge of assembling the dialogue track), the sound editor (in charge of assembling the sound effects track), the music supervisor (in charge of securing rights to existing music), the composer (who writes original music for the film), and the re-recording mixer (who oversees the creation of the final product). During postproduction, dubbing or looping may take place when actors re-record dialogue, basically lip-syncing with themselves or other actors. Dubbing may occur to fix audio problems that occurred on set, to experiment with a different line reading, to remove profanities for a censored version, or to create a version in another language. We see Pepa and Ivan dubbing an American movie into Spanish in Women on the Verge of a Nervous Breakdown. In Goldfinger (1964), Gert Fröbe, the actor playing Goldfinger himself, was not particularly fluent in English, which turned out to be a problem. After the entire film was shot, all of his dialogue was dubbed by another actor, Michael Collins.

synchronous vs. nonsynchronous sound

Synchronous sound is onscreen sound that is "in sync" with the action. The vast majority of sound is synchronous. On rare occasions, a filmmaker may use nonsynchronous sound (for example, a character speaks, but their lines do not match the movement of their mouth; a vase falls to the floor, but we hear the crash only after it has already broken). Nonsynchronous sound happens before or after the action that causes it.

Here is an example of nonsynchronous sound in Kelly and Donen's Singin' in the Rain (one of the smartest films ever made about sound). In this clip, a studio has tried to create its first sound film and fails miserably:

http://www.criticalcommons.org/Members/bnd/clips/sound-in-singin-in-the-rain-4

nonsimultaneous sound

Nonsimultaneous sound describes sound that is diegetic but comes from an earlier or later part of the film, sometimes referred to as a sonic or auditory flashback or flashforward (sonic flashforwards being much rarer). For example, you could have a shot of a character riding the bus while we hear a line of dialogue that was spoken earlier in the film. We could assume that she is thinking about the line (making it internal) or that the filmmaker is reminding us of its importance.

Here’s an example of the use of nonsimultaneous sound:

http://www.criticalcommons.org/Members/jbutler/clips/Damages20070814qq00_00_00qq-Desktop.m4v

Note that, during this flashback, we hear the lawyer speaking in the present—“Wait, wait…”—while we are still looking at a freeze frame of the character in the past. We will discuss more specific terms for nonsimultaneous sound later in this reading (for example, sound bridges).

At this point, it will be helpful to think about different qualities of sound, namely volume, pitch, timbre, and fidelity, as well as the difference between direct and reflected sound.

Amplitude describes the volume of sound, how loud or soft it is. When the sound mix of the film is created, filmmakers may want to sweeten certain sounds; sweetening refers to the sound mixer raising the volume of a key sound effect in order to make it audible when it normally would not be. For example, a bird may be filmed in an extreme long shot, but the sound mixer may subtly increase the volume of its song (despite the fact that, normally, the bird would be too far away to be heard). Usually, viewers do not notice when sweetening occurs, although their attention may be manipulated in this way (see the clip below from Ugly Betty).

Amplitude can be used to let the viewer judge distance, creating what is called sound perspective. We assume that louder sound is closer and that softer sound is farther from us. Here is an example of sound perspective from Douglas Sirk's Written on the Wind. The music the female character plays in her room is at a loud volume since we are in the room with her. When Sirk cuts downstairs, the volume lessens to signify distance. Notice that when the older man (the young woman's father) starts to have a heart attack, the music's volume increases. In this way, Sirk violates sound perspective and uses the loud diegetic source music to parallel the heart attack (allowing the music to become semi-diegetic). The music could also be a signifier of emotion in that it represents the older man's mental distress over his daughter's wild behavior (see the definitions of "parallelism" and "music as signifier of emotion" below).

http://www.criticalcommons.org/Members/pfgonder/clips/sound-in-written-on-the-wind/view

Filmmakers often violate sound perspective in ways that do not disturb the experience of the viewer. Notice in the clip below from Ugly Betty how the filmmakers raise the amplitude of dialogue spoken farther from the viewer within the sound mix, but we do not care since it is the dialogue we want to hear:

http://www.criticalcommons.org/Members/jbutler/clips/uglybetty_hellogoodbye00_33_05.mp4

Pitch describes the frequency of a sound, whether it is low or high pitched.

We can describe the timbre of a sound, which means its tone or quality. For example, the timbre of the sound a train makes coming to a stop could be called shrill or harsh. The Wikipedia entry on timbre makes this useful point—a trumpet and a piano may play the same note (pitch) at the same volume, but you would still hear a difference in timbre (“Timbre”).

Foley artists (named after the pioneering sound technician Jack Foley) work in postproduction to create the sound effects that were not captured on set (everything other than dialogue and music). They will use an ingenious array of devices (often improvising with household items) to capture just the right sound. Here is a video that explains the work of the Foley artist:

It is fascinating to watch Foley artists at work, especially when you realize that the sound of an arm breaking can be created with celery. There are quite a few videos of Foley artists at work, so many that parodies have arisen. Here is a parody that pokes fun at Foley artists; feel free to watch it just for fun:

Fidelity refers to how accurate the sound is in relation to its source. Does the thunderstorm we see on screen sound like we would expect a thunderstorm to sound? This relation between sound and source is complicated since the sound used in film may be generated after filming, in postproduction, perhaps by a Foley artist (see above). Plus, the most "accurate" sound may not be what the audience expects or wants. Most fight scenes feature punches being thrown and received, but the sound of the punch we associate with film is not realistic, and if filmmakers used realistic sounds, the audience might perceive them as unrealistic!

Take a look at this clip from Stephen Chow's Kung Fu Hustle as he uses, stretches, and breaks fidelity to great effect:

http://www.criticalcommons.org/Members/pfgonder/clips/sound-effects-in-kung-fu-hustle/view

Direct sound is sound that is recorded from the source without any interference; the sound waves move from their origin into the microphone. Reflected sound is sound that bounces off a surface or surfaces before being recorded; it can create a sense of space. Echoes are an example of reflected sound. Those involved in recording sound know that different surfaces in the space of the shot or around the microphone itself will affect sound in various ways, allowing filmmakers to get exactly the timbre they seek.

In this clip from Cat People (1942), Alice finds herself stalked by Irena, her friend’s wife, who believes two things: that she turns into a predatory cat when experiencing intense emotion and that Alice is sleeping with her husband. Notice director Jacques Tourneur’s use of reflected sound when Alice is in the pool as well as his use of sound to suggest what cannot be seen:

http://www.criticalcommons.org/Members/pfgonder/clips/the-pool-sequence-in-cat-people-1942

voice over

A voice over describes nonsimultaneous or internal dialogue (but not offscreen sound). This may sound confusing, but if you have watched any film or television, you have heard a voice over (the sound over being more infrequently used). For example, there is a shot of a teenager's face, and you hear a voice over saying, "When I was 14…." We may rightly assume that the voice over is from the person's future self, looking back on themselves at a younger age. Once again, we would not call someone speaking offscreen a voice over.

We can distinguish between diegetic and nondiegetic voice over. You often hear nondiegetic voice overs in documentaries. For example, you see a shot of bears in the forest, and you hear a voice over saying, “Bears are surprisingly social animals,” but we do not see the source of the voice.

If the voice over is diegetic (the voice of a character within the world of the film), we can distinguish between an external and an internal version. The example above, with the shot of the teenager and the voice over of that person's future self, is an external diegetic voice over; it is as if the character is speaking directly to us. An internal diegetic voice over occurs when we hear a character's thoughts, their "inner voice," as with Lola's voice over in the clip from Run Lola Run.

A sound over acts like a voice over but with a sound effect rather than a voice. For example, imagine a scene where a man is remembering going to a football game in his youth; as he sits in his quiet house, the noise of the football game from his memory comes in as a sound over. The sound should not have a source in the scene itself, even if that source is offscreen. For example, the sound of a car honking offscreen during a street scene is not a sound over, even if we cannot see the car.

Check out this incredible scene from Robert Wise’s The Haunting for the use of voice over, reflected sound, sound perspective, sound effects, and background music:

http://www.criticalcommons.org/Members/pfgonder/clips/sound-in-the-haunting/view

For the sound quiz and Exam 2, don’t confuse diegetic, off screen sound and a sound/voice over. Off screen sound refers to sound with a source that is present in the scene, but this source is not in the frame. For example, in a shot of couple talking on a sidewalk in New York City, the sound of the cars on the street may be heard even though we do not see the cars. The sound of the cares would be diegetic, off screen sound.

Futzing is when sound, especially dialogue, is altered to become somewhat unintelligible. The best example is the sound of a voice on a phone, heard through the speaker.

Ambient or background noise is sound that is not the primary focus of the scene; it may be present but not actively noticed (the sound of wind, crickets, or traffic in the distance). Room tone is an example of ambient noise. The murmur of a crowd is called walla since extras are often instructed just to say "walla" over and over so that their dialogue becomes mumbled background noise.

In many films, characters wait for someone to stop talking before they start. While this makes it easier for the audience to hear the dialogue, it is not completely authentic in terms of how people talk, especially in certain situations. Often, people “step on each other’s lines,” to use an acting term; they begin talking before the other person ends. In film, this is called overlapping dialogue.  Please don’t get overlapping dialogue confused with dialogue overlap, which is when dialogue overlaps an edit, usually during a shot/reverse shot (see the definition below). The director Robert Altman is famous for using overlapping dialogue, often with multiple speakers:

subjective sound

Subjective sound can be divided into two categories. The first is perceptual subjectivity, which is simply how sound reflects a character's physical situation. Sound perspective is an example of perceptual subjectivity: the source of the sound is farther away from the character, so it is not as loud as something closer. Mental subjectivity is trickier. Subjective sound that is "mental" (so to speak) refers to when sound has been manipulated in some way to reflect the psyche of a character, the character's inner psychological state. For example, imagine a character is worried about news she is going to receive at 3:00. She looks at the clock, and the ticking of the clock's hand becomes very loud and harsh. Usually, we would not use nondiegetic music/background music as an example of subjective sound; instead, use the term "music as signifier of emotion," defined below.

Here is a great clip, from Polanski's The Pianist, illustrating perceptual subjectivity. When the tank shell hits his apartment, the main character loses his hearing for a moment, and we do as well, allowing us to experience the moment from his subjective position.

http://www.criticalcommons.org/Members/ogaycken/clips/pianist-internal-sound.mp4

Sound and Editing

In this section, we are going to focus on how sound interacts with editing, especially in how it helps create seamless/invisible editing.

sound bridge

Sound bridges occur when diegetic sound from the end of scene A extends into the beginning of scene B (please note we are discussing scenes, not shots, in this case). Another possibility is that the diegetic sound from scene B may start before scene A ends. Remember that sound bridges are always diegetic and that they act as transitions between scenes, not shots within a scene. Don't refer to nondiegetic music/background music as a sound bridge, although we can think of it as a transitional device (see the definition below).

In this clip from M, director Fritz Lang uses a sound bridge to connect the scene of the man selling newspapers (shouting "Extra! Extra!" in German) to a man writing a letter. Notice that you can still hear "Extra!" being shouted as Lang cuts to the man alone in his apartment. Before Lang cuts back to the street scene, you hear the newspaper seller again, creating another sound bridge:

http://www.criticalcommons.org/Members/ogaycken/clips/msoundbridges.mov

Here is an example from Woody Allen’s Sleeper. Notice how the dialogue concerning the plan extends over into the next scene, in which the plan is being put into action.

http://www.criticalcommons.org/Members/bnd/clips/sound-bridge-and-voice-over-narration-in-woody

A common mistake is to confuse dialogue overlap or other terms with a sound bridge. Remember that a sound bridge has two requirements. First, it must connect two scenes, not two shots within a scene. Second, it must be diegetic. This means that the score of the film does not operate as a sound bridge, although you could use the term “music as a transitional device” (see this term defined below) for when a score helps smooth out the transition between scenes. In Moving Pictures, Sharman defines a sound bridge in a much looser way; please follow the definition I give above since it is the most common one.

Dialogue overlap occurs most often with shot/reverse shots.  The line of dialogue extends over the edit.  For example, in shot A, Martha starts her line of dialogue; in shot B, we see Alice listening, but Martha’s dialogue extends over the edit into shot B.  Conversely, in shot A, Martha could finish her line of dialogue, and we hear Alice begin her line before we cut to her.  Dialogue overlap is another tool to make editing appear seamless. Don’t confuse dialogue overlap with a sound bridge; sound bridges always connect scenes, not shots within a scene.

Here is an example of dialogue overlap from Alexander Payne's Election:

http://www.criticalcommons.org/Members/ogaycken/clips/election-dialogue-overlap.mov

Notice that the end of this clip also features overlapping dialogue.

L-cut vs. J-cut

These are relatively new terms that refer to when a diegetic sound crosses over an edit.  These are more general terms for what we are calling either dialogue overlap (when this happens with dialogue within a scene) or sound bridge (when this happens in order to bridge two separate scenes together).  Usually, an L-cut refers to when sound from shot A extends into shot B; a J-cut is when sound from shot B begins while we are still in shot A. It will help if you think about a digital editing program and how the sound track will make an “L” or a “J” shape when it extends over the edit; the “L” refers to how the sound extends forward while the “J” extends backward.

For Exam 2, you should use “L-cut” or “J-cut” in conjunction with a dialogue overlap or a sound bridge to clarify which “direction” the sound is going.

A dialogue hook is when a line of dialogue forecasts the next scene. Dialogue hooks always occur at the end of a scene. A character may say, "Well, we better get to the beach." The next shot is of the characters arriving at the beach. For a humorous use of a dialogue hook, see this clip from Sullivan's Travels:

http://www.criticalcommons.org/Members/pfgonder/clips/eyeline-match-and-dissolve/view

Music

As stated above, music is the aspect of sound that has been with cinema from the start, and it plays an essential role in shaping the experience of the audience.

Background music or underscoring is nondiegetic music, a musical score that does not have a source in the world of the movie.  The score is often created by a composer, but many filmmakers make judicious use of existing music. Source music differs from background music in that it has a source in the world of the movie and is thus diegetic (for example, music coming from a car radio).

Sneaking in and out refers to how nondiegetic music (i.e. the score of the film) can enter or exit the mix, subtly becoming louder or softer in amplitude without the audience directly noticing the effect.

rhythm, beat, tempo

The rhythm, beat, and/or tempo of the music can have an enormous effect on the audience. In this clip from Steven Soderbergh's King of the Hill, two bullies push two younger boys into a game of marbles. Once Aaron takes over and begins to defeat the bullies, a musical cue comes in signaling a montage, shortening the time it takes for Aaron to finish the game. Notice how the tempo and beat of the music (written by Cliff Martinez) become quicker, signaling Aaron's dominance:

http://www.criticalcommons.org/Members/pfgonder/clips/king-of-the-hill-1993/view

Mickey Mousing

Mickey Mousing refers to whenever a character or object moves in perfect sync with the nondiegetic background music, but they are not dancing. The movement must match the music beat for beat. Here is an extreme example of Mickey Mousing from The Errand Boy, featuring Jerry Lewis:

http://www.criticalcommons.org/Members/pfgonder/clips/mickey-mousing-in-the-errand-boy-1/view

In the clip above from Soderbergh's King of the Hill, watch the end of the game of marbles; Martinez's score uses Mickey Mousing, matching chimes with the action, beat for beat.

motif, theme, and leitmotif

There is some disagreement about the differences between motif, musical theme, and leitmotif. For the sake of this class, we will use the following definitions (please forgive any confusion concerning the difference between musical motif and theme—I'll try to clarify the difference at the end of this section).

A motif is any type of sound (music or sound effect) that repeats over the course of a narrative; with this repetition, the sound takes on a meaning. For example, in Carl Franklin's One False Move, the sound of a whippoorwill is heard before the murder of a character, linking this sound to death. A musical motif is a piece of music that gains meaning through this same kind of repetition. For Citizen Kane, the great Bernard Herrmann wrote a short piece of music he called the "Rosebud" theme. Citizen Kane chronicles the life of Charles Foster Kane, one of the richest men in America, who dies saying one word: "Rosebud." It is seemingly meaningless, so a reporter begins a quest to uncover what Kane meant by this last word by interviewing those who were close to him. The "Rosebud" theme plays whenever any of these individuals gets close to the answer to this mystery.

A theme is a particular kind of musical motif, a recognizable piece of music associated with a character or the movie itself (for example, television series have a “theme song”). Composer John Williams created Darth Vader’s theme (“The Imperial March”) for The Empire Strikes Back. Listen to it here: https://www.youtube.com/watch?v=-bzWSJG93P8

This theme is used to announce Vader's presence directly or may be interwoven into another piece of music to signal that a character is thinking of him or to hint at his coming threat. Here is a video detailing the various themes Williams uses in Raiders of the Lost Ark: https://www.youtube.com/watch?v=5fq5O2ALsIg

In this clip from Mission: Impossible, composer Danny Elfman plays with the Mission: Impossible theme song to great effect. At this point, Ethan tries to stop the villain from escaping on a helicopter, which is—at the moment—chasing a train in a tunnel. Notice how Elfman uses fairly generic "heroic" music before segueing smoothly into the theme at a crucial moment—when Ethan uses the explosive "gum" given to him by the villain earlier in the plot:

http://www.criticalcommons.org/Members/pfgonder/clips/background-music-and-theme-in-mission-impossible/view

Note as well how the score works as an example of parallelism.

Here’s a clip of the ending of Shaft (1971), directed by Gordon Parks. Notice how the music, composed by Isaac Hayes and J.J. Johnson, accentuate the action (as in parallelism), with a stinger when Shaft swings through the window. The tempo of the music increases, reading a crescendo as Shaft’s theme (as heard in the iconic guitar line) sneaks in at the end to announce the hero’s victory.

Some scholars of music make a distinction between a theme and a leitmotif, but the distinction between theme, motif, and leitmotif can be tricky. Musical scholars draw a firm distinction that film scholars and filmmakers tend to blur. In my experience, something like the "Rosebud" melody may be referred to as a musical motif or a theme, but something like the "Imperial March" is only called a theme. To clarify, for the sake of this class, if it is a repeated (and usually short) piece of music that gains meaning through the course of the narrative, we will call it a musical motif (although you may find filmmakers calling it a theme); if it is a repeated piece of music that is associated with a character or the film itself, we will call it a theme or leitmotif.

A musical cue is a piece of music, noticeably different from what precedes it, that signals a change in the narrative, that something is happening or about to happen (for example, scary music starts to play at a frightening moment in a horror film). In the clip above from Mission: Impossible, the theme music acts as a narrative cue, signaling the climax of the scene.

In this clip from "Hush" (an episode of Buffy the Vampire Slayer), listen for the musical cue in the score that signals the moment when the two characters discover their shared supernatural power:

http://www.criticalcommons.org/Members/pfgonder/clips/sound-and-editing-in-hush/view

Stingers are moments when the music accentuates a narrative event through a noticeable, loud, and abrupt musical cue, such as the blare of a trumpet.

parallelism vs. counterpoint

Parallelism refers to when music and image are working together and match in terms of tone and effect (for example, somber background music being played during a sad scene).  Counterpoint (sometimes referred to as ironic sound) means that the music works against what is seen (for example, happy music being played during a sad scene). Think of how the music in the clip from Run Lola Run—the fast tempo electronic music with a relentless beat—matches Lola’s determination and focus.

In this scene from John Landis's An American Werewolf in London, the main character transforms for the first time. Notice the ironic use of source music (the song is playing on the radio), which acts as a counterpoint to the horrific action on screen:

http://www.criticalcommons.org/Members/pfgonder/clips/american-werewolf-in-london-1981/view

Claudia Gorbman’s principles of film music

Film scholar Claudia Gorbman's principles of film music are extremely helpful in understanding how music (particularly background music) operates in narrative film. She identifies the following principles: the invisibility of the score, the inaudibility of the score, music as signifier of emotion, music as narrative cue, music as a source of continuity, and music as a source of unity.

the invisibility of the score

Gorbman theorizes that background music is “invisible” since it is nondiegetic and we do not see the source of the music (in contrast to source music).

the inaudibility of the score

We often do not notice background music; it is “inaudible” in that we hear it, but we do not consciously notice its effect on us.

music as signifier of emotion

Gorbman points out that music is often used to signify emotion, especially the emotional nature of a character or characters. The score may swell when lovers meet or become atonal and harsh when a character is angry.

narrative cueing

Music may signal some event is about to happen or is happening (see musical cue above), or music may help guide our understanding or interpretation of narrative events. Gorbman refers to the latter as a “connotative” cue, in that the music is prompting us to think about the narrative events in a certain way.

music as a source of unity, music as a transitional device

Gorbman also argues that music often acts as a transitional device, bridging scenes or shots in order to help create unity and continuity.  Please do not get this idea confused with a sound bridge.  A sound bridge also bridges two scenes, but sound bridges are always diegetic; when Gorbman discusses music as a transitional device, she means non-diegetic, background music.

music as a source of continuity

By continuity, Gorbman is describing the way that the score can help create narrative or aesthetic patterns. The primary way that a score does this is through the creation of motifs and/or themes.

Gorbman ends her list with one final point: a musical score can break any of these rules if breaking said rule aids the film to achieve its purpose or goal.

Before we move on to the idea of sound montage, take a look at this fight scene from Ang Lee's Crouching Tiger, Hidden Dragon and consider all of the sound work involved, paying close attention to the dialogue, sound effects, and music:

https://criticalcommons.org/view?m=jpigrqhmA

In the following video, Tony Zhou makes two interesting points. First, he argues that many composers are plagiarizing musical cues from other films, sapping creativity from the field. Second, he makes the case that the music in Marvel movies is overly generic:

I would like to argue with his second point. I agree that the music in Marvel movies can be ineffective and bland (with some notable exceptions), but for different reasons than Zhou's. He argues that the music is forgettable, that you cannot whistle it after the movie is over. I would assert that the purpose of some film scores is to remain "inaudible" (as Gorbman describes). Many of the greatest film scores do not feature tunes that can be whistled but still have a powerful effect on the audience. I think of Jerry Goldsmith's work on Alien as a prime example. Ridley Scott was in danger of being fired as the director of Alien. After watching a rough cut without the musical score, the producers thought the first 40 minutes of the movie lacked tension; they wanted Scott to show the monster much earlier in the film, something that would have ruined the movie. Scott made a bet: if the producers thought the first 40 minutes still lacked tension after watching a cut with Goldsmith's score, he would voluntarily quit. He won the bet. Goldsmith's score cannot be whistled, and the audience may not even be aware of it, but it creates an underlying sense of absolute dread and awe.

This is not to disparage composers whose work tends toward the “audible.” John Williams is a great example of a composer for film whose work tends to be memorable and “hummable.” Williams—who wrote the score for Jaws, Star Wars, the Harry Potter series, and many more—excels at creating themes, at composing melodies that become emblematic of a character, film, or entire franchise.

For me, two film score composers who can switch between the "audible" and the "inaudible" with perfect grace are Bernard Herrmann and Ennio Morricone, whom many critics laud as the best ever to work in cinema. Herrmann and Morricone can create indelible thematic melodies, but both can also allow their music to influence the viewer/listener without the audience being fully aware of it.

Here’s a video with snippets of some of Herrmann’s best work:

Here’s one for Morricone:

http://www.tasteofcinema.com/2015/the-20-best-ennio-morricone-scores/

What you will find in both artists is an incredible range of styles; they can shift and change for whatever the film needs (I find this particularly true of Morricone). Plus, they both had decade-spanning careers. Herrmann wrote his first film score for Citizen Kane in 1941 and worked into the 1970s, writing his final score for Taxi Driver (1976), with films such as Jason and the Argonauts, Psycho, and Vertigo in between! Morricone composed his first score in 1959 and recently scored Tarantino's The Hateful Eight (2015)!

Ultimately, a film score cannot be judged separately from the film itself; what makes a great composer for cinema is how they can write music that works with the image to further the desired effect of the film.

Sound Montage

As Gorbman states, music can help to create continuity, but it can also work in other, more complex ways. Sound montage refers to how filmmakers can bring sounds (music, sound effects, and/or dialogue) together in the same way that editors bring shots together—sometimes making the edits invisible, sometimes not. With sound, this may take the form of layering, which can call attention to its own use and create a certain, desired effect. This use of sound is comparable to the use of editing by filmmakers like Eisenstein. In fact, Eisenstein wrote about what he called "vertical montage," which included how a single image interacts with sound. Instead of worrying about continuity, filmmakers may combine or place sounds in opposition to each other in a way that is not "inaudible" (to use Gorbman's term) but that calls attention to itself.

Take a look at this longer clip from Anderson's Punch-Drunk Love for an example of mixing sound into a kind of montage or collage. Notice how Anderson does not prioritize different sounds in the sound mix, bringing them all up to nearly the same level. I am interested in your reaction, but for me, this montage of sounds creates a sense of building anxiety:

http://www.criticalcommons.org/Members/pfgonder/clips/punch-drunk-love-2002/view

One final term. I had a professor who taught me an important lesson about analysis: you cannot just look at what is there, on the screen or on the page; you also have to consider what is absent. With that in mind, I would remind you that one of the most important aspects of film sound is the use of silence. As you watch films or television over the next week or so, pay attention to when filmmakers use silence and the effect it has.