Lip sync
Lip sync or lip synch (pronounced /sɪŋk/, the same as the word sink), short for lip synchronization, is a technical term for matching a speaking or singing person's lip movements with sung or spoken vocals.
Audio for lip syncing is generated through the sound reinforcement system in a live performance or via television, computer, cinema speakers, or other forms of audio output. The term covers a number of different techniques and processes in the context of live performances and audiovisual recordings.
In film production, lip syncing is often part of the post-production phase. Dubbing foreign-language films and making animated characters appear to speak both require elaborate lip syncing. Many video games make extensive use of lip-synced sound files to create an immersive environment in which on-screen characters appear to be speaking. In the music industry, lip syncing is used by singers for music videos, television and film appearances, and some types of live performances. Lip syncing by singers can be controversial among fans attending concerts who expect to see a live vocal performance.
In video
Film
In film production, lip syncing is often part of the post-production phase. Most films today contain scenes where the dialogue has been re-recorded afterwards; lip syncing is the technique used when animated characters speak, and it is essential when films are dubbed into other languages. In many musical films, actors sang their own songs beforehand in a recording session and lip synced during filming, but many lip synced to playback singers, voices other than their own; a notable exception was Rex Harrison, who performed his songs live in My Fair Lady.[60] Marni Nixon sang for Deborah Kerr in The King and I and for Audrey Hepburn in My Fair Lady, Annette Warren for Ava Gardner in Show Boat, Robert McFerrin for Sidney Poitier in Porgy and Bess, Betty Wand for Leslie Caron in Gigi, Lisa Kirk for Rosalind Russell in Gypsy, and Bill Lee for Christopher Plummer in The Sound of Music.
Some of these original, pre-dubbing performances have survived: Hepburn's own My Fair Lady vocals are included in documentaries related to the film, and Gardner's original Show Boat vocals were heard for the first time in the 1994 documentary That's Entertainment! III. When songs appear in non-musical films, however, the actors generally sing live on set and later dub their voices in ADR using a "better" performance of the song.
Lip syncing is almost always used in modern musical films (The Rocky Horror Picture Show being an exception) and in biopics such as Ray and La Vie en Rose, where lip syncing to the original recordings adds authenticity. Some early musicals, however, used live recordings.
In the 1952 MGM classic Singin' in the Rain, lip syncing is a major plot point, with Debbie Reynolds' character, Kathy Selden, providing the voice for Lina Lamont (played by Jean Hagen). Writing in the UK Sunday newspaper The Observer, Mark Kermode noted, "Trivia buffs love to invoke the ironic dubbing of Debbie Reynolds by Betty Noyes on 'Would You'", although he pointed out that "the 19-year-old Reynolds never puts a foot wrong on smashers like 'Good Morning'".[61] Reynolds later acknowledged Betty Noyes' uncredited contribution to the film, writing: "I sang 'You Are My Lucky Star' with Gene Kelly. It was a very rangy song and done in his key. My part did not come out well, and my singing voice was dubbed in by Betty Royce [sic]".[62]
ADR
Automated dialogue replacement, also known as "ADR" or "looping", is a film sound technique in which dialogue is re-recorded after principal photography. Sometimes the dialogue recorded on location is unsatisfactory, either because it contains too much background noise or because the director is unhappy with the performance, so the actors re-record their own lines in a "looping" session after filming.
Animation
Another manifestation of lip syncing is the art of making an animated character appear to speak in a prerecorded track of dialogue. The technique involves breaking down the dialogue track to work out the timing of each sound (the "breakdown") and then animating the lips and mouth to match it. The earliest examples of lip sync in animation were attempted by Max Fleischer in his 1926 short My Old Kentucky Home. The technique continues to this day, with animated films and television shows such as Shrek, Lilo & Stitch, and The Simpsons using lip syncing to make their artificial characters talk. Lip syncing is also used in comedies such as This Hour Has 22 Minutes and in political satire, replacing all or part of the original wording. It has been used in conjunction with translation of films from one language to another, for example Spirited Away. Lip syncing can be a very difficult issue in translating foreign works for a domestic release, as a literal translation of the lines often runs longer or shorter than the on-screen mouth movements.
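As a rough illustration of the breakdown step described above, the sketch below converts a list of timed phonemes into mouth-shape ("viseme") keyframes at an assumed frame rate. The phoneme symbols, viseme names, timings, and the 24 fps rate are illustrative assumptions, not a description of any actual studio pipeline.

```python
# Minimal breakdown sketch: map timed phonemes to viseme keyframes.
# All symbols, shapes and the frame rate below are illustrative assumptions.

FPS = 24  # assumed frame rate

# Hypothetical, greatly simplified phoneme-to-viseme table.
PHONEME_TO_VISEME = {
    "M": "closed", "B": "closed", "P": "closed",
    "AA": "open", "AE": "open", "AH": "open",
    "F": "teeth-on-lip", "V": "teeth-on-lip",
    "OO": "rounded", "W": "rounded",
    "EE": "wide", "S": "wide",
    "REST": "neutral",
}

def breakdown(timed_phonemes):
    """Convert (phoneme, start_time_in_seconds) pairs into (frame, viseme) keyframes."""
    keyframes = []
    for phoneme, start in timed_phonemes:
        frame = round(start * FPS)
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")  # unknown sounds fall back to neutral
        keyframes.append((frame, viseme))
    return keyframes

if __name__ == "__main__":
    # Made-up timings for a short phrase starting at t = 0.
    line = [("M", 0.00), ("AA", 0.08), ("EE", 0.20),
            ("OO", 0.35), ("P", 0.50), ("AH", 0.58), ("REST", 0.70)]
    for frame, viseme in breakdown(line):
        print(f"frame {frame:3d}: {viseme}")
```

In production, a breakdown of this kind is normally refined by hand, since real mouth charts distinguish far more shapes and account for the transitions between neighbouring sounds.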
Language dubbing
Quality film dubbing requires that the dialogue first be translated in such a way that the words match the lip movements of the actor. This is often hard to achieve if the translation is to stay true to the original dialogue. Elaborate lip-synced dubbing is also a lengthy and expensive process. The more simplified, non-phonetic representation of mouth movement in much anime makes this process easier.
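One informal way to picture the constraint on the translator is a rough duration check: a dubbed line has to occupy roughly the same amount of time as the mouth movements on screen. The toy heuristic below estimates speaking time from a syllable count and flags lines that deviate too much from the shot length; the per-syllable rate, the tolerance, and the crude syllable counter are all illustrative assumptions.

```python
# Toy check for whether a translated dub line plausibly fits a shot:
# compare estimated speaking time (syllables x assumed rate) with shot length.
# The rate, tolerance and syllable heuristic are illustrative assumptions.

import re

SECONDS_PER_SYLLABLE = 0.2  # assumed average speaking rate
TOLERANCE = 0.15            # allow +/-15% deviation before flagging a mismatch

def estimate_syllables(line: str) -> int:
    """Very rough syllable count: runs of vowel letters in each word."""
    words = re.findall(r"[A-Za-z']+", line)
    return sum(max(1, len(re.findall(r"[aeiouy]+", word.lower()))) for word in words)

def fits_shot(translated_line: str, shot_seconds: float) -> bool:
    """Return True if the estimated duration is within tolerance of the shot length."""
    estimate = estimate_syllables(translated_line) * SECONDS_PER_SYLLABLE
    return abs(estimate - shot_seconds) <= TOLERANCE * shot_seconds

if __name__ == "__main__":
    print(fits_shot("Good morning, how are you today?", 1.6))  # close enough: True
    print(fits_shot("Good morning", 1.6))                      # far too short: False
```

A check like this says nothing about matching visually prominent mouth shapes, such as lip closures, which dubbing scripts also try to respect.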
In English-speaking countries, many foreign TV series (especially anime such as Pokémon) are dubbed for television broadcast, while cinematic releases of foreign films tend to come with subtitles instead. The same is true of countries in which the local language is not spoken widely enough to make expensive dubbing commercially viable; in other words, there is not enough of a market for it. Other countries with a large enough population, however, dub all foreign films into the national language for cinematic release. Dubbing is preferred by some viewers because it allows them to focus on the on-screen action without reading subtitles.
Finger syncing
The miming of the playing of a musical instrument, also called finger-syncing, is the instrumental equivalent of lip syncing.[65] A notable example is the John Williams piece performed at President Obama's 2009 inauguration, where musicians including Yo-Yo Ma and Itzhak Perlman mimed to a recording made two days earlier, wearing earpieces to hear the playback.[66] Similarly, when Whitney Houston performed "The Star Spangled Banner" with a full orchestra at Super Bowl XXV in 1991, a pre-recorded version was used: "At the game, everyone was playing, and Whitney was singing, but there were no live microphones," orchestra director Kathryn Holm McManus revealed in 2001. "Everyone was lip synching or finger-synching."[65]