Robotic voice effects became a recurring element in popular music starting in the second half of the twentieth century. Several methods of producing variations on this effect have arisen.

Vocoder

The vocoder was originally designed to compress speech for transmission over telephony systems. In musical applications, the original sound, either a vocal or another source such as an instrument, is fed through a bank of band-pass filters; the amplitude envelope measured in each band then controls the level of the corresponding band of a carrier signal, typically a synthesizer tone or noise. The modulated carrier bands are summed, sometimes mixed with a little of the original signal, to produce the effect.
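The filter-bank structure described above can be sketched in a few lines of Python using NumPy and SciPy; the band count, band edges, envelope cutoff, and test signals below are illustrative assumptions, not a description of any particular hardware vocoder.

```python
import numpy as np
from scipy.signal import butter, lfilter

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return lfilter(b, a, x)

def envelope(x, fs, cutoff=50.0):
    # Rectify, then low-pass filter to follow the band's amplitude contour.
    b, a = butter(2, cutoff, btype="low", fs=fs)
    return lfilter(b, a, np.abs(x))

def vocode(modulator, carrier, fs, n_bands=10, f_lo=100.0, f_hi=8000.0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)       # logarithmically spaced band edges
    out = np.zeros(len(carrier))
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = envelope(bandpass(modulator, lo, hi, fs), fs)
        out += env * bandpass(carrier, lo, hi, fs)       # voice envelope shapes the carrier band
    return out / np.max(np.abs(out))                     # normalize

# Example: a decaying noise burst stands in for a vocal "modulator"; a 110 Hz sawtooth is the carrier.
fs = 44100
t = np.arange(2 * fs) / fs
modulator = np.random.randn(len(t)) * np.exp(-3 * t)    # stand-in for a voice recording
carrier = 2.0 * ((110.0 * t) % 1.0) - 1.0                # 110 Hz sawtooth wave
robot = vocode(modulator, carrier, fs)
```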

Analog vocoders were in use as early as 1959 at the Siemens Studio for Electronic Music,[1][2] but the device became more famous after Robert Moog developed one of the first solid-state musical vocoders.[3]

In 1970, Wendy Carlos and Robert Moog built another musical vocoder, a ten-band device inspired by Homer Dudley's original vocoder designs; it was later referred to simply as a vocoder.

Carlos and Moog's vocoder was featured in several recordings, including the soundtrack to Stanley Kubrick's A Clockwork Orange, where it carried the vocal part of Beethoven's Ninth Symphony and a piece called "Timesteps".[4] In 1974, Isao Tomita used a Moog vocoder on a classical music album, Snowflakes Are Dancing, which became a worldwide success.[5] Since then, vocoders have been widely used, for example on Kraftwerk's album Autobahn (1974); The Alan Parsons Project's track "The Raven" (from Tales of Mystery and Imagination, 1976); and Electric Light Orchestra's "Mr. Blue Sky" and "Sweet Talkin' Woman" (from Out of the Blue, 1977), both recorded using EMS Vocoder 2000s.

Other examples include Pink Floyd's album Animals, on which the band put the sound of a barking dog through the device, and the Styx song "Mr. Roboto". Vocoders have appeared on pop recordings from time to time ever since, most often simply as a special effect rather than a featured aspect of the work. Experimental electronic artists in the new-age music genre have used the vocoder more extensively in particular works, such as Jean Michel Jarre on Zoolook (1984) and Mike Oldfield on QE2 (1980) and Five Miles Out (1982). Some artists have made vocoders an essential part of their music, overall or during an extended phase, such as the German synthpop group Kraftwerk and the jazz-infused metal band Cynic.

Other examples

Though the vocoder is by far the best-known, the following other pieces of music technology are often confused with it:

Sonovox
This was an early version of the talk box, invented by Gilbert Wright in 1939. It worked by placing two loudspeakers over the performer's larynx; as the speakers transmitted sound up the throat, the performer silently articulated words, which made the sounds seem to "speak". It was used to create the voice of the piano in the Sparky's Magic Piano series from 1947, many musical instruments in Rusty in Orchestraville, and the voice of Casey the Train in the films Dumbo and The Reluctant Dragon[citation needed]. Radio jingle companies PAMS and JAM Creative Productions used the Sonovox in many of the station IDs they produced.
Talk box
The talk box guitar effect was invented by Doug Forbes and popularized by Peter Frampton. In the talk box effect, amplified sound is actually fed via a tube into the performer's mouth and is then shaped by the performer's lip, tongue, and mouth movements before being picked up by a microphone. In contrast, the vocoder effect is produced entirely electronically. The background riff from "Sensual Seduction" by Snoop Dogg is a well-known example. Another notable example is "California Love" by 2Pac and Roger Troutman, in which the talk box is fed with a synthesizer instead of a guitar. Steven Drozd of The Flaming Lips used the talk box on parts of the group's eleventh album, At War with the Mystics, to imitate some of Wayne Coyne's repeated lyrics in the "Yeah Yeah Yeah Song".
Pitch correction
The vocoder should also not be confused with the Antares Auto-Tune Pitch Correcting Plug-In, which can be used to achieve a robotic-sounding vocal effect by quantizing voice pitch (removing smooth changes between notes) or by adding pitch changes; a minimal sketch of this pitch quantization appears after this list. The first such use in a commercial song was in 1998 on "Believe", a song by Cher, and the radical pitch changes became known as the 'Cher effect'.[6] The technique has since been employed by artists such as Daft Punk (who also use vocoders and talk boxes), T-Pain, Kanye West, the Italian dance/pop group Eiffel 65, Japanese electropop acts Aira Mitsuki, Saori@destiny, Capsule, Meg and Perfume, and some Korean pop groups, most notably 2NE1 and Big Bang.
Linear prediction coding
Linear prediction coding is also used as a musical effect (generally for cross-synthesis of musical timbres), but is not as popular as bandpass filter bank vocoders, and the musical use of the word vocoder refers exclusively to the latter type of device.
Ring modulator
Although ring modulation usually does not work well with melodic material, it can be used to make speech sound robotic; a minimal sketch appears after this list. A well-known example is its use to create the robotic voices of the Daleks in Doctor Who.
Speech synthesis
Robotic voices in music may also be produced by speech synthesis. This does not usually create a "singing" effect (although it can). Speech synthesis means that, unlike in vocoding, no human speech is employed as the basis. One example of such use is the song "Das Boot" by U96; a more tongue-in-cheek musical use of speech synthesis is MC Hawking. Most notably, Kraftwerk, who had previously used the vocoder extensively in their 1970s recordings, began opting for speech-synthesis software in place of vocoders starting with the 1981 album Computer World; on newer recordings, in the reworked versions of older songs that appear on The Mix, and in the band's current live show, the previously vocoder-processed vocals have been almost completely replaced by software-synthesized "singing".
Comb filter
A comb filter can be used to single out a few frequencies in the audio signal, producing a sharp, resonating transformation of the voice. Comb filtering can be performed with a delay unit set to a high feedback level and a delay time of less than a tenth of a second, as in the sketch below. Of the robot-voice effects listed here, this one requires the fewest resources, since delay units are a staple of recording studios and sound-editing software. As the effect deprives a voice of much of its musical qualities (and has few options for sound customization), the robotic delay is mostly used in TV and film applications.
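As a rough illustration of the pitch-correction effect described above, the following sketch (in Python with NumPy) quantizes a detected fundamental frequency to the nearest equal-tempered semitone. The note-snapping function is a simplified assumption for illustration, not Antares' actual algorithm; a real pitch corrector also has to detect the pitch and resynthesize the audio at the corrected pitch.

```python
import numpy as np

A4 = 440.0  # reference tuning in Hz

def quantize_to_semitone(freq_hz):
    """Snap a detected fundamental frequency to the nearest equal-tempered note."""
    midi = 69 + 12 * np.log2(freq_hz / A4)          # continuous MIDI note number
    return A4 * 2 ** ((np.round(midi) - 69) / 12)   # back to Hz at the nearest note

# A smooth pitch glide becomes a staircase of discrete notes; the abrupt jumps
# between steps are what give the robotic "Cher effect" its character.
glide = np.linspace(220.0, 330.0, 8)                # sliding pitch track in Hz
print([round(quantize_to_semitone(f), 1) for f in glide])
```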
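Ring modulation itself is simple to sketch: the voice is multiplied, sample by sample, by a carrier (classically a sine wave), producing sum-and-difference sidebands that give speech a metallic quality. The carrier frequency below is an arbitrary assumption for illustration.

```python
import numpy as np

def ring_modulate(voice, fs, carrier_hz=30.0):
    """Multiply the voice by a sine carrier (carrier_hz is an illustrative choice)."""
    t = np.arange(len(voice)) / fs
    return voice * np.sin(2 * np.pi * carrier_hz * t)

# Usage (assuming a mono speech signal loaded elsewhere, e.g. with the soundfile package):
# robot = ring_modulate(voice, fs)
```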
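The comb-filter (feedback-delay) effect can likewise be sketched in a few lines; the delay time and feedback amount below are illustrative values consistent with the description above (a delay under a tenth of a second with high feedback).

```python
import numpy as np

def comb_filter(x, fs, delay_s=0.02, feedback=0.85):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    d = int(delay_s * fs)                  # delay length in samples (20 ms here)
    y = np.asarray(x, dtype=float).copy()
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]        # recirculate the delayed output
    return y / np.max(np.abs(y))           # normalize to avoid clipping

# Usage (assuming a mono speech signal loaded elsewhere):
# robotic = comb_filter(voice, fs)
```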

References

  1. ^ "Das Siemens-Studio für elektronische Musik von Alexander Schaaf und Helmut Klein" (in German). Deutsches Museum. Archived from the original on 2013-09-30.
  2. ^ Siemens Electronic Music Studio in Deutsches Museum (multi-part video). Archived from the original on 2021-12-19. Details of the Siemens Electronic Music Studio exhibited at the Deutsches Museum.
  3. ^ Harald Bode (October 1984). "History of Electronic Sound Modification". Journal of the Audio Engineering Society. 32 (10): 730–739.
  4. ^ Spencer, Kristopher (2008). Film and television scores, 1950–1979 : a critical survey by genre. Jefferson, N.C.: McFarland & Co. ISBN 978-0-7864-3682-8.
  5. ^ Mark Jenkins (2007). Analog synthesizers: from the legacy of Moog to software synthesis. Elsevier. pp. 133–4. ISBN 978-0-240-52072-8. Retrieved 2011-05-27.
  6. ^ Sound On Sound, February 1999. Sue Sillitoe. "Recording Cher's 'Believe'". Historical Footnote by Matt Bell: "Cher's 'Believe' (Dec 1998) was the first commercial recording to feature the audible side-effects of Antares Auto-tune software used as a deliberate creative effect... As most people are now all-too familiar with the 'Cher effect', as it became known..."