One morning early this winter, I was in the car listening to the radio, growing restless with NPR’s downer economic news. I decided to check in on the Pop music world and surfed between a couple of Top 40 stations. I stopped on “Love Lockdown,” the new single from Kanye West.
It’s a pretty interesting and unique track, with some great beats and killer melodies. But the most noticeable aspect of the recording is the sound of Kanye’s singing voice (which, judging by his live TV performances recently, is just plain awful). And that sound is anything but unique in today’s mainstream music.
Kanye’s warbling, robotic voice comes courtesy of a studio trick that corrects the pitch of a vocalist (i.e., it forces the sound into key). Best known as “Auto-Tune,” the brand name of the software, the corrective tactic is the overused trend of the year, audible on Pop, R&B and Hip Hop songs up and down the charts. Once referred to as “the Cher effect” (her hit “Believe” a decade ago was the first time most listeners heard the sound), it is now often called “the T Pain effect.” The hit singer/rapper/producer uses Auto-Tune as much as Jimi Hendrix used his guitar.
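For the technically curious, the core idea behind pitch correction is simple to sketch. This is a toy illustration, not Antares’ actual algorithm: it just snaps a detected frequency to the nearest note of the equal-tempered scale. (Real pitch-correction software also tracks pitch over time, controls how fast it retunes, and can restrict itself to a chosen key; the instant, zero-speed snap is exactly what produces the robotic warble.)

```python
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_semitone(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    # Distance from A4 in semitones (fractional when the note is off-pitch)
    semitones = 12 * math.log2(freq_hz / A4)
    # Round to the nearest whole semitone, then convert back to Hz
    return A4 * 2 ** (round(semitones) / 12)

# A note sung slightly flat of A4 gets yanked up to exactly 440 Hz
print(round(snap_to_semitone(432.0), 1))  # 440.0
```

The rounding step is the whole trick: a singer a third of a semitone flat lands dead on pitch, but so does every bit of natural slide between notes, which is why the corrected voice jumps in discrete steps instead of gliding.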
In Pop and Hip Hop production, once something original is proven to be successful it becomes omnipresent for the next several months. When Missy Elliott got huge, Timbaland’s sideways, cicada-like beats and chirps were mimicked successfully by a multitude of other producers. Likewise, The Neptunes’ minimalist approach and Kanye West’s use of sped-up old R&B hooks became the sound of the charts once those artists scored huge hits.
So, today, it seems like every other song on Pop radio features the Auto-Tuned sound. On MTV recently, P Diddy said that he was giving T Pain (who is doing work on the Didster’s new album) extra royalties simply for his use of Auto-Tune, as if it were something T owned outright.
Though he certainly is one of the biggest over-users of the effect, T Pain is far from the first to use the software. The trick has been a trade secret for quite a while. When “Believe” hit in 1998, the producers claimed Cher was singing through a vocoder, a device pioneered in music by electronic innovators Robert Moog and Wendy Carlos and popularized by artists from ELO to Roger Troutman to Herbie Hancock.
Some have speculated that Cher’s producers were simply trying to protect their studio secret. Today, Auto-Tune is anything but hidden, but the original intent of the software was to covertly fix bum notes. On the poppier Emo band recordings of the past five or six years, it seems especially prevalent, though hardly blatant. Most of the time the average person can’t even tell when something has been Auto-Tuned.
Some producers initially balked at Auto-Tune, viewing it as a crutch for the undertalented (no doubt there are many who still feel this way). But hardly any producer is a purist when it comes to recording. Distortion, delay, chorus, flanger, reverb and compression are just a few of the “artificial” tricks that almost all producers use. And overdubbing instruments and vocals — hardly a “natural” process — is done on almost every recording you buy.
There are very few singers who go into a studio and sing a song from start to finish in one take. So those who think Auto-Tune is “cheating” should ask themselves if a producer asking a vocalist to sing the same line of a song over and over until he or she gets it right is similarly deceptive.
But with the rise of Auto-Tune as a flourish, the method has gone from a quick, subtle fix to an artistic statement. In my experience in recording studios, pitch correction has been alternately viewed as a joke or a last resort. But today’s artists and producers have gone from faking it to flaunting it.
It’s kind of like if guys stopped stuffing socks down the front of their pants to make them appear well-endowed and instead started wearing 12-inch dildos on the outside of their pants, a la Marky Mark in Boogie Nights.
The overcorrected sound of mainstream R&B, Hip Hop and Pop could be viewed as simply part of the evolution of recording technology, for better or worse. Most music being made today sounds different from the music made 35 years ago. Today, records are made to be listened to on tiny iPod earbuds, so the sound is mixed louder and compressed to the point where recordings lose their high and low dynamics. (A friend of mine recently told me he was burning some of his vinyl onto CDs, but it was difficult because the software would read any “quiet” spots on the albums as the end of a song. So a soft breakdown is lost to technology; when the track kicks back in, it is read as a new track.)
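The track-splitting glitch my friend ran into is easy to picture in code. Here’s a minimal sketch of the logic (hypothetical threshold and gap values, not any particular ripping program’s actual behavior): anything quieter than a threshold for long enough gets treated as the space between songs, so a soft breakdown splits one song into two.

```python
def split_tracks(samples, threshold=0.01, min_gap=5):
    """Split a sequence of audio samples into 'tracks' at long quiet stretches.

    A quiet breakdown lasting at least min_gap samples is misread as a gap
    between songs -- the problem described above. Threshold and gap values
    here are illustrative only.
    """
    tracks, current, quiet_run = [], [], 0
    for s in samples:
        current.append(s)
        quiet_run = quiet_run + 1 if abs(s) < threshold else 0
        if quiet_run >= min_gap:
            tracks.append(current[:-quiet_run])  # drop the silent tail
            current, quiet_run = [], 0
    if current:
        tracks.append(current)
    return tracks

# One song with a quiet breakdown in the middle comes back as two "tracks"
song = [0.5] * 10 + [0.0] * 6 + [0.5] * 10
print(len(split_tracks(song)))  # 2
```

Real ripping software measures the gap in seconds of audio rather than raw sample counts, but the failure mode is the same: it has no way to tell a hushed passage from the silence between songs.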
Some purists argue that Auto-Tune is just the latest example of the coldness modern technology has brought to music. The stripping away of humanity.
Maybe the next trend will be actual robots that sing. Maybe someone will soon figure out a way to input lyrics into a computer and have it sung back in the melody of one’s choosing. Recording software like GarageBand has already made the process of putting music to tape (or to, uh, “digital file”) about as easy as playing a video game.
Or maybe the next trend will be back to basics, a direction that, fads be damned, always seems to make its way back around. In the ’40s, musicians would simply set up in a room and play. The engineer would capture the sound through a few microphones set up around the room. And the sound consumers would hear would be the sound those musicians heard when they created it.
Now there’s a trick I’d like to see T Pain pull off.