Finding Your Voice Through Auto-Tune

How the infamous pitch correction software set the stage for a generation of artists to unlock new modes of expression, spanning decades and enduring shifting tides of popularity along the way.


There are few musical technologies as ubiquitous, maligned and misunderstood as Antares’ Auto-Tune. What was conceived in 1997 as a discreet tool for cleaning up vocals and optimizing the recording process has become an inescapable staple of modern music, with Pitchfork going as far as dubbing Auto-Tune “the sound of the 21st century so far” in 2018. Every time naysayers have proclaimed the effect has reached its peak, it has further permeated the popular music landscape—and, over its 20+ year run, it has made its way into pop, hip-hop, electronic, R&B and indie music (not to mention laying the groundwork for genres like hyperpop).

There are many reasons for Auto-Tune’s sustained relevance, and while its aesthetic compatibility with our oversaturated digital culture is notable, I am most fascinated by the freedom it allows artists to find and experiment with their voices. Rather than applying Auto-Tune after the fact to correct imperfect takes, many artists sing or rap through the technology from the start, finding ways to use their voices that were previously unthinkable. Just listen to the first 30 seconds of wokeups’ “Fragged Aht,” or the last 30 seconds of Charli XCX’s “Thoughts”—these explosions of heavily Auto-Tuned runs and synthesizers would not have happened had the vocal processing not been embedded in the creative process. When vocalists use Auto-Tune while recording, they unlock a new world of sonic possibilities.

Alexander Panos, an electronic musician and composer who frequently experiments with vocal manipulation and has thoroughly investigated the inner workings of Auto-Tune, comments that “Many vocalists have developed techniques for performing and recording with Auto-Tune on their voice. There are ways you can sing that affect the behavior of the pitch correction; you can really wield it like an instrument.” This phenomenon, like the technology itself, has bled into a range of musical styles, and you’re just as likely to encounter it on the next chart-topping hip-hop album as you are on your older brother’s favorite obscure electronic record. While its path into the limelight has not been without controversy, Auto-Tune has helped shape the vocal deliveries of an array of artists ranging from rappers to bedroom singers to pop stars. But how exactly did we get here?

When mathematician and geophysical engineer Dr. Andy Hildebrand was working for Exxon, he wrote a series of algorithms that processed seismic waves to help the company find drilling sites. However, after leaving Exxon, going to music school and founding Antares Audio Technologies in 1989, Hildebrand uncovered a wildly different application for his technology. When he asked his colleagues at lunch what still needed to be invented, someone replied, “a box that lets me sing in tune!” A joke at the time, the response nonetheless sent Hildebrand down a pivotal rabbit hole, in which he realized his same algorithms could be applied to pitch. Auto-Tune was born.

Hildebrand’s original vision, however, was quite different from the Auto-Tune sound we all know and (maybe) love. The software was intended for subtle pitch correction, designed to modify pitch while retaining other timbral qualities; this could expedite the recording process by allowing artists to finish songs with fewer takes, as less splicing was required. Embedded in the software was a dial to control how quickly the pitch corrects, as different tempos required different speeds for the correction to sound natural. This re-tune speed dial essentially controls how noticeable the effect is. “Just for kicks, I put a zero setting, which changed the pitch the exact moment it received the signal,” Hildebrand said. At the zero setting, the pitch snaps to the grid immediately, generating the hard-tune sound Auto-Tune became known for. Music would be very different today had he not added this feature, which was merely an accidental aspect of the initial design.
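To make the re-tune dial concrete, here is a minimal sketch, in Python, of how such a parameter could behave. It is a simplified model under assumed conventions (pitch as fractional MIDI note numbers, one value per analysis frame), not Antares’ actual algorithm, which preserves far more of the input’s natural contour.

```python
# Hypothetical sketch of a re-tune speed parameter; not Antares' code.
# Pitches are fractional MIDI note numbers, one per analysis frame.

def nearest_scale_pitch(midi_pitch: float, scale: frozenset[int]) -> int:
    """Find the MIDI note closest to midi_pitch whose pitch class is in scale."""
    candidates = [p for p in range(128) if p % 12 in scale]
    return min(candidates, key=lambda p: abs(p - midi_pitch))

def retune(detected: list[float], retune_ms: float, frame_ms: float = 1.0,
           scale: frozenset[int] = frozenset({0, 2, 4, 5, 7, 9, 11})) -> list[float]:
    """Glide each frame's detected pitch toward the nearest allowed note.

    retune_ms == 0 models the 'zero setting': pitch snaps to the grid
    instantly, producing the hard-tune artifact. Larger values close the
    gap gradually, which reads as natural correction.
    """
    corrected = []
    current = detected[0]
    for pitch in detected:
        target = nearest_scale_pitch(pitch, scale)
        if retune_ms == 0:
            current = float(target)                 # instant snap to the grid
        else:
            alpha = min(1.0, frame_ms / retune_ms)  # fraction of the gap closed per frame
            current += alpha * (target - current)   # smooth glide toward the target
        corrected.append(current)
    return corrected
```

With retune_ms at 0, every frame lands squarely on the grid, reproducing the instant snap behind the hard-tune sound; raise it, and the correction eases in gently enough to pass unnoticed.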

Fundamentally, Auto-Tune receives an input audio signal, detects the pitch in real time and adjusts it to a predefined scale or set of pitches (the speed at which it makes this correction is determined by the aforementioned re-tune dial). It does this with incredibly low latency (about 2 milliseconds). Panos explains that this is essential to its real-time usability. “Other pitch detection methods that employ techniques like the Fast Fourier Transform require many more samples to accurately represent pitch information—especially at perceptually relevant frequencies—resulting in higher latencies,” he notes. Auto-Tune’s fast pitch detection algorithm uses autocorrelation, “a function which measures how similar a signal is to a time-delayed copy of itself.” I’m barely scratching the surface here, and if you’d like to delve deeper into the software’s inner mechanisms, Panos suggests checking out the patent as well as Xavier Riley’s lecture on the topic.
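For the curious, here is a bare-bones Python sketch of the autocorrelation idea Panos describes, assuming a short mono frame at least a couple of pitch periods long. Production detectors add windowing, peak interpolation and voicing checks, so treat this as an outline rather than Auto-Tune’s actual detector.

```python
import numpy as np

def detect_pitch_autocorr(frame: np.ndarray, sample_rate: int,
                          fmin: float = 60.0, fmax: float = 1000.0) -> float:
    """Estimate a frame's fundamental frequency via autocorrelation.

    Autocorrelation measures how similar the signal is to a time-delayed
    copy of itself; the lag with the strongest match (within a plausible
    vocal range) corresponds to one pitch period.
    """
    frame = frame - frame.mean()                    # remove DC offset
    corr = np.correlate(frame, frame, mode="full")  # brute-force autocorrelation
    corr = corr[len(corr) // 2:]                    # keep non-negative lags only
    lo = int(sample_rate / fmax)                    # shortest plausible period
    hi = int(sample_rate / fmin)                    # longest plausible period
    lag = lo + int(np.argmax(corr[lo:hi]))          # best-matching delay
    return sample_rate / lag                        # period (samples) -> Hz

# Example: a 440 Hz sine in a 2048-sample frame at 44.1 kHz
sr = 44100
t = np.arange(2048) / sr
print(detect_pitch_autocorr(np.sin(2 * np.pi * 440 * t), sr))  # ~440 Hz
```

This brute-force version trades efficiency for readability; the real plug-in has to compute a similarity measure like this fast enough, on short enough frames, to stay within the low-latency budget described above.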

Auto-Tune was the first to do it, and as such became colloquially associated with all pitch correction software, though it is now only one of many similar tools. Being first meant its particular qualities and quirks took center stage, and among a sea of pitch correction plug-ins, Auto-Tune remains omnipresent (after various updates, Antares eventually reintroduced “Classic” mode due to popular demand). One notable counterpart is Melodyne, launched by the music software company Celemony in 2001, which is distinct from Auto-Tune in that it does not correct pitch in real time. Instead, it allows for more complete and subtle control over pitch and a number of other audio qualities that previously could not be edited independently. When a boomer complains about music being “too Auto-Tuned” but there are no noticeable artifacts, the culprit is likely actually Melodyne, as it can yield a perfect performance more discreetly. On the flip side, when Auto-Tune artifacts can be heard, it is usually intentional, for the effect.

The first instance of hard tuning arrived with Cher’s 1998 chart-topping single “Believe”—the effect is on full display as soon as the first verse hits, where she sings, “And I can’t break through,” and “It’s so sad that you’re leaving.” Twenty-five years on, it remains one of the best-selling singles of all time, and her employment of Auto-Tune forever changed popular music. As many scrambled to uncover the secret behind her vocal sound, Cher’s producer Mark Taylor tried to play it off as a vocoder, a technology which sounds somewhat similar but completely replaces the original vocal tone. Auto-Tune, on the other hand, can achieve a robotic effect while retaining a human quality, and the song’s massive success rendered Taylor’s efforts to conceal its use futile. Soon after “Believe,” Madonna would use the effect on 2000’s “Impressive Instant” and Daft Punk followed suit on their 2001 album Discovery (see “One More Time” and “Digital Love”). Auto-Tune was off to the races.

The technology gained more traction in the mid-to-late 2000s, with T-Pain being one of the first artists to fully embrace the sound. His 2005 debut record, Rappa Ternt Sanga, put Auto-Tune front and center (see “I’m Sprung”), as did his 2007 follow-up Epiphany (see “Buy U A Drank (Shawty Snappin’)”). Auto-Tune became integral to T-Pain’s signature sound and immediately influenced the following years of popular music, particularly in hip-hop, where a tectonic shift began: the genre started to gravitate toward melodies instead of lyrics. Lil Wayne helped cement this change, with 2008’s “Lollipop” becoming his first #1 hit single, notably without the dense rapping he was previously known for.

Jamie Foxx, Snoop Dogg, T.I., Akon and the Black Eyed Peas were also experimenting with Auto-Tune around the same time, but no use of the effect was as notorious at the time as Kanye West’s on 808s & Heartbreak, a record which Kanye referred to as “Auto-Tune meets distortion, with a bit of delay on it and a whole bunch of fucked-up life.” The whole album is dripping in the effect, which is all the more apparent due to the extremely pitchy input vocals, which cause Auto-Tune to produce more artifacts. 808s would go on to have unrivaled influence on hip-hop in the decade to come, not just for its blatant use of Auto-Tune but because of how Kanye employed it. He used Auto-Tune as a conduit for conveying his depressed state, with half-human vocals enhancing the bleak and somewhat resigned emotions he was channeling. This spawned a world of “emo rap,” opening the doors for MCs to exhibit more vulnerability and wield Auto-Tune to manifest their self-expression.

While Auto-Tune would go on to dominate music in the following decade and beyond, many were not happy with this turn of events. In 2009, Jay-Z expressed this sentiment in “D.O.A. (Death of Auto-Tune),” and in 2010, TIME placed Auto-Tune on its list of the 50 Worst Inventions. But why was it so controversial (and why does it remain so, to a degree)? Guitarists aren’t ridiculed for playing with frets, and Auto-Tune is functionally the vocal equivalent. If we come to know guitarists by how they play the guitar, we should be able to pick up on the intricacies with which vocalists play Auto-Tune. When critics argue the effect contributes to the homogenization of music, they miss the reality that artists can wield Auto-Tune in wildly different ways, making it an extension of their voice rather than a reduction of it. Even if Kanye and T-Pain both use Auto-Tune, it would be quite difficult to mistake one’s vocals for the other’s, and Kesha’s pioneering implementation of the effect should’ve silenced those who complain that the technology “diminishes personality.”

Some critics also complain that Auto-Tune is inauthentic, failing to grasp that vocals were being comped together and enhanced with delay and reverb long before the software existed. Panos elaborates further, commenting that “many [critics] feel that if a piece of music was performed live or recorded with minimal post-processing, it is intrinsically ‘better’ than music that wasn’t. But it shouldn’t matter if someone can physically perform a musical idea; what’s important is whether or not the idea resonates with its creator. If technology enables you to fully realize your artistic vision, why impose limits on yourself?”

Naysayers also argue that Auto-Tune is de-skilling and requires no talent to use. When T-Pain demonstrated his vocal chops without any tuning, it was all over the news—as many assume Auto-Tune is always used as a crutch, even though Justin Vernon and James Blake proved this narrative false a decade ago (albeit in indie music, where some might be more willing to ascribe artistic merit to unconventional choices). Bad Auto-Tune still sounds bad, and as T-Pain put it in a recent NPR interview, “You can’t just throw on Michael Jordan’s shoes and think that you’re going to be the greatest basketball player of all time.” Furthermore, Auto-Tune has allowed artists to unlock new modes of expression, and learning to shape your voice to sing with Auto-Tune is a skill set in itself. “I might intentionally sing out of tune to activate the pitch correction at key moments because I desire the artifacts produced by the tuning process,” Panos notes. “I also get a lot of mileage out of making odd vocalizations or designing vocal-like sounds and running these through Auto-Tune.” These novel ways of utilizing the voice are becoming commonplace in modern music and really started gaining steam over the last decade.

Around the turn of the 2010s, a new sound in hip-hop emerged that would take the genre to its mainstream peak: trap music. Complete with rattling hi-hats and booming 808s, the sound became so ubiquitous that, by the start of the 2020s, it had permeated pop and country music as well. The other staple of trap music is, of course, Auto-Tune, and it was none other than Future who brought the software into that world. Notably, unlike many aforementioned Auto-Tune pioneers, Future utilized the effect while rapping. “When I first used Auto-Tune, I never used it to sing. I wasn’t using it the way T-Pain was,” he said in an interview with Complex. “I used it to rap because it makes my voice sound grittier.” Future saw potential in this vocal processing to heighten the coldness of his delivery, and in doing so inspired countless successors to follow suit.

Following Future, the sound became inescapable. Some of the biggest artists to emerge from the scene, like Travis Scott, Lil Uzi Vert, Migos and Lil Yachty, all made Auto-Tune an essential part of their vocal chains. Migos’ application of the plug-in is notable in that each member took a different approach, using varying degrees of subtlety to differentiate their performances. The plug-in helped Scott realize a new psychedelia in hip-hop, where reverb-soaked, Auto-Tuned rapping and ad-libs lay the foundation for his signature spacey trap atmospheres. Across the board, Auto-Tune also contributed to a blurring of rapping and singing: with the effect on, artists could seamlessly switch back and forth between the two, dissolving what used to be a robust distinction.

Importantly, Future rapped with Auto-Tune already on, learning to tailor his vocal approach to the effect. His late engineer Seth Firkins elaborated on this process, noting that “while Future is tracking his vocals, he is hearing himself with Auto-Tune, a quarter-note delay, and reverb. When he hears himself with these effects, his confidence is up, and he can experiment a lot more. Because Auto-Tune pegs him to the right pitches, he can try any shit, and it’ll still sound cool.” In further popularizing the effect, Future also helped legitimize it, turning it into practically a prerequisite for trap without receiving the same ridicule T-Pain and Kanye did. The tide was turning: more people became charitable toward the artistic merits of Auto-Tune, even as the plug-in drifted a far cry from its intended use as subtle pitch correction. Firkins emphasized that Auto-Tune “is, in a sense, an instrument. It’s not just some [effect]—it’s an integral part of [Future’s] emotion and his sound and his delivery, and how he expects things he says and notes he hits to be bent and twisted.”

A more recent development in hip-hop is rage music, a visceral sub-genre of trap that features dense, repetitive synth layers and vocal acrobatics made possible by Auto-Tune. The ad-libs are half the battle, and Auto-Tune liberates vocalists to experiment with and find extreme affectations that become signatures of their sound—signatures that likely wouldn’t have been attempted had no effects been on their voices while experimenting in the first place. Rage’s pioneer, Playboi Carti, uses smaller doses of the effect and is incredibly good at altering his voice to work with his vocal chain (akin to Young Thug). Yeat, another popular rage artist, leans more heavily on vocal effects, ranging from Auto-Tune to formant shifting (which can be done inside Auto-Tune or via alternative plug-ins) to distortion and more. These vocal effects have become part of certain artists’ (Future, Travis Scott, Yeat, etc.) core identities, essential to their sound and fuel for eager imitators who take to the internet to dissect their vocal chains.

Bladee has been making waves on the internet alongside his rap collective Drain Gang for over a decade, and his signature style revolves around the use of extreme Auto-Tune. His experimentation shines brightest when he layers many vocal takes, creating a cascade of Auto-Tuned runs that coalesce into something heavenly, something drugged-out and disorienting, or both. Listening through his discography, you can hear how much he has evolved as a singer, always alongside Auto-Tune. In his early work, his vocals were often grating, producing harsh artifacts which were fairly polarizing. However, after years of singing through the effect, he has unlocked a variety of ways to wield it. In a sense, he has simply become a better vocalist—but, more precisely, he has learned to use his voice in ways that allow him more control over his pitch correction, and therefore more flexibility in the soundscapes he creates.

Not only has Auto-Tune had massive influence over the direction of pre-existing genres, it has also played a major role in the inception of newer ones—such as hyperpop. Emerging in the 2010s out of PC Music and other pockets of the internet, hyperpop often takes a tongue-in-cheek approach to deconstructing popular music, employing experimental techniques including extreme vocal editing with Auto-Tune, pitch shifting and more. A.G. Cook’s take on Auto-Tune often involves a breathy delivery, while Caroline Polachek has investigated new patterns by running her falsetto through the hard-tuned effect. Interestingly, hyperpop (and the genres spawned in its wake) uses Auto-Tune and vocal processing in two further ways, distinct from wielding them as instruments to find novel vocal possibilities (though that of course also happens here): for comfort and confidence, and for identity affirmation.

Auto-Tune makes artists more confident when they sing, lowering the barrier to entry for this aspect of the creative process—and heightening their delivery when recording. So much of a musical performance, particularly a vocal one, comes down to psychology, and if an artist feels emboldened to crush a take, they are far more likely to do so. In a 2017 interview with The FADER, Charli XCX echoed this sentiment, stating, “I record with Auto-Tune all the time, and I basically have done [it] for the last three years solid. I feel very insecure singing without it. I feel nervous and really panicked. It allows me to be way more fearless when I’m in the studio, because I hate the sound of my own voice without it.”

Charli is far from alone in feeling that the effect has redefined her vocal process. Panos tells me that Auto-Tune was pivotal in his coming into his own as a vocalist. “In many ways, Auto-Tune (and pitch correction in general) enabled me to begin my journey as a singer,” he says. “Allowing a novice to hear a locked-in version of an otherwise shaky vocal performance gives them the confidence to pursue the practice further. The empowerment this technology offers is exciting.” Panos’ experience is far from singular, and as many new artists learn to use their voices while learning to write and produce music, starting with a solid vocal foundation can be an incredibly motivating part of what might otherwise be a difficult process.

Auto-Tune’s ability to serve as a tool for identity affirmation is obvious when listening to the vocal manipulation of 100 gecs, a trailblazing hyperpop duo consisting of Dylan Brady and Laura Les. For one, it is clear that Brady and Les, while both using pitch correction, dial their effects in very differently, setting the two apart in their music in a way not dissimilar to Migos. They often employ pitch-shifting, sometimes in the name of silliness, but in Les’ case the incredibly high “nightcore”-inspired vocals serve another purpose: her trans identity. As exemplified in her song “how to dress as a human,” Les has noted that her use of Auto-Tune and pitch-shifting helps alleviate the dysphoria she experiences when listening to her unprocessed vocals. In this way, her processed voice becomes, in a sense, her “real” voice—and in live shows, Les is known to leave these effects on in between songs. Of course, not all artists who implement similar vocal processing techniques use them to this end, but some notable counterparts are Dorian Electra and Shamir.

So much of what we know, love, loathe, revere and admonish in contemporary music has been informed by what was once a tool intended for discreet pitch correction. While this is fascinating in itself, it also speaks to a broader trend: Vocal experimentation and manipulation are at the forefront of innovation in modern music. Although pitch correction was not the first technology implemented to this end—Prince used pitch shifting in the ’80s to assume an androgynous alter ego on Sign O’ The Times, and artists were playing with vocoders and talk boxes decades before then—Auto-Tune greatly expanded the possibilities of vocal processing. There’s something uniquely powerful about hearing the human voice, and in turn hearing it warped, sped up, mangled and mutilated to the point of being barely recognizable—and yet it still is. “Our brains are especially sensitive to anything that resembles a vocalization,” Panos says. “I don’t believe I’m alone in thinking that the human voice is the most interesting, complex, and expressive sound.” He is definitely in good company; musicians in the 21st century are feverishly finding new avenues to realize their artistic visions through manipulated vocalizations, uncovering invigorating ways to articulate our humanity in a digital age. In a sense, it all goes back to that fateful day in October 1998, when Cher released “Believe” into the world, changing the possibilities of vocal expression forever.
