Where’s generative AI going? Look to how tech changed music recording

A couple of weeks ago, I shared an initial take on the uses of generative AI and ChatGPT. Since then, each day seems to bring another story about this technology’s uses and impact. (In fact, check out the AI-generated image accompanying this blog post.) Recently, the New York Times ran two articles on generative AI on a single day—one highlighted how professors at colleges and universities have had to adapt their approach to teaching, and another discussed ChatGPT’s threat to the foundations of our democracy. The conversation has shifted from awe and excitement to trepidation at lightning speed.

So how does one make sense of generative AI and gain some perspective on what lies ahead? The history of how technological advancements changed the music industry could offer an instructive point of reference—or be interesting, at the very least. A few inflection points come to mind.

Magnetic tape ushers in multitrack recording, everyone rejoices

Before the 1940s, recording was primitive. It involved gathering all the musicians in a studio, arranging them around microphones, and having them play a song together in its entirety. On the other side of the glass, engineers would mix the performance as it was cut into a wax master disc, which was then used to create vinyl records. If one of the musicians hit a clunker, the whole group had to start over. You can imagine how cumbersome and slow the process was.

In the 1940s, the incorporation of magnetic tape enabled multitrack recording, in which individual parts could be recorded onto the tape at different times. That meant a band could record the backing track and then allow the singer or a soloist to add their part separately. All of the tracks could then be mixed later, resulting in better fidelity. No one was complaining that multitrack recording was somehow a perversion of real-time performance. Just the opposite—it opened up exciting new possibilities and allowed everyone involved to work more efficiently.

For instance, the Beatles recorded most of their albums using just four tracks. The limitations of the four tracks forced the band to get creative in recording, much to the chagrin of the engineers, who felt some of these methods led to a loss in quality. But in the process, these recordings introduced new sounds to listeners and created a template for legions of bands to follow.

Roland launches the drum machine, drummers despair

For many musicians and music fans, 1980 was a critical year. Roger Linn launched the Linn Drum, which sampled real drums and featured a huge amount of memory. Meanwhile, Roland introduced its TR-808, a drum machine with distinctive sounds (such as snare, bass drum, and hi-hat). These products immediately addressed one persistent challenge of music recording: human musicians are imperfect; they speed up, slow down, and play louder when they get excited. In the age of multitrack recording, songs were often built from the ground up, with drums providing the foundation. Bands and producers had to get the drum part right before they could start layering everything else over the top. That could take a lot of time. (Oasis sacked their original drummer because he wasn’t capable of laying down drum parts quickly.)

Not with the Linn Drum and 808, however. They provided a perfect, metronomic beat at a consistent volume. Artists from Marvin Gaye to Hall and Oates used drum machines on hit songs. Rap groups made the 808 a foundational part of their sound. As Questlove of the Roots put it, the 808 was “the rock guitar of hip-hop.”

While drum machines made a lot of people happy, and many consumers didn’t care about the trade-off, drummers were pissed off. Technology had just taken them out of the picture, and they decried what this development would mean for music. Now, of course, the drum machine has become just another sonic color in the recording palette, and drummers (the good ones) still have gigs. But it did cut the heart out of the session business for them. They would soon have a lot of company.

Pro Tools allows everyone to record in their bedroom, studio owners despair

For decades, recording studios had an unassailable competitive advantage: producing professional, high-quality music recordings required a ton of analog equipment (such as recording consoles, tape machines, and effects like reverb and compression). The price tag was beyond the reach of smaller studios, so bands and producers had no choice but to rely on recording studios to get a good sound.

All of that began to change when Pro Tools hit the market in the 1990s. It was a digital audio workstation—essentially a digitized mixing console that could be augmented with a nearly infinite catalog of plug-ins. Want to duplicate two measures of a song? Just cut and paste to your heart’s desire. Want your guitar to sound like a ’57 Les Paul through a vintage Fender amp? No problem! Suddenly, every home studio had access to features and functionality that not even the best analog equipment could match. (It’s a little like being able to stream nearly every song in the world. Who needs a record collection?)

When I was playing music, I experienced the change firsthand. We recorded our first album using tape and other analog equipment; the next one was Pro Tools and hard drives all day. Mess up a part? Just take it from another section of the song. Were the bass and drum parts a bit off on the second verse? Line them up digitally. It’s the equivalent of using Word to move text around.

The impact of Pro Tools has been multifaceted. The recording process became cheaper, allowing more middling bands to put out more middling music. Some would say technology allowed artists who couldn’t carry a tune to top the charts, cementing the style-over-substance trend that MTV kicked into overdrive. Pro Tools played a major role in destroying the business model of many professional studios. And it put more power in the hands of producers to create their own tracks without a musician in sight.

*****

So where does that leave us? Regardless of technological advances, a successful artist still needs to have something new or interesting to say and the ability to realize their sonic vision. For every Jack White, who has become synonymous with carrying the torch for analog recording methods, you have a Mark Ronson, who was able to build the tracks for Amy Winehouse’s Back to Black album using digital technologies before turning the detailed guide tracks over to the Dap-Kings to play live—which Ronson said sounded “a million times better” than computer-treated tracks. (Watch this video if you want to get a sense of how tracks are built from the ground up.) Both find creativity in limitations—White in analog technology, Ronson in his ability to play what he’s hearing in his head.

Justin Hurwitz, who has worked with director Damien Chazelle on all of his films, including the recent Babylon, similarly used technology to create complex arrangements in the studio that served as a soundtrack for shooting key scenes.

When technology facilitates the creative process and enables artists to bring their ideas to life, it should be celebrated. And if generative AI can support the development and articulation of ideas, we owe it to ourselves to figure out how best to harness it.

But don’t mistake generative AI for a replacement for true creativity and a unique perspective. Even with this technology, people still make the world go round—for the time being.

Scott Leff

Scott is the founder of LEFF. He’s spent his career helping executives and subject matter experts tell their story in a compelling way. In the process, he’s had the opportunity to work with C-suite executives, politicians, academics, and Olympians, not to mention dozens of talented writers, editors, and designers in the business world. Scott developed the concept of “lean content creation” as a cost-effective way to support comprehensive, integrated communication strategies.