Played by Humans

By Jason Toney
It hasn’t happened to me recently, but late last year, my regular Saturday morning ritual of exploring new music was often derailed by self-published, artificial sound-alikes. Most times, I sensed artifice in seconds and moved on, but occasionally, I’d tap along a while before questioning what I was hearing. In each instance, I left disappointed and deflated by AI-generated music.
There’s a lot of concern about the impact of artificial intelligence on our lives today. I’ve heard claims that AI-generated artists are dominating the charts in France and Korea, but after some digging, I found no evidence to support them. In the States, a few of these algorithmically generated songs have appeared on Billboard or Spotify charts, yet their influence is far less than headlines suggest. Recently, an AI artist topped the US Country Digital charts with a formulaic song that mimics the genre. The reality: the song earned about 3,000 downloads that week, enough to manipulate the charts. Its novelty faded quickly.
Research shows that even as synthetic music becomes harder to detect, consumers don’t want it.
Just because AI can generate something inexpensive, fast, and perhaps better than what a novice could do on their own doesn’t mean it’s good. It doesn’t make it art.
That doesn’t mean it isn’t an additional hurdle for artists as they navigate the already precarious world of digital streaming platforms. Jazz pianist and composer Jason Moran took to his social media accounts to explain how he came to learn of a phony, AI-generated album that had been published under his name on Spotify, a service to which he doesn’t publish directly. He was able to get it removed, but as Morgan Hayduk of Beatdapp (a company that provides fraud detection for music streaming) explained to The Guardian, “AI has become an accelerant” to fraud in the digital streaming age. Even though these songs are rarely, if ever, heard by actual people, the onus is on the artist to be vigilant about protecting their name, likeness, and brand, as DSPs have struggled to tamp down the problem even before AI became a tool for malicious actors.
Adrian Younge, the Los Angeles-based composer, producer, and multi-instrumentalist, puts it simply: “You should know if what you’re listening to is coming from the soul of another person.” Younge and his longtime collaborator Ali Shaheed Muhammad (DJ, producer, bassist, and songwriter, best known from A Tribe Called Quest) believe in this so deeply that they’ve developed a solution called “Played by Humans.” Ironically, it’s powered by the same machine learning models that drive generative AI, but this tool helps creatives and others detect artificially generated audio.
Last summer, the UK-based free jazz trio Sveið released Latent Imprints, an album integrating AI as a responsive, improvising partner. Some in the avant-garde jazz press saw it as a legitimate artistic statement. Federico Reuben, the University of York professor who developed the technology, describes AI as a post-digital myth: a repository of glitched, shape-shifting creatures whose uncanny sounds disrupt and extend human music-making. The key is treating it as an instrument, rather than a replacement for artistry. Without the three human musicians in this group, there was no possibility of emotional depth or the unique innovation that comes from pouring personal, cultural, and ethnic history into the composition.
Used ethically, this technology can either enhance a true artist’s work or lower barriers for aspiring artists to share their recordings with a wider audience. However, it cannot make art on its own. Art requires soul. People want to hear music that expresses real lived experience and a true mastery of instruments and sound.
This is especially true of jazz. Jazz is best experienced live because its essence lies in improvisation, collaboration, craft, and adaptability. These qualities make it difficult for synthetic musicians to participate in any jam session authentically.
That isn’t to say we won’t see inventive, experimental jazz made with an AI-powered device beside a synthesizer, drum kit, or sampler. As Sveið proves, that’s already happening. What we marvel at, though, is the real musician’s ingenuity and their ability to forge fresh connections to the human condition. This matters more than any software’s ability to mimic a century of recorded music.
When that happens, I’ll eagerly head to the stage and groove along, inspired by the boundless creativity of artists who blend technology with skill and imagination, knowing I am witnessing something extraordinary.
Extraordinarily alive.
Jason Toney is a digital strategist and storyteller with deep experience in entertainment, media, and culture. He brings human-centered insight to every engagement, from high-impact analytics to audience-first storytelling, and has blogged in some form or another since 2002. He has worn many leadership hats in the entertainment industry, most recently providing insights and intelligence on content, audience, product, and subscription trends related to premium streaming.

