Three days after Charlie Kirk was publicly slain, a video appeared on YouTube announcing the debut of singer-songwriter Adele’s latest single, a soaring tribute to the conservative activist’s legacy.
“Rest in peace, Charlie Kirk!” a voice that sounded somewhat like Adele’s sang out over a video[1] showing Kirk with his young daughter. “The angels sing your name. Your story’s written in the stars, a fire that won’t wane.”
According to YouTube posts, the celebrity tributes were plentiful. They came from Ed Sheeran, Eminem, Taylor Swift, Celine Dion, Lady Gaga, Rihanna, Post Malone, Dax, Lil Wayne, Jelly Roll, Selena Gomez, Justin Bieber and Imagine Dragons.
But none of them were real. They were all generated using artificial intelligence. And they often featured fake thumbnail images that showed the artists in tears or with mournful expressions.
PolitiFact analyzed 24 YouTube videos claiming to show tribute songs dedicated to Kirk. Combined, these videos gained more than 4.6 million views, and some were shared to other platforms.
Two tribute songs to Charlie Kirk were attributed to Ed Sheeran. But they were made with AI. (Screenshot from YouTube)
One was uploaded on the day of Kirk’s assassination, illustrating the speed with which creators can generate songs related to a major event.
Nearly all the videos contained disclaimers saying they contained altered or synthetic content, yet many viewers left comments that show they believed these songs were real.
“Thank you, Justin,” wrote a commenter who thought she was responding to Bieber.
“Did not expect this from Bieber,” wrote another.
Comments on a YouTube video titled “Justin Bieber – You’ll Be Missed Charlie Kirk (A Tribute To Charlie Kirk).” The song was made with AI. (Screenshot from YouTube)
The musical tributes sound like they might have taken a full band a week in a studio, said Bryan Pardo, Northwestern University computer science professor and head of the Interactive Audio Lab. That’s part of why people believe them.
“Most people don’t realize how far AI-generated content has come,” he said.
Experts said AI tools can passably mimic artists’ voices, especially for listeners who aren’t deeply familiar with an artist’s work. But there are ways to identify sonic abnormalities typical of AI use.
Tools can generate songs quickly based on simple prompts, even ones similar to an artist’s style
AI music generators such as Suno and Udio are trained using a vast set of existing songs. With a short user prompt, they create new music quickly and inexpensively.
PolitiFact tested it.
“Make a song about grieving the sudden death of an inspirational figure in politics, in the style of Taylor Swift,” we wrote in prompts for Suno and Udio’s free accounts. The tools rejected the prompts on the grounds that they don’t have permission to generate an artist’s likeness.
We tried again using more generic language: “Make a song about grieving the sudden death of an inspirational figure in politics, in the style of a 35-year-old pop singer with a young, primarily female demographic.”
Both tools produced songs within moments — two different songs per prompt.
Screenshot from Udio
This does not make musicians happy. Major music companies, including Sony Music Entertainment, Universal Music Group and Warner Records, through the Recording Industry Association of America, sued[2] both AI platforms in June 2024, accusing them of using copyrighted music to train their AI tools.
Suno said in a court filing that the model was trained using “tens of millions of recordings,” but argued the process constitutes fair use.
Artists including Imagine Dragons, Katy Perry, Nicki Minaj and Stevie Wonder argued in a 2024 letter that[3] training AI bots on artists’ work “will degrade the value of our work and prevent us from being fairly compensated for it.”
“We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem,” the letter read.
Those tools use technology similar to the kind used by ChatGPT, said Tom Collins, University of Miami associate professor of music engineering technology. Such models are trained on massive datasets with the aim of producing human-like language.
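Suno and Udio are closed products, so their internals can’t be inspected directly. But the prompt-to-song workflow the article describes can be illustrated with an open text-to-music model. The sketch below uses Meta’s MusicGen through the Hugging Face transformers library; the prompt wording and output file name are placeholders, and this is not how Suno or Udio are built, only an example of the same general technique.

```python
# Illustrative only: a prompt-to-song sketch using the open MusicGen model,
# not the closed Suno or Udio systems discussed in the article.
import scipy.io.wavfile
from transformers import pipeline

# Load a small text-to-audio model (weights download on first run).
generator = pipeline("text-to-audio", model="facebook/musicgen-small")

# Placeholder prompt, loosely echoing the generic wording PolitiFact tested.
prompt = ("a slow, mournful pop ballad with piano and strings, "
          "sung by a powerful female vocalist")

# Sampling produces more varied output than greedy decoding.
result = generator(prompt, forward_params={"do_sample": True})

# The pipeline returns raw audio samples plus the sampling rate.
scipy.io.wavfile.write(
    "tribute_sketch.wav",
    rate=result["sampling_rate"],
    data=result["audio"],
)
```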
Vocals, lyrics and instrumentation can reveal generative AI clues
Some AI artists[4] have gained a following, and at least one AI artist has secured a record deal[5].
Through video comments like those on the tribute songs, fans sometimes show they don’t realize they’re listening to AI-created music.
“These models are definitely at the stage where, if you don’t know an artist’s lyrical capabilities, or the quality of their singing voice, based on, you know, several hours of listening earlier in your life or recently in your life, I think you can be fooled relatively easily,” Collins said.
So what are some cues people can listen for?
Siwei Lyu, University at Buffalo computer science and engineering professor, said that playing the music on loop for a few rounds can reveal some abnormalities. The vocals, for example, may sound “overly smooth or robotic,” contain unusual breathing patterns, have smeared consonants or unrealistically regular vibrato, he said. Lyu analyzed the songs that lacked AI disclaimers and found synthetic elements in the audio, including sudden changes in vocal quality and clarity.
For example, one Kirk tribute song[6] falsely attributed to Dax and Lil Wayne starts out with a low vocal quality, before shifting to a louder and clearer sound, Lyu said. He also detected voice clarity changes at the 0:46 and 0:56 timestamps.
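Lyu’s analysis tools aren’t described in the article, but one of the cues he names, sudden jumps in loudness and clarity, can be checked with simple signal-processing measurements. The sketch below is a minimal illustration using the librosa audio library, not Lyu’s method; the file name and threshold are placeholders.

```python
# A minimal sketch, not Lyu's method: flag abrupt frame-to-frame jumps in
# loudness and spectral brightness, one rough proxy for the "sudden changes
# in vocal quality and clarity" described above.
import librosa
import numpy as np

# Placeholder file name; any downloaded track would work.
y, sr = librosa.load("tribute_song.mp3", sr=None, mono=True)

hop = 512
rms = librosa.feature.rms(y=y, hop_length=hop)[0]                      # loudness per frame
centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]  # "brightness"

def jumps(feature):
    """Frame-to-frame change, scaled by the track's typical change."""
    diffs = np.abs(np.diff(feature))
    return diffs / (np.median(diffs) + 1e-9)

combined = jumps(rms) + jumps(centroid)
threshold = 20.0  # arbitrary placeholder; tune per track

for frame in np.where(combined > threshold)[0]:
    t = librosa.frames_to_time(frame, sr=sr, hop_length=hop)
    print(f"Possible abrupt change near {t:.1f} seconds")
```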
Lyu and Collins each said that vocal and instrumentation mixing may not be as crisp as in a real track’s studio version.
Collins compared the creation of AI music with image generation: “If you’ve generated an image of a person, maybe that person has six fingers, or has, like, two fingers that are kind of melted together. That kind of melting, or that muddiness, the audio equivalent of that are like individual notes that don’t really appear to be a guitar note or a piano note but something in between.”
AI-generated lyrics are often generic and may have abrupt changes in quality, Lyu said. The rhyme schemes may be too regular, the language might be cliché, and the music may also lack the emotional nuance typically found in real performances.
Collins said that AI-generated lyrics are typically formulaic and simplistic as they aim to fulfill the prompt’s directions. A prompt about mourning might produce lyrics that contain the words “candles” and “heaven.”
An analysis with GPTZero, an AI-detection tool, found a 98% chance that the lyrics of one tribute duet attributed to Adele and Ed Sheeran were AI-generated.
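GPTZero’s exact model isn’t public, but detectors of this kind commonly lean on perplexity: text a language model finds highly predictable is more likely to be machine-written. The sketch below illustrates that signal with GPT-2 via the transformers library; it is not GPTZero’s method or API, and the lyric line is invented for illustration, not taken from any tribute song.

```python
# A minimal sketch of the perplexity signal AI-text detectors commonly use;
# this is not GPTZero's actual model or API.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity means the model finds the text more predictable,
    a weak hint that it may be machine-generated."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

# Invented placeholder lyric, for illustration only.
lyrics = "The candles burn for you tonight, your light will never fade away"
print(f"Perplexity: {perplexity(lyrics):.1f}")
```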
Collins said he expects these tools to become more sophisticated, producing music that is less easily identifiable as AI. “If we go forward another couple of years, I could imagine that the approximation, that the mimicry, would be far better, and it would be harder, even for me, to tell,” Collins said.
YouTube’s policy as of 2023[7] requires creators to “disclose when they’ve created altered or synthetic content that is realistic, including using AI tools.” If creators don’t disclose that information, they could have their content removed, be suspended from the YouTube Partner Program, or face other penalties.
In many of the tribute songs, that disclosure is not prominently displayed; it is buried in the video’s description, which isn’t visible on desktop unless the viewer clicks “more.”
“There’s a responsibility on streaming platforms that they’re not really fulfilling about verification,” Collins said.
PolitiFact Staff Researcher Caryn Baird contributed to this report.