OpenAI is reportedly exploring a new frontier in generative AI music. According to reporting from The Information, the company is developing a tool capable of producing music from both text and audio prompts. To support the project, OpenAI has been collaborating with students from The Juilliard School, who are helping create training data for the system.

Sources familiar with the project said the AI could eventually generate full musical accompaniments, such as guitar backing for a vocal track or background scores for videos. The goal appears to be extending AI's creative capabilities beyond text and images into sound and music.

While it remains unclear how far along the project is, one source told The Information[1] that Juilliard students were asked to annotate musical scores. These annotations would serve as training data to help the model learn rhythm, melody, and structure.

This isn’t OpenAI’s first experiment with music generation. The company previously explored similar ideas, and interest in AI-powered music tools has grown across the industry. Startups like Suno and ElevenLabs have already launched their own music-generation platforms, aiming to reshape how audio content is produced.

However, the rise of AI-generated music has also brought challenges. Streaming platforms are already contending with a flood of AI-produced tracks, and earlier incidents, such as the "Velvet Sundown" controversy, highlight the risks of synthetic media.

References

  1. ^ The Information (www.theinformation.com)

By admin