
🎤 How We Premiered an AI-Generated Hit at Ukraine’s Largest Tech Conference

What happens when you combine academic music theory, artificial intelligence, and the ambition to create a song that resonates with millions? That’s exactly what our CEO, Dmitry Sofyna, explored during his talk at DOU Day 2025 — the largest IT conference in Ukraine.


Dmitry Sofyna, CEO at R&D Center WINSTARS.AI

This year’s event once again proved that Ukraine’s tech industry isn’t just keeping up—it’s leading. AI, digital platforms, big data, cybersecurity, and deep analytics all took center stage.


Among this wide spectrum of topics, we decided to approach AI from a less expected angle — through music. Not just as an engineering tool but as a creative partner capable of interacting with emotion, culture, and storytelling. In our presentation, we demonstrated how modern algorithms can deconstruct hit songs, analyze listener preferences, and even contribute to creating new tracks.


Dmitry’s talk, “The Mathematics of Music, or How We Teach AI to Write Hits,” immediately set the tone. From the beginning, it was clear: this wasn’t a thought experiment — it was about how today’s tools are already helping teams compose songs that sound like tomorrow’s charts.


He began with a powerful idea: music isn’t magic — it’s physics. What we perceive as melody or harmony is actually the wave-based nature of sound — frequencies, amplitudes, harmonics, waveforms. These sounds interact with our nervous system, triggering emotional responses in the brain: the limbic system, auditory cortex, even motor regions. By understanding this structure, our team explored how emotional perception in music can be modeled using data and AI.
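
To make that idea concrete, here is a minimal, self-contained sketch (ours, not from the talk) of the wave-based view of a single note: a tone is just a sum of sine waves, and a Fourier transform recovers exactly the frequencies it contains.

```python
# A note as physics: build an A4 (440 Hz) tone from a few harmonics,
# then use an FFT to recover the frequencies we put in. Pure NumPy.
import numpy as np

SAMPLE_RATE = 44_100          # samples per second
DURATION = 1.0                # one second of audio
FUNDAMENTAL = 440.0           # A4

t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# Each harmonic is an integer multiple of the fundamental; rolling off the
# amplitude of higher harmonics gives the tone a softer, more natural timbre.
signal = sum(
    (1.0 / k) * np.sin(2 * np.pi * k * FUNDAMENTAL * t)
    for k in range(1, 5)
)

# The FFT turns the waveform (amplitude over time) into a spectrum
# (energy per frequency) -- the representation most analysis starts from.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)

# The four strongest frequencies land on 440, 880, 1320, and 1760 Hz.
peaks = freqs[np.argsort(spectrum)[-4:]]
print(sorted(round(f) for f in peaks))
```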


Music isn’t magic — it’s physics

The first step was collecting and analyzing music data. Hundreds of popular songs from Spotify, YouTube, and Billboard were broken down into components: rhythm, harmony, melody, tempo, and lyrics. NLP models helped us identify recurring themes, emotional patterns, and lyrical styles. On the musical side, we used machine learning to analyze chord progressions, key changes, and rhythmic structures.
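
As a rough illustration of this kind of per-track analysis, here is a hedged sketch using the open-source librosa library. It is not necessarily the exact toolchain from the talk, and "song.mp3" is a placeholder path.

```python
# Extract a few of the features mentioned above: tempo, a crude key proxy,
# and a loudness contour. Assumes `pip install librosa`.
import librosa
import numpy as np

y, sr = librosa.load("song.mp3")          # decode audio to a waveform

# Tempo and beat positions approximate the track's rhythmic structure.
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)

# Chroma features fold spectral energy into the 12 pitch classes; the
# dominant class is a simple proxy for the key / harmonic center.
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
pitch_classes = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]
dominant = pitch_classes[int(np.argmax(chroma.mean(axis=1)))]

# RMS energy over time is a stand-in for the track's dynamics.
rms = librosa.feature.rms(y=y)

print(f"tempo ≈ {float(np.atleast_1d(tempo)[0]):.1f} BPM, "
      f"dominant pitch class: {dominant}, "
      f"mean RMS energy: {rms.mean():.4f}")
```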


Next came music generation. We applied neural networks — GANs, RNNs, Transformers — to create melodies, beats, and harmonic layers. Among the tools explored in this space is Suno, which can turn a simple text prompt into a full-fledged song. For lyrics, we used language models that considered emotional tone, cultural trends, and song structure, generating content that feels both familiar and fresh.
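
To show the shape of the sequence-modeling approach, here is a toy next-note model in PyTorch from the RNN family mentioned above. It is a sketch of the technique, not our production model and not Suno; the training batch is random and stands in for pitch sequences mined from a real corpus.

```python
# Toy melody generation: an LSTM learns to predict the next MIDI pitch,
# then is sampled autoregressively to produce a short melody.
import torch
import torch.nn as nn

VOCAB = 128  # MIDI pitch range 0-127

class MelodyLSTM(nn.Module):
    def __init__(self, embed=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, VOCAB)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = MelodyLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch (8 sequences of 32 pitches).
batch = torch.randint(0, VOCAB, (8, 32))
optimizer.zero_grad()
logits, _ = model(batch[:, :-1])
loss = loss_fn(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
loss.backward()
optimizer.step()

# Autoregressive sampling: feed each generated note back in as input.
notes, state = [60], None             # start on middle C
x = torch.tensor([[60]])
with torch.no_grad():
    for _ in range(16):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        notes.append(nxt)
        x = torch.tensor([[nxt]])
print(notes)
```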


Then came arrangement and production. Generated melodies were imported into DAWs, layered with basslines, synths, and drums. Dmitry highlighted how tools like Tacotron, WaveNet, and Synthesizer V make it possible to synthesize natural-sounding, expressive vocals. He also explained that some teams in the industry use deepfake-style voice cloning, replicating real voices — always with permission and ethical safeguards. In our case, we focused on what’s technically possible rather than applying every available technique.
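
As a simplified stand-in for this step, the sketch below writes a short melody plus bass and drum layers into a standard MIDI file that any DAW can import. It uses the open-source pretty_midi library (our choice for illustration, not necessarily the talk's toolchain), and the note list is a placeholder for the model output above; vocal synthesis itself is left to the dedicated tools just named.

```python
# Render a generated melody with simple bass and drum layers to a MIDI
# file, ready for DAW import. Assumes `pip install pretty_midi`.
import pretty_midi

TEMPO = 120
BEAT = 60.0 / TEMPO                                  # seconds per beat
melody_pitches = [60, 62, 64, 67, 64, 62, 60, 55]    # placeholder melody

pm = pretty_midi.PrettyMIDI(initial_tempo=TEMPO)

# Melody track (General MIDI program 0 = acoustic grand piano).
lead = pretty_midi.Instrument(program=0, name="lead")
for i, pitch in enumerate(melody_pitches):
    lead.notes.append(pretty_midi.Note(velocity=100, pitch=pitch,
                                       start=i * BEAT, end=(i + 1) * BEAT))

# Bass track: a root note an octave down, one per bar (program 33 = bass).
bass = pretty_midi.Instrument(program=33, name="bass")
for bar in range(2):
    bass.notes.append(pretty_midi.Note(velocity=90, pitch=48,
                                       start=bar * 4 * BEAT,
                                       end=(bar * 4 + 4) * BEAT))

# Drum track: kick (MIDI 36) on every beat, snare (38) on beats 2 and 4.
drums = pretty_midi.Instrument(program=0, is_drum=True, name="drums")
for i in range(8):
    drums.notes.append(pretty_midi.Note(velocity=100, pitch=36,
                                        start=i * BEAT, end=i * BEAT + 0.1))
    if i % 4 in (1, 3):
        drums.notes.append(pretty_midi.Note(velocity=100, pitch=38,
                                            start=i * BEAT, end=i * BEAT + 0.1))

pm.instruments.extend([lead, bass, drums])
pm.write("arrangement.mid")               # drop this file into any DAW
```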


Mixing and mastering was another area where AI played a role. Tools like LANDR and iZotope Ozone AI were used to automate this part of the workflow — helping us achieve professional-quality sound with fewer manual steps.
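
LANDR and Ozone are closed products, so as a hedged stand-in here is one automated step this class of tools performs: loudness normalization to a streaming target, sketched with the open-source pyloudnorm library. "mix.wav" is a placeholder file.

```python
# Normalize a mix to a common streaming loudness target (-14 LUFS).
# Assumes `pip install soundfile pyloudnorm`.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -14.0                       # common streaming loudness target

data, rate = sf.read("mix.wav")           # float samples, any channel count

meter = pyln.Meter(rate)                  # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)

# Gain the whole mix so its integrated loudness hits the target.
mastered = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
sf.write("master.wav", mastered, rate)

print(f"{loudness:.1f} LUFS -> {TARGET_LUFS} LUFS")
```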


Finally, we focused on testing and optimization. The track was evaluated on platforms like TikTok and Instagram Reels. Listener reactions — attention span, engagement, shares — informed tweaks to structure and mix. Spotify analytics helped us track plays, playlist placements, and audience reach.
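
To illustrate what that feedback loop might look like in code, here is a small sketch comparing engagement numbers for two mix variants posted as short clips. Every column name and figure is hypothetical, not real campaign data.

```python
# Compare hypothetical engagement for two mix variants and rank them.
import pandas as pd

clips = pd.DataFrame({
    "variant":     ["mix_a", "mix_a", "mix_b", "mix_b"],
    "views":       [12_000, 9_500, 11_200, 13_400],
    "avg_watch_s": [7.1, 6.8, 11.9, 12.4],   # attention span per view
    "shares":      [140, 96, 310, 355],
})

summary = clips.groupby("variant").agg(
    views=("views", "sum"),
    avg_watch_s=("avg_watch_s", "mean"),
    share_rate=("shares", "sum"),
)
summary["share_rate"] = summary["share_rate"] / summary["views"]

# The variant that holds attention and gets shared wins the next iteration.
print(summary.sort_values("share_rate", ascending=False))
```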


The highlight of the talk was a preview of our track “Blue Eyes,” created by our team using the approach described above. It wasn’t just a demo: it was a real, emotional, AI-assisted music product, already resonating with early listeners.


Naturally, one big question came up from the audience: Does this mean AI will replace musicians?


Dmitry’s response was clear:

AI isn’t here to replace the artist — it’s here to expand them. Humans define what art is. AI helps us get there faster, with more clarity and control.

This isn’t the end of creativity. It’s the beginning of a new kind — where humans bring the vision, and machines amplify it.


DOU Day 2025 once again proved that technology is no longer confined to isolated spaces — it’s becoming part of culture, storytelling, and the arts. And music is just the beginning.


🎧 The official music video for “Blue Eyes” is already live:



Take a moment. Think about the last song you heard. Are you sure it was created entirely by a human?


Audience reaction to the song “Blue Eyes”
