As part of my project “Mycelium and Sound Collectives”, I am currently doing some research into the question of machine learning and music.

The project is scheduled for autumn 2024 as an outdoor performance, and it introduces a rather peculiar ensemble – an amalgamation of a saxophone player, an analog synth with a computer, and a rather unconventional third performer: a mycelium network forming a fairy ring in the soil where the concert unfolds.

In my current research, I am delving into the intersection of machine learning and music composition, questioning the potential for computers to genuinely improvise music.

My examination uncovers the complexities of the artistic process, exploring choices within sound composition. I scrutinize loop pedals, sequencers, and the Markov model, viewing them not merely as tools but as integral components shaping a dialogue between live musicians and evolving machine capabilities.
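To give a sense of what the Markov model contributes in this context, here is a minimal sketch of a first-order Markov chain trained on a short melodic phrase. This is purely illustrative – the function names and the example phrase are my own, and it is not the actual patch used in the project – but it shows the core idea: the machine learns which note tends to follow which, then improvises by walking those learned transitions.

```python
import random

def build_markov(notes):
    """Map each note to the list of notes that followed it in the training phrase."""
    transitions = {}
    for current, nxt in zip(notes, notes[1:]):
        transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length, rng=None):
    """Walk the model, picking each next note in proportion to how
    often it followed the current one in the training data."""
    rng = rng or random.Random()
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # dead end: fall back to the opening note
            choices = [start]
        out.append(rng.choice(choices))
    return out

# A short training phrase (MIDI note numbers for a simple motif)
phrase = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]
model = build_markov(phrase)
print(generate(model, 60, 8))
```

The output varies from run to run, which is exactly the point: the model never plays back the phrase verbatim, yet every transition it makes was heard somewhere in the input – a very rudimentary kind of “learned” improvisation, and a useful baseline for the more advanced models discussed in the vlog.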

As documentation of the process, I am making a vlog, and in the first episode I ask: “Can my computer learn to improvise music?”


In this vlog episode, I don’t provide a definitive answer to the question posed in the title. Instead, I aim to unfold the various aspects at play when delving into the complexities of this inquiry. This involves discussing diverse examples of machine learning models, ranging from the rudimentary to the advanced, in connection with music. As you’ll see when watching it, I also demonstrate their functionality within the MAXMSP software, using concrete sound examples. Additionally, I delve into the conceptual framework essential for making informed artistic choices at the intersection of art and technical solutions.

Join me in this exploration by watching the vlog episode. If you find it insightful, don’t forget to like and subscribe – you know the drill – but sincerely: your support means a lot. Thanks for tuning in, and stay tuned for more vlog episodes to come!
