VLOG 4: Nearing True Machine-Musician Interaction? Exploring Rhythm through Fractal Proportions

We are nearing our objective of enabling our computer to interact with a live musician in a manner that closely resembles the interaction with another musician. In this fourth vlog, we explore the user interface developed by Samuel Peirce-Davies, which employs the ml.markov object to learn music from MIDI files. Our aim is to extend this functionality to accommodate music played by a live musician, which we’ve found requires certain adjustments.

Specifically, we’ve discovered that the realm of rhythm, or the temporal aspect of music, necessitates a distinct approach beyond the straightforward, millisecond-based logic. The key lies in thinking in terms of PROPORTIONS. Essentially, we’re dealing with the relationship between pulse and rhythm. This relationship needs to be quantized into a limited number of steps or index numbers that can be input into the Markov model.

Vlog 4

To achieve this, I’ve employed what might best be described as a fractal approach. We’re investigating the interaction between pulse and rhythm, moving away from a linear methodology that divides the pulse into equally spaced steps. Instead, we aim to determine the proportion, leading us to work with a list of fractions that divide the beat into segments like 1/6, 1/4, 1/3, 1/2, and so on.

By setting the maximum rhythmic duration to four beats (4/1), we have distilled the complexity down to just 13 different index values. This is in contrast to an equal-steps approach, which would yield 48 index values if each beat were divided into 12 equal parts.

Consider whether you would truly notice a difference between a duration of 48/48 versus 47/48. Likely not, which illustrates why 13 index numbers are more meaningful to the Markov model than 48. This is especially relevant compared with Samuel’s approach, where any duration, measured in milliseconds, could end up as its own state in the Markov model.
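
To make the proportion idea concrete, here is a minimal Python sketch of such a quantizer – an illustration of the principle, not the actual Max/MSP patch. The function name is just for this example, and the fraction table matches the series listed in the update below:

```python
from fractions import Fraction
import math

# The 13 duration proportions (in units of one beat), from 1/6 of a beat
# up to the 4/1 maximum. Illustrative only - not the actual Max/MSP patch.
PROPORTIONS = [Fraction(n, 12) for n in (2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 32, 36, 48)]

def duration_to_index(duration_ms: float, beat_ms: float) -> int:
    """Quantize a played duration to the nearest proportion index (0-12).

    The comparison is done on a logarithmic scale, since rhythmic
    proportions are perceived as ratios rather than absolute differences.
    """
    ratio = duration_ms / beat_ms
    distances = [abs(math.log(ratio) - math.log(float(p))) for p in PROPORTIONS]
    return distances.index(min(distances))

# Example: at 120 BPM one beat is 500 ms, so a 740 ms note is about 1.48 beats,
# which snaps to 18/12 = 3/2 of a beat, i.e. index 8.
print(duration_to_index(740, 500))  # -> 8
```

The resulting index number is what would then be fed to the Markov model, instead of the raw millisecond value.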

Sketch of a fractal concept of rhythmical proportions, turning rhythm into 13 index values to be fed into the Markov model.

Update, 2024-02-17

After quite some messing around with GeoGebra, spreadsheets, and paper & pencil, I’ve come up with a visual representation of the fractal-like structure of the duration index values.

It’s based on the series of duration fractions dividing the beat into triplets and quadruplets, following the logic of double value, then dotted value, etc. Here is the series of fractions: 2/12, 3/12, 4/12, 6/12, 8/12, 9/12, 12/12, 16/12, 18/12, 24/12, 32/12, 36/12, 48/12
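
As a quick sanity check, the same 13 numerators can be reproduced in a few lines of Python from three doubling families – straight, triplet and dotted values. This is my reading of the double/dotted logic, not the exact GeoGebra construction:

```python
from fractions import Fraction

def doubling_chain(start, limit=48):
    """Numerators (in twelfths of a beat) obtained by repeated doubling."""
    chain = []
    n = start
    while n <= limit:
        chain.append(n)
        n *= 2
    return chain

straight = doubling_chain(3)   # 3, 6, 12, 24, 48 -> 1/4, 1/2, 1, 2, 4 beats
triplet  = doubling_chain(2)   # 2, 4, 8, 16, 32  -> 1/6, 1/3, 2/3, 4/3, 8/3 beats
dotted   = doubling_chain(9)   # 9, 18, 36        -> 3/4, 3/2, 3 beats

numerators = sorted(set(straight + triplet + dotted))
print(numerators)              # [2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 32, 36, 48]

series = [Fraction(n, 12) for n in numerators]
print(len(series))             # 13 duration index values
```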

And here is how these fractions will look when ‘translated’ into squares with a common center:

Notice the self-similarity (i.e. the fractal-ness) of the proportion between the blue, pink and brown squares at each level.
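
For those who prefer numbers to pictures, the self-similarity can be checked directly. Assuming each level of the graphic corresponds to one of the complete triples in the series, the internal proportion is the same at every level:

```python
from fractions import Fraction

# Assumed level grouping (numerators in twelfths of a beat).
levels = [(6, 8, 9), (12, 16, 18), (24, 32, 36)]

for a, b, c in levels:
    # Proportions within one level, relative to its smallest value.
    print(Fraction(b, a), Fraction(c, a))  # -> 4/3 3/2 at every level

# Each triple is exactly double the previous one, so zooming in or out
# by a factor of two reproduces the same internal proportions.
```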

If you want a closer look at the maths behind this graphic, here’s the geogebra-patch, I’ve made.

Useful links:

Pattern Play

Navigating the Intricacies of Markov Models in Sound


In our latest exploration within the “Mycelium and Sound Collectives” project, we dive deeper into the realm of machine learning, focusing on the pivotal step of feature engineering – transforming raw musical data into a machine-readable format. This process is crucial for our goals, ensuring the data highlights the musical characteristics essential for algorithmic learning and interpretation.

Today, we spotlight our third vlog, which delves into the intricacies of the Markov model through MAX/MSP, showcasing how the ml.markov object’s ‘order’ command significantly expands the model’s memory. This allows it to recognize and generate more complex musical patterns, revealing the potential of machine learning in music composition and improvisation.
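
For readers who want the ‘order’ idea in code form, here is a minimal Python sketch of a higher-order Markov chain. It is not the ml.markov implementation itself, just the underlying principle: the next choice is conditioned on the last N symbols instead of only the most recent one.

```python
import random
from collections import defaultdict

class MarkovChain:
    """Minimal order-N Markov chain over discrete symbols (e.g. MIDI pitches
    or the 13 duration indices). Illustrative only - not ml.markov."""

    def __init__(self, order=2):
        self.order = order
        self.transitions = defaultdict(list)  # context tuple -> observed next symbols

    def train(self, sequence):
        for i in range(len(sequence) - self.order):
            context = tuple(sequence[i:i + self.order])
            self.transitions[context].append(sequence[i + self.order])

    def generate(self, seed, length=16):
        context = tuple(seed[-self.order:])
        output = list(seed)
        for _ in range(length):
            candidates = self.transitions.get(context)
            if not candidates:               # unseen context: stop generating
                break
            nxt = random.choice(candidates)  # observed frequencies weight the choice
            output.append(nxt)
            context = context[1:] + (nxt,)
        return output

# With order=1 the model only remembers the previous note; with a higher
# order it can reproduce longer melodic or rhythmic figures from the data.
model = MarkovChain(order=2)
model.train([60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60])
print(model.generate(seed=[60, 62], length=8))
```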

This exploration not only enhances our understanding of Markov models but also highlights the importance of precise data preparation in machine learning. By improving how data is fed into the model, we can greatly enhance its predictive capabilities, offering new possibilities for musical creativity at the intersection of technology and art.

Stay tuned as we continue to push the boundaries of music, mycology, and machine learning, uncovering new insights and possibilities in this innovative project.

Unveiling Musical Aesthetics

Markov Model Exploration for AI Improvisation

Following up on my last post, I am going into more detail on the project “Mycelium and Sound Collectives”, working more in depth with the question of musical genre and how to make the musical data ready for machine learning, a process known as feature engineering.

In this 2nd episode of our vlog, we guide musicians and composers in harnessing the power of the Markov model for music improvisation. We dive into the essentials of machine learning, emphasizing tailored feature engineering for unique musical styles. Through a radar chart analysis, we prioritize key parameters…

Join me in this exploration by watching the vlog episode. If you find it insightful, don’t forget to like and subscribe – you know the drill – but sincerely: your support means a lot. Thanks for tuning in, and stay tuned for more vlog episodes to come!

Can my computer learn to improvise music?

As a part of my project “Mycelium and Sound Collectives”, I am currently doing some research into the question of machine learning and music.

The project is scheduled for autumn 2024 as an outdoor performance that introduces a rather peculiar ensemble – an amalgamation of a saxophone player, an analog synth with a computer, and a rather unconventional third performer: a mycelium network forming a fairy ring in the soil where the concert unfolds.

In my current research, I am delving into the intersection of machine learning and music composition, questioning the potential for computers to genuinely improvise music.

My examination uncovers the complexities of the artistic process, exploring choices within sound composition. I scrutinize loop pedals, sequencers, and the Markov model, viewing them not merely as tools but as integral components shaping a dialogue between live musicians and evolving machine capabilities.

As documentation for the process, I am doing a vlog, and in the first episode, I am asking: “Can my computer learn to improvise music?”.


In this vlog episode, I don’t provide a definitive answer to the question posed in the title. Instead, I aim to unfold the various aspects at play when delving into the complexities of this inquiry. This involves discussing diverse examples of machine learning models, ranging from the rudimentary to the advanced, in connection with music. As you’ll see when watching it, I also demonstrate their functionality within the MAX/MSP software, using concrete sound examples. Additionally, I delve into the conceptual framework essential for making informed artistic choices at the intersection of art and technical solutions.

Join me in this exploration by watching the vlog episode. If you find it insightful, don’t forget to like and subscribe – you know the drill – but sincerely: your support means a lot. Thanks for tuning in, and stay tuned for more vlog episodes to come!