Summer 2024 – Dansk Komponistforening (the Danish Composers' Society) and Mads Pagsberg have initiated the project “Dialogisk Komposition” (“Dialogic Composition”), in which composers are invited to write for big band.
I have been fortunate to be selected for the project, and I am contributing the piece “Mycelium and Sound Collectives”.
Here is a video explaining the piece, the thinking behind it, and what the musicians need to know in order to prepare to play it.
We are nearing our objective of enabling our computer to interact with a live musician in a manner that closely resembles the interaction with another musician. In this fourth vlog, we explore the user interface developed by Samuel Peirce-Davies, which employs the ml.markov object to learn music from MIDI files. Our aim is to extend this functionality to accommodate music played by a live musician, which we’ve found requires certain adjustments.
Specifically, we’ve discovered that the realm of rhythm, or the temporal aspect of music, necessitates a distinct approach beyond the straightforward, millisecond-based logic. The key lies in thinking in terms of PROPORTIONS. Essentially, we’re dealing with the relationship between pulse and rhythm. This relationship needs to be quantized into a limited number of steps or index numbers that can be input into the Markov model.
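The core idea above can be sketched in a few lines. This is my own illustration in Python, not code from the Max/MSP patch: instead of feeding raw millisecond durations to the model, each duration is first expressed as a proportion of the current beat length.

```python
# Illustrative sketch (not project code): express a note duration
# as a proportion of the pulse rather than in raw milliseconds.

def duration_as_proportion(duration_ms: float, beat_ms: float) -> float:
    """Return the duration relative to the beat, e.g. 0.5 = half a beat."""
    return duration_ms / beat_ms

# At 120 BPM one beat lasts 500 ms, so a 250 ms note is half a beat:
print(duration_as_proportion(250, 500))  # 0.5
```

The same 250 ms note would be a quarter of a beat at 60 BPM, which is exactly why the proportion, not the millisecond value, is the musically meaningful quantity.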
Vlog 4
To achieve this, I’ve employed what might best be described as a fractal approach. We’re investigating the interaction between pulse and rhythm, moving away from a linear methodology that divides the pulse into equally spaced steps. Instead, we aim to determine the proportion, leading us to work with a list of fractions that divide the beat into segments like 1/6, 1/4, 1/3, 1/2, and so on.
By setting the maximum rhythmic duration to 4/1 of a beat, we have distilled the complexity down to just 13 different index values. This is in contrast to an equal steps approach, which would yield 48 index values if each beat were divided into 12 equal parts.
Consider whether you would truly notice a difference between a duration of 48/48 versus 47/48. Likely not, which illustrates why 13 index numbers are more meaningful to the Markov model compared to 48. This is especially relevant when considering Samuel’s approach, where any duration, measured in milliseconds, could potentially be integrated into the Markov model.
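As a sketch of this quantization step (my own Python illustration, assuming the 13 fractions described in the text, with the beat divided into twelfths), a duration is mapped to the index of the nearest fraction before being fed to the Markov model:

```python
from fractions import Fraction

# The 13 duration fractions from the text: the beat divided into
# twelfths, from a triplet eighth (2/12) up to four beats (48/12).
FRACTIONS = [Fraction(n, 12) for n in (2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 32, 36, 48)]

def duration_to_index(duration_ms: float, beat_ms: float) -> int:
    """Quantize a duration (relative to the beat) to the nearest of the
    13 fractions and return its index, 0..12, for the Markov model."""
    ratio = duration_ms / beat_ms
    return min(range(len(FRACTIONS)), key=lambda i: abs(float(FRACTIONS[i]) - ratio))

# At 120 BPM (beat = 500 ms), a slightly loose 260 ms note
# still snaps to half a beat (6/12, index 3):
print(duration_to_index(260, 500))  # 3
```

Note how the nearest-neighbour snapping absorbs exactly the kind of imperceptible deviation (47/48 vs. 48/48) discussed above.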
Sketch of a fractal concept of rhythmic proportions, turning rhythm into 13 index values to be fed into the Markov model.
Update, 2024-02-17
After quite some messing around with GeoGebra, spreadsheets, and paper & pencil, I’ve come up with a visual representation of the fractal-like structure of the duration index values.
It’s based on the series of duration fractions dividing the beat into triplets and quadruplets, following the logic of double value, then dotted value, and so on. Here is the series of fractions: 2/12, 3/12, 4/12, 6/12, 8/12, 9/12, 12/12, 16/12, 18/12, 24/12, 32/12, 36/12, 48/12
And here is how these fractions will look when ‘translated’ into squares with a common center:
Notice the self-similarity (i.e. the fractal quality) of the proportions between the blue, pink and brown squares at each level.
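One way to see this self-similarity in the numbers themselves (my own observation, sketched in Python) is that the 13 numerators can be generated from three seed values, each doubled repeatedly. Because of this, doubling any value in the grid lands on another value in the grid, up to the 48/12 ceiling:

```python
# Illustrative sketch: generate the 13 numerators (in twelfths) as
# 2·2^k, 3·2^k and 9·2^k, capped at 48. Three seeds, doubled repeatedly,
# which is the arithmetic behind the self-similar square figure.
numerators = sorted(
    n * 2**k
    for n in (2, 3, 9)
    for k in range(5)
    if n * 2**k <= 48
)
print(numerators)
# [2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 32, 36, 48]

# Doubling maps the grid into itself (below the ceiling):
assert all(2 * n in numerators for n in numerators if 2 * n <= 48)
```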
Navigating the Intricacies of Markov Models in Sound
In our latest exploration within the “Mycelium and Sound Collectives” project, we dive deeper into the realm of machine learning, focusing on the pivotal step of feature engineering – transforming raw musical data into a machine-readable format. This process is crucial for our goals, ensuring the data highlights the musical characteristics essential for algorithmic learning and interpretation.
Today, we spotlight our third vlog, which delves into the intricacies of the Markov model in Max/MSP, showcasing how the ml.markov object’s ‘order’ command significantly expands the model’s memory. This allows it to recognize and generate more complex musical patterns, revealing the potential of machine learning in music composition and improvisation.
This exploration not only enhances our understanding of Markov models but also highlights the importance of precise data preparation in machine learning. By improving how data is fed into the model, we can greatly enhance its predictive capabilities, offering new possibilities for musical creativity at the intersection of technology and art.
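To give a rough feel for what the ‘order’ parameter does, here is a minimal sketch in Python (not Max/MSP, and not the ml.markov implementation): a higher order means the model’s state is the last several symbols rather than just the last one, so longer patterns can be memorized and reproduced.

```python
import random
from collections import defaultdict

def train(sequence, order=2):
    """Map each length-`order` context to the symbols that followed it."""
    table = defaultdict(list)
    for i in range(len(sequence) - order):
        context = tuple(sequence[i:i + order])
        table[context].append(sequence[i + order])
    return table

def generate(table, seed, length=8):
    """Generate symbols by repeatedly sampling continuations of the
    most recent context from the learned table."""
    out = list(seed)
    for _ in range(length):
        context = tuple(out[-len(seed):])
        if context not in table:
            break
        out.append(random.choice(table[context]))
    return out

# Train on a toy sequence of duration index values (0..12, as in the
# quantization scheme described earlier), then generate a continuation:
melody = [3, 6, 3, 6, 9, 3, 6, 3, 6, 9]
table = train(melody, order=2)
print(generate(table, seed=(3, 6)))
```

With order=1 the model would only know that a 6 can follow a 3; with order=2 it also remembers that the pair (3, 6) is sometimes followed by 9, capturing the longer phrase.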
Stay tuned as we continue to push the boundaries of music, mycology, and machine learning, uncovering new insights and possibilities in this innovative project.
Following up on my last post, I am going into more detail on the project “Mycelium and Sound Collectives”, working more in depth with the question of musical genre and how to make the musical data ready for machine learning, a process known as feature engineering.
In this 2nd episode of our vlog, we guide musicians and composers in harnessing the power of the Markov model for music improvisation. We dive into the essentials of machine learning, emphasizing tailored feature engineering for unique musical styles. Through a radar chart analysis, we prioritize key parameters…
Join me in this exploration by watching the vlog episode. If you find it insightful, don’t forget to like and subscribe – you know the drill – but sincerely: your support means a lot. Thanks for tuning in, and stay tuned for more vlog episodes to come!
As a part of my project “Mycelium and Sound Collectives”, I am currently doing some research into the question of machine learning and music.
The project is scheduled for autumn 2024, as an outdoor performance, which introduces a rather peculiar ensemble – an amalgamation of a saxophone player, an analog synth with a computer, and then a rather unconventional performer: a mycelium network forming a fairy ring in the soil where the concert unfolds.
In my current research, I am delving into the intersection of machine learning and music composition, questioning the potential for computers to genuinely improvise music.
My examination uncovers the complexities of the artistic process, exploring choices within sound composition. I scrutinize loop pedals, sequencers, and the Markov model, viewing them not merely as tools but as integral components shaping a dialogue between live musicians and evolving machine capabilities.
As documentation of the process, I am making a vlog, and in the first episode I ask: “Can my computer learn to improvise music?”.
In this vlog episode, I don’t provide a definitive answer to the question posed in the title. Instead, I aim to unfold the various aspects at play when delving into the complexities of this inquiry. This involves discussing diverse examples of machine learning models, ranging from the rudimentary to the advanced, in connection with music. As you’ll see when watching it, I also demonstrate their functionality within the Max/MSP software, using concrete sound examples. Additionally, I delve into the conceptual framework essential for making informed artistic choices at the intersection of art and technical solutions.
Join me in this exploration by watching the vlog episode. If you find it insightful, don’t forget to like and subscribe – you know the drill – but sincerely: your support means a lot. Thanks for tuning in, and stay tuned for more vlog episodes to come!
What does the typical contemporary music ensemble look like? I haven’t been able to find an answer to this question, so I decided to make my own enquiry.
There seem to be three categories: larger ensembles, typically in the format of sinfoniettas; ensembles in traditional formats, i.e. string quartets, wind quintets, etc.; and then eclectic, new, experimental formats. I have noticed that when there are calls for scores, the formats available are most often within the third category.
I decided to make a small investigation into the question, narrowing my research: I looked for ensembles within the third category that met these conditions:
the ensemble must be active now, 2022
they must be playing composed music (not excluding ensembles also doing other formats)
consisting of 3 to 10 players
playing acoustic instruments, though ensembles that also include electronics were accepted
I gathered the info in a Google sheet, and here is the result:
I found (so far) 18 ensembles
They are based in Argentina, Australia, Denmark, Finland, France, Germany, Ireland, Italy, Serbia, the UK, and the USA
They have between 3 and 8 members
What instruments are they playing?
All in all, these instruments were used: Voice, Recorder, Flute, Oboe, Saxophone, Clarinet, Bassoon, Trumpet, Trombone, Guitar, Harp, Accordion, Piano, Percussion, Violin, Viola, Cello, Double Bass, and Electronics
Which are the most frequently used instruments?
| # | Instrument | Share of ensembles |
|---|---|---|
| 1 | **Piano** | 79% |
| 2 | **Clarinet** | 74% |
| 3 | **Flute** | 68% |
| 4 | **Cello** | 68% |
| 5 | **Violin** | 58% |
| 6 | **Percussion** | 53% |
| 7 | Voice | 26% |
| 8 | Viola | 26% |
| 9 | Oboe | 21% |
| 10 | Double Bass | 21% |
| 11 | Electronics | 16% |
| 12 | Bassoon | 11% |
| 13 | Guitar | 11% |
| 14 | Accordion | 11% |
| 15 | Recorder | 5% |
| 16 | Saxophone | 5% |
| 17 | Trumpet | 5% |
| 18 | Trombone | 5% |
| 19 | Harp | 5% |
The piano was part of 79% of the ensembles.
Since the average size of these ensembles was six players, I found it useful to look at the six most frequently used instruments, marked in bold above: Piano, Clarinet, Flute, Cello, Violin, and Percussion.
What my mini-research has shown, so far, is that the typical ensemble playing contemporary music is a sextet: the so-called Pierrot ensemble with an added percussion player.
Three of these ensembles consist of exactly the six most typical instruments, marked in bold. Two more ensembles include those six instruments while adding one or two others.
Information
I based my search mostly on this “List of contemporary classical ensembles” (Wikipedia). It is of course rather incomplete, and many of the listed ensembles are historical. What I find curious is that I had such a hard time finding information on these things online. It is really difficult to find updated databases on current ensembles, and, for that matter, festivals, calls for scores, and so on. In short: information about the contemporary music scene seems to be really scarce.
This leads, of course, to my typical last words: help! If you have information about good, updated resources for contemporary music, composers, festivals, musicians, and so on; if you know about more ensembles to include in my enquiry; and, of course, if you know about real, maybe even academic, enquiries into these questions: help!
Climate Change has made it quite evident that we can no longer take anything for granted. Many pillars of our existence have been revealed as stages or facets of much broader processes. We are finding that even such things as the Gulf Stream and the Greenland Ice Sheet turn out to be fragile.
When I first started to assemble this album, I meant to focus on those special things whose fleeting or tenuous existence seemed so delicate as to be improbable. But as I thought about the subject, I asked myself, “What is it that makes something seem ‘fragile’?” The answer seemed to be vulnerability, a set of narrow environmental tolerances, and a short life span. This could be summed up by the term “impermanence”, a term that could be applied to just about anything.
To me, the world moves forward through the forces of creativity, on one…
Does the capacity for improvising and listening to improvised music improve a person’s (and a society’s) self-esteem? And what might the philosophy of Immanuel Kant add to this question?
In this episode, composer Casper Hernández Cordes draws on his own experiences with improvisation teaching to reflect on the relation between open artistic practices and selfhood.
The background music for the podcast is itself an improvisation by the composer, ‘commenting’ on the content of the podcast.