Discovering Hidden Sounds: Art and Science in Nature

Introduction

My latest artistic research explores how to give the forest a voice – literally! It’s a journey blending art, science, and sound design. Using a special microphone called a geophone, I’m trying to uncover the hidden world of vibrations within trees and soil, translating them into musical forms.

The Geophone: A Window into the Unseen

We often think of sound as something we hear through the air. But vibrations exist everywhere, even in solid objects like trees and the ground. A geophone is a microphone that picks up these subtle vibrations by making direct contact with surfaces. It’s like giving the forest a voice, allowing us to hear its hidden language.

The Journey from Vibration to Sound

Although I utilize techniques similar to sonification, my aim is not merely to represent data. Instead, I strive to create a sonic bridge between the hidden world of the forest and human perception. This involves analyzing the unique characteristics of the vibrations and translating them into sound.

Key Parameters: Amplitude and Spectral Flatness

I focus on two main characteristics (a small code sketch of both measures follows below):

  • Amplitude: This is the strength of a vibration. A loud sound has high amplitude, while a soft sound has low amplitude. By tracking amplitude changes, I can understand how the vibrations evolve over time.
  • Spectral Flatness: This measures how “noisy” a sound is. A pure tone has low spectral flatness, while a hissing sound (like wind) has high spectral flatness.

This chart, displaying the mean amplitudes for each of 40 frequency windows, reveals that while the four different trees (2 apple trees, a fir and a hazel) share some common traits, they also exhibit significant differences in their vibrational patterns.

Comparison of baseline spectral flatness between trees and soil.
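For readers who want to try this themselves: a common way to compute these two values is RMS level for amplitude, and the ratio of the geometric to the arithmetic mean of the power spectrum for spectral flatness. Below is a minimal Python sketch of that calculation (the actual analysis happens in a MaxMSP patch; this is only an illustration of the two measures):

```python
import numpy as np

def frame_features(frame, eps=1e-12):
    """Return (amplitude, spectral flatness) for one audio frame.

    Amplitude is the RMS level of the frame. Spectral flatness is the
    geometric mean of the power spectrum divided by its arithmetic mean:
    close to 0 for a pure tone, higher for noise-like signals.
    """
    amplitude = np.sqrt(np.mean(frame ** 2))            # RMS level
    power = np.abs(np.fft.rfft(frame)) ** 2 + eps       # power spectrum
    flatness = np.exp(np.mean(np.log(power))) / np.mean(power)
    return amplitude, flatness

# Quick sanity check: a pure 110 Hz tone versus white noise.
sr, n = 44100, 4096
t = np.arange(n) / sr
print(frame_features(np.sin(2 * np.pi * 110 * t)))   # flatness near 0
print(frame_features(np.random.randn(n)))            # much higher flatness
```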

The Sonification Process

My process involves four steps (a code sketch follows the list):

  1. Dividing the audio into frequency bands.
  2. Measuring the spectral flatness of each band over time.
  3. Comparing these values to previously recorded data.
  4. Triggering musical events based on the differences in the data.
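Outside of MaxMSP, the same four steps can be sketched in a few lines of Python. The band layout, the baseline handling, and the 0.15 threshold below are illustrative assumptions, not the exact values from my patch:

```python
import numpy as np

# Step 1: forty 10 Hz frequency windows, as in the amplitude chart above.
BANDS = [(10 * k, 10 * (k + 1)) for k in range(40)]

def band_flatness(frame, sr):
    """Step 2: spectral flatness per frequency band for one audio frame."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    values = []
    for lo, hi in BANDS:
        p = power[(freqs >= lo) & (freqs < hi)]
        if p.size == 0:                 # frame too short for this band width
            values.append(0.0)
            continue
        values.append(np.exp(np.mean(np.log(p))) / np.mean(p))
    return np.array(values)

def sonify(frames, sr, baseline, threshold=0.15):
    """Steps 3 and 4: compare each frame to previously recorded baseline
    flatness values and trigger an event for every band that deviates."""
    events = []
    for i, frame in enumerate(frames):
        deviation = band_flatness(frame, sr) - baseline
        for band in np.where(np.abs(deviation) > threshold)[0]:
            events.append((i, int(band)))   # e.g. start a musical sequence
    return events
```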

My aim is to create a live concert experience in a forest, where the geophone acts as a conduit for the trees and the earth to participate in the music making. The geophone, placed on a tree or the forest floor, will capture the subtle vibrations of the environment and translate them into triggers for musical sequences. A human musician will then improvise alongside these sounds, creating a unique duet between human and nature.

You can hear an example of how this might sound in this video, which blends raw geophone recordings with musical interpretation. However, for the live concert, I’ll be taking a different approach, building on my previous work exploring the poetry of Inger Christensen.

A Glimpse into the Unseen

This research seeks to give a voice to the silent language of nature. By revealing the hidden vibrations within trees and the earth, I hope to deepen our understanding of the interconnectedness of all living things.

Conclusion

This is an ongoing journey of exploration, and each recording opens up new possibilities. I’m excited to continue refining this method and uncovering the hidden sounds of our more-than-human kin.

P.S. This blog post has been written with the help of AI

Listening to the Forest’s Metabolism

My current artistic research explores the sonic expression of a forest’s metabolism. I’m using specialized microphones to capture the subtle vibrations and sounds produced by living organisms and ecological processes, and then translating these patterns into musical sound. My goal is to create a unique kind of concert experience where the forest itself becomes a musical collaborator.

This involves capturing the often inaudible sounds of life within the forest – the movements of roots, the activities of insects and microorganisms, and the flow of water and air. These expressions, often hidden from human perception, form distinct patterns that can be translated into data and used to generate synthesized sound. This process allows me to give a sonic voice to the trees, soil, and plants.

This research follows a period of exploration into bioelectricity, where I investigated how living organisms express themselves through electrical signals. However, practical considerations and the desire for a more portable setup led me to investigate vibrations and sound as alternative expressions of life.

Finding the Right Tools

To “listen” to the forest, I needed the right tools. After considering various parameters like humidity and pressure, I focused on capturing the vibrations and sounds of the soil, trees, and plants. This led me to two microphones: the Jez Riley French ‘ECOUTIC’ contact microphone and the Interharmonic ‘GEOPHON.’

Initial tests with the ECOUTIC in my garden proved disappointing. While recording the compost bin and an apple tree, the microphone captured a wide range of ambient sounds, making it difficult to isolate the sounds emanating from the source I was probing.

Hoping for a more focused approach, I switched to the GEOPHON. Unlike the contact-based ECOUTIC, the GEOPHON utilizes a magnet and a coil of copper wire, potentially making it less sensitive to ambient noise. Initial tests in a quiet forest environment were more promising.

Decoding the Sounds of Life

I collected recordings from various sources – trees, soil, plants, mushrooms, and decaying wood. While these sounds were largely unintelligible to human ears, I could perceive distinct differences between the recordings from each location. However, knowing that the GEOPHON captures frequencies below human hearing, I needed a way to analyze and interpret these inaudible sounds.

Using MaxMSP software, I’m experimenting with two analysis techniques (a simplified code sketch of the first follows the list):

  1. Amplitude Analysis: I divided the sound spectrum into 10 Hz windows and analyzed the amplitude variations within each window. This created 40 distinct “frequency bands,” each with its own fluctuating amplitude. I then translated these fluctuations into synthesized sound, with each band represented by a sine wave at a specific frequency. The result is a droning sound with a subtly shifting timbre, reflecting the dynamic activity within each frequency band.
  2. Brightness and Noisiness: Using a MaxMSP object called “Analyzer,” I tracked the brightness and noisiness of the sounds. This provides a stream of data for each parameter, revealing significant differences between the sound sources. I then used these parameters to control the sonification, with brightness influencing the waveform (from sawtooth to sine wave) and noisiness affecting the waveform’s modulation. This creates a dynamic sonic landscape that reflects the unique character of each sound source.
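To give a rough idea of the first technique outside of MaxMSP, here is a simplified Python analogue: per-window amplitudes are measured with an FFT and then used as envelopes for a bank of sine waves. The frame size and the choice of audible partial frequencies are my own assumptions for the sketch:

```python
import numpy as np

SR = 44100
N_BANDS = 40          # forty 10 Hz windows (0-400 Hz)
FRAME = 8192          # long frames, so each 10 Hz window contains FFT bins

def band_amplitudes(frame, sr=SR):
    """Mean spectral magnitude inside each 10 Hz window of one frame."""
    mag = np.abs(np.fft.rfft(frame, n=FRAME))
    freqs = np.fft.rfftfreq(FRAME, 1 / sr)
    return np.array([mag[(freqs >= 10 * k) & (freqs < 10 * (k + 1))].mean()
                     for k in range(N_BANDS)])

def sine_bank(amp_frames, hop, partial_freqs, sr=SR):
    """Drive one sine wave per band with that band's amplitude envelope.

    amp_frames: one 40-value amplitude vector per analysis frame.
    partial_freqs: the audible frequency assigned to each band's sine wave.
    """
    env = np.repeat(np.asarray(amp_frames), hop, axis=0)   # step envelopes
    t = np.arange(len(env)) / sr
    out = np.zeros(len(env))
    for band, f in enumerate(partial_freqs):
        out += env[:, band] * np.sin(2 * np.pi * f * t)
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out                  # normalize
```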

Continuing the Exploration

I’m eager to continue refining my methods and exploring the sonic world of the forest.

This research is ongoing. I’ve created a video showcasing some of my initial analysis and sonification results:

I welcome your feedback and thoughts on this research. Please feel free to share your insights in the comments below.

PS: I’ve used AI to help me write this blogpost

“Ex Humo”: A Musical Conversation with Mycelium through the Words of Inger Christensen

As the closing event of “Common Ground”, an outdoor art exhibition running 17/8 – 8/9 2024, I invited a mycelium/fungus into a musical dialogue.

You can watch excerpts from the concert here:

The full-length concert can be experienced here: https://youtu.be/8kbbb-Jnww4

During the concert, the audience participated by drawing word cards, seven in all, which formed the starting point for the ‘conversation’ with the mycelium.

The word cards represented quotations from Inger Christensen’s poetry collection “Alfabet”. Each quotation set the stage for a particular …

You can read about the thinking behind the concert here:

Here you can see the score for the concert:

VLOG 4: Nearing True Machine-Musician Interaction? Exploring Rhythm through Fractal Proportions

We are nearing our objective of enabling our computer to interact with a live musician in a manner that closely resembles the interaction with another musician. In this fourth vlog, we explore the user interface developed by Samuel Peirce-Davies, which employs the ml.markov object to learn music from MIDI files. Our aim is to extend this functionality to accommodate music played by a live musician, which we’ve found requires certain adjustments.

Specifically, we’ve discovered that the realm of rhythm, or the temporal aspect of music, necessitates a distinct approach beyond the straightforward, millisecond-based logic. The key lies in thinking in terms of PROPORTIONS. Essentially, we’re dealing with the relationship between pulse and rhythm. This relationship needs to be quantized into a limited number of steps or index numbers that can be input into the Markov model.

Vlog 4

To achieve this, I’ve employed what might best be described as a fractal approach. We’re investigating the interaction between pulse and rhythm, moving away from a linear methodology that divides the pulse into equally spaced steps. Instead, we aim to determine the proportion, leading us to work with a list of fractions that divide the beat into segments like 1/6, 1/4, 1/3, 1/2, and so on.

By setting the maximum rhythmic duration to 4/1 of a beat, we have distilled the complexity down to just 13 different index values. This is in contrast to an equal steps approach, which would yield 48 index values if each beat were divided into 12 equal parts.

Consider whether you would truly notice a difference between a duration of 48/48 versus 47/48. Likely not, which illustrates why 13 index numbers are more meaningful to the Markov model compared to 48. This is especially relevant when considering Samuel’s approach, where any duration, measured in milliseconds, could potentially be integrated into the Markov model.
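To make this concrete, here is a small Python sketch of the quantization step, using the thirteen fractions listed in the update below. Snapping a played duration to the nearest fraction is my own simplification of how the mapping to an index might be done:

```python
from fractions import Fraction

# The 13 duration proportions, from 1/6 of a beat up to 4 beats
# (the series from the update below, over a common denominator of 12).
FRACTIONS = [Fraction(n, 12) for n in (2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 32, 36, 48)]

def duration_to_index(duration_ms, beat_ms):
    """Quantize a played duration to the nearest of the 13 proportions.

    duration_ms: time between note onsets, in milliseconds.
    beat_ms: length of the current pulse, in milliseconds per beat.
    Returns an index 0-12 that can be fed into the Markov model.
    """
    proportion = duration_ms / beat_ms
    return min(range(len(FRACTIONS)),
               key=lambda i: abs(float(FRACTIONS[i]) - proportion))

# At 120 bpm (beat = 500 ms) a 340 ms gap snaps to 8/12 of a beat
# (index 4) and a 510 ms gap snaps to a full beat (index 6).
print(duration_to_index(340, 500), duration_to_index(510, 500))
```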

Sketch of a fractal concept of rhythmic proportions, turning rhythm into 13 index values to be fed into the Markov model.

Update, 2024-02-17

After quite a bit of messing around with GeoGebra, spreadsheets, and paper & pencil, I’ve come up with a visual representation of the fractal-like structure of the duration index values.

It’s based on the series of duration fractions dividing the beat into triplets and quadruplets, following the logic of double value, then dotted value, etc. Here is the series of fractions: 2/12, 3/12, 4/12, 6/12, 8/12, 9/12, 12/12, 16/12, 18/12, 24/12, 32/12, 36/12, 48/12.

And here is how these fractions will look when ‘translated’ into squares with a common center:

Notice the self-similarity (i.e. the fractal nature) of the proportions between the blue, pink and brown squares at each level.

If you want a closer look at the maths behind this graphic, here’s the GeoGebra patch I’ve made.


Pattern Play

Navigating the Intricacies of Markov Models in Sound


In our latest exploration within the “Mycelium and Sound Collectives” project, we dive deeper into the realm of machine learning, focusing on the pivotal step of feature engineering – transforming raw musical data into a machine-readable format. This process is crucial for our goals, ensuring the data highlights the musical characteristics essential for algorithmic learning and interpretation.

Today, we spotlight our third vlog, which delves into the intricacies of the Markov model through MAX/MSP, showcasing how the ml.markov object’s ‘order’ command significantly expands the model’s memory. This allows it to recognize and generate more complex musical patterns, revealing the potential of machine learning in music composition and improvisation.
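For those who haven’t used ml.markov: the effect of the ‘order’ command can be imitated in a few lines of plain Python (a generic sketch, not the actual ml.markov implementation), where the model conditions its next choice on the last n symbols rather than only the last one:

```python
import random
from collections import defaultdict

class MarkovModel:
    """Minimal order-n Markov chain over symbols such as MIDI note numbers."""

    def __init__(self, order=1):
        self.order = order
        self.transitions = defaultdict(list)

    def train(self, sequence):
        # Remember which symbol followed each run of `order` symbols.
        for i in range(len(sequence) - self.order):
            state = tuple(sequence[i:i + self.order])
            self.transitions[state].append(sequence[i + self.order])

    def generate(self, seed, length):
        out = list(seed)
        for _ in range(length):
            state = tuple(out[-self.order:])
            choices = self.transitions.get(state)
            if not choices:                     # unseen context: stop here
                break
            out.append(random.choice(choices))  # frequency-weighted pick
        return out

# A higher order means a longer memory: with order=2 the continuation
# after note 64 depends on which note came before it, not just on 64.
melody = [60, 62, 64, 62, 60, 67, 62, 64, 65, 67]
model = MarkovModel(order=2)
model.train(melody)
print(model.generate(seed=[60, 62], length=8))
```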

This exploration not only enhances our understanding of Markov models but also highlights the importance of precise data preparation in machine learning. By improving how data is fed into the model, we can greatly enhance its predictive capabilities, offering new possibilities for musical creativity at the intersection of technology and art.

Stay tuned as we continue to push the boundaries of music, mycology, and machine learning, uncovering new insights and possibilities in this innovative project.

Unveiling Musical Aesthetics

Markov Model Exploration for AI Improvisation

Following up on my last post, I am going into more detail on the project “Mycelium and Sound Collectives”, working more in depth with the question of musical genre and how to make the musical data ready for machine learning, a process known as feature engineering.

In this second episode of our vlog, we guide musicians and composers in harnessing the power of the Markov model for music improvisation. We dive into the essentials of machine learning, emphasizing tailored feature engineering for unique musical styles. Through a radar chart analysis, we prioritize key parameters.
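As a simplified illustration of what feature engineering can mean in this context, suppose incoming notes arrive as (pitch, velocity, duration) triples: each note is reduced to a small tuple of discrete features before it reaches the Markov model. The concrete choices below (pitch class, three loudness levels, a coarse duration grid) are illustrative, not the exact parameters prioritized in the vlog:

```python
# A coarse duration grid in beats (illustrative; the rhythm vlog explores
# a proportion-based grid in more detail).
GRID = [0.25, 0.5, 1.0, 2.0, 4.0]

def engineer_features(notes, beat_ms=500):
    """Turn raw notes into discrete feature tuples for a Markov model.

    notes: list of (midi_pitch, velocity, duration_ms) triples.
    Returns one hashable symbol per note.
    """
    symbols = []
    for pitch, velocity, duration_ms in notes:
        pitch_class = pitch % 12                  # discard octave information
        loudness = min(velocity // 43, 2)         # 0 = soft, 1 = medium, 2 = loud
        proportion = duration_ms / beat_ms
        duration_idx = min(range(len(GRID)),      # snap to the coarse grid
                           key=lambda i: abs(GRID[i] - proportion))
        symbols.append((pitch_class, loudness, duration_idx))
    return symbols

notes = [(60, 90, 480), (64, 30, 250), (67, 120, 1000)]
print(engineer_features(notes))   # [(0, 2, 2), (4, 0, 1), (7, 2, 3)]
```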

Join me in this exploration by watching the vlog episode. If you find it insightful, don’t forget to like and subscribe – you know the drill – but sincerely: your support means a lot. Thanks for tuning in, and stay tuned for more vlog episodes to come!

Can my computer learn to improvise music?

As a part of my project “Mycelium and Sound Collectives”, I am currently doing some research into the question of machine learning and music.

The project is scheduled for autumn 2024 as an outdoor performance featuring a rather peculiar ensemble: a saxophone player, an analog synth with a computer, and a decidedly unconventional third performer, a mycelium network forming a fairy ring in the soil where the concert unfolds.

In my current research, I am delving into the intersection of machine learning and music composition, questioning the potential for computers to genuinely improvise music.

My examination uncovers the complexities of the artistic process, exploring choices within sound composition. I scrutinize loop pedals, sequencers, and the Markov model, viewing them not merely as tools but as integral components shaping a dialogue between live musicians and evolving machine capabilities.

As documentation for the process, I am doing a vlog, and in the first episode, I am asking: “Can my computer learn to improvise music?”.


In this vlog episode, I don’t provide a definitive answer to the question posed in the title. Instead, I aim to unfold the various aspects at play when delving into the complexities of this inquiry. This involves discussing diverse examples of machine learning models, ranging from the rudimentary to the advanced, in connection with music. As you’ll see when watching it, I also demonstrate their functionality within the MaxMSP software, using concrete sound examples. Additionally, I delve into the conceptual framework essential for making informed artistic choices at the intersection of art and technical solutions.

Join me in this exploration by watching the vlog episode. If you find it insightful, don’t forget to like and subscribe – you know the drill – but sincerely: your support means a lot. Thanks for tuning in, and stay tuned for more vlog episodes to come!