I am happy to present my new album, "Biernes Geometri" (The Geometry of the Bees). The album is a live recording of a performance I gave in August 2025 as part of the outdoor exhibition "I et landskab" (In a Landscape).
Album cover. Left: Casper Hernández Cordes improvising on clarinet and live electronics. Right: a detail of Morten Plesner's sculpture "Absorberet Landskab" (Absorbed Landscape), where you can just make out a busy bee on its way into the sci-fi-like entrance Morten has built for the bees.
The concept: the clarinet as a sonic compass
The inspiration came from the work Absorbed Landscape, which housed a living beehive. Just as bees perform a "waggle dance" to point the way to nectar, the clarinet here works as an instrumental compass.
Using motion tracking and the software MaxMSP, the clarinet's direction in the landscape is tracked. When I pan the instrument between the exhibition's sculptures, a particular sound sequence is created in the synth. I have called these pans "Promenades" (a reference to Mussorgsky's Pictures at an Exhibition).
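In rough terms, the direction tracking boils down to comparing the phone's compass heading with the bearings of the sculptures. The Python sketch below shows the idea; the sculpture names are those of the exhibited works, but the bearings and the 15-degree 'aperture' are placeholder values, not the ones measured on site.

```python
# Hypothetical compass bearings (degrees from north) of the sculptures as seen from
# the performance spot. The names are the exhibited works; the numbers are placeholders.
SCULPTURES = {
    "ANDETSTEDS": 32.0,
    "BJARNAVI TAT LOH": 110.0,
    "TÅREKAR": 185.0,
    "OPERATION HVEPSESALAMANDER": 250.0,
    "IMELLEM SANDET": 310.0,
}

APERTURE = 15.0  # how close (in degrees) the clarinet must point to 'open' a work

def angular_distance(a, b):
    """Smallest angle between two compass bearings."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def active_work(azimuth):
    """Return the sculpture the clarinet points at, or None during a 'Promenade'."""
    name, bearing = min(SCULPTURES.items(),
                        key=lambda item: angular_distance(azimuth, item[1]))
    return name if angular_distance(azimuth, bearing) <= APERTURE else None

print(active_work(107.0))   # -> 'BJARNAVI TAT LOH'
print(active_work(75.0))    # -> None: between works, a Promenade
```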
When the clarinet points directly at a sculpture, that work's specific "sound architecture" is activated. These are digital structures I have built from the sculptures' physical proportions, materials and aesthetic expression, and they make the synth respond to the clarinet's smallest movements.
To learn more about the preparations for the concert, about the individual works, and about the work on the "sound architectures", read my blog post here.
Why an album?
Although the performance was site-specific and tied to the moment, a very particular sonic world arose in the interplay between the technology, the architecture and the improvisations. This album is a documentation of that meeting: a chance to take the "mental nourishment" of the exhibition with you and experience the geometric connections through sound.
One consideration has been whether the sound quality is good enough for an album. You will have to help judge that. It was recorded on an iPhone, and to be honest I simply forgot to record the audio internally from the computer (a lot to think about right before the start of the concert...). I have deliberately not removed the noise elements that audio engineers typically hate the most (wind pops, etc.), in order to preserve as much of the recording's richness as possible...
Why Bandcamp?
I have chosen to release the album via Bandcamp, since the platform supports independent artists and makes it possible to present the work in the best possible sound quality (lossless). There you can both stream the album and buy it digitally if you want to support my continued work with electronic composition and performance.
After my performance "Korkelmens Dialekt", I was inspired by Morten's work with the bees. I looked into what it is bees actually do when they communicate, and was reminded that they do their "dance": the waggle dance. In short, a bee that has found a food source performs, on its return to the hive, a kind of coded movement pattern that very precisely communicates 1) the distance to, 2) the direction to, and 3) the quality of the food source. The dance traces a figure eight on the vertical wall of the hive; its orientation relative to straight up (against gravity) indicates the direction of the food source relative to the sun, and the duration of the middle part of the dance indicates the distance, with 1 second corresponding to roughly 1 km (depending on wind resistance, etc.).
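Put as a small calculation, the decoding could look like this; the 1-second-per-kilometre rule of thumb is only approximate and varies between bee populations, and the example numbers are invented:

```python
def decode_waggle(run_angle_deg, run_duration_s, sun_azimuth_deg):
    """Rough decoding of a waggle run: the angle relative to 'up' on the comb maps to a
    compass bearing relative to the sun, and duration maps to distance (about 1 s per km)."""
    bearing = (sun_azimuth_deg + run_angle_deg) % 360.0   # direction of the food source
    distance_km = run_duration_s * 1.0                    # approximate rule of thumb
    return bearing, distance_km

# A 2.5 s waggle run angled 40 degrees right of vertical, with the sun due south (180 degrees):
print(decode_waggle(40, 2.5, 180))   # -> (220.0, 2.5): bearing 220 degrees, roughly 2.5 km away
```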
This phenomenon inspired me to think: I want to be a worker bee performing a "waggle dance" to tell the community about the various (spiritual) food sources this exhibition has to offer!
The position of the sun at the performance on 7 September at 14:15. The dotted lines show the distances from Morten's work to the other works, as well as the angles relative to the sun's position. Buzz buzz.
So I am busy! Worker-bee busy! From my first performance there are 22 days until I am to perform this waggle dance!
The basic idea is that I mount my phone on my clarinet and have it send a live stream of data to the computer describing the clarinet's direction, up-down and side-to-side. When I then point towards one of the exhibited works, I open a generative sound architecture that I have programmed for that work. Sound architecture, because I have fixed a set of possible pitches, durations, filters and so on, which I can activate and shape with the clarinet's movements: like moving around a house and highlighting different rooms, decorations, pieces of furniture. Generative, because I use controlled randomness (Markov chains) to vary the concrete sound sequences you will hear; a small sketch of that idea follows below. The sounds are created by a modular analog synthesizer, so I have a relatively limited instrument to play with (limited compared to the computer's endless possibilities for sound synthesis). A good constraint, which demands creative solutions (if everything is not to sound the same!).
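The Markov idea, stripped to its core: each pitch has a set of weighted choices for the next pitch, so every run through the architecture is different but recognisably related. The notes and probabilities below are illustrative, not the ones programmed for the actual works:

```python
import random

# A first-order Markov chain over a small set of pitches (MIDI note numbers).
TRANSITIONS = {
    60: {60: 0.1, 63: 0.5, 67: 0.4},
    63: {60: 0.3, 63: 0.2, 67: 0.5},
    67: {60: 0.6, 63: 0.3, 67: 0.1},
}

def next_pitch(current):
    """Draw the next pitch according to the weighted transitions from the current one."""
    pitches, weights = zip(*TRANSITIONS[current].items())
    return random.choices(pitches, weights=weights)[0]

def phrase(start=60, length=8):
    """Generate a short phrase; every run is different but follows the same tendencies."""
    notes = [start]
    for _ in range(length - 1):
        notes.append(next_pitch(notes[-1]))
    return notes

print(phrase())
```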
No. 3 Freja, "ANDETSTEDS" & no. 15 Dorte, "ÖDE"
Freja's and Dorte's works are different in many ways, but they have in common that they consist of many smaller parts, of very different character, put together in the landscape.
My idea was to make a kind of virtual sound sculpture out of each of their works. By moving the clarinet up and down and wiggling it left and right, I can navigate around the sound sculpture and activate its different parts.
I made recordings of the materials the sculpture's parts are made of and analysed their sonic characteristics.
Here I have cut out different parts of Freja's work... and here is their representation for the computer.
To be able to activate the different sounds belonging to the different sculptural parts, I made a simplified representation with clear colours. I send live data from my phone, attached to the clarinet, and when I move the clarinet up-down and side-to-side I move something like a cursor around the image; when the cursor is over the colour green, for instance, the sound I have stored for Freja's ceramic sculpture is activated, and so on. In this way I can use the clarinet as a kind of drumstick, playing on the different parts of the sculpture; a small sketch of the colour lookup is shown below.
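The lookup itself is simple: orientation data selects a pixel in the colour map, and the pixel's colour selects a sound. In the Python sketch below the file name, the colours and the sample names are placeholders I have made up for illustration:

```python
from PIL import Image  # Pillow, assuming the simplified colour map is saved as a PNG

# Hypothetical mapping from colours in the simplified image to stored sounds.
COLOUR_TO_SAMPLE = {
    (0, 255, 0): "freja_ceramic.wav",   # green -> ceramic part
    (255, 0, 0): "freja_metal.wav",     # red   -> metal part
    (0, 0, 255): "freja_wood.wav",      # blue  -> wooden part
}

colour_map = Image.open("freja_colour_map.png").convert("RGB")

def sound_at(yaw, tilt):
    """Map the clarinet's orientation (normalised 0..1) to a pixel and look up its sound."""
    x = int(yaw * (colour_map.width - 1))
    y = int((1.0 - tilt) * (colour_map.height - 1))
    colour = colour_map.getpixel((x, y))
    return COLOUR_TO_SAMPLE.get(colour)  # None when the cursor is over the background
```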
No. 6 Claus, "BJARNAVI TAT LOH"
Claus' work stands like an almost alien relic left in the middle of the landscape, and you think: "What is that doing there?". It makes me think of the probe humanity has sent into space, carrying geometric figures and so on, which I suspect an alien intelligence would not make much sense of...
The so-called Pioneer plaque, sent with Pioneer 10 and 11 in the 1970s. The similarity between the line figure on the left and my own line figure above is unintended, but still a striking coincidence. Check out Freja's work too... Interesting, by the way, that the dear extraterrestrial beings are supposed to picture a planet inhabited exclusively by white people...
Well, there is not much in the form of Claus' work, in the catalogue text about it, and certainly not in its title, that gives any hint to me as a viewer/alien intelligence.
To find a way into this work, I got to grips with this part of it:
It is an aluminium plate with holes that have been punched into it, as it were, from one side and the other. The holes form figures that appear to be placed according to vertical/horizontal and diagonal logics. The figures are rectangles, circles and straight lines. And then a figure I did not know the name of, but which (according to Google, etc.) seems to be a so-called "stadium". Wikipedia's illustration of this shape matches Claus' shape quite well:
This shape appears in three different orientations: horizontal, vertical and diagonal. I thought: let us imagine that the holes define pitches and durations. The distance from one hole to the next, horizontally, gives a duration; vertically, a pitch. That way we get a loop of notes with different durations, i.e. a melody. I then imagined letting this shape-turned-melody exist virtually in my computer, so that I could 'rotate' it with the clarinet's movements. As it rotates, the meaning of the distances between the points gradually changes, and durations are reinterpreted as pitches and vice versa; the sketch below shows the principle.
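A minimal sketch of that rotation in Python. The hole coordinates are invented, and the scaling constants (semitones per unit, seconds per unit) are arbitrary; the point is only that rotating the point set swaps the roles of the two axes:

```python
import math

# Hypothetical hole coordinates read off the aluminium plate (arbitrary units).
HOLES = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.2), (3.0, 0.9), (4.0, 0.4)]

def rotate(points, angle_rad):
    """Rotate the point set around the origin."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

def as_melody(points, base_midi=60, semitones_per_unit=12, seconds_per_unit=0.25):
    """Read horizontal distances as durations and vertical positions as pitches."""
    pts = sorted(points)  # left to right
    durations = [max(0.05, (x2 - x1) * seconds_per_unit)
                 for (x1, _), (x2, _) in zip(pts, pts[1:])]
    pitches = [round(base_midi + y * semitones_per_unit) for _, y in pts[1:]]
    return list(zip(pitches, durations))

# At 0 degrees the x axis carries time; rotated 90 degrees, the axes have swapped roles.
print(as_melody(HOLES))
print(as_melody(rotate(HOLES, math.pi / 2)))
```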
This way of thinking is something I have been drawn towards by working with analog synthesizers. That is a world of voltages, where the same voltage can be used to control pitch one moment, duration the next, timbre the next, and so on.
No. 7 Johanne, "TÅREKAR"
Johanne's work consists of five ceramic vessels. I recorded the sound of each of them by (carefully!!) tapping it with a finger. I analysed the sounds and programmed the fundamentals and overtones into MaxMSP. Technically, I have (re)discovered the method of wavetable synthesis. I had come across the method before, but it never felt meaningful to me until now. Why? Because I have given myself the enormous constraint of only creating sound with a (very primitive) modular synth setup. There I can only send one value at a time to control pitch, so normally I can only get the synth to produce one note at a time, for instance a C. How, then, to get the synth to produce more complex, overtone-rich sounds? The answer is wavetable synthesis. I have made a very small 'recording' with only five samples: the fundamental and four overtones. This 'recording' is then played into the synth, fast, and the synth is in effect 'told' to play what at slow playback would be an arpeggio. Played fast, however, the otherwise separate notes fuse into a single timbre. Does it sound like Johanne's tear vessels? Probably not quite. But it is an attempt at a dialogue with the work. And perhaps the sounds take on a plaintive character that fits the essence of the tear vessels?
Top: the overtones of one of the clay vessels 'translated' into 5 samples. Bottom: the amplitudes of the same overtones.
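To make the wavetable idea concrete, here is a software stand-in for what the modular setup does: a 'recording' of only five values is cycled at audio rate, so the values fuse into one timbre instead of being heard as separate steps. The five numbers and the 220 Hz frequency are illustrative, not the measured ones:

```python
import wave
import numpy as np

SR = 44100

# A 'wavetable' of only five values, standing in for the fundamental and four
# overtone levels taken from one of the vessels (the numbers here are invented).
TABLE = np.array([1.0, 0.6, -0.3, 0.45, -0.8])

def render(frequency=220.0, seconds=2.0):
    """Cycle through the five samples at audio rate so they fuse into a single timbre."""
    n = int(SR * seconds)
    positions = (np.arange(n) * frequency * len(TABLE) / SR) % len(TABLE)
    return 0.3 * TABLE[positions.astype(int)]   # nearest-sample lookup, deliberately raw

def write_wav(signal, path="tear_vessel_tone.wav"):
    """Save the mono signal as a 16-bit WAV file."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        f.writeframes((signal * 32767).astype(np.int16).tobytes())

write_wav(render())
```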
No. 12 Hartmut, "OPERATION HVEPSESALAMANDER"
Here I chose to use the 'crest' of an ordinary great crested newt as a shape for generating sequences of pitches and durations. I used the software https://automeris.io/wpd/, where you can plot (x,y) pairs onto an image and extract them as a dataset.
I chose to let x = duration and y = pitch; each 'tooth' of the crest then yields a 'note' of a melody, where the tone glides upwards to a sharp break. The newt makes no sound, but wasps do. So to create a virtual, sonic wasp-salamander, the newt's shape provides the form of the sound architecture (i.e. the melody), while the wasp's sound (a buzz at 170–200 Hz) provides the sound's timbre and body. The sketch below shows how the digitised points become a melody.
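A minimal sketch of the conversion, assuming the (x,y) points have been exported from https://automeris.io/wpd/ as a plain two-column CSV with no header; the file name and the frequency range are placeholders:

```python
import csv

def read_points(path="newt_crest.csv"):
    """Load (x, y) pairs exported from the plot digitiser: one pair per row, no header."""
    with open(path, newline="") as f:
        return [(float(x), float(y)) for x, y in csv.reader(f)]

def crest_to_melody(points, low_hz=200.0, high_hz=800.0, seconds_per_unit=0.5):
    """Horizontal steps become durations; heights become pitches within a chosen range."""
    pts = sorted(points)
    heights = [y for _, y in pts]
    lo, hi = min(heights), max(heights)
    melody = []
    for (x1, _), (x2, y2) in zip(pts, pts[1:]):
        duration = (x2 - x1) * seconds_per_unit
        freq = low_hz + (y2 - lo) / ((hi - lo) or 1.0) * (high_hz - low_hz)
        melody.append((freq, duration))
    # In the performance these notes are then voiced with the 170-200 Hz wasp-buzz timbre.
    return melody
```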
No. 14 Regitze, "IMELLEM SANDET"
When I look at Regitze's sculpture, something happens straight away, sonically. These tentacles, or nerve fibres, or umbilical cords, pulled out of, drilled down into, or grown together with the gravel, make me hear sounds that move from the sculpture's 'head' down into the ground. The sound architecture for this work therefore consists, among other things, of five pitch movements that slide from high to low in a gliding arc. The pitch is determined by the sculpture's (dull and deep) resonance.
A 'preview', where I test my whole setup (at home)...
My latest artistic research explores how to give the forest a voice – literally! It’s a journey blending art, science, and sound design. Using a special microphone called a geophone, I’m trying to uncover the hidden world of vibrations within trees and soil, translating them into musical forms.
The Geophone: A Window into the Unseen
We often think of sound as something we hear through the air. But vibrations exist everywhere, even in solid objects like trees and the ground. A geophone is a microphone that picks up these subtle vibrations by making direct contact with surfaces. It’s like giving the forest a voice, allowing us to hear its hidden language.
The Journey from Vibration to Sound
Although I utilize techniques similar to sonification, my aim is not merely to represent data. Instead, I strive to create a sonic bridge between the hidden world of the forest and human perception. This involves analyzing the unique characteristics of the vibrations and translating them into sound.
Key Parameters: Amplitude and Noisiness
I focus on two main characteristics:
This chart, displaying the mean amplitudes for each of 40 frequency windows, reveals that while the four different trees (2 apple trees, a fir and a hazel) share some common traits, they also exhibit significant differences in their vibrational patterns.
Amplitude: This is the strength of a vibration. A loud sound has high amplitude, while a soft sound has low amplitude. By tracking amplitude changes, I can understand how the vibrations evolve over time.
Spectral Flatness: This measures how "noisy" a sound is. A pure tone has low spectral flatness, while a hissing sound (like wind) has high spectral flatness; see the definition below.
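For reference, I use spectral flatness in its standard sense: the ratio between the geometric mean and the arithmetic mean of the power spectrum,

\[
\mathrm{SFM} = \frac{\left(\prod_{k=1}^{N} S(k)\right)^{1/N}}{\frac{1}{N}\sum_{k=1}^{N} S(k)},
\]

where \(S(k)\) are the \(N\) power-spectrum bins. Values near 1 indicate noise-like signals, and values near 0 indicate tonal ones.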
Comparison of baseline spectral flatness between trees and soil.
The Sonification Process
My process involves the following steps (a minimal code sketch follows the list):
Dividing the audio into frequency bands.
Measuring the spectral flatness of each band over time.
Comparing these values to previously recorded data.
Triggering musical events based on the differences in the data.
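Here is a minimal Python sketch of those four steps, assuming the geophone signal has already been loaded as a numpy array and that `baseline` holds per-band flatness values measured from previously recorded material; the band count, frame size and threshold are toy values chosen only for illustration:

```python
import numpy as np

SR = 44100
N_BANDS = 8       # toy value; the real analysis uses finer bands
FRAME = 2048      # analysis window in samples

def spectral_flatness(power):
    """Geometric mean over arithmetic mean of a power spectrum (0 = tonal, 1 = noisy)."""
    power = power + 1e-12
    return np.exp(np.mean(np.log(power))) / np.mean(power)

def band_flatness(frame):
    """Split one frame's spectrum into N_BANDS bands and measure each band's flatness."""
    power = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    return np.array([spectral_flatness(band) for band in np.array_split(power, N_BANDS)])

def triggers(live_audio, baseline, threshold=0.15):
    """Yield (time, band) events whenever a band's flatness departs from the recorded baseline."""
    for start in range(0, len(live_audio) - FRAME, FRAME):
        flatness = band_flatness(live_audio[start:start + FRAME])
        for band, difference in enumerate(np.abs(flatness - baseline)):
            if difference > threshold:
                yield start / SR, band   # e.g. start the musical sequence assigned to this band
```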
My aim is to create a live concert experience in a forest, where the geophone acts as a conduit for the trees and the earth to participate in the music making. The geophone, placed on a tree or the forest floor, will capture the subtle vibrations of the environment and translate them into triggers for musical sequences. A human musician will then improvise alongside these sounds, creating a unique duet between human and nature.
You can hear an example of how this might sound in this video:
This research seeks to give a voice to the silent language of nature. By revealing the hidden vibrations within trees and the earth, I hope to deepen our understanding of the interconnectedness of all living things.
Conclusion
This is an ongoing journey of exploration, and each recording opens up new possibilities. I’m excited to continue refining this method and uncovering the hidden sounds of our more-than-human kin.
P.S. This blog post has been written with the help of AI
Did you ever wonder why live music is so much more exciting than listening to music in your living room? Isn’t it partly because the music you download from iTunes or stream from Spotify is exactly the same each time you listen to it? No surprises!
Recorded music just keeps repeating the same old patterns, not caring about who is listening, when, where and how often. Just like some halfway autistic aunt who babbles on about herself for hours, not aware that everyone else is sick and tired of listening to the same stories over and over again.
Barefoot Records is a collective of improvisors and composers, who are very active on the Danish music scene. As a proud part of this collective, I have engaged in a journey that will explore ways of challenging the way we publish and distribute music.
The first step in this journey is to develop a "living album". This is a pilot project, and we – the artists at Barefoot Records – are inviting you to collaborate!
You might have heard about crowdfunding? Would you like to be part of a crowdsourcing experiment? You are hereby invited to take part in a collaboration, where anyone on the Web can pitch in with ideas in a creative effort to break new ground in the way we listen to and conceive of music!!
This is how it works:
The Barefoot artists are recording a series of improvisations. In each take a single musician is improvising freely.
The takes, let's call them soundscapes, are released on soundcloud.com; please visit the set here.
On Soundcloud, you can comment on the soundscapes, in the sound itself. For each comment, we open a small discussion, and we can add links to other comments, thus building conceptual connections between different parts of the soundscapes. Please share your comments about the improvisations, giving special attention to
what images/ambiences/atmospheres/landscapes do the soundscapes evoke? Where are the aesthetic bridges between the soundscapes? Which parts match, and how? This will help us build banks of sounds that we can combine in different ways, according to the musical imagery they evoke.
how do you perceive the overall forms of the improvisations? If you consider each improvisation as a narrative, what is the form of the story? These analyses will give us some macro-forms to use when programming the generic structural elements of the album.
imagine that you are listening to the final "living album". The sounds will combine according to the above-mentioned banks of sounds and the macro-forms, in a way that depends on what happens at the specific time and place where you listen. In which way would it make sense for the sounds to interact with your environment? When programming the album, we can play with time and place, and we can connect with the computer's "sensors", i.e. camera and microphone, as well as streams of data from the Internet.
After this process of crowdsourcing, of co-creating the collective living work of sound art, composer Casper Hernández Cordes will compile the sounds, forms and interaction patterns into a living album: an application that you can download for free. Every time you open the application on your computer, you will hear new pieces.
The app will enable you to save the pieces, and you are invited to upload them to soundcloud. There, you can comment on the piece and on the living album, sharing your experiences with the community of co-creators.
How about that? Click here, and you can participate! Join us! Comment! Share! Create!
Recommendation: Watch the video clip while listening to the sound:
The participants made sounds with the bikestruments and formed them with their voices through the Anthropomorfer. When working with the sounds via smartphones, how could we manage with only very little space? I hadn't prepared a silver concept for that part. Somehow someone came up with the idea of following the passing cars, bikes and people with the sounds. So each participant chose one feature, for instance "everything yellow", and each time a yellow car, bike or hat passed by, he/she would 'follow' the moving object with the sound, panning from left to right or vice versa. In this way, a new aspect was introduced into the street-as-an-instrument concept: SILENCE!! Since yellow things were not passing by all the time, the participant following yellow things had to keep silent in the pauses. But since each participant followed his/her specific feature (bike helmets, taxis, blondes, etc.), a great variety of movements and velocities naturally entered the improvisations, creating variation and new mixes/collisions. QRaaaaa QRaaaa!
Tubes from the underground. Now getting a sonorous afterlife in a streetstrument/ giving-back-the-noise workshop
Concept: During a street art decoration of the wall around the local metro construction site, akutsk is holding a 'streetstrument' workshop.
The workshop is going to be in two parts:
a streetstrument building workshop. In these weeks, I'm collecting debris from the construction site. At the event on August 11, these objects will be the material for instruments.
an Anthropomorfer workshop, where the participants form the sounds and improvise collectively via laptop and smartphones
Everything is on location, anyone can join, and afterwards we paste up QR-codes linking to recordings of the impros.
Here is the Facebook event.
The fundamental reason why I work with these "street interventions", using the Anthropomorpher as a tool for inviting passers-by in the street to make collective improvisations on the street's sounds, is that I want to ignite a trend where people start making sound art as street art. There is no official name for this; I have suggested 'fonografiti' (intended misspelling), 'proto urban folklore', or 'soundtagging'. I have only seen a few examples of sound art as street art (see my page with examples), and I have seen no examples of collective street improvisations using electroacoustic tools, apart from my own project, that is. As always, I invite the reader to give feedback: if you have examples of sound art as street art, or of collective electroacoustic improvisation in the street, please leave me a comment!
But why electroacoustic collective street improvisations, you might ask? Why is this approach important, necessary, indispensable?
Why the street? Public space is the only place where there is a possibility for random meetings between people, across differences in gender, age, occupation, income, etc. People are different, and when they engage in problem-solving activities in contexts with only people of a similar kind, the way the problems are solved will tend to exclude the viewpoints of other kinds of people. The street is therefore a potential motor for balancing contrasting interests. Public space is invaded by commercial and municipal interests to an extent where we 'ordinary people' tend to forget that the street is a common space for all of us. The visual 'screen' of the street is already full, so to speak, BUT there is a whole new ground for expression left untouched: sound!
In the unfair battle between commercial and municipal interests and the citizens, there is a space left open that allows ordinary people to express themselves through sound, adding a virtual track to the all-encompassing visual track of public space.
Why collective improvisations? In posing the problem of how this street sounds, and of what we can express through the sounds of this street at this time and place, the concept I'm developing sets up an example showing that collective problem solving involving different kinds of people is indeed possible. Improvisation is a very important aspect of inclusive problem-solving activities, where a predefined agenda will always favour the interests of some at the expense of others. An improvisation in the akutsk sense is a reflexive collective activity, involving a process in which the participants agree on a common ad hoc script for the collective improvisation, thus providing an example of collaboration around a problem-solving activity, with sound as the medium.
In general, people regard public space as a distance they have to cross in order to get to the places that are important to them, all of which contain people that are in general of the same kind. A street sound improvisation can work as an interest-free brake on this stream of people hurrying from A to B. A collective street sound improvisation is not aimed at serving specific interests; the participants are the ones defining the aim and content of the activity within the given framework. It is an attempt at setting up an ideal form of interchange between contrasting interests, an experience that can be scaled to a broader, societal perspective.
Why electroacoustic? I talked to a guy with a lot of experience in performances, street theatre and the like, and he was sceptical about the level of technology required for the akutsk concept of collective street sound improvisations. There are many examples of people making musical activities in the street, ranging from street musicians, over flash mobs, to stomp-inspired activities. Common to these activities is that they do not challenge the traditional duality between performer and audience, and so they fail to invite the latter into reflexive (co-)creativity.
Although the relatively high level of technology required for the akutsk approach is a challenge for the possible spreading of the concept, it is an obstacle worth accepting, considering the benefits it entails. Reflection and informed decision-making are central to the concept, and the way I use the computer eases the path to this considerably. With the possibility for the participants to record a sound, listen to it, form it with their voice, and subsequently listen to it together with the other participants, the Anthropomorpher bridges the gap that keeps most people not trained as musicians from expressing themselves in a creative and reflective way through sound. In addition, smartphones connected to the computer through (portable) wifi, serving as individual remote controls for each participant's sound, facilitate the use of space as a 'resistance': the participants move in a field traced on the ground, representing the sound's virtual space as shaped by each participant's changes in volume and panning.
This could be done acoustically, with people using their voices or improvised instruments, but that is probably too far from most people's comfort zone. Using a smartphone as a remote is comfortable. Using your voice in a mic and walking around in a field in the street with strangers are sufficient barriers to overcome, and although they exclude some people, I think they exclude people in less predictable ways than the traditional acoustic variant of street performances, which favours people who consider themselves 'musical' or 'extrovert'.
Given my aspirations for a proliferation of this approach, or similar approaches, it is essential that the procedures are simple and easy to copy. For the time being, I think the complexity of the electroacoustic method and the technical requirements pose a serious barrier.
In a global context, you could argue that the approach is out of reach for most communities in the Global South. You might say, though, that we northerners are more challenged when it comes to spontaneous expression in a public setting, and that we need technical aid to overcome this. For the sake of people interested in engaging in activities inspired by the akutsk approach, I have made this list of tools needed for the aspiring street sound art activist:
Though it adds up to I don't know how many years of wages for an average garment worker in Bangladesh, I believe it is not inconceivable that a local group of activists in a Northern welfare state could get their hands on these things.
We are all virtuosos with our voices. Imagine being able to improvise over the sounds around you, using your voice as an infinitely fine-tuned controller, while jamming in real time with someone on the other side of the planet.
The mission is: I want to find the optimal tool to allow people to improvise sound art in collectives across the planet, in a creative, pleasurable, and reflective way. I have developed the Anthropomorfer as a desktop application, allowing collectives to improvise, while being in the same place. Now, I want to extend the functionality from a local wifi based context to a global web-based one.
The tool is intended for anyone interested in working with sound as a means for expression, but these contexts are of special interest:
working with children developing their analogue literacy and their divergent thinking
in organisations enhancing communication skills
What will the participant experience:
1) Open your app. Start a group or sign up for one. Select a sound, either by recording it on the spot or by choosing from a database of sounds that other users have contributed. 2) You now hear your audio while viewing it as a waveform on your smartphone. You choose which part of the sound you want. 3) When all the participants in your group have chosen and cut their sounds, start your session: 3… 2… 1… and 4) improvise together. You can turn the volume up and down, pan, and shape your sound with your voice via the phone's microphone. 5) Afterwards, you listen to the improvisation and give it a thumbs up or down; if a majority votes for it, the improvisation is saved on the server. There you can comment on it and discuss it.
What lies behind:
Technically, there must be a server where the program runs and audio files are stored. From each cell phone the server receives 1) an upload of a short sound file (max. 15 seconds), or a reference to an audio file already on the server, and 2) a stream of analysis of the voice. Not the voice itself, just analyses of pitch and volume. The server streams the audio of the collective improvisation back to the participants. The sketch below shows one way this data flow could be structured.
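A sketch of those two message types, under the assumption of a simple client-server exchange; the field names and types are my own invention for illustration, not the Anthropomorfer's actual protocol:

```python
from dataclasses import dataclass

@dataclass
class SoundUpload:
    group_id: str
    participant_id: str
    audio_ref: str      # an uploaded clip (max. 15 seconds) or a file already on the server
    start_s: float      # the part of the clip the participant has cut out
    end_s: float

@dataclass
class VoiceAnalysisFrame:
    participant_id: str
    timestamp_ms: int
    pitch_hz: float     # analysis only: the raw voice never leaves the phone
    volume_db: float

# The server applies each incoming VoiceAnalysisFrame to the participant's chosen sound
# and streams the resulting collective mix back to every phone in the group.
```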