The Sound Beneath Our Feet

Project team:
Dr Haftor Medbøe (PI) – Professor of Music in the School of Arts & Creative Industries, Edinburgh Napier University
Dr Andrew Bell (Co-I) – Senior Lecturer in the School of Geosciences, the University of Edinburgh
Dr Iain McGregor (Co-I) – Associate Professor in the School of Computing, Edinburgh Napier University
Harry Docherty (RA) – Doctoral candidate in the School of Arts & Creative Industries, Edinburgh Napier University
Project background and introduction
The Sound Beneath Our Feet is a project grounded in both interdisciplinary and transdisciplinary approaches. It brings together the fields of geosciences, audio processing and creative practice to seek new understandings and perspectives within each discipline, while offering a synthesised output of findings and audience experiences underpinned by methodological frameworks developed by the team.
The project’s genesis was a guest lecture delivered in 2016 by Andrew Bell to students of composition at Edinburgh Napier University and students from a variety of creative disciplines at Edinburgh College of Art, University of Edinburgh. The mixed cohort was tasked with individually responding to the lecture content through the lens of their own creative discipline. The lecture explored Bell’s fields of specific interest and expertise within geosciences, with particular focus on the seismic consequences, and related hazards, associated with volcanic eruptions.
Within this discussion Andrew Bell’s practice of ‘listening to volcanoes’ was presented through examples of sonified seismic data. The students presented a wide range of responses to an invited audience at Edinburgh College of Art, University of Edinburgh. Interestingly, although students of music formed the majority, none employed the sonic data shared with them, preferring instead to abstract concepts towards nevertheless fascinating creative ends. Following this event, Andrew Bell and Haftor Medbøe continued ad hoc discussions around the potential of a project that would both inform the practice of ‘listening to volcanoes’ through the development of novel audification processes and employ the source data directly in artistic output.
In 2022 a successful bid was written in response to a call from Creative Informatics, Edinburgh Futures Institute, allowing a limited range of ambitions to be formalised and methodologies to be developed to these ends. The funding enabled the appointment of research assistant Harry Docherty and allowed the project team to draw on the experience of Iain McGregor in the School of Computing, Edinburgh Napier University, whose research interests and expertise lie in sound design and listening.
Creative aims and objectives
The aim of this pilot project was to work with source materials from seismometers located at sites of significant volcanic activity, exploring both the scientific and artistic potentials of creative intervention.
Approaches to working with the selected data were developed in response to questions from within the team, from both intra- and extra-disciplinary perspectives. These may be summarised as:
- How can the experience of ‘listening to volcanoes’ from a geo-scientific perspective be augmented or afforded additional nuance through creative and artistic intervention?
- How may the trivial sonification of seismic information provide the basis from which to create an experiential artistic response that meaningfully reflects or speaks to the nature and origins of the data employed?
The data, its nature and context
The Earth is constantly vibrating. Seismometers record these vibrations. By analysing seismic data, scientists aim to better understand the processes generating the vibrations and the nature and structure of the earth that they pass through. Active volcanoes are a source of particularly strong and diverse seismic signals. Monitoring how these signals change in time and space is a key method for tracking changes in volcanic activity and forecasting future eruptions.
Most analyses of seismic signals involve a stage of data visualization, either of the primary data themselves (in time or frequency domains), or of secondary data derived by algorithmic processing of the primary data. Such methods have limitations, and it can be impossible to represent many features of rich, complex datasets in a static graph or image. Consequently, seismologists have long employed techniques of data sonification to understand and analyse seismic data. Simple processing steps can be used to convert the vibrations in the earth recorded by seismometers into vibrations in the air at frequencies that can be heard by the human ear. The ear and brain are capable of processing auditory signals across a very wide range of frequencies, and of identifying varied textures and timbres.
Bell has been working with seismic data recorded at volcanoes in Ecuador since 2015. He uses these data to study the movement of magma and gases within the volcanic system, and the way in which the volcanic edifice responds to these movements. By listening to sonified versions of these data, he has been able to identify a rich variety of volcanic processes and to gain new understanding of these systems. However, it became apparent that there is far greater potential for scientific and artistic exploration of these data.
For this project, seismic data were chosen from significant volcanic events at Tungurahua volcano, Ecuador. Tungurahua was active from 1999 to 2016. Through much of this period, a single seismometer, ‘RETU’, operated by the Instituto Geofísico de la Escuela Politécnica Nacional and located close to the volcanic crater at 4000 m elevation, recorded data and transmitted them to the volcano observatory. RETU recorded signals associated with volcanic earthquakes and explosions – many hundreds of thousands of them – providing a record of changes in the activity of the volcano through time. The samples chosen for this work depict significant explosions or changes in activity during episodes that formed key components of volcanological publications by Bell. The instrument converts the vertical component of ground velocities into an analogue voltage, recorded by a digitizer at sampling rates of 50 or 100 samples per second. Signals related to volcanic activity are typically contained within a frequency band of 0.5–20 Hz, below the sensitivity of the human ear. Speeding these signals up by a few hundred times brings them into the audible range – a speed-up of 360 times converts 24 hours of data into 4 minutes, and a signal frequency of 0.7 Hz becomes a pitch of middle C.
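As a minimal illustration of this audification step (not the project’s exact pipeline), the sketch below assumes a plain one-column text file of RETU vertical-velocity samples at 100 samples per second; the file name and normalisation are illustrative assumptions.

```python
# A minimal audification sketch: play a 100 samples-per-second seismic trace
# back 360 times faster so it falls within the audible range.
import numpy as np
import soundfile as sf

SEISMIC_RATE = 100        # samples per second recorded by the digitizer
SPEED_UP = 360            # 24 hours of data become roughly 4 minutes of audio

trace = np.loadtxt("retu_trace.txt")      # hypothetical input file
trace = trace - trace.mean()              # remove any DC offset
trace = trace / np.max(np.abs(trace))     # normalise to the +/-1.0 audio range

# "Speeding up" is simply playing the same samples at a higher rate:
# 100 samples/s * 360 = 36,000 samples/s, so a 0.7 Hz tremor is heard at
# roughly 252 Hz, close to middle C.
sf.write("retu_audified.wav", trace.astype(np.float32),
         samplerate=SEISMIC_RATE * SPEED_UP)
```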
Method and process
Using this audification technique to make the seismic data audible, what can then be heard is a sonically diverse arrangement of clicks, cracks, pops and bangs arising from seismic events of different amplitudes and intensities. Two particular data sets from the site of the Tungurahua volcano in Ecuador were chosen for this project for their sonic qualities, each offering contrast and variety to work with in curating the listening experience. The first, captured during an event on July 14th, 2013, has a major eruption around halfway through, bookended by a variety of tremors in the build-up to the eruption and by its aftershocks. The second, from an event on February 27th, 2016, has quite a different palette of sounds, mostly consisting of otherworldly screaming or wailing noises attributed to liquid and gas movements within the volcano. As musicians, Docherty and Medbøe focused largely on which data sources offered the greatest opportunities for sampling and sonic manipulation rather than solely on their scientific significance; these choices were subsequently discussed in relation to Bell’s perspective in an attempt to balance artistic and scientific value.
A Fast Fourier Transform (FFT) approach was adopted to separate individual frequency bands from the initial audifications, extracting individual streams of data for auditory modification. These narrow bands could then be utilised as the basis for sonic exploration around dynamics, spectrum, spatialisation and temporality, all of which could be expanded or contracted to convey both the macro and micro aspects of the waveforms captured by the seismometers. These isolated frequency tracks were then processed further through the use of delay, reverb, pitch manipulation, granular synthesis and more. All the isolated audio files were converted into harmony, melody and drum MIDI tracks using Ableton’s audio-to-MIDI converter, yielding a range of interesting results. Because these tracks consist of only a very narrow range of frequencies, the resulting MIDI information was often just a single note played repeatedly.
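As a rough sketch of this kind of FFT-based band isolation (the project implemented the equivalent process inside its audio tools, so the band edges and file names below are illustrative assumptions), a band can be isolated by zeroing all FFT bins outside it and resynthesising:

```python
# Isolate narrow frequency bands from the audified WAV by masking FFT bins.
import numpy as np
import soundfile as sf

audio, rate = sf.read("retu_audified.wav")   # audification from the earlier sketch

def isolate_band(signal, rate, low_hz, high_hz):
    """Keep only the FFT bins between low_hz and high_hz and resynthesise."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    spectrum[~mask] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Split the audification into eight octave-wide bands (illustrative edges) and
# save each one as a separate track for further processing.
edges = [100, 200, 400, 800, 1600, 3200, 6400, 12800, 18000]
for low, high in zip(edges[:-1], edges[1:]):
    band = isolate_band(audio, rate, low, high)
    sf.write(f"band_{low}-{high}Hz.wav", band.astype(np.float32), rate)
```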
MIDI instruments were then created by editing audio samples and assigning them to particular notes, which were triggered by these automatically converted MIDI files. Certain sections of audio samples were chosen for their distinct transients, which could then be sliced by transient in Ableton Live’s Sampler device. The Expression Control Max for Live device was used to map velocity and keytrack to plug-in parameters (e.g. dry/wet, pitch, formant) and control them automatically using the MIDI tracks. The decision to rely on the seismic data to control the composition of the music was taken in order to emphasise the notion of natural phenomena; to, in a sense, let nature take creative control.
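The project used Ableton Live’s built-in audio-to-MIDI conversion and the Expression Control device for these mappings; purely as an illustrative stand-in, the following sketch derives a single-note MIDI track from one isolated band, with note velocities following the band’s amplitude envelope. The file name, note number, threshold and tick values are assumptions.

```python
# Illustrative stand-in for audio-to-MIDI: one note per loud envelope frame,
# velocity proportional to the band's amplitude.
import numpy as np
import soundfile as sf
import mido

audio, rate = sf.read("band_100-200Hz.wav")    # hypothetical isolated band
hop = rate // 10                                # one envelope reading every 100 ms
envelope = np.array([np.abs(audio[i:i + hop]).max()
                     for i in range(0, len(audio), hop)])
peak = envelope.max() or 1.0                    # avoid dividing by zero on silence
velocities = np.clip(envelope / peak * 127, 0, 127).astype(int)

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

TICKS_PER_HOP = 48        # arbitrary spacing between envelope frames, in MIDI ticks
silence = 0               # delta time accumulated while the band is quiet
for vel in velocities:
    if vel > 10:          # treat near-silent frames as rests
        track.append(mido.Message('note_on', note=60, velocity=int(vel), time=silence))
        track.append(mido.Message('note_off', note=60, velocity=0, time=TICKS_PER_HOP))
        silence = 0
    else:
        silence += TICKS_PER_HOP

mid.save("band_100-200Hz.mid")
```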
Output formats and public engagement
A work-in-progress installation of this project took place on the 10th of November, 2022 at the Creative Informatics studio space at Edinburgh Napier University’s Merchiston Campus. The event was advertised across the University, promoted particularly within the School of Arts & Creative Industries and the School of Computing, and extended through invitations to a number of interested parties external to the University.
For reproduction purposes a cube arrangement of eight speakers was adopted, facilitating phantom centres in all orientations; this was further enhanced by a single sub-bass unit located on the floor in the centre of the cube, acting as an LFE (Low Frequency Effects) channel. The audio setup was managed through Ableton Live, where all of the audio tracks were routed to nine auxiliary channels, each output feeding one of the nine speakers. The final audio tracks used in the installation consisted of the original compressed waveform audio, the isolated frequency tracks (using the FFT process) and the processed audio samples (with effects). These were routed into the auxiliary channels in ascending order of approximate frequency from 1 to 8, with channel 9 reserved for the sub-bass speaker, carrying anything below 100 Hz.
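The routing itself was handled live in Ableton Live; as a minimal offline sketch of the same 8+1 layout, the following assembles eight band files (lowest to highest) plus a hypothetical sub-bass track into a single nine-channel file, with file names and band edges carried over from the earlier illustrative sketches.

```python
# Assemble eight ascending-frequency bands (channels 1-8) and a <100 Hz track
# (channel 9, for the LFE/sub-bass speaker) into one nine-channel WAV.
import numpy as np
import soundfile as sf

edges = [100, 200, 400, 800, 1600, 3200, 6400, 12800, 18000]
band_files = [f"band_{low}-{high}Hz.wav" for low, high in zip(edges[:-1], edges[1:])]
band_files.append("band_sub.wav")       # hypothetical <100 Hz track for channel 9

channels = []
rate = None
for name in band_files:
    data, rate = sf.read(name)
    channels.append(data)

# Trim all tracks to the same length and interleave them into one file,
# column order matching the speaker channel order described above.
length = min(len(c) for c in channels)
multichannel = np.stack([c[:length] for c in channels], axis=1)
sf.write("installation_mix_9ch.wav", multichannel.astype(np.float32), rate)
```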
A significant feature of this installation from the perspective of public engagement was enabling members of the public to control the spatial mix of the audio channels in the room. This was achieved by connecting a MIDI controller with faders to the Ableton Live session, with each fader assigned to an auxiliary channel. As previously stated, these channels were arranged in ascending order of frequency (1 being the lowest and 8 the highest) and were assigned to the volume faders on the MIDI controller from left to right, providing visitors with a clear visual mapping through which to engage with the spatial mix.
Visual accompaniments were created for this exhibition to provide a more immersive and stimulating experience for guests and visitors. The visuals also originate from volcanic regions in Ecuador, this time as captured video footage. One short video panning around the volcanic landscape was transformed and heavily processed using Max/Jitter visual effects, with amplitude and frequency changes from the original seismic waveform data controlling the effect parameters, again with the intention of ceding creative control to the natural arrangement of the data.
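In the installation these parameters were driven inside Max/Jitter; as a hedged sketch of one way such control data could be streamed to a patch from Python over OSC (the address, port and message path are assumptions, not the project’s actual configuration):

```python
# Illustrative only: stream a normalised amplitude envelope from the seismic
# trace to a Max/Jitter patch over OSC.
import time
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

SEISMIC_RATE = 100                      # samples per second in the raw trace
trace = np.loadtxt("retu_trace.txt")    # hypothetical input file
trace = np.abs(trace - trace.mean())
trace = trace / trace.max()

client = SimpleUDPClient("127.0.0.1", 7400)        # assumed Max udpreceive port
hop = SEISMIC_RATE                                  # one control value per second of data
for i in range(0, len(trace), hop):
    level = float(trace[i:i + hop].max())           # envelope value for this window
    client.send_message("/volcano/amplitude", level)  # assumed OSC address
    time.sleep(0.05)                                # pacing for a sped-up playback
```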
Audio and visual examples can be accessed at the project website: https://vulcasonic.uk/
Disciplinary reflection
Even after processing and manipulation of the data, key features of the volcanic processes generating the original signals were evident to the expert ear. The juxtaposition of layers of data from different volcanic episodes highlighted some important differences between them. The sub-bass speaker added an additional experiential dimension to the data.
The spatial aspects of the work appeared to be especially effective in terms of listener comprehension of the data. Audifications and sonifications commonly make limited use of this aspect, making it more difficult for untrained listeners to segregate individual auditory streams. Spatial separation of sources helps reduce perceptual masking, and consequently the cognitive load required to interpret content, as commonly evidenced through the cocktail party effect. This also allows finer sonic detail to be perceived concurrently, which is further enhanced by the ability of each listener to self-orientate within the 8.1 loudspeaker cube array.
The team identified considerable creative potential in the separate tracks of treated data, whether through use in improvisational or compositional practices, as isolated or concurrent tracks of audio, or as remixed or resampled sonic building blocks employed to new artistic ends. The creative abstraction of data allowed for the creation of new sound palettes or sonic tool-boxes that are able both to stand alone from, and to afford playful reinterpretations of, their scientific origins.
Future development
The next stage of our research is to extend the audifications with sound sources captured from the relevant locations; these can be both natural and anthropogenic in nature. Highly directional ultrasonic loudspeakers can be utilised to create discrete pools of sound within the cubic array to convey the impact of seismic events on the surrounding inhabitants. It will also be possible to explore further the nature of the macro and the micro, so that the way in which seismic waves travel around the globe can be conveyed, as well as to tease apart the fine detail of how sonic pulses interact with each other according to geographic factors and evolve over variable time scales.
It is intended that the methods and processes developed during the pilot project will form the basis of a toolkit to be made publicly available on a digital platform, allowing others to work with the data to their own ends, whether from within the interest-fields of science or the humanities. We are particularly interested in sharing this toolkit with those in closest proximity to the data source, affording new ways of understanding and engaging with their environment by interacting with science through creative practice.
Follow-on funding has been sought through the British Academy Apex Fund to allow us to extend project ambitions and engage with a wider audience. If successful, this will represent an important step towards a significantly larger project with a greater range of outputs and impacts.
