




Invisible Hand





Inspired by “The Elusive Digital Frame and the Elasticity of Time in Painting” (2012), a Fine Art PhD thesis by Anne Elizabeth Robinson that I’ve recently read, I conducted an experiment with digital painting recorded in real time, directly from the computer screen. The aim was to observe my painting process from an outside perspective: to see how the layers change and transmute the painting over time, how my attention shifts across the painting, and how that shifting affects the overall process.


The painting session/original recording, 1 hour and 14 seconds long, was sped up 16 times, then divided into 9 smaller screens, each a zoomed-in part of the painting (from the top left to the bottom right corner), and rearranged randomly into a new video. The result was an unexpected painterly motion comic strip, with the “narrative” taking place across multiple panels simultaneously, framed by gutters that open up spaces for us to imagine what happens outside the panels, between the moments of activity and inactivity.
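
As a rough sketch of that re-arrangement in code (Python/OpenCV here, purely as an illustration rather than the tools actually used; the file names and the 3x3 grid logic are assumptions), the idea is to keep every 16th frame of the screen recording, cut each kept frame into 9 panels, and write the panels back out in one fixed random order:

    import random
    import cv2

    SRC = "painting_session.mp4"   # placeholder input file name
    DST = "motion_comic.mp4"       # placeholder output file name
    SPEEDUP = 16                   # keep every 16th frame ~ 16x speed
    GRID = 3                       # 3 x 3 = 9 panels

    cap = cv2.VideoCapture(SRC)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    pw, ph = w // GRID, h // GRID  # panel size (crops stand in for the zoomed-in parts)

    # one random arrangement of the 9 panels, fixed for the whole video
    order = list(range(GRID * GRID))
    random.shuffle(order)

    out = cv2.VideoWriter(DST, cv2.VideoWriter_fourcc(*"mp4v"), fps,
                          (pw * GRID, ph * GRID))
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % SPEEDUP == 0:
            # cut the frame into 9 panels, top left to bottom right
            panels = [frame[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
                      for r in range(GRID) for c in range(GRID)]
            # reassemble the panels in the shuffled order
            rows = [cv2.hconcat([panels[order[r * GRID + c]] for c in range(GRID)])
                    for r in range(GRID)]
            out.write(cv2.vconcat(rows))
        i += 1

    cap.release()
    out.release()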

I want to continue these experiments and see what comes out of them next (maybe even including sound)...





INTERMEDIA INSTRUMENT - VERSION 1.1!





A digital intermedia "instrument" that I've been working on lately, created in Max (a visual programming language) - version 1.1! The audio piece used is The Subtle From The Gross from my upcoming solo album Lingua Arcana, and the videos are snippets of the longer videos I filmed in Limerick, Galway and Cashel, Ireland (2023). Detailed description below:

One of the aims of my PhD research is to create an interactive intermedia experience and/or a mechanism that enables visuals, sound, music and text to freely and fluidly transform into and merge with one another, in long (endless?) streams of iterations, converting back and forth into each other. As a result, the media would transcend their initial forms and generate new, hybrid, unnamed media, or rather modalities, which are no longer the visuals, sound, music or text from which they initially arose, but all of those at once and none of them.

Using Max, I managed to create a multi-modal piece (or rather an instrument) which, to a certain extent, demonstrates this intermedia-like fluidity between sound, music, visuals and text/lyrics (shown in the video in the presentation mode only, i.e. without the processes behind it).

The piece/instrument consists of a playlist of 3-9 second long, manually interchangeable videos (7 for now), and an audio track, separate from the videos. Only one audio track can be used at a time for now, until I find a practical way of controlling both the video and audio playlists via the same laptop keyboard or an external controller. The hue, saturation and overall visual content of the video playlist output are directly connected to, and in turn altered by, the changes/fluctuations in the frequency spectrum and amplitude of the audio.
The audio track is uploaded twice: one copy is connected with the frequency spectrum analysis outputs, and the other with the amplitude analysis outputs.

When the audio starts playing, the real-time detected frequency spectrum numerical values (a number is output whenever a pitch/note is detected) are converted into a real-time visualization of the spectrum (the white lines moving in and out of the black background). A frequency shift function is included as well, so the frequency/pitch of the audio can be increased or decreased in real time (to the extremes of both positive and negative values, which gives interesting/unexpected results in both the audio and video outputs); the result of any shift is shown in the real-time generated frequency spectrum visualization window. The spectrum can be additionally manipulated by moving the frequency band sliders (20 Hz-20 kHz) up and down in the graphic equalizer, which further affects the audio output and, through it, the frequency spectrum visualization. The real-time detected frequency/pitch numerical values are connected with the Saturation dial in the HUSALIR (Hue, Saturation and Luminance) plug-in, so as the frequencies/pitches change, so does the saturation.
The real-time detected pitches/notes are also converted into a real-time visualization of the changes in saturation and luminance of a monochromatic panel (blue).
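
For readers who don't use Max, the pitch-to-saturation idea can be illustrated with a few lines of Python/NumPy; this is only a stand-in for the Max analysis objects, and the function names and the logarithmic 20 Hz-20 kHz scaling are my own assumptions for the sketch:

    import numpy as np

    def dominant_frequency(block: np.ndarray, sample_rate: int) -> float:
        """Return the strongest frequency (in Hz) in one block of audio samples."""
        spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
        freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate)
        return float(freqs[np.argmax(spectrum)])

    def frequency_to_saturation(freq_hz: float, lo: float = 20.0, hi: float = 20000.0) -> float:
        """Map a detected pitch onto a 0..1 saturation value (log scale, roughly how we hear)."""
        freq_hz = float(np.clip(freq_hz, lo, hi))
        return float(np.log(freq_hz / lo) / np.log(hi / lo))

    # a 440 Hz test tone lands a bit under the middle of the saturation range
    sr = 44100
    tone = np.sin(2 * np.pi * 440.0 * np.arange(2048) / sr)
    print(frequency_to_saturation(dominant_frequency(tone, sr)))   # ~0.44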

The amplitude numerical values, i.e. the real-time detected amplitude peaks, are connected with the Hue dial in the HUSALIR plug-in, so as the amplitude changes, so does the hue. The amplitude numerical values are also converted into a real-time visualization of the changes in saturation and luminance of another monochromatic panel (yellow).
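
The amplitude-to-hue mapping can be sketched the same way (again a hypothetical Python stand-in, not the actual patch): take the peak of each audio block and spread it around the hue circle.

    import numpy as np

    def amplitude_to_hue(block: np.ndarray) -> float:
        """Map the peak amplitude of one audio block (samples in -1..1) to a hue in degrees."""
        peak = float(np.max(np.abs(block)))
        return 360.0 * min(peak, 1.0)   # near-silence stays near 0 degrees; full scale goes all the way round

    quiet = 0.1 * np.sin(np.linspace(0, 40, 1024))   # low-level block -> hue around 36 degrees
    loud = 0.9 * np.sin(np.linspace(0, 40, 1024))    # loud block -> hue around 324 degrees
    print(amplitude_to_hue(quiet), amplitude_to_hue(loud))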

The frequency spectrum visualization (white lines on the black background) is connected with the video playlist output (not visible in the presentation mode) so that they are blended/intertwined in the final audio-visual output. Importantly, it is possible to change the blending modes of this output in many different combinations, i.e. how one visual output (frequency spectrum visualization) affects/filters through the other (video playlist) in real time, affecting parameters such as hue, saturation, luminance and others.
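
As a rough code analogy for what a blend mode does (the real blending happens inside Max's video objects), each mode is just a per-pixel function combining the two layers; here are two common ones, assuming both layers are float arrays in the 0-1 range:

    import numpy as np

    def blend_screen(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Screen blend: bright areas of either layer shine through the other."""
        return 1.0 - (1.0 - a) * (1.0 - b)

    def blend_multiply(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Multiply blend: each layer darkens/filters the other."""
        return a * b

    # spectrum_layer: white lines on a black background; video_layer: the current playlist frame
    spectrum_layer = np.zeros((240, 320, 3))
    spectrum_layer[118:122, :, :] = 1.0
    video_layer = np.random.rand(240, 320, 3)
    mixed = blend_screen(spectrum_layer, video_layer)   # swap in blend_multiply for a very different feel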

The HUSALIR plug-in is thus connected with the final audio-visual output, in which we see the frequency spectrum visualization and the video playlist output being filtered through one another continuously in real time, affected by the changes in hue and saturation caused by the changes in the audio amplitude and frequency spectrum respectively. The final audio-visual output is also affected in real time by the changes made in the EQ frequency bands, and the frequency shifts, and this is both audible in the sound and visible in the visuals.
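
To make the hue/saturation step concrete, here is a hypothetical Python/OpenCV stand-in for what a HUSALIR-style adjustment does to each output frame, driven by the two audio-derived values sketched above (hue from amplitude, saturation from pitch); it is not the plug-in itself, just the general shape of the operation:

    import cv2
    import numpy as np

    def apply_hue_saturation(frame_bgr: np.ndarray, hue_deg: float, saturation: float) -> np.ndarray:
        """Rotate hue by hue_deg and scale saturation (0..1, where 0.5 leaves it unchanged)."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 0] = (hsv[..., 0] + hue_deg / 2.0) % 180.0            # OpenCV stores hue as 0..179
        hsv[..., 1] = np.clip(hsv[..., 1] * saturation * 2.0, 0, 255)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

    # per video frame, with the latest audio analysis (helper sketches from above):
    # out = apply_hue_saturation(frame, amplitude_to_hue(audio_block), frequency_to_saturation(pitch_hz))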

This is how audio/text becomes video in this piece, and how the video is affected/controlled by the changes in the audio/text.

Watch the videos below to learn how (I learned how) to build some of the parts of the intermedia "instrument":
Max 8 Tutorial #33: Audio Becomes Video
Max 8 Tutorial #34: Video to Audio to Video To Audio To Video...
Audio Reactive: Amplitude to Color - Audiovisual Max/MSP Tutorial for Beginners





One Day I Will Find The Right Words





In this piece, I explored the possibilities for breaking free from fixed media and opening up unpredictable, ever-changing intermedia spaces, using the sound mixer as an instrument for performing and composing music in real time. The sonic/music component was performed/recorded live on the mixer, and consisted of 4 pre-recorded audio tracks (fixed media) and live input from an electromagnetic field microphone (non-fixed media). The spoken text used for composing the 4 fixed-media audio tracks was “One day I will find the right words, and they will be simple”, a quote from Jack Kerouac's novel The Dharma Bums, (pre-)molded using different audio effects. Using the mixer and audio effects allowed me to go directly into each track and mold it as a sculptor would a piece of clay. As a result, the tracks sound different with each new “performance”, both individually and in conjunction with each other, with virtually no way of predicting what the entire piece will sound like.





One Day... performed live, at ISSTA Spatial Audio Concert, in the Sonic Arts Research Centre, Belfast





The Irish Sound, Science and Technology Association (ISSTA) hosted two public events on March 7-9, 2023, featuring four emerging international audio artists (Jenn Grossman, Cameron Clarke, Neil Quigley and myself), invited to workshop and present our work at the Spatial Audio concert in the Sonic Arts Research Centre, Queen's University Belfast.
I performed One Day I Will Find The Right Words, using Sonic Lab's sound mixer for real-time mixing of the sounds. The performances took place in the Sonic Lab, where our sound art pieces were spatialized using a 40+ loudspeaker spatialization system, distributed across 4 levels: basement, ground, cinema height, and ceiling.
All loudspeakers (and lighting) are controlled via a massive mixing desk (which I used for my performance) and a few computers/monitors at the back of the Sonic Lab.

Read more about the tech specifications of this amazing space here: https://www.qub.ac.uk/sarc/facilities/soniclab/





I Bhfad as Amharc (Outro)





A music video I made for an experimental song of mine, "I Bhfad as Amharc (Out of Sight) (Outro)", from my EP "Seanchroí", in which I'm exploring the topic of communication (or lack thereof), and the difficulties of trying to move away from the events, things or persons that are causing us distress. Do our efforts to move away from them mentally/emotionally and physically only result in them getting closer and closer to us?

The video encapsulates an early attempt at using language (in this case Irish), or rather de-/re-constructing it, to turn it into a tool for composing music, and at depicting ideas/concepts conveyed with language through music/sound, as a painter would use a photograph as a reference for a painting they're creating.





Mary's Wedding





An experimental video I made for my slightly atypical cover of the well-known Irish traditional song, "Mary's Wedding" - I kept the lyrics as they were, but turned the original melody into a dismal one, through which the cheerful lyrics acquire dark undertones and new layers of meaning. For the video, the idea was to use abstract imagery which in itself doesn't convey any particular emotion, atmosphere or meaning, but comes to do so as the video is being watched. The imagery was generated in real time, by hand, using digital painting software, while listening to the music; this process was recorded straight from the computer screen, with the aim of creating a video piece in which the sound directly "affects" the visuals being generated in real time in conjunction with the music.



© 2010-2023, JELENA PERIŠIĆ. ALL RIGHTS RESERVED.