Performance

Description (in English)

"Leaning Haiku" took form as an Augmented Reality face filter made in response to a question posed during Co(de)-Po(etry)-Jam 2020: "How to bridge the digital layer to physical reality via means of text as a bridge?"

The medium of Instagram filters was chosen as the platform for answering this question. Mixing my interest in digital generative artwork with that in Japanese haiku, the concept of 'Leaning Haiku' took its form.

The infamous "Indian head nod" became the trigger for generating haikus. The poetry of this amusing head nod, overlaid with literal Japanese poetry, gave the work depth in both the physicality of human motion and the mystery of random haikus. Each haiku was structured as a random selection of 'Observation, Evocation & Truth', with an added Easter egg: more left nods gave the haiku a sad emotion, in contrast with the joyful outcomes of right nods.
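The generative scheme described above — one random line each of Observation, Evocation, and Truth, tinted by the balance of left and right nods — can be sketched as follows. This is a minimal illustration, not the filter's actual code; the line banks and the suffix markers for sad/joyful variants are invented placeholders.

```python
import random

# Hypothetical line banks -- the filter's actual text corpus is not public.
OBSERVATIONS = ["autumn moonlight", "a crow on a bare branch"]
EVOCATIONS = ["the mind drifts sideways", "shadows lean and settle"]
TRUTHS = ["nothing stays still", "every nod writes a poem"]

SAD_SUFFIX = "..."   # assumed marker for the melancholic variant
JOY_SUFFIX = "!"     # assumed marker for the joyful variant

def leaning_haiku(left_nods: int, right_nods: int) -> str:
    """Assemble a haiku as a random Observation / Evocation / Truth,
    tinted sad if left nods dominate, joyful if right nods do."""
    lines = [random.choice(OBSERVATIONS),
             random.choice(EVOCATIONS),
             random.choice(TRUTHS)]
    if left_nods > right_nods:
        lines[-1] += SAD_SUFFIX
    elif right_nods > left_nods:
        lines[-1] += JOY_SUFFIX
    return "\n".join(lines)
```

Each call yields a different three-line poem; only the emotional tint is deterministic, driven by the nod counts.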

As people started using it, an interesting layer of engagement emerged. They used the AR filter over images, dancers, and movie clips, and at times layered an already generated haiku with another. This gave the poetic nature of the work a participative engagement, as observed in public art exhibits. People's emotional responses to the poem were cryptic, whimsical, and at times amusingly bizarre in relation to their own lives. (link to a few such engagements)

 

Description (in English)

TBD is a work of intensive translation that understands translation not as an activity bound to building bridges between languages, but as an immanent material act on the way to utopia. The work began with a reading of Gilles Deleuze’s Bergsonism and continues to persist in a cross-platform evolution searching for a utopic platform to come. As it moves, it takes on new phase-states according to the affordances of a variety of media and platforms. It begins with the codex, but has moved through digital photography, photographic manipulation, After Effects animation, Twitter, Google Slides, .gifs — and it will continue to evolve, leaping from one platform to another, binding disparate materials and platforms to its identity even as it transforms into something else, in search of perfection. Informed by an implicit poetics latent in Deleuze’s book, it is also a line growing out of it.

 

The project poster would link to a video and Google Slides providing an overview of the steps taken so far. First, I read Deleuze’s Bergsonism and filled the margins with drawings, graphs, and diagrams of my reading. Then I photographed the drawings and isolated and manipulated them in Photoshop (there are about 240). This yielded a Henri Michaux-like set of hieroglyphics — an asemic translation of Bergsonism. Digitizing each drawing in Photoshop created an infrathin space between the haptics of the hand and the smooth surface of the screen. Each drawing was then animated according to its inner logic of movement using After Effects. These animations (.movs) were translated into .gifs, placed in small gatherings of about six at a time, posted to Twitter, and subsequently gathered into Google Slides. Each set resembles a strange living creature endlessly performing its repeated action. Together, they are like a murmuring surface of matter underway. The next step in the process is to write descriptions of each animation, thus performing an odd form of translation. A strange re-writing of Gilles Deleuze’s Bergsonism will emerge from this process. These animations and their paired poems, something like medieval emblems, will be gathered on a website that will allow a user to click each one and hear its story. This is where TBD ends for now, but it will continue in search of a utopic platform to come that it has yet to discover.

Screen shots
Image
TBD PowerPoint
Multimedia
Image
TBD gif
Image
TBD gif
Description (in English)

Me, Myself and I in Dystopia is part of the larger project “Humanity: From Dystopia to Survival”, an interactive survival game and audio-visual music performance. Although the original concept of that project was for audience members to play the real-time interactive survival game, this version explores how the pandemic has shifted the sense of collectivity toward individuality, and depicts what it means to remain alone in a dystopian society by re-creating versions of my own self. In this video, I take multiple shots of my own photos (represented as outlines of faces) and speak into the microphone to interact with the video in real time. The voice and breath I direct into the microphone affect my photographic representations in the video by blurring the image. I say random words into the microphone and also exhale as a gesture of meditation during the crisis. The interactive video demonstrates how an individual yearns to survive dystopia while struggling against the solitude, controls, and chaos of society. As each photo of myself is taken, it also triggers a new sound file as a way to intensify the mood of being alone. This work was created as an interactive design prototype in Max/MSP (programmer: Martin Ritter), in which I design how people can speak into the microphone and affect the visuals, symbolizing saving one another and humanity as a whole. While demonstrated alone here, the work also powerfully suggests what it means to adapt in the constantly evolving COVID era.
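The microphone-to-blur interaction is realized in Ritter's Max/MSP patch; a rough approximation of the mapping it implies — louder voice or breath, blurrier self-portrait — can be sketched in Python. The function names, the 0.5 loudness ceiling, and the linear mapping are all assumptions for illustration, not the patch's actual logic.

```python
import math

def rms(frame: list[float]) -> float:
    """Root-mean-square loudness of one audio frame (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def blur_radius(frame: list[float], max_radius: int = 8) -> int:
    """Map mic loudness linearly onto a blur radius, saturating at
    max_radius once RMS reaches an assumed ceiling of 0.5."""
    return min(max_radius, int(rms(frame) * max_radius / 0.5))
```

A silent frame leaves the image sharp (radius 0); speaking or exhaling into the microphone pushes the radius up until the portrait dissolves at full loudness.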

 

Source: exhibition documentation 

Multimedia
Remote video URL
Description (in English)

Breathing, video, 10 min. Based on Utterings' recorded live performance of the same title during the STWST48x6 MORE LESS festival in Linz, Austria. Edited by Daniel Pinheiro. In this video, the mixing of six audio and video streams gives rise to the image of a phantom-like breathing, pulsing entity: an entity that thrives on affection, attention, glitches, delays, and even voids in a connected environment mediated by machines, cables, and compression algorithms.

Screen shots
Image
Several heads overlaid on each other
Image
Heads overlaid over each other
Multimedia
Remote video URL
Description (in English)

INTERTWINGLING is a work for the web and for live performance, which involves hypertext and improvised music. The hypertexts are very diverse and include aphorisms, parodies, poems, fragments of narratives, and quotations. These are connected by hyperlinks, which allow the screener to take many different pathways through the work, so each screening will be different (and not all will include every text). In a live performance, the improvising musicians must respond to the hypertexts sonically, but they can do so in any way they choose. The hypertexts were written and visually designed by Hazel Smith, with image backgrounds supplied by Roger Dean. The sound is taken from a live performance of the work, given in December 1998 at the Performance Space, Sydney, which involved extensive digital processing of electronic and acoustic sound, played by the austraLYSIS Electroband (Roger Dean, Sandy Evans, and Greg White). The recorded sound has been slightly edited, and is presented playing both forwards and backwards, in streaming audio. Intertwingling is a word used by T.H. Nelson (one of the pioneers of hypertext theory and practice) to describe the process in hypertext whereby everything interweaves and intermingles with everything else. It conveys the way the piece "intertwingles" different media, different types of text, and different kinds of subject matter (travel, place, desire, economics, and ideas about narrative).

Description (in English)

Time, the magician (2005) is a collaboration by Hazel Smith and Roger Dean written in the real-time algorithmic image-processing program Jitter. The piece begins with a poem, written by Hazel, on the subject of time; Elizabeth Grosz’s The Nick of Time was influential on the writing of the poem. The poem is initially performed solo, but as it progresses it is juxtaposed with live and improvised sound, which includes real-time and pre-recorded sampling and processing of the voice. The performance of the poem is followed (with a slight overlap) by screened text in which the poem is dissected and reassembled. This screened text is combined in Jitter with video of natural vegetation, and the sound and voice samples continue during the visual display.

The text-images are processed in real time so that their timing, order, juxtaposition, design, and colours are different each time the work is performed. This QuickTime movie is therefore only one version of the piece. The sound is from a performance given by austraLYSIS at the Sydney Conservatorium of Music in October 2005. The performers were Roger Dean, computer sound and image; Sandy Evans, saxophone; Hazel Smith, speaker; Greg White, computer sound and sound projection.

Screen shots
Image
Screenshot of Time the Magician.jpg
Description (in English)

soundAFFECTs employs as its base the text of 'AFFECTions' by Hazel Smith and Anne Brewster, a fictocritical piece about emotion and affect, and converts it into a work that combines text as moving image with transforming sound. For the multimedia work, Roger Dean programmed a performing interface using the real-time image-processing program Jitter; he also programmed a performing interface in MAX/MSP to enable algorithmic generation of the sound. The multimedia work has been shown in performance on many occasions, projected on a large screen with live music; the text and sound are processed in real time and each performance is different. Discussed in Hazel Smith, “soundAFFECTs: translation, writing, new media, affect”, in Sounds in Translation: Intersections of Music, Technology and Society, Amy Chan and Alistair Noble (eds.), ANU E Press, 2009, pp. 9-24 (a republication of an earlier version of the article published in the journal Scan).

Description (in English)

“The Text That Talks Back” is the most ambitious of these experiments thus far. As the title suggests, my performance will consist of a direct dialogue between myself and the text displayed on the screen. The interaction won’t be entirely rehearsed, either, as the text will be coded to vary its responses at random. The text will ask me questions, challenge me, offer me advice, disagree with me, grow angry with me, and then ignore me altogether and address the audience directly. In shifting power away from the author, “The Text That Talks Back” will illuminate and challenge the very terms of the reader-writer-text relationship.