Published on the Web (online journal)

Description (in English)

Clay Conversations arose out of collaborative conversations I had with the British ceramicist Joanna Still. After several meetings and exchanges, Joanna created some ceramics which evoked various forms of communication, for example a clay book, a calendar, and an abacus, but which also had an abstracted connection with the objects to which they refer. I wrote several short poems in response to Joanna’s ceramics, to conversations we had, and to textual material she sent me (such as a newspaper cutting about Haitians eating clay plates because they could not afford food). My poetry also drew on experiences I had independently which seemed to connect with the project, such as a visit I made to the Asian Art Museum in San Francisco.

I then started to experiment with the video program Final Cut Pro, and with a variety of techniques and processes such as split screens, superimposition and merging of images, and a range of filters for image transformation. Besides the images of the ceramics, I worked with photographs and emails resulting from Joanna’s travels in Zambia and Ethiopia, where she was sponsored by Voluntary Service Overseas to conduct workshops with local communities. These were very inspiring and suggestive, and seemed to fit well with my own increasing interest in a cosmopolitan poetics, which moves between different cultures in the same work. I adapted some of the poems I had written for the video, often fragmenting and reorganising them in new ways to optimise their integration with the visual images and to exploit the possibilities of the split-screen dynamic.

To accompany the video (presented here as a QuickTime video), Roger Dean provided a recorded soundscape: it reflects both the violence and the love with which clay and ceramics are treated. With one short exception, all the sounds here are found sounds directly involving clay and pots. Several are recordings of Joanna at work, others are of stone/pot interactions recorded by Roger, while a significant selection of the sounds is taken from the Freesound online sound database maintained in Barcelona. Notable amongst these recordings is a five-minute recording of clay gradually distributing itself as it hydrates in a body of water, made with an underwater microphone by KG Jones. We would also like to acknowledge, in keeping with the Creative Commons license which applies, the use of material from Benboncan, Heigh, homejrande, NoiseCollector, Robinhood and volivieri.

Clay Conversations, which runs for just over 10 minutes, was first presented at a performance by austraLYSIS at the Sydney Conservatorium of Music in December 2009.

It was published in Scan Gallery in 2010 and in Hyperrhiz in 2012. 

Screen shots
Image
Screenshot from Clay Conversations
Description (in English)

INTERTWINGLING is a work for the web and for live performance, which involves hypertext and improvised music. The hypertexts are very diverse and include aphorisms, parodies, poems, fragments of narratives, and quotations. These are connected by hyperlinks, which allow the screener to take many different pathways through the work, so each screening will be different (and not all will include every text). In a live performance, the improvising musicians must respond to the hypertexts sonically, but they can do so in any way they choose. The hypertexts were written and visually designed by Hazel Smith, with image backgrounds supplied by Roger Dean. The sound is taken from a live performance of the work, given in December 1998 at the Performance Space, Sydney, which involved extensive digital processing of electronic and acoustic sound, played by the austraLYSIS Electroband (Roger Dean, Sandy Evans, and Greg White). The recorded sound has been slightly edited, and is presented playing both forwards and backwards, in streaming audio. Intertwingling is a word used by T.H. Nelson (one of the pioneers of hypertext theory and practice) to describe the process in hypertext whereby everything interweaves and intermingles with everything else. It conveys the way the piece "intertwingles" different media, different types of text, and different kinds of subject matter (travel, place, desire, economics and ideas about narrative).

Description (in English)

Time, the magician (2005) is a collaboration by Hazel Smith and Roger Dean written in the real-time algorithmic image-processing program Jitter. The piece begins with a poem, written by Hazel, on the subject of time; the writing of the poem was influenced by Elizabeth Grosz’s The Nick of Time. The poem is initially performed solo, but as it progresses it is juxtaposed with live and improvised sound, which includes real-time and pre-recorded sampling and processing of the voice. The performance of the poem is followed (with a slight overlap) by screened text in which the poem is dissected and reassembled. This screened text is combined in Jitter with video of natural vegetation, and the sound and voice samples continue during the visual display.

The text-images are processed in real time so that their timing, order, juxtaposition, design and colours are different each time the work is performed. This QuickTime movie is therefore only one version of the piece. The sound is from a performance given by austraLYSIS at the Sydney Conservatorium of Music in October 2005. The performers were Roger Dean, computer sound and image; Sandy Evans, saxophone; Hazel Smith, speaker; Greg White, computer sound and sound projection.
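Since the piece itself is realised in Jitter, the following Python sketch is only a schematic illustration, not the authors' patch: it shows, under invented assumptions (the fragment list, timing ranges and colour palette are all made up for the example), how per-performance parameters such as order, timing and colour might be randomised so that each rendering of the same material differs.

```python
# Illustrative sketch (not the authors' Jitter patch) of randomising the
# presentation parameters of screened text fragments per performance.
import random

FRAGMENTS = ["time folds", "the nick of time", "a magician's pause"]  # invented
PALETTE = ["white", "amber", "teal"]  # invented

def plan_performance(fragments, seed=None):
    """Return one possible running order with per-fragment timing and colour."""
    rng = random.Random(seed)
    order = fragments[:]
    rng.shuffle(order)
    return [
        {
            "text": text,
            "onset_s": round(rng.uniform(0.0, 5.0), 2),    # when the fragment appears
            "duration_s": round(rng.uniform(2.0, 8.0), 2), # how long it stays on screen
            "colour": rng.choice(PALETTE),
        }
        for text in order
    ]

# Two calls give two different "performances" of the same material.
for event in plan_performance(FRAGMENTS):
    print(event)
for event in plan_performance(FRAGMENTS):
    print(event)
```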

Screen shots
Image
Screenshot from Time, the magician
Screen shots
Image
Screenshot from The Lips are Different
Contributors note

The Lips are Different is about the Canadian citizen Suaad Hagi Mohamud, born in Somalia, who was accused of not being a Canadian citizen when she tried to return to Canada from Kenya in 2009. The work links over-surveillance, racial discrimination, photography, media representation and issues of identity. It comprises real-time video written in Jitter, improvised music based on a comprovisation score, and both performed and screened text. An article about the piece, "Creative Collaboration, Racial Discrimination and Surveillance in The Lips are Different", which contains the piece itself, can be found at https://thedigitalreview.com/issue00/lips-are-different/index.html

 

Description (in English)

The Character Thinks Ahead (version 2) by Hazel Smith and Roger Dean focuses on the computerized generation of creative writing using deep learning neural nets. It knits together visual, sonic, linguistic and literary elements that all interact with each other. Of the three dynamically rolling columns of text in the upper part of the screen, the middle column presents three pre-composed poetic texts that suggest ideas, feelings and contexts to do with war, hierarchy and competition respectively. The two columns on either side display text generated by deep learning nets: in the left column the text is generated character by character, in the right column word by word. In the bottom part of the screen there are also three distinct elements to the display. An animated word cloud in the middle highlights features of the ongoing texts. To the left of it is a dynamic spectral visualisation of a (pre-recorded) rendering of the live speech; this is live-transformed to provide a sonic output, visualized spectrally on the right. Besides the visual elements, the live speech and the live sonic transformation, there is also pre-formed sound, composed and improvised by Roger Dean, Sandy Evans (saxophone) and Greg White (computer). The spoken text (performed by author Hazel Smith) plays on different senses of the word character: character as part of a word, as a register of behaviour, or as a fictional being. It also relates to “thinking ahead”, central to the predictive aspects of deep learning (or even “thinking a head”). Ideas of competition between word and character, which are a feature of the spoken text, are also a feature of the process of text generation. These ideas are further explored in the screened pre-composed poetic texts (middle text panel). Description by the authors adapted from Dean, R. T., & Smith, H. (2018). The character thinks ahead: creative writing with deep learning nets and its stylistic assessment. Leonardo, 51(5), 503-504. https://doi.org/10.1162/leon_a_01601
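The contrast between the two generated columns can be illustrated schematically. The sketch below is not the authors' system (their deep learning nets and training procedure are described in the Leonardo article); it uses a simple bigram model as a stand-in, with an invented seed text, purely to show how generating by character differs from generating by word.

```python
# Toy illustration of character-level versus word-level generation.
# A simple bigram (Markov) model stands in for the deep learning nets
# described in the article; the seed text is invented for the example.
import random
from collections import defaultdict

def build_bigram_model(units):
    """Map each unit (character or word) to the units that follow it."""
    model = defaultdict(list)
    for current, nxt in zip(units, units[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length, joiner):
    """Sample a sequence by repeatedly predicting the next unit ("thinking ahead")."""
    out = [seed]
    current = seed
    for _ in range(length):
        followers = model.get(current)
        if not followers:
            break
        current = random.choice(followers)
        out.append(current)
    return joiner.join(out)

source = "the character thinks ahead the character thinks a head"

# Word-level: units are whole words, so the output stays lexically intact.
words = source.split()
word_model = build_bigram_model(words)
print(generate(word_model, "character", 8, " "))

# Character-level: units are single characters, so new word-like strings
# can emerge that never appeared in the source text.
chars = list(source)
char_model = build_bigram_model(chars)
print(generate(char_model, "c", 40, ""))
```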

Screen shots
Image
The Character Thinks Ahead (Screenshot)
Description (in English)

Instabilities 2 [...] subjects a discontinuous text to various kinds of processing. The screen is divided into three sections which counterpoint each other. The top section consists of a video made by Hazel Smith comprising twelve short texts. The middle section consists of the same material processed in the program Jitter by Roger Dean, and involves various forms of overlaying, erasing and stretching of the words. In the third section of the screen the same texts, together with others which do not appear in the top movie, are processed in real time by Roger Dean by means of a Text Transformation Toolkit (TTT) written in Python. The processing substitutes words and letters so that new text emerges, together with a spoken realization of some parts of the text, new and old. The pre-written fragments circle around the idea of social, historical, and psychological instabilities, but during the processing new instabilities (syntactical, semantic, and phonemic) also arise. Improvised and composed music is performed by Roger Dean, Greg White, Phil Slater and Sandy Evans. In addition, computer-synthesised voices add an aural dimension to textual change.
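The Text Transformation Toolkit itself is not reproduced here; the minimal Python sketch below only illustrates the general kind of word and letter substitution the description refers to. The substitution pools, probabilities and sample fragment are invented for the example and are not drawn from the work.

```python
# Minimal sketch of word- and letter-substitution of the kind the
# description attributes to the Text Transformation Toolkit (TTT).
# This is not the actual toolkit: the pools and probabilities are invented.
import random

WORD_POOL = ["instability", "history", "memory", "surface", "tremor"]
LETTER_POOL = "aeiourstln"

def substitute_words(text, probability=0.3, rng=random):
    """Replace some words with words drawn from a substitution pool."""
    return " ".join(
        rng.choice(WORD_POOL) if rng.random() < probability else word
        for word in text.split()
    )

def substitute_letters(text, probability=0.1, rng=random):
    """Replace some letters so that new, part-recognisable words emerge."""
    return "".join(
        rng.choice(LETTER_POOL) if c.isalpha() and rng.random() < probability else c
        for c in text
    )

fragment = "the ground we stand on is never quite still"
print(substitute_words(fragment))
print(substitute_letters(fragment))
```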

Screen shots
Image
Description (in English)

soundAFFECTs employs as its base the text of 'AFFECTions' by Hazel Smith and Anne Brewster, a fictocritical piece about emotion and affect, and converts it into a piece which combines text as moving image with transforming sound. For the multimedia work Roger Dean programmed a performing interface using the real-time image-processing program Jitter; he also programmed a performing interface in MAX/MSP to enable algorithmic generation of the sound. This multimedia work has been shown in performance on many occasions, projected on a large screen with live music; the text and sound are processed in real time and each performance is different. Discussed in Hazel Smith, "soundAFFECTs: translation, writing, new media, affect", in Sounds in Translation: Intersections of Music, Technology and Society, Amy Chan and Alistair Noble (eds.), ANU E Press, 2009, pp. 9-24 (a republication of an earlier version of the article published in the journal Scan).

Screen shots
Image