ambient

Description (in English)

Seedlings_ is a digital media installation that plants words as seeds and lets them grow using the Datamuse API, a data-driven word-finding engine. It is at once an ambient piece in which words and concepts are constantly dislocated and recontextualized, and a playground for the user to create linguistic immigrants and textual nomads. In Seedlings_, a word can be transplanted into a new context, following pre-coded generative rules that are bundled under the names of plants (ginkgo, dandelion, pine, bamboo, ivy…). These generative rules consist of a series of word-finding queries to the Datamuse API, such as: words with a similar meaning, adjectives that are used to describe a noun, or words that start and end with specific letters. The rules are grouped in modules to represent the visual structure of the corresponding plant and can be constrained with a theme word. A new plant can be grafted on top of the previous plant by switching to a new starting point drawn from the latest generative result. Words in a monospace font and lines of dashes are the only visual elements in the piece, expressing the minimalist aesthetics of these potentially infinite two-dimensional linguistic beings.

In distributional semantics, words that are used and occur in the same contexts tend to have similar meanings. Based on this hypothesis, words are processed as n-grams and represented and manipulated as vectors in contemporary machine learning. With the help of algorithms, we can now identify kinships between words (through similarity or frequent consecutive use) in milliseconds. Seedlings_ reconfigures existing technologies and services in Natural Language Processing as the virtual soil for generating alternative linguistic plants: it seeks new poetic combinations of words by encouraging unusual flows of words and concepts.
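
The query types named above correspond directly to public parameters of the Datamuse API ("ml" for similar meaning, "rel_jjb" for adjectives that describe a noun, "sp" for spelling patterns, "topics" for a theme constraint). A minimal Python sketch of such queries; the function names and the seed word "river" are illustrative, not taken from the installation's source:

    import requests

    API = "https://api.datamuse.com/words"

    def similar_meaning(word, theme=None, limit=10):
        # "ml" asks for words with a similar meaning; "topics" (optional)
        # biases results toward a theme word, as Seedlings_'s theme constraint does.
        params = {"ml": word, "max": limit}
        if theme:
            params["topics"] = theme
        return [r["word"] for r in requests.get(API, params=params).json()]

    def adjectives_for(noun, limit=10):
        # "rel_jjb" returns adjectives frequently used to describe the noun.
        return [r["word"] for r in requests.get(API, params={"rel_jjb": noun, "max": limit}).json()]

    def starts_and_ends(first, last, limit=10):
        # "sp" is a wildcard spelling query: words starting and ending with given letters.
        return [r["word"] for r in requests.get(API, params={"sp": f"{first}*{last}", "max": limit}).json()]

    # One hypothetical "branch" grown from the seed word "river":
    print(similar_meaning("river", theme="winter"))
    print(adjectives_for("river"))
    print(starts_and_ends("r", "r"))

Chaining such queries, each result becoming the next query's input, is what lets one plant be "grafted" onto the previous one.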

Description (in English)

Re:Cycle III is an extension of my previous generative video art piece Re:Cycle (exhibited at ELO 2012). The current version is part of an ongoing exploration into the combined poetics of image, sequence, motion, computation, and meaning. The Re:Cycle system includes a database of video clips, a second database of video transitions, and a computational engine to select and present the video clips in an unending stream. The computational selection process is driven by a set of metadata tags associated with the content of each video clip. The system can incorporate video clips of any content or visual form. It is currently based on nature scenery: mountains, rivers, ice, snow, waterfalls, trees. (Future versions will incorporate urban and human imagery.)

The original version was completely committed to the aesthetic of ambient experience. Like Brian Eno's "ambient music", it was not intended to capture or hold your attention. However, it was required to give visual pleasure whenever you did choose to gaze at it. As the system evolves, this commitment to ambience is gradually giving way to a more engaged and prolonged experience. The change is driven by the incorporation of increased semantic and visual coherence.

The original version relied completely on random shot selection and sequencing. An early modification introduced a low level of semantic coherence based on simple metadata tags. The current version has taken this commitment to semantic coherence further. First, the shots are getting more varied, and the tagging system is getting more complex. This increase in the variety of the metadata textual tags is amplified by the application of more complicated algorithmic sequencing processes. The old system could present a series of short sequences made up of clips with shared visual content (e.g., "trees" or "waterfalls"). The new system will incorporate that short-term sequencing logic, but will nest it within a set of larger segments. The larger segments will be based on more sophisticated concepts of progression, arc, time and closure.

The system is based on text at its most fundamental level. The decision making relies on the tags: descriptors of video clip content. The system reads, selects and sequences using these tags. The driver is text; the experience is visual. At a higher level, the work is evolving towards a more complicated sequencing logic that will combine a heightened sense of flow and progression with an increased commitment to meaning. One can see it as a visual poetry machine, one that has advanced from doggerel to a more expressive semantic and visual output. (Source: Author's Abstract)
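
One way to picture the nesting of short same-tag runs inside larger segments is the following Python sketch. The clip files, tags, and transition names are invented stand-ins, not the system's actual Max data:

    import random

    # Hypothetical clip and transition databases; the real system's databases
    # and tag vocabulary are not published, so these entries are stand-ins.
    CLIPS = [
        {"file": "glacier_01.mov", "tags": {"ice", "mountains"}},
        {"file": "falls_03.mov",   "tags": {"waterfalls", "trees"}},
        {"file": "pines_02.mov",   "tags": {"trees", "snow"}},
        {"file": "river_07.mov",   "tags": {"rivers"}},
        {"file": "cirrus_05.mov",  "tags": {"clouds", "mountains"}},
    ]
    TRANSITIONS = ["cross_dissolve", "luma_wipe", "soft_iris"]

    def short_run(tag, length=3):
        # A short sequence of clips sharing one content tag (e.g. "trees").
        pool = [c for c in CLIPS if tag in c["tags"]]
        return random.sample(pool, min(length, len(pool)))

    def segment(theme_tags):
        # A larger segment: several short same-tag runs nested in order,
        # each clip joined to the next by a randomly drawn transition.
        for tag in theme_tags:
            for clip in short_run(tag):
                yield clip["file"], random.choice(TRANSITIONS)

    for shot, transition in segment(["ice", "trees", "waterfalls"]):
        print("play", shot, "->", transition)

The "more sophisticated concepts of progression, arc, time and closure" would then be a matter of how the list of theme tags for each segment is itself composed.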

Technical notes

Re:Cycle III runs on a Macintosh computer running Max. It is ideally designed for screen-based display (a 30-50 inch screen), but can also be shown using a projection system. There is no audio. The artist will install the necessary software, system and video files. If necessary, the artist can supply a computer, but not a screen. (Source: Author's Abstract)

Description (in English)

On the occasion of the ELO 2010 conference celebrating Robert Coover, I have devised a 24-channel sound installation/performance. Given the theme of the conference (Archive & Innovate), I chose to investigate the sonic literary archive, utilizing recordings of Robert Coover reading his own work as the framework for this composition. Through a computational process of spectral analysis, editing, and re-synthesis, solo speech is transformed into a chorus of diffused instrumental timbres. Time is stretched, allowing the ebb and flow of the original readings to be heard very slowly, creating an ambient, electro-acoustic arena.

By Eric Dean Rasmussen, 21 June, 2012
Abstract (in English)

Ambient video art is designed in the spirit of Brian Eno's ambient music: it must never require our attention, but must reward our attention whenever it is bestowed. It comes in many forms, ranging from the kitsch of the Christmas yule log broadcast to more mature moving-image art created by a number of contemporary video artists and producers. The author has created a series of award-winning ambient video works. These works are designed to meet Eno's difficult requirements for ambient media: to never require but to always reward viewer attention in any moment. They are also intended to support viewer pleasure over a reasonable amount of repeated play. These works are all "linear" videos, relying on the careful sequencing and meticulous transitioning of images to reach their aesthetic goals.

Re:Cycle uses a different approach. It relies on a computationally generative system to select and present shots in an ongoing flow, but with constant variations in both shot sequencing and transition choice. The Re:Cycle system runs indefinitely and avoids any significant repetition of shots and transitions. The system selects shots at random from a database of video clips, and joins them with transitions drawn at random from a separate transitions database. The transitions are based on abstract graphic values, so each specific visual transformation is unpredictable and complex. Compared with the linear videos, the computational system has sacrificed a measure of authorial control in order to maximize sequencing variability and therefore long-term replayability.

The presentation describes in detail a series of specific artistic decisions made by the author and his production team. Each of these aesthetic design decisions is explicated as a balance between two fundamental variables: aesthetic control and system variability. The advantages and trade-offs of each decision point are identified and discussed. These artistic directions are analyzed in the broader context of generative art. This context situates the project within the discourse of generative art, and in the specifics of generative works in a variety of media, including visual art, sound art, moving image and literary works.

The presentation also describes how metadata encoded within the shots and the transitions will be used to modulate the essentially random operation of the basic system in order to increase visual impact and flow. Future work on the system will incorporate this use of metadata, tagged as form and content variables for each shot, and as form variables for each transition. These metadata tags will provide increased coherence and continuity to the visual flow of the work. They will nuance and modify, but not completely supplant, the random processes at the heart of the generative system. The presentation concludes by describing how the system will be further revised to present emergent forms of generative narrative. It details how these storyworks could run indefinitely while mediating a dynamic balance between two seeming oppositions: random algorithmic selection and the coherence of sequencing necessary for narrative pleasure. (Source: Author's abstract, 2012 ELO Conference site)
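
The abstract does not say how the system avoids significant repetition; one common tactic in generative systems, sketched here in Python with invented names, is to exclude recently played items from each random draw:

    import random
    from collections import deque

    def endless_stream(items, cooldown=5):
        # Draw from `items` forever, excluding the last `cooldown` picks so
        # that no shot or transition recurs too soon. `cooldown` must stay
        # smaller than len(items) or the candidate list empties out.
        recent = deque(maxlen=cooldown)
        while True:
            pick = random.choice([i for i in items if i not in recent])
            recent.append(pick)
            yield pick

    shots = endless_stream(["clip_%02d" % n for n in range(20)])
    transitions = endless_stream(["dissolve", "wipe", "iris"], cooldown=2)
    for _ in range(6):
        print(next(shots), "->", next(transitions))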

Creative Works referenced
Description (in English)

Re:Cycle is a generative ambient video art piece based on nature imagery captured in the Canadian Rocky Mountains. Ambient video is designed to play in the background of our lives. It is a moving-image form that is consistent with the ubiquitous distribution of ever-larger video screens. The visual aesthetic supports a viewing stance alternative to mainstream media, one that is quieter and more contemplative: an aesthetic of calmness rather than enforced immersion. An ambient video work is therefore difficult to create: it can never require our attention, but must always reward viewer attention when offered. A central aesthetic challenge for this form is that it must also support repeated viewing. Re:Cycle relies on a generative recombinant strategy for ongoing variability, and therefore a higher measure of replayability. It does so through the use of two random-access databases: one of video clips, and another of video transition effects. The piece will run indefinitely, joining clips and transitions from the two databases in randomly varied combinations.

Creative input to the system derives in large part from the selection of shots that the artist uses. I've been fortunate to collaborate with a brilliant cinematographer, Glen Crawford from Canmore, Alberta. Re:Cycle's landscape images include a range of elements such as snow, trees, ice, clouds and water, reflecting a deep respect for the natural environment. These images also produce the 'ambient' quality I am seeking. They are engaging when viewed directly, but also move easily to the background when not. Another artist might choose very different images, and the resulting work could be completely different. While I enjoy the complete control offered by traditional linear video art, I am intrigued by the different set of artistic decisions this simple generative platform can support.

The current version of the generative engine for Re:Cycle also incorporates a deeper level of artistic intervention through the integration of metadata into the dynamics of the system. Each video clip is given one or more metadata tags reflecting the content of the individual shot. I have used the tags to nuance the random operation of the engine, and to group and present images in sequences that share a common content element (such as "snow" or "water"). The resulting generative video work presents a stronger sense of visual flow, and the sequencing begins to exhibit a degree of semantic continuity.
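
One possible reading of how tags "nuance the random operation of the engine", as a Python sketch; the clips, tags, and stickiness value are invented for illustration:

    import random

    # Illustrative clip metadata; the actual tag vocabulary is the artist's own.
    CLIPS = {
        "peaks_01": {"snow", "mountains"}, "creek_04": {"water", "trees"},
        "drift_02": {"snow"}, "falls_09": {"water"}, "larch_03": {"trees", "snow"},
    }

    def next_clip(current_tag, stickiness=0.8):
        # Nuance, don't supplant, the random draw: with probability `stickiness`
        # stay on clips sharing the current content element, otherwise draw
        # from the whole database. The 0.8 value is invented for illustration.
        sharing = [c for c, tags in CLIPS.items() if current_tag in tags]
        if sharing and random.random() < stickiness:
            return random.choice(sharing)
        return random.choice(list(CLIPS))

    clip, tag = "drift_02", "snow"
    for _ in range(8):
        clip = next_clip(tag)
        tag = random.choice(sorted(CLIPS[clip]))  # let the theme drift with the clips
        print(clip, "(" + tag + ")")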

The overall design of the piece incorporates a series of decisions (number of shots, quality of shots, transition selection, algorithmic process) that strike a balance between replayability/variation on the one hand, and aesthetic control on the other. 


Technical notes

Original program in Max/MSP/Jitter; revised program in Max 6.

Contributors note

Director of Photography:  Glen Crawford

Version 2 Programming:  Sayeedeh Bayatpour, Tom Calvert

Original Programming:  Wakiko Suzuki, Brian Quan, Majid Bagheri

Producer:  Justine Bizzocchi

Description (in English)

Howe’s new piece, “Automatype,” which can be seen as ambient text art, a weird game of solitaire for the computer, or an absorbing ongoing puzzle for a human viewer, is an apt demonstration of some of the powers of “RiTa,” as it uses algorithms to find the bridges between English words, Six-Degrees-of-Kevin-Bacon-style: not bridges of garbled nonsense, but bridges composed of normative English. You will spend either 10 seconds or 5 minutes staring at this thing; you will also see either a bunch of random words or, occasionally if not always, engaging samples of minimalist poetry.

(Source: The ELO 2012 Media Art Show.)
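
The exact traversal inside Automatype (built on Howe's RiTa lexicon) is not documented here, but the "bridge" idea is essentially a word ladder: a breadth-first search in which every intermediate step must itself be an English word. A toy Python sketch with a deliberately tiny lexicon:

    from collections import deque

    # A toy lexicon; RiTa draws on a full English lexicon, which is what keeps
    # every intermediate step a normative English word.
    LEXICON = {"cold", "cord", "card", "ward", "warm", "wore", "core", "care"}

    def neighbors(word):
        # All lexicon words one letter-substitution away.
        for i in range(len(word)):
            for c in "abcdefghijklmnopqrstuvwxyz":
                cand = word[:i] + c + word[i+1:]
                if cand != word and cand in LEXICON:
                    yield cand

    def bridge(start, goal):
        # Shortest chain of real words from start to goal (breadth-first search).
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in neighbors(path[-1]):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])

    print(bridge("cold", "warm"))  # ['cold', 'cord', 'card', 'ward', 'warm']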

I ♥ E-Poetry entry
Screen shots

Automatype, installation at the School of Creative Media, Hong Kong, Nov. 2012.
Description (in English)

wotclock is a QuickTime "speaking clock." This clock was originally developed for the TechnoPoetry Festival curated by Stephanie Strickland at the Georgia Institute of Technology in April 2002. It is based on material from What We Will, a broadband interactive drama produced by Giles Perring, Douglas Cape, myself, and others from 2001 on. The underlying concepts and algorithms are derived from a series of "speaking clocks" that I made in HyperCard from 1995 on. It should be stressed that the clock showcases Douglas Cape's superb panoramic photography for What We Will.

(Source: Author description).

I ♥ E-Poetry entry
Technical notes

After loading, wotclock runs continually as a timepiece. The numerals that traditionally circle a clock face are replaced by letters, and these letters are used to construct phrases in the center that tell the time. The first two words tell the hour and the second two tell the minute, with the seconds counted on the clock face itself. On the minute, a new photograph from What We Will is displayed. Clicking and dragging the upper or lower pane rotates the panorama.
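
A minimal Python sketch of the stated phrase structure, two words for the hour and two for the minute; the vocabulary here is invented, whereas the real clock composes its phrases from the letters around its face:

    import time

    ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
            "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
            "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty"]

    def minute_words(m):
        # The minute as two words, e.g. 7 -> "seven minutes", 34 -> "thirty-four minutes".
        if m == 0:
            return "no minutes"  # placeholder for the on-the-hour case
        word = ONES[m] if m < 20 else TENS[m // 10] + ("-" + ONES[m % 10] if m % 10 else "")
        return word + " minutes"

    def tell_time(t=None):
        t = t or time.localtime()
        hour = ONES[t.tm_hour % 12 or 12]                  # words 1-2 tell the hour
        return hour + " o'clock " + minute_words(t.tm_min)  # words 3-4 the minute

    print(tell_time())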

Contributors note

with photographs and additional production by Douglas Cape

Description (in English)

This text begins as a short memory, recalled and composed by the author. Periodically and involuntarily the words are replaced in real-time by synonyms and coordinate terms extracted from the WordNet database. After a certain amount of time has elapsed the text enters a second state in which it attempts to "remember" its original form, longing to reconstruct the original memory as it was first remembered and composed. In this state (in which it ceaselessly remains), the text attempts to cycle back through the word replacements and is more likely to "remember" than "forget," although there exists the possibility that the text will drift toward new replacements, new significations. As Walter Benjamin once wrote, "Memory is not an instrument for exploring the past but its theatre." Indeed, this text is an experiment in the involuntary performance of memory - forever departing from the moment of its inscription while forever attempting to return to the script and source of its unfolding.

(Source: Author's description from the Electronic Literature Collection, Volume Two)
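
The piece itself runs in Processing with the RiTa library (see the technical notes below); the following Python sketch substitutes NLTK's WordNet interface, and the drift lengths and "remember" probability are invented:

    import random
    from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

    ORIGINAL = "the cold morning I walked along the river remembering her voice".split()
    text = ORIGINAL[:]

    def substitute(word):
        # A random synonym or coordinate term (sister term) from WordNet, if any.
        synsets = wn.synsets(word)
        if not synsets:
            return word
        s = random.choice(synsets)
        candidates = {l.name() for l in s.lemmas()}
        for hyper in s.hypernyms():
            for sister in hyper.hyponyms():
                candidates.update(l.name() for l in sister.lemmas())
        candidates.discard(word)
        return random.choice(sorted(candidates)).replace("_", " ") if candidates else word

    def drift_step(remember_bias=0.0):
        # Replace one word; with probability `remember_bias`, restore the original
        # word instead (the text is "more likely to remember than forget").
        i = random.randrange(len(text))
        text[i] = ORIGINAL[i] if random.random() < remember_bias else substitute(text[i])

    for _ in range(20):    # first state: involuntary drift
        drift_step(remember_bias=0.0)
    for _ in range(200):   # second state: longing to reconstruct itself
        drift_step(remember_bias=0.7)
    print(" ".join(text))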

I ♥ E-Poetry entry
Technical notes

Java is required. Simply open the appropriate folder for your OS and double-click the application. One can close the application by clicking on the word 'stop' in the lower left corner of the screen. Uses Processing and the RiTa library by Daniel C. Howe.

Description (in English)

John Cayley’s “windsound” is an algorithmic work presented as a 23-minute recording of a machine-generated reading of scrambled texts. The cinematic work presents a QuickTime video of white letters on a black screen: a text written by Cayley together with a translation of the Chinese poem “Cadence: Like a Dream” by Qin Guan (1049-1100). As a sensory letter-by-letter performance, the work sequentially replaces letters on the screen, so that what starts as illegible text becomes readable as a narrative, and then again loses meaning in a jumble of letters. Cayley calls this technique “transliteral morphing”: textual morphing based on letter replacements through a sequence of nodal texts. Sequences of text appear within up to 15 lines on the same screen, thus presenting and automatically replacing a longer text on a digitally simulated single page, a concept Judd Morrissey also applies in "The Jew's Daughter." Unlike Morrissey’s piece, Cayley’s doesn't allow the user to interact with the work. Instead the work appears as a self-sufficient text-movie with ambient sound: murmurs of voices, windsound, and synthetic female and male voices reading the non-readable to the viewer.

As with the shifting letters, narrative perspectives also morph and switch fluidly between the lyrical I, Christopher, Tanaka and Xiao Zhang. Thus the sentence "'We know,' Tanaka had said in English / 'Tomorrow if we meet / I will have to kill you myself'" is, in the algorithmic process of the work, later spelled out by the I-narrator. At the very end of the work, John Cayley dedicates “windsound” to the memory of Christopher Bledowski. What remains, after the black screen and a restart of morphing letters before they vanish conclusively, is windsound. At a certain point in the movie the text says "you have to be/to stay/silent/to hear it," and it seems like the reader has to be silent, too, listening to what he cannot understand, patiently waiting for the moment of legibility.

(Source: record written by Patricia Tomaszek originates from the Electronic Literature Directory)
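
Cayley's transliteral morphing routes letter replacements through sequences of formally related letters; the sketch below simplifies this to direct one-position substitutions between a chain of invented, equal-length nodal texts:

    import random

    def transliteral_morph(source, target, rng=random.Random(0)):
        # Yield successive texts in which differing letter positions of `source`
        # are replaced, one at a time and in unpredictable order, until `target`
        # emerges. Assumes equal-length texts; Cayley's routing of each letter
        # through formally related intermediate letters is omitted here.
        text = list(source)
        positions = [i for i, (a, b) in enumerate(zip(source, target)) if a != b]
        rng.shuffle(positions)
        yield "".join(text)
        for i in positions:
            text[i] = target[i]
            yield "".join(text)

    nodes = ["the wind carries it", "the mind ferries it", "the mind worries us"]
    for a, b in zip(nodes, nodes[1:]):
        for frame in transliteral_morph(a, b):
            print(frame)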

I ♥ E-Poetry entry
Description (in English)

Author description: Letterscapes is a collection of twenty-six interactive typographic landscapes, encompassed within a dynamic, dimensional environment. Wordscapes is a collection of reactive one-word poem landscapes, one for each letter of the alphabet.

I ♥ E-Poetry entry
Technical notes

Please note: "Wordscapes" does not work in the Google Chrome browser.