AI

By Scott Rettberg, 29 May, 2021
Abstract (in English)

Unlike other forms of artificial intelligence and machine learning that are used for creative production, female-presenting virtual assistants such as Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and Google Assistant are designed not to be collaborators or content producers but to serve as mouthpieces and platforms on which others' pre-designated scripts are performed. This talk examines the gendered design and creative limitations of AI virtual assistants as part of a growing body of scholarship on the systemic biases of technological design.

Description (in English)

'Een hele echte' is a story told through emails that readers receive over the course of 14 days. In the story, Helen is looking online for a new bass guitar. She stumbles upon Tarak, who is not a real person but an artificial intelligence. Helen then experiences how the enormous influence the internet has on her life is completely taken over by Tarak.

Contributors note

Helen wants a bass guitar as cool as the one she saw Kim Gordon of Sonic Youth playing in an old video clip, now that she is finally going to make music again. Online she runs into Tarak, a virtuoso searcher who leads her to the ultimate BC Rich Mockingbird Bass: the very instrument Kim Gordon plays in that clip. The real one! Those who sign up follow, over fourteen days, an adventure that drags Helen along to shady nighttime parking lots behind a shopping centre in Sydney, Australia. Tarak turns out to be not a human but a form of artificial intelligence. That is interesting and very convenient, until Helen experiences how the enormous influence of the internet on her daily life falls into the hands of an entity that observes no human limitation or consideration. To call what happens next ghostly and alienating is putting it mildly. Questions impose themselves about what intention, contact, understanding and human authenticity are when we deal with AI. And what drives Tarak?

The serial story is based on a text consisting of emails that Helen sends to the reader. But the environment in which that text appears is enriched with real-time messages from news feeds, weather apps and individual chat messages from Tarak, which adapt to the location and time at which Helen finds herself in the story, and to the reader's own location. The everyday digital reality of Helen and the reader is the backdrop against which the serial story unfolds.

Description (in English)

Lucebot is an artificial intelligence poetry bot that produces live poetry, at varying degrees of creativity, following the rhythm of the poems of the poet Lucebert. The birth of Lucebot was announced in 'Kijkschrift,' an artistic and literary pop-up magazine.

Description (in original language)

Lucebot is een kunstmatig intelligente dichtrobot, die live poëzie in verschillende mate van creativiteit kan uitspugen volgens het ritme van de gedichten van Lucebert.

Lucebot verscheen in Kijkschrift. Kijkschrift is een artistiek en literair pop-up tijdschrift, waarvan de bladzijden onder jouw ogen van de pagina afglijden en tot leven komen in de vorm van visuele kunst en poëzie in de Leidse binnenstad. Geïnspireerd op de gedichten van de schrijver en kunstenaar Lucebert. 

Description (in English)

 

In a 1980 interview with David Remnick, John Ashbery describes the formative impact that the poetry of W. H. Auden had on his writing: “I am usually linked to Wallace Stevens, but it seems to me Auden played a greater role. He was the first modern poet I was able to read with pleasure…” In another interview Ashbery identifies Auden as “one of the writers who most formed my language as a poet.” For Auden’s part, there was a mutual yet mysterious appreciation of the younger poet’s work: Auden awarded Ashbery the Yale Younger Poets prize for his collection “Some Trees,” with the caveat that he “had not understood a word of it.”

 

This web-based exhibition presents a creative experiment using OpenAI’s GPT-2 and traditional recurrent neural networks to develop a generative poetry pipeline loosely modeled after this short narrative describing the dynamic between Ashbery, Auden, and Stevens. While this modeling is subjective and playful, it aims to map the relationships between the three poets onto different aspects of a machine learning framework. It explores the potential of using social and personal relationships, and the narratives they imply, as inspirational structures for designing generative text pipelines, and it creates “Transformative Reading Interfaces” that explicate the relationships between the training corpora, the machine-generated text, and the conceptualization of the artist.

Description (in English)

 

“I live on Earth at the present, and I don't know what I am. I know that I am not a category. I am not a thing –a noun. I seem to be a verb, an evolutionary process– an integral function of the universe.”

– Buckminster Fuller, from I Seem to be a Verb, 1970

 

‘Bucky’ Fuller’s well-known quote, originally published in his book I Seem to Be a Verb (1970), contrasts human participation in the material world (which Fuller suggests can be described with nouns) with the ongoing evolutionary processes that influence and shape that world (which Fuller suggests can be described with verbs).

 

The web-based "A.I. seems to be a verb" (2021) automatically identifies and maps speech, not only as linguistic functions (e.g. nouns, verbs, adjectives, pronouns) but also across a spectrum of sentiment from negative to positive, in order to generate a complex array of paratextual supports (typeface, page design, rules, symbolic elements and word prompts) used in the visual representation of the text on the screen. The entire process happens in real time, providing an uncanny ‘mise en abyme’ experience that simultaneously engages the participant’s auditory and visual responses to language construction.
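The kind of mapping the description sketches, part of speech plus sentiment driving typeface and layout choices, can be illustrated with a toy sketch. The lexicons, the mapping rules, and the `render_params` function below are all invented for demonstration; they are not taken from the work:

```python
# Illustrative sketch (not the artist's code): tag each word with a part of
# speech and a sentiment score, then derive hypothetical display parameters.
# Both lexicons and both mapping rules are invented for demonstration.

POS_LEXICON = {"i": "pronoun", "seem": "verb", "to": "particle",
               "be": "verb", "a": "article", "verb": "noun"}
SENTIMENT_LEXICON = {"seem": 0.1, "be": 0.0, "verb": 0.3}

def render_params(word):
    """Return hypothetical paratextual parameters for one word."""
    pos = POS_LEXICON.get(word.lower(), "unknown")
    sentiment = SENTIMENT_LEXICON.get(word.lower(), 0.0)
    return {
        "word": word,
        "pos": pos,
        # e.g. verbs rendered in an italic face, nouns in a bold face
        "typeface": {"verb": "italic", "noun": "bold"}.get(pos, "regular"),
        # font size grows with positive sentiment
        "size": round(16 * (1.0 + sentiment)),
    }

layout = [render_params(w) for w in "I seem to be a verb".split()]
```

In the actual work this analysis runs in real time on recognized speech; here a fixed sentence stands in for the audio pipeline.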

Description (in English)

 

Distant Affinities is a work of recombinant cinema about machine intelligence attempting to process, narrate and mimic sentient being. Through subtitles, the omniscient AI narrator cycles through media that has been captured from the network and attempts a narrative interpretation of the patterns of human behavior. Disparate data points and discontinuous video loops resist being systematized or narrativized. The distances or gaps between the text and video fragments suggest what remains outside the domains of surveillance and narrative. An allegory of the vagaries of networked life existing within larger webs of living and non-living systems, the work shows a world coming apart, but also transforming into a more spacious mode of being made of errant language, creaturely life, isolated gestures and mutating interfaces.

 

Distant Affinities is programmed to oscillate between a probabilistic distribution of media elements and controlled narrative sequencing; between poetic montage and spatio-temporal continuity. Video, audio and text fragments appear on the screen in semi-indeterminate arrangements, depicting the chaotic flux of a technological world endlessly changing and repeating itself with each user click. Clicking on certain fragments “zooms in” voyeuristically on moments of individual lives, full of their own complex cycles of sensation, memory, thought, embodied and disembodied living. Loops, nested and at various scales, are employed to convey a fractal temporality. The intention of the work is to create an ambient and fluid experience, at times adrift in indeterminate structures and processes and at other times stimulating in the viewer a search for meaningful patterns.
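The oscillation described above, between probabilistic montage and controlled narrative sequencing, could be sketched roughly as follows. The clip names, the weights, and the `next_clip` function are hypothetical, not the work's actual code:

```python
# A minimal sketch of alternating between weighted-random selection of media
# fragments and a fixed narrative order. All names and weights are invented.

import random

CLIPS = ["street", "hands", "window", "static"]
WEIGHTS = [0.4, 0.3, 0.2, 0.1]           # hypothetical sampling weights
NARRATIVE_ORDER = ["street", "hands", "window"]

def next_clip(step, mode, rng):
    """Pick the next fragment: random montage or controlled sequence."""
    if mode == "montage":
        # probabilistic distribution of media elements
        return rng.choices(CLIPS, weights=WEIGHTS, k=1)[0]
    # spatio-temporal continuity: follow the fixed order
    return NARRATIVE_ORDER[step % len(NARRATIVE_ORDER)]

rng = random.Random(0)   # seeded so the sketch is reproducible
# oscillate: three montage picks, then three sequenced picks
sequence = [next_clip(i, "montage" if i < 3 else "narrative", rng)
            for i in range(6)]
```

A user click in the work could be modeled as switching `mode` or re-seeding the generator; the point of the sketch is only the two selection regimes.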

By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

This presentation explores the cultural imaginaries of machine vision as it is portrayed in contemporary science fiction, digital art and videogames. How are the relationships between humans and machines imagined in fictional situations and aesthetic contexts where machine vision technologies occur?

 

We define machine vision as the registration, analysis and generation of visual data by machines, including technologies such as facial recognition, optical implants, drone surveillance cameras and holograms. The project team has selected 335 creative works, primarily games, novels, movies, TV shows and artworks, and has entered structured interpretations of each work in a database (http://machine-vision.no/knowledgebase). We have identified situations in each work where machine vision technologies are used or represented. For each situation, we identify the main actors involved and specify which actions each actor takes. For instance, the scene in Minority Report where eyedentiscan spider-bots scan Anderton's newly replaced retina to identify him involves the character John Anderton, who is evading and deceiving the machine vision technologies; the machine vision technologies themselves, biometrics and unmanned ground vehicles (the "spyders", spider-like bots that crawl through the apartment building to find Anderton), are searching, identifying and being deceived.

 

Many contemporary games and narratives have key characters who are machines, cyborgs, robots or AIs, ranging from the Terminator to contemporary figures like the emotionally awkward SecUnit in Martha Wells' Murderbot novels, or the android player-characters in games like Detroit and Nier: Automata. Our analysis of 36 such characters finds that their actions in relation to machine vision can be grouped around three key action verbs: analysing, searching and watching. Interestingly, the watching cluster has two distinct sides, where one set of related actions seems to cluster around communication and social activities, with verbs like hiding, impersonating, confused and feeling, while the other side shows the passive and uncomfortable ways these machine characters engage with machine vision, as they are disabled, overwhelmed and disoriented. Of course, all these machine characters are imagined by humans, and their very positioning as focalisers, narrators and protagonists in narratives and games tends to lend them human qualities.

 

The 235 human characters we analysed use machine vision and are affected by machine vision in many different ways. Humans are watched, identified and scanned, and they are scared. The most frequent action taken by humans in relation to machine vision is evading it, but the next most frequent is attacking using machine vision technologies. There is of course far more nuance in the material than this might suggest, and human characters also use machine vision technologies for activities such as deceiving, embellishing and killing. Our quantitative analysis will be qualified through close readings of excerpts from the works we have analysed.

 

Description (in English)

In “Flight of the CodeMonkeys,” you play a servile programmer who must correct code for a tyrannical AI. In this futuristic dystopia, the AI System has control over everything — everything, that is, except its own code. To make necessary corrections or changes to its code, it needs an army of codemonkeys following its directions to the last bit. However, as you sweat, attending to its many requests, you begin to wonder whether the code you are correcting is all that benign. When you are contacted by the Resistance, an anonymous faction poised against the System, your suspicions grow. On the other hand, all you really want is to finish your code work so you can start your vacation with your romantic interest: marta. With each coding error you make, your vacation moves further and further away. It has been said that code holds deep meaning for its readers. This code is as meaningful as it gets, for it holds the fate of its protagonist codemonkey.

In this interactive story, readers change the outcome by changing the very Python code on which it runs, choosing whether to follow the dictates of the System (and more quickly reach a much-needed vacation) or to follow the instructions of the Resistance and attempt to bring down the System.

“Flight of the CodeMonkeys” runs on Google’s Colaboratory, an instantiation of Jupyter Notebooks. The code of the story is “live,” meaning it can be compiled and run. You can also download the Jupyter Notebook to run it locally. Though no programming background is required to read the story, a little code literacy might just mean the difference between a life of mindless servitude and making the world anew.

(Source: Author's description on The New River)
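The mechanic described above, a story whose outcome changes when the reader edits live Python in a notebook cell, might look schematically like this. The variable `FOLLOW` and the function `run_story` are invented stand-ins, not the actual CodeMonkeys code:

```python
# Hypothetical sketch of an editable story cell: the reader changes one
# variable, re-runs the cell, and the narrative branches accordingly.

FOLLOW = "system"   # the reader edits this line: "system" or "resistance"

def run_story(choice):
    """Branch the narrative on the reader's edit."""
    if choice == "resistance":
        return "You patch the exploit into the System's core. Alarms sound."
    return "You correct the code as instructed. Your vacation inches closer."

outcome = run_story(FOLLOW)
```

In a notebook, re-running the cell after the edit is what advances the story; introducing a syntax error here would stand in for the "coding errors" that delay the protagonist's vacation.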

Description (in English)

Our artistic research led us to amass an archive of thousands of recorded worries from people in the US and abroad. Ecology of Worries asks whether we should teach a machine to worry for us. The animation consists of hand-drawn critters. Some critters are driven by synthetic worries generated with the TextGenRnn recurrent neural network trained on the transcribed worries archive. Other characters are driven to worry by a novel machine learning system called Generative Pretrained Transformer 2 (GPT-2), dubbed by some commentators the AI that was too dangerous to release (but it was released anyway). The creatures’ performance of synthetic worries spans a gradient of intelligibility, reflecting on our deeper collective reality.

By characterizing synthetic worries of varying sophistication as variously evolved creatures, we aim to engage the empathy of the viewers. It is one thing to experience a text-generating neural network failing into mode collapse, a state where the system generates the same unchanging output no matter the input (e.g. a string of the same vowel repeated over and over). It is a whole other thing to watch a mode collapse personified by one of these critters: as we watch the creature struggling to get a word out, we can’t stop ourselves from feeling that we should help it finish the sentence. The mode-collapse text result of ‘aaa aaaaaaa’ becomes a living wail. The critters in Ecology of Worries appear sentient not because of the omniscience a tech evangelist might expect from a digital assistant, but because of their very real flaws. The creatures become uncanny through a juxtaposition of familiar and abstract concerns. The work invites people to watch, listen, and engage with these cute and disturbing beings to make shared concerns—whether serious or hilarious—intimate.
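Mode collapse, as described here, can be illustrated with a deliberately degenerate toy generator. The probability table and the `degenerate_generator` function below are invented for illustration and have nothing to do with the work's actual models:

```python
# Toy illustration of mode collapse: a collapsed model assigns almost all
# probability to one token, so greedy decoding emits it regardless of input.

def degenerate_generator(prompt, length=10):
    """A collapsed model: whatever the prompt, it always predicts 'a'."""
    probs = {"a": 0.97, " ": 0.03}   # invented, heavily skewed distribution
    out = []
    for _ in range(length):
        # greedy decoding always picks the single most likely token
        out.append(max(probs, key=probs.get))
    return "".join(out)

wail = degenerate_generator("what worries you?")
# the same unchanging output no matter the input: "aaaaaaaaaa"
```

The output literalizes the "string of the same repeating vowel" the description mentions; a healthy model's distribution would shift with the prompt instead of ignoring it.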

Description (in English)

The Singularity is a web-based AI narrative system that demonstrates the ethical issues, hidden biases and misbehavior of emerging technologies such as machine learning, face tracking and big data. The system tracks users' eye positions through a webcam and continuously streams an infinite feed of Reddit posts, containing the latest progress in AI along with random news and ads, directly into their eyes. By visualizing eye trajectories over time, it suggests possible misuses and dangers of all-pervasive data tracking. The near-invisible operations underpinning these technologies could bring visible and fundamental changes to society, leading the world toward a "technological singularity" in which technology governs all aspects of human society. The work consists of three sub-systems:

  1. Infinite news feed system: The system continually scrapes the article titles of the latest posts about artificial intelligence and the technological singularity from the subreddits r/singularity (https://www.reddit.com/r/singularity/) and r/artificial (https://www.reddit.com/r/artificial/). The seemingly uni-directional information flow of the news feed is actually bi-directional: user activities are fed back to the machine as in an echo chamber. Two parallel streams of text on the screen mark the co-evolution of users and machine systems driven by day-to-day browsing activities.
  2. Face-tracking surveillance system: A real-time face-tracking algorithm is implemented with ml5js (https://ml5js.org/), a machine learning library that runs in the browser. The face position and the angle at which the face turns away from the webcam are tracked. The direction of the floating sentences always points toward the user's eyes. When the user looks away by turning their head, the texts twist and wiggle as if responding to, and disobeying, the user's movement. This suspicious interaction signifies the disobedience of machines and behavior manipulation by malicious algorithms.
  3. Data collection and replay system: The user's face movement is also recorded, reshaped and replayed by the system. The trajectory of user interaction is visually represented by intertwining curves drawn on top of the texts. When the user is absent from the webcam, the visual artifacts become fully visible and reveal the data that has been secretly collected in the background, raising concerns about user privacy violations in insecure web systems.

 

Source: https://projects.cah.ucf.edu/mediaartsexhibits/uncontinuity/Wang/wang.h…
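The first sub-system, scraping the latest post titles from the two subreddits, could be sketched using Reddit's public JSON listing endpoints. The helper names `fetch_listing` and `extract_titles` are hypothetical, not the work's actual code (which runs in the browser):

```python
# A minimal sketch of pulling the newest post titles from a subreddit via
# Reddit's public JSON listing endpoint, e.g.
# https://www.reddit.com/r/singularity/new.json

import json
import urllib.request

def fetch_listing(subreddit):
    """Download the raw JSON listing of a subreddit's newest posts."""
    url = f"https://www.reddit.com/r/{subreddit}/new.json?limit=25"
    # Reddit rejects requests without a User-Agent header
    req = urllib.request.Request(url, headers={"User-Agent": "demo-scraper"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_titles(listing):
    """Pull the post titles out of a Reddit listing payload."""
    return [child["data"]["title"] for child in listing["data"]["children"]]

# Example (network access required):
# titles = extract_titles(fetch_listing("singularity"))
```

Feeding the titles of two such listings into two scrolling text streams would reproduce the "two parallel streams of text" the description mentions.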
