AI

Description (in English)

Why Are We Like This? (WAWLT) is an AI-augmented game of collaborative, improvisational story construction in which two players write a story in a pastiche of the cozy mystery genre, supported by a simulation-based AI system that operationalizes character subjectivity.

WAWLT explores how computation can enable new forms of playful, social creative writing practice. By running, querying, and updating an underlying storyworld simulation, the AI system provides players with inspiration and keeps the story moving forward, even when the players are unsure what should happen next. Players collaboratively select author goals they would like to work towards throughout the story, and select actions for characters to perform, either from a set suggested by the system or by querying the action possibility space in a custom story sifting interface. The suggested actions are continually reassessed (using simulation rules) based on what each character might want to do next, prioritizing actions that could fulfill the current author goals. Whenever players select an action to be performed in the storyworld, its effects are realized in the simulation, and a generated action description is appended to a textual transcript recounting the story so far, which players freely edit as the story develops.

The system uses the newly developed technology of story sifting: the extraction of narratively potent sequences of events from the chronicle of all the events that have taken place within a simulation. Players use sifting to guide the story, and the system uses it to implement character subjectivity.
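As a rough illustration, a sifting pattern can be thought of as a query over the simulation's ordered event log. The sketch below is a toy, not WAWLT's implementation; the event schema and the "betrayal" pattern are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    """One entry in the simulation's chronicle (hypothetical schema)."""
    time: int
    actor: str
    action: str
    target: Optional[str] = None

def sift_betrayals(chronicle: list) -> list:
    """Find a 'betrayal' pattern: an actor helps someone, then later harms them.

    A sifting pattern is essentially a query over ordered events; real sifters
    compile declarative patterns into queries like this one.
    """
    matches = []
    for i, first in enumerate(chronicle):
        if first.action != "help":
            continue
        for later in chronicle[i + 1:]:
            if (later.actor == first.actor
                    and later.target == first.target
                    and later.action == "insult"):
                matches.append((first, later))
    return matches

log = [
    Event(1, "Ana", "help", "Bo"),
    Event(2, "Bo", "help", "Ana"),
    Event(3, "Ana", "insult", "Bo"),
]
print(sift_betrayals(log))  # one match: Ana helps Bo, then insults Bo
```

A system like the one described would run many such patterns over the chronicle and rank the matches by how well they serve the players' current author goals.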

Screen shots
Image
Content type
Contributor
Year
Publisher
Publication Type
ISBN
978-1-93-399664-6
Record Status
Description (in English)

Mexica: 20 Years–20 Stories [20 años–20 historias] contains 20 short narratives developed by the computer program MEXICA. Plots describe fictional situations related to the Mexicas (also known as Aztecs), ancient inhabitants of what today is Mexico City. This is the first book of short stories produced completely by a creative agent capable of evaluating and making judgments about its own work, as well as incorporating into its knowledge base the pieces it produces. In contrast with statistical models, MEXICA is inspired by how humans actually develop fictional stories. The book, in both Spanish and English, also includes source references related to the program. Preface by Fox Harrell.

(Source: Publisher's catalog page)

Screen shots
Image
Mexica book cover image
Content type
Translator
Year
Language
Publication Type
Record Status
Description (in English)

Yuefu is a poetry generation system based on OpenAI’s GPT, a generative pre-trained natural language model; the model is pretrained on Chinese newspapers and fine-tuned on classical Chinese poetry. The developers write in their paper describing the system that it does not use "human crafted rules or features" or "any additional neural components". The system can generate poems in various formal, classical styles.

The example shown is translated by Ru-Ping Cheng and Jeff Ding for the ChinAI newsletter. It is an example of Cang Tou Shi, a Chinese version of acrostic poems. "In this case," the translator explains, "the first words of each line form the title of the poem: 神经网络 (neural networks)." Some other examples of the system's output are shown in a preprint published by the system's creators, and a translation of a Chinese newspaper article (entered into ELMCIP) provides translations of more examples.  

Pull Quotes

Neural Networks

Allocating divine status to a soul that has passed—it is natural,
Like the classics that preserve the virtues of ancient wisdom.
The astray scripts of the internet try earnestly to preserve their legacies,
A newfound literary wisdom that shall be passed down for centuries.

Screen shots
Image
Screenshot of one of the generated poems in Chinese
Technical notes

A demo of the system can be accessed on WeChat. The developers write that to test it, one should register a WeChat account and add “EI体验空间” or “诺亚实验室”.

By Jill Walker Rettberg, 18 September, 2019
Publication Type
Language
Year
Record Status
Abstract (in English)

We present a simple yet effective method for generating high-quality classical Chinese poetry with a Generative Pre-trained Language Model (GPT)[5]. The method adopts a simple GPT model, without using any human-crafted rules or features, or designing any additional neural components. While the proposed model learns to generate various forms of classical Chinese poems, including Jueju(绝句), Lüshi(律诗), various Cipai(词牌) and Couplets(对联), the generated poems are of very high quality. We also propose and implement a method to fine-tune the model to generate acrostic poetry. To the best of our knowledge, this is the first work to employ GPT in developing a poetry generation system. We have released an online mini demonstration program on WeChat to show the generation capability of the proposed method for classical Chinese poetry.
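The acrostic form the abstract mentions (Cang Tou Shi) constrains each line to open with the next character of a hidden phrase. The sketch below illustrates only that constraint, not the paper's method: a random character pool stands in for the fine-tuned language model, and the pool and line length are assumptions for the example:

```python
import random

def toy_acrostic(head: str, line_len: int = 7,
                 pool: str = "山水月风花雪云天") -> list:
    """Toy Cang Tou Shi generator: line i begins with head[i].

    A real system (like the one the abstract describes) would sample the
    remaining characters from a language model conditioned on the poem so
    far; here they are drawn at random from a small pool.
    """
    random.seed(0)  # deterministic output for the example
    return [ch + "".join(random.choices(pool, k=line_len - 1)) for ch in head]

for line in toy_acrostic("神经网络"):
    print(line)
# reading the first character of each line top to bottom yields
# 神经网络 (neural networks), the hidden phrase
```

The interesting engineering problem, which the toy sidesteps, is making the model produce fluent lines while honoring the forced first character of each line.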

Creative Works referenced
By leahhenrickson, 12 September, 2019
Language
Year
Record Status
Abstract (in English)

Natural language generation (NLG) – when computers produce text-based output in readable human languages – is becoming increasingly prevalent in our modern digital age. This paper will review the ways in which an NLG system may be framed in popular and scholarly discourse: namely, as a tool or as an agent. It will consider the implications of such perspectives for general perceptions of NLG systems and computer-generated texts. Negotiating claims made by system developers and the opinions of ordinary readers amassed through empirical studies conducted for this research, this paper delves into a theoretical and philosophical exploration of questions of authorial agency related to computer-generated texts, considering whether NLG systems constitute tools for manifesting human intention or agents in themselves.

This paper will begin by considering NLG systems as tools for manifesting human intent, the more commonly expressed view amongst developers and readers. An NLG system arguably serves as an extension of a human self (e.g. the developer or the user). Yet one cannot ignore the increasing autonomy of such systems. At what point does an extension of the self become a distinct entity altogether?

The discussion will then shift to considering NLG systems as agents in themselves. As evidenced by the results of studies conducted for this research, ordinary readers do tend to attribute authorship to computer-generated texts. However, these readers often attribute authorship to the system rather than its developers, indicating that – in some way – the system is distinct enough from its creators to warrant the title of author. Yet conventional modern understandings of the word ‘author’ suggest that authorship at least partly presumes intention-driven agency. Do NLG systems adhere to this expectation? Through reference to various theoretical perspectives, this paper will argue that some NLG systems may surpass the ‘tool’ title and more appropriately be deemed authorial agents. This type of agency, however, is not so characterised by the free-will intention of human writers, but by the intention to fulfil a designated objective that is respected within broader social contexts. When readers attribute authorship to the NLG system itself, that entity is permitted a place within the fluid social networks that humans populate. The NLG system becomes an algorithmic author.

Description (in English)

The Listeners is a linguistic performance, installation, and Amazon-distributed third-party app or skill – transacted between speakers or speaker-visitors and an Amazon Echo. The Echo embodies a voice-transactive artificial intelligence and domestic robot that is named for its wake word, Alexa. The Listeners is a custom software skill built on top of this infrastructure. The Listeners have their own interaction model. They listen and speak in their own way – as designed and scripted by the artist – using the distributed, cloud-based voice recognition and synthetic speech of Alexa and her services.

(Source: shadoof.net)

Multimedia
Remote video URL
Description (in English)

Cyborgs in the Mist is an enquiry which takes the form of a movie, a sound installation, photo prints, and a book. The film presents the LOPH research lab and its utopian proposals to struggle against the planned obsolescence of humankind. Taking into account the development of robotics and artificial forms of intelligence, the LOPH research lab experiments with ways to help humans adapt to their new environment, and to put them in a position to fight against their planned obsolescence. How can we anticipate this shift in the logic of evolution? How can we adapt to this change with a minimum of violence? Academic teams, science-fiction writers, and new forms of artificial intelligence work together to anticipate the most disastrous scenarios.

(source: description from the schedule)

Pull Quotes

How can we anticipate this shift in the logic of evolution? How can we adapt to this change with a minimum of violence? Academic teams, science-fiction writers, and new forms of artificial intelligence work together to anticipate the most disastrous scenarios.

Screen shots
Image
image of the work
Multimedia
Remote video URL
By Amirah Mahomed, 19 September, 2018
Author
Year
University
License
All Rights reserved
Record Status
Abstract (in English)

I consider an expanded version of the technological singularity, that moment at which humanity will be transformed in an unrecognizable way – the biggest gap in human history. As I see it, the singularity may result either from the superintelligence of bootstrapping AIs or from superstupidity as we, using technology, cause our own species to go extinct. What will literature be like after this event? It seems hard enough to write a poem that will be of interest to the next generation or to produce an electronic literature work that can be read and accessed in a practical way after a few decades. My argument, however, is that only literature deeply engaged with computation will have any chance to remain relevant after the extinction or radical transformation of all human life. This includes work done by Christian Bök in xenopoetics – but because of the compositional process of the core poem of The Xenotext Project, not because of the proposed genetic encoding of that text. Small, highly constrained computer systems of the type I develop may also remain compelling to non-humans if the computational environments in which they function are preserved or can be reconstructed. Extraterrestrial or computational intelligences will find human literary works accessible through their computational aspects; how these interact with language and culture could provide an important trace of our existence.

(Source: ELO 2018 Conference: Lighting Talks: Literature after the Technological Singularity)

Pull Quotes

...only literature deeply engaged with computation will have any chance to remain relevant after the extinction or radical transformation of all human life.

By Jane Lausten, 5 September, 2018
Author
Language
Year
Record Status
Abstract (in English)

This paper examines a selection of examples of AI storytelling from film, games, and interactive fiction to imagine the future of AI authorship and to question the impetus behind this trend of replacing human authors with algorithmically generated narrative. Increasingly, we’re becoming familiarized with AI agents as they are integrated into our daily lives in the form of personified virtual assistants like Siri, Cortana, and Alexa. Recently, director Oscar Sharp and artist Ross Goodwin generated significant media buzz about two short films that they produced which were written by their AI screenwriter, who named himself Benjamin. Both Sunspring (2016) and It’s No Game (2017) were created by Goodwin’s long short-term memory (LSTM) AI that was trained on media content that included science fiction scripts and dialogue delivered by actor David Hasselhoff. It’s No Game offers an especially apt metacommentary on AI storytelling as it addresses the possibility of a writers’ strike and imagines that entertainment corporations opt out of union negotiations and instead replace their writers with AI authors.

After watching Benjamin’s films, it’s clear that these agents are not yet ready to take over the entertainment industry, but this trend is growing more common in video games. Many games now feature procedurally generated content that creates unique obstacles, worlds, and creatures. The most well-known example might be No Man’s Sky (2016), but it is not the first; Spelunky (2008), for example, made use of procedural generation many years prior.
Although attempts at algorithmically generated narrative are rare, Ludeon Studio’s RimWorld (2016) boasts that its sci-fi game world is “driven by an intelligent AI storyteller.” Its AI, however, became the subject of controversy after Claudio Lo analyzed the game’s code that supports its storyteller and revealed that the program replicated problematic aspects of society, including the harassment of women and erasure of bisexual men.

These examples offer insight into issues that have arisen and will continue to arise as AI storytelling advances. This paper addresses questions concerning not only the implications for human authors in the face of this very literal take on Barthes’ “Death of the Author,” but also those related to what AI will learn from reading our texts and what it will mean to look into the uncanny mirror that AI will inevitably hold up to us when producing its own fiction. Though it may be a while before Siri will tell us bedtime stories, it is no doubt a feature that has occurred to Apple, as requesting Siri to do so results in a story about her struggles working at Apple and the reassurance she receives from conversing with ELIZA. ELIZA, one of the earliest natural language processing programs, was created by Joseph Weizenbaum in the 1960s and designed to mimic a Rogerian psychotherapist by parroting back user input in the form of questions. Siri’s reference to this program is both an acknowledgement of the history of these agents and evokes a future where our virtual assistants grow to become more than canned responses.
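The Rogerian parroting attributed to ELIZA above can be sketched in a few lines: match a statement against a pattern, swap the pronouns, and return it as a question. The rules and pronoun table below are illustrative stand-ins, not Weizenbaum's original script:

```python
import re

# A tiny pronoun table for reflecting the speaker's words back at them.
PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones ('my work' -> 'your work')."""
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in phrase.lower().split())

# ELIZA-style rules: a pattern to match and a question template to fill.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza(utterance: str) -> str:
    """Return a Rogerian-style response by parroting the input as a question."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip().rstrip("."))
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when no rule matches

print(eliza("I feel overwhelmed by my work."))
# -> Why do you feel overwhelmed by your work?
```

The original program used a much larger script of ranked keywords and decomposition rules, but the core move is the same pattern-match-and-reflect loop.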

By tye042, 5 October, 2017
Author
Publication Type
Language
Year
Record Status
Abstract (in English)

Timothy Luke reviews Nicholas Negroponte and takes a second look at ‘digital subjectivity.’

As the key overseer at MIT’s Media Lab, Nicholas Negroponte has used his best-selling book, Being Digital, as the trailer for a transnational road trip on which he touts the exciting new online world as it is being invented in his digital workshops. Yet Negroponte’s enthusiasm about these possibilities leads him away from raising other, more interesting, questions about digital being, particularly those having to do with the kind of subjectivity that becomes possible in cybernetic spaces. Save for his somewhat overdrawn exhalations over the shift from “atoms” to “bits” as the wave of the future (a shift that was first noticed 15 years ago by the Tofflers in The Third Wave), he too sticks with the usual interpretive conceit: namely, that such new (wo)man/machine interfaces at the computer will simply reposition existing material styles and structures of social agency in a new cybernetic register, making everything more or less the same there (in “bits”) as it is here (in “atoms”), only maybe more so, meaning essentially quicker, better, closer, sharper, etc. 

Pull Quotes

...these digital beings are now deeply embedded life forms, created by and for those disciplinary institutions that generate power over and knowledge of them by meshing groups of people in vectors of influence coursing through complex statistical spaces.