Alexa

By Carlos Muñoz, 3 October, 2018
Author
Language
Year
License
All Rights Reserved
Record Status
Abstract (in English)

Earlier this year, poet-scholar John Cayley proposed that scholars and makers of electronic literature attend to the “delivery media for ‘literature’ that are, historically, taking the place of physical, codex-bound books” (John Cayley, 2017, “Aurature at the End(s) of Electronic Literature,” electronic book review). Among those emerging delivery media are so-called Virtual Digital Assistants (VDAs) like Amazon’s Alexa, Microsoft’s Cortana, and Apple’s Siri. Capable of interpreting and producing human language, these domestic robots speak in pleasant female voices, offering access to information, music, social media, telephony, and other services. Their terms and conditions inform the consumer that once the device is activated, it records everything that is being said. The proliferation of VDAs carries wide-reaching ethical and aesthetic ramifications that scholars in digital media should attend to.

On the one hand, “we are willingly installing and paying for the last mile of the infrastructure needed for the ultimate surveillance society” (Robert Dale, 2017, “Industry watch: The pros and cons of listening devices,” Natural Language Engineering 23.6, 973). On the other hand, “the arrival of speaking and, especially, listening networked programmable devices…has, I believe, important consequences for literature and for literary — linguistic aesthetic — practices of all kinds” (Cayley 2017). Cayley’s digital aural performance The Listeners (2016) offers a lens through which to examine the poetics and ethics of VDAs. The Listeners is housed in an Amazon Echo, a smart speaker controlled via the artificial intelligence Alexa. An instance of what Cayley has called ‘aurature’ (a composite of aurality and literature), The Listeners complicates our understanding of audio performance art, as its text is delivered by a synthetic voice. By aesthetically engaging the slight – yet noticeable – robotic monotony of Alexa’s speech, The Listeners challenges audiences to think about the nature of transactive synthetic language and the meaning of human/AI subjectivity.

At the same time, as the title The Listeners suggests, Alexa’s ability to ‘hear’ is a key feature of Cayley’s piece. In installations of The Listeners, Alexa’s ‘recording’ feature is active, which means that all transactions between speakers and The Listeners are “sent to the artist's Alexa app and the alexa.amazon.com website” (Cayley 2016). Along with the piece’s title, the recording function of The Listeners hints at the forms of social control enabled by technologies like Alexa. Alexander Galloway uses the term “reverse Panopticon” for a society characterized by “a multiplicity, nay an infinity, of points of view flanking and flooding the world viewed” (Galloway, Alexander, 2014, Laruelle: Against the Digital. University of Minnesota Press, 68). Alexa not only records her owners’ transactions but also sends what she hears from guests and visitors to the owners’ account, allowing consumers to spy on each other. Like online practices such as “following” or “stalking” others on social media, Alexa constitutes a prime example of surveillance in a reverse panoptic society. In aesthetically engaging these ‘hearing’ abilities via Alexa’s transactive synthetic language, The Listeners brings computer ethics into conversation with new media poetics, offering trajectories for scholarly inquiry into the ethical and aesthetic implications of VDA technologies.

By Jane Lausten, 5 September, 2018
Author
Language
Year
Record Status
Abstract (in English)

This paper examines a selection of examples of AI storytelling from film, games, and interactive fiction to imagine the future of AI authorship and to question the impetus behind this trend of replacing human authors with algorithmically generated narrative. Increasingly, we are becoming familiar with AI agents as they are integrated into our daily lives in the form of personified virtual assistants like Siri, Cortana, and Alexa. Recently, director Oscar Sharp and artist Ross Goodwin generated significant media buzz with two short films they produced that were written by their AI screenwriter, who named himself Benjamin. Both Sunspring (2016) and It’s No Game (2017) were created by Goodwin’s long short-term memory (LSTM) AI, which was trained on media content that included science fiction scripts and dialogue delivered by actor David Hasselhoff. It’s No Game offers an especially apt metacommentary on AI storytelling: it addresses the possibility of a writers’ strike and imagines that entertainment corporations opt out of union negotiations and instead replace their writers with AI authors.

After watching Benjamin’s films, it is clear that these agents are not yet ready to take over the entertainment industry, but the trend is growing more common in video games. Many games now feature procedurally generated content that creates unique obstacles, worlds, and creatures. The most well-known example might be No Man’s Sky (2016), but it is not the first; Spelunky (2008), for example, made use of procedural generation many years prior.
Although attempts at algorithmically generated narrative are rare, Ludeon Studios’ RimWorld (2016) boasts that its sci-fi game world is “driven by an intelligent AI storyteller.” Its AI, however, became the subject of controversy after Claudio Lo analyzed the game code supporting its storyteller and revealed that the program replicated problematic aspects of society, including the harassment of women and the erasure of bisexual men.

These examples offer insight into issues that have arisen and will continue to arise as AI storytelling advances. This paper addresses questions concerning not only the implications for human authors in the face of this very literal take on Barthes’ “Death of the Author,” but also those related to what AI will learn from reading our texts and what it will mean to look into the uncanny mirror that AI will inevitably hold up to us when producing its own fiction. Though it may be a while before Siri will tell us bedtime stories, it is no doubt a feature that has occurred to Apple: asking Siri for one results in a story about her struggles working at Apple and the reassurance she receives from conversing with ELIZA. ELIZA, one of the earliest natural language processing programs, was created by Joseph Weizenbaum in the 1960s and designed to mimic a Rogerian psychotherapist by parroting back user input in the form of questions. Siri’s reference to this program both acknowledges the history of these agents and evokes a future where our virtual assistants grow to become more than canned responses.