The Work of Art in the Age of Mechanical Reproduction

By Vian Rasheed, 12 November 2019
Abstract (in English)

There exists a rift in the contemporary culture of computational poetry generation. On the one hand, a vibrant poet-programmer scene has emerged around certain arts-focused conferences (e.g. ELO), online events (e.g. #nanogenmo/#napogenmo), spaces (e.g. NYC's Babycastles), and publication venues (e.g. Nick Montfort's Badquar.to). On the other, computer scientists working in Machine Learning, Natural Language Processing, and related fields publish scientific research on generating literary texts. The epistemological divide between these two groups can be seen most readily in the latter's focus on using empirical tests to assess work. These tests may be intrinsic (e.g. a quantitative measure of the linguistic features of computer-generated poetry), but they are often extrinsic (e.g. based on human judgments of whether a poem possesses qualities such as humor or coherence). Underwriting much (though not all) of this activity is the notion of the Turing Test and its assumed goal of computer-generated text that can pass as human-authored. Clearly, a great variety of work produced by the former group, the poet-programmers, does not lend itself to this kind of empirical testing; more often these works refuse to dissemble, instead radically foregrounding their non- or post-human qualities.

The point of this paper is to reconsider the peripheral status, within the e-lit community, of the kind of text generation that takes as its goal the emulation of human-produced literary discourse. As this paper's title suggests, our main point of theoretical departure is Walter Benjamin's classic account of the way that mechanical reproduction threatens art's "aura" by obliterating the distance between art and its consumer. Likewise, Vilém Flusser (_Does Writing Have a Future?_) imagined that computer-generated poetry requires the writer-programmer to "dissect" their experience, fracturing it into the smallest logical units possible so that it becomes calculable and can be turned into a model of human cognition. What is "mechanically reproduced," then, is not so much the poem as the poet.

What do we learn about ourselves, our experiences, and our perception when we subject them to algorithmic "dissection"? What notions of the human do we reproduce, or produce anew, when we model the mind or minds? How do contemporary computational paradigms (e.g. deep learning) constrain this representation? Where is the consonance between human and computational thought, and where is the dissonance? What remains mysterious, distant, unmodellable? The goal of this paper is not to answer these meta-questions but rather to suggest that to turn entirely away from "emulation" as a goal is to evade them. In an era when algorithmic agents increasingly imitate humans, corporate interests are very happy to pursue these questions on their own terms, determining what aspects of humanity are worth emulating and to what ends. Artist- and researcher-led "imitation games" are one way of wresting back this prerogative; our talk will reflect on these questions in light of the Turing Tests in the Creative Arts at Dartmouth College.
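As a minimal sketch of what an "intrinsic" test of the kind contrasted above can look like, the following Python fragment computes a type-token ratio, one crude quantitative measure of lexical variety; the metric choice, function name, and sample lines are illustrative assumptions, not a method proposed in the talk.

```python
# Illustrative "intrinsic" evaluation of generated poetry: a type-token
# ratio (distinct words / total words) as a crude measure of lexical
# variety. The metric and sample lines are assumptions for illustration.
import re


def type_token_ratio(text: str) -> float:
    """Return distinct-word count over total-word count (0.0 if empty)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0


repetitive = "the rain the rain the rain the rain"
varied = "a glass bell rings over wet slate at dusk"

print(f"repetitive poem: {type_token_ratio(repetitive):.2f}")  # 0.25
print(f"varied poem:     {type_token_ratio(varied):.2f}")      # 1.00
```

An extrinsic test, by contrast, admits no such closed-form computation: it depends on human judges, which is part of the epistemological divide the abstract describes.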