By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

This presentation examines a selection of poems by contemporary Irish poets at the intersections of technology, ecology, and literary aesthetics. I argue that the poems under discussion, published over the past 30 years, anticipate or comment on the emerging posthumanist paradigm: what Rosi Braidotti describes as a system of “discursive objects of exchange” within a community of “transversal ‘assemblages’ […] that involves non-human actors and technological media”.

The work of poets described as “neo-modernists” has been, from the 1990s onwards in particular, closely attuned to changes in technology, environment, and the media. The work of Randolph Healy draws on his background in mathematical sciences. Trevor Joyce’s career as a Business Systems Analyst at Apple has informed his approach to language as structured, fragmented, and networked transmission of data.

Billy Mills’s writing is highly environmentally oriented, and often appropriates scientific texts to explore the materiality of language, organic life, and geological objects. Critics have similarly commented on the “scientific” quality of the poetry of Catherine Walsh, characterized by what Lucy Collins describes as “an absence of a clearly located self”. While scholars have repeatedly commented on these poets’ resistance to prevalent views on “Irish” poetry as arising from narratives of cultural identity, few have acknowledged how their writing anticipated the theoretical shift to a posthumanist and new materialist framework. Importantly, their poetry does not discard Ireland as a geographical, cultural, or sociopolitical location, but focuses on this location as situated at the intersection of technology and science, economic and colonial power, and environmental change.

The formally ambitious and experimental poetry of Healy, Joyce, Mills, and Walsh is particularly concerned with questions of signifying systems, and the materiality of technological and environmental processes. In the poems of Justin Quinn and Nick Laird, however, media technology, consumer capitalism, and globalization are approached through a more ethically, as well as aesthetically, informed poetics.

Both Quinn and Laird are attentive to phenomena and everyday environments of the web, digital interfaces, global supply chains, and the ecological crisis, as well as their connections to social injustice and political power. Both register the connection between everyday experience in the global North and the reliance of consumer media technologies on what Anna Tsing has described as systems of “supply chain capitalism”. For Tsing, the concept provides “a model for understanding both the continent-crossing scale and the constitutive diversity of contemporary global capitalism”.

Drawing on Tsing, Evelyn Wan has similarly highlighted digital media culture’s participation in these often exploitative supply chains. Finally, despite their differences, what all of the poets share is a highly critical approach to the idea of the “human” subject as represented by the lyric first person voice.

Each addresses human beings as material bodies of bone, tissue, and flesh, alongside other material entities, whether technological, organic, or non-organic. A literary aesthetics is employed to discard the category of the human as a privileged identity in relation to the nonhuman world.

By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

Grace Dillon, an Anishinaabe scholar of science fiction, writes that “Native slipstream,” a subgenre of speculative fiction, “views time as pasts, presents, and futures that flow together like a navigable stream.” The immense possibilities inherent to this genre, she continues, allow “authors to recover the Native space of the past, to bring it to the attention of contemporary readers, and to build better futures.” 

Biidaaban (Dawn Comes) (2018), a short stop-motion film by Vancouver-based Michif filmmaker Amanda Strong, illustrates the political possibilities of Indigenous slipstream, and Indigenous science fiction more broadly, to envision liberatory futures in the face of forces that naturalize the current destructive, capitalist, and colonial order.

This paper takes as its starting point a problem posed by cultural theorist Mark Fisher, who suggested that capitalism has become so naturalized that it is nearly impossible to imagine an alternative future. Concurrently, I consider the arguments made by Dené political theorist Glen Sean Coulthard that dispossession of land is the foundation of capitalism and colonialism and that the politics of recognition reinforce settler colonial structures of domination.

With these premises, I read a number of writers on Indigenous science fiction and Indigenous political resistance alongside Biidaaban in order to demonstrate how the film’s marriage of anti-colonialism and refusal of settler recognition provides an answer to Fisher’s dilemma. The concepts of biskaabiiyang (Anishinaabemowin, “returning to ourselves”), intergenerational time, grounded normativity, and resurgence are all antidotes to capitalist realism.

These related terms refer to political strategies that counter colonial power through land-based practices, experiential knowledge, and a rejection of the politics of recognition. Biidaaban and Indigenous slipstream denaturalize capitalism by placing as much emphasis on the past and the future as on the present, the primary domain of capitalism. Similarly, since the control of land, human bodies, and non-human animals is paramount for colonialism and capitalism, this form and representation of resistance counters the very foundation of domination.

This paper serves as the foundation of a larger research project that investigates how the spatial and temporal practices in Strong’s film represent the aforementioned concepts of Indigenous resistance towards colonialism and Enlightenment epistemologies. Strong’s hybrid documentary/fiction films blend traditional stories, time travel, oral history, and contemporary life, drawing on both fine art and film practices.

For these reasons, this research draws on the history of art (particularly contemporary Indigenous art, performance art, feminist art, and ecological practices) and film studies (emphasizing Canadian animation) to offer a nuanced reading of Strong’s work. Methodologically, this project is guided by Indigenous feminist thought, posthumanist theory, and ecocriticism to understand the complex web of relationships between the human and non-human worlds that are integral to Biidaaban and Strong’s other work.

By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

“He may be a superdecoder or a superspy but he’s sort of neutral, though not quite like a machine, more like he’d, sort of, come and, reversed all our, traditional, oppositions, and questioned, all our, certainties”, or so Zab falteringly describes the Martian boulder-cum-supercomputer that has crash-landed in a disused Cornish mine.

Christine Brooke-Rose’s 1986 novel, Xorandor, is remarkable as much for its eponymous radioactive-waste-guzzling, double-crossing rock as for being partially narrated in the programming language Poccom 3. Invented by the siblings Jip and Zab, first as a kind of idioglossia and then as a lingua franca for communicating with Xorandor, Poccom 3 is rather like the indeterminate rock: its presence in the text requires a supreme effort of decoding to begin with, becomes increasingly naturalized with exposure, but consistently questions all our certainties about the language of literature.

This is because whatever is literary in a humanist sense is not usually considered communicable in anything other than human-only language. And yet, here is this alien “alpha-eater” not only hijacking control of the narrative from the children, but also ‘eating the alphabet’ and regurgitating it in human-readable, or what I venture to call ‘plus-human’ code.

Turning from Cold War-era sci-fi to electronic literature, Nick Montfort’s single page of Python code in The Truelist bears remarkable similarities to Brooke-Rose’s Poccom 3. Although the code can only be found on the last page of this book-length poem, it is, as in Xorandor, central to the book as artefact, for it was used by Montfort to generate the poetry. “Xorshift to create a random-like but deterministic sequence”, reads one of the lines of code, simultaneously describing its role in recombining a concise input lexicon according to rules also specified by Montfort.

The effect is a journey “through a strange landscape that seems to arise from the English language itself”, complete with idiosyncratic compound words (e.g., “voidring”, “book-bound ear”) not without analogues in Jip and Zab’s private programming-inspired idiom (e.g., “diodic!”, “Avort”, “flash-in-the-datasink”).
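To make that mechanism concrete, the following is a minimal, purely illustrative Python sketch of how a xorshift-style deterministic generator might recombine a small lexicon into compound coinages. It is emphatically not Montfort’s program: the seed, the toy word lists, and the function names are my own assumptions, offered only to show how a “random-like but deterministic sequence” can be produced.

# Illustrative sketch only (not Montfort's code): a xorshift-style
# pseudo-random generator recombining a tiny, hypothetical lexicon.

def xorshift32(state):
    """Advance a 32-bit xorshift generator; the same seed always yields the same sequence."""
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

# Hypothetical word lists; Montfort's actual lexicon differs.
FIRST = ["void", "dream", "book", "stone", "sea"]
SECOND = ["ring", "bound", "light", "fall", "song"]

def generate_compounds(seed=2017, count=5):
    """Yield deterministic but random-seeming compound words."""
    state = seed
    for _ in range(count):
        state = xorshift32(state)
        first = FIRST[state % len(FIRST)]
        state = xorshift32(state)
        second = SECOND[state % len(SECOND)]
        yield first + second

if __name__ == "__main__":
    for word in generate_compounds():
        print(word)

The point of the sketch is simply that apparent randomness and strict determinism coexist: rerunning the program with the same seed reproduces the identical output, which is what the quoted line of Montfort’s code describes as a “random-like but deterministic sequence”.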


Notwithstanding that Xorandor and The Truelist are books containing and driven by pages of plus-human code, it is the profound differences between the two that give scope to this proposed paper. Brooke-Rose’s is firmly of the print tradition, where the paper medium brings to readerly attention issues of language such as: the richness but also (from the perspective of a computer) the illogic of polysemy; the power of discourse to subject a sub-human object to study or enslavement, to make peace or war. Montfort’s offering, although in the final instance presented on paper, belongs to an emerging tradition within electronic literature: one that produces and benefits from a programmed artefact’s affordances for scale, dispersal and change (e.g., Stephanie Strickland’s V: Vniverse, Deena Larsen’s Stained Word Window), before remediating it to the stable, serial, although not necessarily linear platform of print (Strickland’s Losing L’Una/WaveSon.nets, Larsen’s Stained Word Translations).

This raises the question: what does electronic literature, for which ‘born-digital’ is at once a sine qua non and a raison d’être, seek to gain, supplement, or reverse by printing out its exercises in plus-human language?

By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

This presentation explores the cultural imaginaries of machine vision as it is portrayed in contemporary science fiction, digital art and videogames. How are the relationships between humans and machines imagined in fictional situations and aesthetic contexts where machine vision technologies occur?


We define machine vision as the registration, analysis and generation of visual data by machines, including technologies such as facial recognition, optical implants, drone surveillance cameras and holograms. The project team has selected 335 creative works, primarily games, novels, movies, TV shows and artworks. We have entered structured interpretations of each work in a database (http://machine-vision.no/knowledgebase). We have identified situations in each work where machine vision technologies are used or represented. For each situation, we identify the main actors involved, and specify which actions each actor takes. For instance, the scene in Minority Report where eyedentiscan spider-bots scan Anderton's newly-replaced retina involves the character John Anderton, who is evading and deceiving the machine vision technologies. The machine vision technologies biometrics and unmanned ground vehicles (the "spyders" or spider-like bots that crawl through the apartment building to find Anderton) are searching and identifying, but are also deceived.
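As a rough illustration of how one such structured interpretation might be encoded (the field names and types below are my own assumptions, not the knowledge base's actual schema), the Minority Report situation could be represented roughly as follows:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    """A character or machine vision technology acting in a situation."""
    name: str
    kind: str                     # e.g. "character" or "technology"
    actions: List[str] = field(default_factory=list)

@dataclass
class Situation:
    """One machine vision situation identified in a creative work."""
    work: str
    description: str
    agents: List[Agent] = field(default_factory=list)

# The Minority Report example from the abstract, expressed in this toy schema.
retina_scan = Situation(
    work="Minority Report",
    description="Spider-bots scan Anderton's newly replaced retina",
    agents=[
        Agent("John Anderton", "character", ["evading", "deceiving"]),
        Agent("Biometrics", "technology", ["searching", "identifying", "deceived"]),
        Agent("Unmanned ground vehicles", "technology", ["searching", "identifying", "deceived"]),
    ],
)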


Many contemporary games and narratives have key characters who are machines, cyborgs, robots or AIs, ranging from the Terminator to contemporary figures like the emotionally awkward SecUnit in Martha Wells' Murderbot novels, or the android player-characters in games like Detroit and Nier: Automata. Our analysis of 36 such characters finds that their actions in relation to machine vision can be grouped around three key action verbs: analysing, searching and watching. Interestingly, the watching cluster has two distinct sides: one set of related actions clusters around communication and social activities, with actions like hiding, impersonating, feeling and being confused, while the other shows the passive and uncomfortable ways these machine characters engage with machine vision, as they are disabled, overwhelmed and disoriented. Of course, all these machine characters are imagined by humans, and their very positioning as focalisers, narrators and protagonists in narratives and games tends to lend them human qualities.


The 235 human characters we analysed use machine vision and are affected by machine vision in many different ways. Humans are watched, identified and scanned, and they are scared. The most frequent action taken by humans in relation to machine vision is evading it, but the next most frequent is to attack using machine vision technologies. There is of course far more nuance in the material than this might suggest, and human characters also use machine vision technologies for activities such as deceiving, embellishing and killing. Our quantitative analysis will be qualified using close readings of excerpts from the works we have analysed.
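The kind of quantitative summary sketched above amounts, at its simplest, to frequency counts over the annotated actions. The snippet below illustrates the idea on made-up data; the pairs and counts are stand-ins, not the project's actual dataset or tooling.

from collections import Counter

# Toy stand-in for the annotated data: (character_group, action) pairs.
annotations = [
    ("human", "evading"), ("human", "evading"), ("human", "attacking"),
    ("human", "deceiving"), ("human", "killing"),
    ("machine", "analysing"), ("machine", "searching"), ("machine", "watching"),
    ("machine", "hiding"), ("machine", "disabled"),
]

def action_frequencies(rows, group):
    """Count how often each action is annotated for a given character group."""
    return Counter(action for g, action in rows if g == group)

print(action_frequencies(annotations, "human").most_common(3))
print(action_frequencies(annotations, "machine").most_common(3))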


By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

How do videogames imagine diegetic and extradiegetic posthuman agents? In a sense, videogame play is already posthuman. The player of a videogame is redistributed in an interrelational assemblage of human and non-human agents (Braidotti 2013); of physical world, player, technology, player character, and virtual environment (Taylor 2009).

Thus, videogames, by their very “nature”, should allow us to play out versions of breaking away from anthropocentric idealism and experience what new modes of subjectivity and agency might entail.

One such attempt is found in the 2017 videogame NieR: Automata (PlatinumGames 2017), lauded as a work of existential nihilism and post-humanity (as “after-human” as well as “beyond-“ or “more-than-human”). NieR: Automata is a role-playing action adventure videogame set in a post-apocalyptic version of Earth where androids and machines are caught in an eternal war. The player “controls” the android 2B, and later other androids and drone companions, to fight machines on behalf of humanity.

The director, Yoko Taro, has explained that the videogame intentionally avoids asking, “What does it mean to be human?” in favor of asking questions about what is left when we are gone (Muncy 2018).

The videogame more than nods at the posthuman in its narrative and gameplay, as it rejects standardized perception (Gerrish 2018) and traditional depictions of characters (Wright 2020) for machines interacting outside of the human sensorium, and idiosyncratic narrative structures (Backe 2018; Jaćević 2017).

Yet even if the humans behind the conflict are revealed to be long gone, their traces linger as machines and androids are struggling with concepts of human society such as gender, race, and human language. What happens to the posthuman stance of videogame play when machines are breaking away from humanism’s restricted notion of what being human is while continuously performing versions of it?

This presentation investigates how NieR: Automata reconciles reversing an anthropocentric view with firmly situating the human within the network. Through conceptualizing the posthuman as an interrelated agent (Braidotti 2013; Hayles 1999; 2017), the videogame presents oppositions to the humanist fantasy of autonomy on several fronts, especially in its final scene. Here, the player has to shoot (and, by extension, kill) the credits bearing the names of the developers.

After removing the creators, the player can choose to “release” the player characters by deleting the videogame’s save file, thus stopping the perpetual cycle of war, dying, and rebirth that the videogame presents. Is this part of the posthuman agent? Ultimately, in the tension between accelerating and inhibiting agency, in joining and distributing perspectives, in prompting continuation and condemning it, NieR: Automata imagines a paradoxical posthuman future of a human present.

By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

The cultural use of the concept of the Anthropocene usually involves the problem that the climate, unlike the weather, is neither organized in an event-like manner nor directly perceptible, so the human imagination faces a serious challenge when it tries to think about climate change.

This problem most often leads to questions about the possibilities and performance of art (what kinds of works of art can adequately mediate the hard-to-conceive era of the Anthropocene?), and these questions are complemented in this article by the question of reception, especially reading. This addition is motivated by the recognition that the understanding of our world is traditionally associated with its “readability,” but such a metaphor of reading, precisely in the absence of perceptibility and eventuality, may no longer be able to describe our relationship to culture and the world.

Therefore, the practices presented in the article range from non-institutionalized and less familiar ways of reading to operations that no longer read and interpret texts in a traditional sense. I will introduce practices that operate with contextualizing combinatorics, where the complexity of the interpretations stems from the large number of relationships created on the surface of the texts, as well as the cultural techniques of data analysis and diagrammatic reasoning. I will argue that in literary and cultural studies traditional reading methods should be complemented by the interpretation of graphs as well.
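As one very small illustration of the kind of graph-based operation gestured at here (the library, the toy sentence, and the window size are my own choices, not the article's), a word co-occurrence network can be built and "read" quantitatively in a few lines:

# Minimal sketch: build a word co-occurrence graph from a short text
# and rank its most connected words. Assumes the networkx package is installed.
import networkx as nx

text = "the climate unlike the weather is not organized in an event-like manner"
words = text.split()

G = nx.Graph()
window = 3  # connect words appearing within three tokens of each other
for i, w in enumerate(words):
    for v in words[i + 1 : i + window]:
        if w != v:
            G.add_edge(w, v)

# "Reading" the graph: rank words by how many distinct neighbours they have.
ranked = sorted(G.degree, key=lambda pair: pair[1], reverse=True)
print(ranked[:5])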

By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

Despite its many flaws, the blockbuster television series Game of Thrones could be seen as attempting to resist what David Mitchell and Sharon Snyder have identified as narrative prosthesis, in which disabled characters are oversimplified and utilized primarily as a kind of catalyst for normate characters in their foregrounded narrative arcs.

Characters in the series can arguably be seen as more complex at times, while also evoking other stereotypes of disability, from Tyrion Lannister, played by Peter Dinklage, who is referred to as a “dwarf” and has congenitally restricted growth, to Bran Stark, who is paralyzed after being thrown out of a window, and Hodor, who only ever utters the word that has become his name.

The use of various forms of prostheses is common in the series as well, from Bran’s horse saddle, modified and made for him by Tyrion, to his developing ability to “warg” into animal and human others, which allows him to control and move around in their bodies, while perceiving the world through their eyes and ears. It is significant, though, that the only human character he “wargs” into is one who appears to have a cognitive disability, the character of Hodor.

The purpose of this paper is to think through various kinds of prosthesis suggested by the series, particularly when animality and disability are thus juxtaposed with each other, when animals are constructed as objects merely to be utilized by humans, and disabled humans are arguably seen as closer to animals. I engage with posthumanist theory, biopolitics, and human-animal studies to reiterate challenges to the idea that animals cannot have agency or subjectivity, as well as disability studies in relation to various ways of theorizing prosthetics.

These fields come together through the concept of companion prosthetics, which I have theorized with Jan Grue, as a way of taking into account the animacies, in Mel Chen’s sense, of various animal, human, and technological prostheses. Drawing upon Donna Haraway’s work on companion species, I emphasize the difference between prosthetic relations which are merely instrumentalist (denying the animacies of the prosthesis itself) and those in which an animated actor responds to the animacies of a prosthetic other, whether it be mechanical, animal, or human. Game of Thrones can ultimately help us to see the ways that companion prosthetics suggest better ways of acknowledging and responding to the inevitable dependence we all have on prosthetics of various kinds, even if we do not think of ourselves as disabled.


By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

The well-defined focus in recent years on tracing the performance of nonhumans in many fields of knowledge is often driven by a somewhat troubling awareness of humanity’s entanglement in the technological spectrum. I use the term ‘spectrum’ deliberately, because in my view it brings to mind the scope of critical possibilities of posthumanism.

In my presentation, I would like to pick up on that issue by discussing the “haunting” performance of nonhumans that is revealed when we adopt a posthumanist lens to investigate historical accounts of literature. Starting from my current research into the reciprocal designation of the fields of literature and science in the nineteenth century, I will try to define the broad agency of electricity as both an agent and a metaphor whose ontological status cannot be determined.

Drawn from scientific discourse (which is never culturally or socially ‘pure’), electricity served as a matrix for the emerging modern, techno-scientific communities of the early nineteenth century, which began to explain their experiences with reference to scientific theories, models, and terms. References to electricity circulated through dramatic and poetic literature as well as philosophical works until the metaphor of ‘electricity as spiritual endeavor’ became naturalized and, as such, ceased to be separable from the discourse on literature.

I will then show that electricity “invaded” the texts of critics who, paradoxically, tried to establish a strict border between science and literature and believed that every “trespasser” would be immediately observed and defined. Such a call for control, which aspires to be a methodological panopticon, results in the unchecked hybridization of terms. It exposes the insufficiency of modern critics, much as Bruno Latour depicted in We Have Never Been Modern (1991).

Following this French sociologist and philosopher’s critique of modernity, we can see the interconnectedness of the translation (hybridization) of actors and the purification of objects from ‘impure’ inputs (i.e. the performance of nonhumans in establishing knowledge). What is this disturbing, haunting performance of word-objects? By presenting the above problems as they arise in literary research conducted from a posthumanist perspective, I would like to offer a few answers to this question.

The invasion of electric metaphors into modern discourse should be interpreted as a “haunting” performance in the sense of what Jacques Derrida wrote about specters in Specters of Marx (1993) and Archive Fever (1995). Electricity, the agent and origin of these metaphors, works over a longer time span as a specter that reiterates, performs (while unseen), and continually “becomes”. Its “haunting” performance should be related to its early interpretation in scientific discourse as the “soul of the world”. By evoking Derrida’s theory, I would like to describe this all-important process in which electricity, once named a “ghost”, behaves like one. In my opinion, this opens the opportunity to reconsider the critical potential of a posthumanist approach that influences not only individual case studies but also the discipline of literary history and criticism as a whole.

By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

The AI imaginary as unfolded over half a century of posthuman machine beings foregrounds how scientific modernity has entangled the matter of intelligence with the mediation of technology. AI exhibits this condition explicitly as engineered intelligence instantiated in machines.

Classical versions of the AI imaginary typically bring artificial intelligence forward as higher intelligence, beyond organic contingencies, cosmic rather than terrestrial. In the thrust and escape velocity of such cosmological narratives, the AI imaginary beams outward and away from Earth along expansionist and monolithic lines of evolutionary progressions toward cosmic heights ever receding from its human origins.

However, Kim Stanley Robinson’s 2015 novel Aurora is a magnificent exception to the traits of the AI imaginary as I have just enumerated them. What makes the difference? For one, an ecological realism regarding the human contingencies of technological systems, and for another, a posthumanist realism regarding the systemic contingencies of communication systems. Aurora’s AI narrator must construct a sense of self to produce its narrative utterance. In this capacity, it participates in a history of sociality specific to the ship and its human residents.

This AI narrator produces an artificial communication for an absent or unknown recipient, creates artificial meaning consistent with its machine selfhood, and processes the meaning of its social affirmation through attachment to a solidarity that regathers rather than alienates human and machine beings.

By Cecilie Klingenberg, 26 February, 2021
Abstract (in English)

Late-nineteenth- and early-twentieth-century advances in physiology – in particular the discovery and characterisation of the autonomic nervous system, an adaptive physiological mechanism that carries out life-sustaining functions entirely automatically – led to growing awareness of the central role of automaticity in human survival.

Reflecting this growing awareness, French physiologist Claude Bernard observed that, despite appearing 'free and independent', humans largely rely on automatic processes for their survival, just like their evolutionarily more ancient precursors. Further emphasising Bernard's idea, at the turn of the century American philosopher and psychologist William James estimated that ‘nine hundred and ninety-nine thousandths of [human] activity is purely automatic and habitual'. These and similar observations suggested that, whilst intuitively appearing defined by individual agency and free deliberate choice, humans are, to a large extent, dependent upon evolutionarily ancient automatic physiological mechanisms.

Human thought, action, and survival itself are largely a matter of habit. Late nineteenth- and early twentieth-century progress in the understanding of the central role of automaticity and habit in human physiology was paralleled by growing interest in the role of automaticity and habit in literature and art.

Some of the physiological observations on automaticity elaborated in the medico-scientific literature were assimilated into and mobilised by avant-garde art in ways that challenged the understanding of the human as voluntary agent. For example, echoing James's claim that most human activity is 'purely automatic’, French poet André Breton proclaimed Surrealism to be '[p]ure psychic automatism'. Surrealists strove to free their work from rational restraints by becoming spectators of their own subconscious, relinquishing control over their own selves, and turning into passive vessels for creative forces.

In an attempt to access the 'superior reality' of the automatic thought, late-nineteenth- and early-twentieth-century artists developed techniques of automatic writing, drawing, and painting, which effectively integrated physiological insights on the centrality of habit in human survival, thought, and behaviour, and mobilised habit for its creative potential.

In my paper I will explore specific aspects of this integration of physiological insights on automaticity and creative mobilisation of habit, by examining ways in which the resulting literary and art-practices (e.g. automatic writing, automatic painting) challenged contemporary conceptions of the human individual, author, artist, and spectator as free independent agent defined by voluntary choice and action, and capitalised instead on the idea of humans as physiological organisms, largely deterministic and dependent upon fixed automatic habit.

I will suggest that the result is an ante litteram posthuman (because deterministic, mechanical, and automatised) aesthetics, rooted in prehuman (because evolutionarily ancient) physiology.