machine reading

By Jill Walker Rettberg, 19 April, 2016
Publication Type
Language
Year
License
CC Attribution
Record Status
Abstract (in English)

This essay argues that the sensing activities of smart objects and infrastructures for device-to-device communication need to be understood as a fundamental aspect of the rhetorical situation, even in the absence of human agents. Using the concept of exigence, most famously developed by Lloyd Bitzer, this essay analyzes the asymmetrical rhetorical dynamics of human-computer interaction and suggests new rhetorical roles for reading machines. It asserts that rhetorical studies has yet to catch up with electronic literature and other digital art forms when it comes to matters of the interface and the sensorium of the machine. It also claims that the work of Carolyn Miller epitomizes the conservative tendencies of rhetorical study when it comes to ubiquitous computing, even as she acknowledges a desire among some parties to grant smart objects rhetorical agency. Furthermore, when traditionally trained rhetoricians undertake the analysis of new media objects of study, far too much attention is devoted to the screen. In the logic of rhetorical theory, cameras are privileged over scanners, optics are privileged over sensors, and representation is privileged over registration. However, new forms of rhetorical performance by computational components may be going on independent of human-centered display. By interpreting works of electronic literature by Amaranth Borsuk, Caitlin Fisher, and Judd Morrissey, it posits a possible framework of sensing exigence.

Creative Works referenced
By Johannah Rodgers, 30 October, 2015
Publication Type
Language
Year
Record Status
Abstract (in English)

This thesis explores how various computer programs construct poems and addresses the ways several critics respond to these computer-generated texts. Surprisingly, little attention has heretofore been paid to these programs. Critics who have given the matter attention usually focus on only one of the myriad programs available, and more often than not such scholarship concludes with a disparagement of all such projects. My work reexamines computer-generated poetry on a larger scale than previous studies, positing some conclusions about how these texts affect contemporary theories of authorship and poetic meaning.

My first chapter explicates the historical debate over the use and limits of technology in the generation of text, studying similarities between certain artistic movements and computer poetry. This historical background reveals that the concept of mechanically generated text is nothing new.

My second chapter delineates how the two main families of computer poetry programs actually create these texts. Computer programs combine existing input text, aleatory functions, and semantic catalogues, a combination that provides insight into how humans both create and interact with these programs. At the same time, this study illustrates the difficulty of defining the level of intention and influence individuals exert on the textual product; these texts therefore challenge our traditional notions of authorship and the value of poetry.

My third and final chapter argues that contemporary literary theory and poetics create the conditions under which computer-generated poetry can pose as a human product. The success of these programs in deceiving readers about the origins of the text becomes clearer in the results of a survey I conducted, in which the respondents were fooled by the machine more often than not. The possibility of machine-created text masquerading as human art threatens many critics, who quickly dismiss the process and its results as non-poetic; I conclude that, since the computer complicates foreknowledge of origin in some contemporary poetic forms, this intrusion by the machine prompts us to reconsider how we traditionally value and interpret poetry.
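The "formulaic" mechanism the abstract describes, combining existing input text, aleatory functions, and semantic catalogues, can be illustrated with a minimal sketch. The templates and vocabulary below are invented for illustration; no program surveyed in the thesis is reproduced here.

```python
import random
import re

# Semantic catalogue: word lists keyed by class (illustrative, not from any
# actual poetry generator).
CATALOGUE = {
    "adj": ["aleatory", "formulaic", "hidden", "mechanical"],
    "noun": ["machine", "poem", "reader", "author"],
    "verb": ["generates", "conceals", "echoes", "dismantles"],
}

# Existing input text: slotted line templates.
TEMPLATES = [
    "the {adj} {noun} {verb} the {noun}",
    "a {noun} {verb}; the {adj} {noun} remains",
]

def fill(template: str, rng: random.Random) -> str:
    # Replace each {slot} independently with a random word of that class.
    return re.sub(r"\{(\w+)\}",
                  lambda m: rng.choice(CATALOGUE[m.group(1)]),
                  template)

def generate_poem(seed: int, lines: int = 4) -> str:
    # The aleatory function: a seeded random source, so the "chance"
    # operations are repeatable.
    rng = random.Random(seed)
    return "\n".join(fill(rng.choice(TEMPLATES), rng) for _ in range(lines))

print(generate_poem(seed=7))
```

The division of labor in this sketch also mirrors the thesis's point about authorship: the template author, the catalogue compiler, and whoever supplies the seed all shape the "poem," yet none of them writes its lines.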

Description in original language
Abstract (in original language)

See above.

Pull Quotes

"Formulaic poetry generating programs produce texts influenced by two individuals: the programmer and the operator. One could argue that they are one and the same, since by inputting data such as subject and gender, the operator enters into the role of programmer and "finishes" the instruction set. It would follow that in such a case, the label "programmer" now applies to a role and not to a specific individual. Much to the possible disappointment of the Bill Chamberlains and Chris Westburys of the programming world, authorship now disintegrates into a true author "function," not applicable to identifiable individuals. Yet somehow this creates a nagging sense of inaccuracy precisely because of the type of language computer programmers use."

Description (in English)

Text ‘n FX is a DJ mixer for text: a prototype machine developed in the 1980s for the emerging practice of hip-hop. Instead of a DJ mixing two records together, the designers of the device proposed the idea of a Text-Jockey (TJ). The TJ acted as a machine-assisted poet, mashing up lyrics read from two floppy disks in real time using statistics, Natural Language Processing (NLP), and cut-up techniques from experimental literature. The product never made it to market, but it exists today as a media-archaeological curiosity.

(Source: ChercherLeTexte website)
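The cut-up technique the description references can be sketched as follows. This is a hypothetical illustration of a TJ-style mix (the function names and fragment size are invented), not the device's actual algorithm, which is not documented here.

```python
import random

def cut_up(text: str, size: int = 3) -> list[str]:
    # Fragment a lyric source into short word-runs, cut-up style.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def tj_mix(track_a: str, track_b: str, seed: int = 0) -> str:
    # "Mix" two sources by pooling their fragments and shuffling them
    # under a seeded (repeatable) aleatory order.
    rng = random.Random(seed)
    fragments = cut_up(track_a) + cut_up(track_b)
    rng.shuffle(fragments)
    return " / ".join(fragments)

print(tj_mix("the rhythm of the street keeps talking",
             "words on disk spin back as sound", seed=1))
```

Every word of both sources survives the mix; only their order and adjacency are recomposed, which is the essence of the cut-up.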

Screen shots
Image
Brendan Howell performs Txt'N'Fix at Le Cube Numérique, Paris, 25 September 2013
By Scott Rettberg, 5 November, 2012
Language
Year
Record Status
Abstract (in English)

Language is the hidden scaffolding of networks, applications, and web sites. It is minified and monetized in ways that are often occluded from the everyday user’s experience. From the user’s point of view, the interaction is innocuous: language is used for labels and explanations. A few words are typed into an empty field and thousands of related results appear instantly. A simple search, an email to a friend, a unique phrase – all easily logged, monetized, and indexed. This is the world of invisible participation.

Our panel is interested in language on the Internet: how it is created, by whom, where it exists, and how it is used. Three examples: Google reads our emails, garners information from our personal messages, and uses that profiling strategy to select “relevant” ads. It then displays those ads on the screen next to the very emails from which the information was initially taken. Facebook and other social media platforms use similar methods of securing and storing data — data that is paradoxically private and public, and all personal. Further, crowd-sourced encyclopedias like Wikipedia are shaping the way we read, learn, and think. Language is what links all of these sites together. All of the sites’ underlying organization and structures have been built to follow the logic we ourselves employ in using language. “Robots” read content, algorithms interpret it, and databases memorize it. The impact of this process is no longer confined to the Internet, but has reached beyond it into our everyday lives.

Attachment
Database or Archive reference
Description (in English)

Google reads our emails, garners information from our personal messages and uses that profiling strategy to select “relevant” ads. It then displays those ads on the screen next to the very emails from which the information was initially taken.

American Psycho was created by sending the entirety of Bret Easton Ellis’ violent, masochistic, and gratuitous novel American Psycho through Gmail, one page at a time. We collected the ads that appeared next to each email and used them to annotate the original text, page by page. In printing it as a perfect-bound book, we erased the body of Ellis’ text and left only chapter titles and constellations of our added footnotes. What remains is American Psycho, told through its chapter titles and annotated relational Google ads.

We were most curious how Google would handle the violence, racism, and graphic language in American Psycho. In some instances the ads related to the content of the email; in others they were completely irrelevant, either out of time or out of place. In one scene, where first a dog and then a man are brutally murdered with a knife, Google supplied ample ads regarding knives and knife sharpeners. In another scene the ads disappeared altogether when the narrator makes a racial slur. Google's choice and use of standard ads unrelated to the content next to which they appeared offered an alternate window into how Google ads function: the ad for Crest Whitestrips Coupons appeared the highest number of times, next to both the most graphic and the most mundane sections of the book, leaving no clear logic as to how it was selected to appear. This "misreading" ultimately echoes the hollowness at the center of advertising and consumer culture, a theme explored in excess in American Psycho.

(Source: Mimi Cabell's project page for American Psycho)

By Scott Rettberg, 21 March, 2011
Language
Year
Publisher
License
All Rights reserved
Record Status
Abstract (in English)


Guest lecture at Duquesne University.

Pull Quotes

The crucial questions are these: how to convert the increased digital reading into increased reading ability and how to make effective bridges between digital reading and the literacy traditionally associated with print.

When it came to digital reading, however, they were accustomed to the scanning and fast skimming typical of hyperreading; they therefore expected that it might take them, oh, half an hour to go through Jackson’s text. They were shocked when I told them a reasonable time to spend with Jackson’s text was about the time it would take them to read Frankenstein, say, ten hours or so.

Reading has always been constituted through complex and diverse practices. Now it is time to rethink what reading is and how it works in the rich mixtures of words and images, sounds and animations, graphics and letters that constitute the environments of twenty-first-century literacies.

Content type
Author
Year
Language
Platform/Software
Record Status
Description (in English)

In this piece the user can type whatever they wish into the application. The application takes this input and displays it in a more or less conventional manner. However, it does this in a number of different languages, including English, Greek symbols, the decimal ASCII codes that map keyboard keys to typography, the binary codes that equate to these, Morse code, and Braille. In all cases except that of the Braille, the material is remembered and displayed back to the user. All material written is also saved to the user's hard drive as it is typed in, so that they may keep a permanent record of what has been written. The saved file is called "LossText" and you should be able to find it in the prefs or plug-ins folder of the browser you are using to run the application. You can also locate it using the FIND command of your computer.

As in other pieces, such as 'This is not a Hypertext' and 'Book of Books', this work also auto-resizes the text as required. If you choose to keep writing for long enough, the text you are writing will eventually be reduced to 1-point font size, rendering all the different codes visually equivalent and equally unreadable (except for the Braille). Nobody has yet developed a Braille computer display, so for those who would prefer to read their Braille with their sense of touch, I regret I am unable to facilitate this option currently.

(Source: Artist's description from the project site)
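The parallel renderings the description enumerates (decimal ASCII, binary, Morse, Braille) can be illustrated with a small sketch. The mapping tables here are abridged illustrations assumed for this example, not the work's actual code.

```python
# Abridged lookup tables for illustration only.
MORSE = {"a": ".-", "b": "-...", "c": "-.-."}
BRAILLE = {"a": "\u2801", "b": "\u2803", "c": "\u2809"}  # Unicode Braille Patterns

def encodings(ch: str) -> dict:
    # Render one typed character in the piece's parallel "languages".
    code = ord(ch)
    return {
        "char": ch,
        "decimal": code,                # decimal ASCII code for the key
        "binary": format(code, "08b"),  # 8-bit binary equivalent
        "morse": MORSE.get(ch.lower(), "?"),
        "braille": BRAILLE.get(ch.lower(), "?"),
    }

print(encodings("a"))
```

As the piece's title suggests, each re-encoding is lossless in principle yet progressively less legible in practice, which the work dramatizes by shrinking all of them toward unreadability together.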

I ♥ E-Poetry entry
Screen shots
Image