Leaps and take-offs
The blue sky above us is the optical layer of the atmosphere, the great lens of the terrestrial globe, its brilliant retina. From ultra-marine, beyond the sea, to ultra-sky, the horizon divides opacity from transparency. It is just one small step from earth-matter to space-light – a leap or a take-off able to free us for a moment from gravity.
Paul Virilio, Open Sky
As I read Virilio’s introduction to Open Sky (1997 [1995]), I decide to open the Google Earth app on my iPad. Sliding my forefinger over its glassy surface, I notice that I am coming closer and closer to what corresponds to my current geographical position, yet at the same time I am able to travel around the world in just a few taps and swipes on the screen. As reminiscent as it may be of David Bowie’s “Planet Earth is blue/and there’s nothing I can do”, this apparently insignificant manipulation also reminds me of Canadian astronaut Chris Hadfield’s version of “Space Oddity”, recorded inside the International Space Station, which has enabled more than 23 million people to witness Earth’s blueness through Hadfield’s camera lenses.
What all of these artefacts – videos, music, lyrics, quotations – have in common is that they are affected by a series of interface mediations, all of which, of course, can be seen and touched through the Internet. However, one may question the real significance of this touch, and why we find the idea of holding the whole world in our hands so phenomenal. If all we now need to free us from gravity is a leap or a take-off, achieved by a simple touch of the hand or a snap of the fingers, what becomes of the eye?
The intensification of research around digital media devices that require tactile/haptic functions,1 such as touch and gesture, along with efforts to increase tangibility in the Human-Machine Interface (Gallace & Spence, 2014: 229), is giving way to a whole new rhetoric of bodies and surfaces (as well as interfaces). Not only are touch and gesture anything but superficial; these “new” processes of writing and reading also tend to amplify the primacy of vision over other sensory modalities. This is a paradoxical situation that Wendy Hui Kyong Chun addresses when she describes “the current prominence of transparency in product design and political and scholarly discourse” as a “compensatory gesture”:
As our machines increasingly read and write without us, as our machines become more and more unreadable, so that seeing no longer guarantees knowing (if it ever did), we the so-called users are offered more to see, more to read. The computer – that most nonvisual and nontransparent device – has paradoxically fostered ‘visual culture’ and ‘transparency’. (2005: 2)
Moreover, as ubiquitous computing becomes a naturalized part of our lives, the opacity/transparency paradox only grows stronger, precisely because it is bound to the very devices that ubi-comp renders invisible.
There is also evidence of a growing number of digital literary works that extend an avant-garde, countercultural tradition of questioning visual culture dating back to the beginning of the last century, channelling their countercultural and metamedial poetics towards the aforementioned phenomena. These “technotexts” (to borrow Katherine Hayles’ term) may or may not involve multi-touch devices such as tablets and smartphones. Nevertheless, those that do often self-reflexively question the specificities of these digital devices and media, as well as the apparatuses enclosing them.2 I argue that such “machimanipulations”, manipulations of the device by both humans and machines, tend to defy the general assumption of surfaces as something superficial, recovering Deleuze’s idea of surfaces as double-fold and profound (1990: 4-11). If, in fact, we are now living in a “Glass Age” governed by a culture of transparency, to what extent are these “transparent” glass touch surfaces becoming an opaque looking glass?