Context-Awareness and Meaning – An Interview with David Rokeby

Interviewer: Ulrik Ekman.

This interview is the second in a series of four conducted in February 2014. All four interviews originated in David Rokeby’s presentation for a conference in Copenhagen, Denmark, held by the Nordic research network “The Culture of Ubiquitous Information” supported by the NordForsk research organization. The interviews constitute important parts of Rokeby’s contribution to the final publication project in this network, the anthology titled Ubiquitous Computing, Complexity and Culture (Routledge, 2015).

Ulrik Ekman: If computing in the early 21st century, notably after 2005, becomes more ubiquitous or pervasive and accompanies multitudes of human lives and their activities, one might say that it also becomes more relevant to draw upon notions of ‘mixed reality.’

In the events taking place in this kind of situation in network societies, we interact not only in traditional physical environments but also in virtual environments, and the two begin to mix or become superimposed in various ways, depending on the setting and the activities at stake.

This signals a departure from earlier 1990s notions of a more or less transcendent virtual reality and hyperreality, but it also poses problems, since it offers only a relatively unclear indication of a more embodied virtuality as well as of some kind of hybridity or ‘mixing.’

The researchers Paul Milgram and Fumio Kishino tried early on to define “mixed reality,” and Ronald Azuma has done the same with respect to “augmented reality.”[1] Here “mixed reality” denotes, first, the assumption of a virtuality continuum as the major contemporary research paradigm to be actualized and, second, no less than four key aspects constituting mixed reality along that continuum: the real environment, augmented reality, augmented virtuality, and the virtual environment. Azuma, in turn, thinks of an “augmented reality system” as one that combines the real and the virtual, is interactive in real time, and is registered in three dimensions.

How do you see your work on interactive media art in relation to mixed reality and augmentation, and do you think of these notions in a different way?

David Rokeby: I find it interesting that people exploring mixed and augmented reality seem to feel comfortable using the term “reality” as though it were a stable and external thing.

Here is a rough and sloppy list of the layers that I think make up the continuum of experience:

  • real environment (stranger and less familiar and accessible than the virtual)
  • sensed environment (in which the dimensions of human experience are defined by the particular characteristics of our senses)
  • perceived environment (in which we group and order and filter the sense data)
  • projected environment (in which we enforce a coherent and stable reading on the perceptions based on memory and accumulated habit, idea, intent, and need)
  • illuminated environment (in which our coherent and stable reading is disrupted by a new idea / an artwork / a new experience, yielding a new reading)
  • constructed environment (in which we build, decorate, shape, design our surroundings to more closely match our desires and needs)
  • responsive environment (in which elements of our environment respond to us according to behaviors (e.g., pets, lovers, robots))
  • information environment (in which forces not fundamentally defined by the apparent laws of physics contribute to our experience (tour guide, subway map, newspaper, computer))
  • cognitive environment (in which we enter a space with or without reference to these other layers of reality / environment and allow ideas, concepts, theorems and archetypes to interact according to some sort of potentially arbitrary but consistent rules.)
  • virtual environment (a kind of vacuum in which artful constructions of synthetic sensory (or neural) input successfully engage with our existing perceptual, cognitive and memory systems)
  • fantasy environment (which is not necessarily bound by any rules except that we must be able to resonate with what takes place there based on our existing pool of memories and tendencies of interpretation.)

I like to be aware of all of these levels when I make an interactive work. In an interactive work, your primary medium is not the computer / software / sensors. It is the audience, and the many layers (including those above) which contribute to the nature of their experience of the world and of the work.

To step back and be more direct, Very Nervous System was always something that one could comfortably call augmented reality or mixed reality.[2] It always existed in a physical space that had its own ‘reality’ and the installation added layers to that already existing context. More importantly, it played off the presence of that already existing context. In some cases, as when it has been presented on a public street, without marking, boundaries, or explanations, the tensions between the familiar behavior of the space and this unexpected additional layer are the most interesting and poignant parts of the work.

I was nearly guided into a life as a math professor. In my first year in university, it became clear to me that I was only interested in the space where idea and theorem met the ‘real’ world and everyday experience, and I think that that blended space is really the place we live. We are always in mixed reality. For me the big step was to admit this and commit to being aware of it. In a sense, my interactive installations are specifically designed to create a productive and illuminating mix, in which the mix is explicit and can be experienced more consciously than is perhaps usual.

UE: One could say, then, that your installations involve interactivity with audiences situated in and existing in mixed realities. Actually, I have repeatedly been struck by the ways in which parts of your work could be approached as precursors of the kinds of technocultural development we now witness when interacting with mixed reality projects as defined in parts of ubiquitous computing.

The mixed realities at stake here involve technical systems that are supposed to have a certain ‘context-awareness.’ Not just in a basic positional sense, as in the use of GPS, but considerably more advanced than this. Numerous researchers and technicians are at work today on fleshing out different kinds of such technical context-awareness.[3]

This may be a question of monitoring and interacting with natural environments and the climate. It can be a matter of urban design and traffic control as in smart cities and the u-cities in South-East Asia. It can be the context-awareness proper to smart homes, or perhaps the kinds of context-awareness to be found in pervasive healthcare projects in which multitudes of computational units help take care of the elderly.

Already at the end of the 1980s, at least ten years before ubicomp systems began being implemented, you seem to have been at work on exploring media art projects that included engagements with technical context-awareness. Body Language[4] and Very Nervous System obviously drew upon technical systems contextually aware of the presence, movement, and gestures of human bodies. Later installations such as Gathering,[5] Dark Matter,[6] and International Feel[7] continued to work with technics demonstrating something akin to artificial perception systems, especially as regards visual, auditory, haptic, and kinetic dimensions of sensation. The Giver of Names[8] and n-cha(n)t[9] even proceeded to include a decidedly linguistic, poetic, and proto-semantic dimension in such technical context-awareness.

How do you see your long-standing interest in this in relation to contemporary technocultural developments, including the forging of interrelations among human and technical context-awareness?

DR: My early projects such as Reflexions[10] and Body Language were not motivated conceptually by the idea of context awareness per se. I was trying to have the computer algorithms express themselves in and attend to human-scaled and human-filled space. I wanted the computer program to happen in physical space and on my body. This required what you are describing as context awareness.  But I guess I was after human awareness of things that were normally inaccessible — like the operations of code. Perhaps more properly I was striving for a mixing of domains – human and computed.

This points back to the question of augmented reality a bit. But my aim was more precisely balanced. I was trying to create a meeting place between embodied consciousness in its native habitat, physical space, and virtual, computed structures of possibilities. To be more precise, I guess I would avoid the term ‘augmented’ as it implies that you are simply adding to reality. ‘Hybrid experience’ might be closer to my aim. In this perhaps one could find a parallel to Myron Krueger’s Videoplace idea, in which the installation creates a space that is neither real nor virtual and is frontier-like in the sense that there is not yet an established culture, established normative behavior, etc.[11] This was Myron’s utopian vision, and though I did not meet him or read his book until the late 80s,[12] similar thinking was involved, though less segregated.

I did not see the hybrid experience as being so isolated from cultural space, but certainly some of my early utopian hopefulness about interactivity came from the notion that this experience would be transcendent, in the sense that it would have the potential to lift us out of entrenched patterns of behavior… That it would be in some manner liberating, but, importantly for me, not fantasy or escapism. I guess I was hoping to be able to profoundly change the terms of engagement in physical, human, social, and political space, and as a result open new ways of resolving intransigent issues that we seemed stuck in as a civilization. The early 80s was a very intoxicating time for interactivity, before the hype of the late 80s, and I, at least, had some very high hopes… 

UE: You seem to indicate certain changes in your work from the 80s on. How do these changes map out in terms of your approach to human and technical context-awareness?

DR: I have come to see context as a major roadblock along the road to computational intelligence.

It is fine and important to create systems that take contextual information into account, but the challenge is that the human sense of context is deeply layered. It seems as though consciousness itself is perhaps a deep and recursive awareness of context. We are not everything, as we might feel as a baby. We learn to understand layers of contextual otherness as part of the way we learn to grasp ourselves within this onion-skin of contexts.

It is an excellent challenge to try to imagine a piece of software that can autonomously imagine a context encompassing the one it has been given. Computer programs generally have an in-built sense of scope, and the limits of that scope are determined by the program and its variables. It is difficult to imagine a program deducing that the current context within which it is assessing a problem or performing a task is insufficient to the problem or task, and then autonomously imagining a broader context within which to operate. Put simply, computers as we currently imagine them are terrible at thinking outside the box. Even machine-learning systems like neural nets or self-organizing feature maps have a reach limited by the initial choice of which sensors are attached to the system and which available parameters are attended to. We might consider this a problem of ‘artificial phenomenology.’
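The fixed scope described here can be made concrete in a minimal Python sketch (the sensor names and functions are hypothetical, purely for illustration): the program can flag inputs it was never built to attend to, but it cannot enlarge its own scope.

```python
# A minimal sketch (hypothetical names throughout) of fixed scope:
# the program can report inputs it was not built to attend to, but
# it cannot widen its own scope on its own.

SENSORS = ["temperature", "motion"]  # scope frozen at design time

def assess(readings):
    known = {k: v for k, v in readings.items() if k in SENSORS}
    ignored = sorted(set(readings) - set(known))  # context it cannot use
    if ignored:
        # The best it can do is report the mismatch; enlarging SENSORS
        # requires a human to rewrite the program.
        return f"inputs out of scope: {ignored}"
    return f"assessed within scope: {known}"

print(assess({"temperature": 21.5, "motion": 0.0, "humidity": 0.8}))
```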

What was exciting about playing around with direct human interaction in this context was that the system, encompassing the human and the technical apparatus, seemed to couple the computer’s computational and information-recall abilities to human inventiveness and creativity. The limitations of each part of the system are compensated for by the distinctive abilities of the other. The operational cycle of a true hybrid system includes both human and machine. The feedback loop itself has human cognitive flexibility and creativity as well as the computer’s computational speed, logical rigor, and information access. In this sense we are perhaps actually describing augmented consciousness.

These were my utopian dreams of the early 80s at the birth of Very Nervous System. The 90s for me was mostly an extended hangover. It was not that the utopian idea was impossible, but simply, as with most Utopias, that it did not sufficiently take into account human nature. The kind of thing I was imagining requires a long and delicate incubation, I think. The attractions of total knowledge and total control are too compelling (as we currently see with the NSA). The necessary delicate balance between human and machine is easily disrupted. 

The Giver of Names served as a bit of a hangover cure. It helped me see that it was still possible to consider the computer as a philosophical prosthesis, and to nurture a small personal utopia, hopefully more likely to endure!

UE: Works later than Very Nervous System often demonstrate a more complex and in-built technical context-awareness. They tend to incorporate more pre-programmed control and tend to reduce the free variability of interactivity on the side of human interactants engaging with the installation. This is much in keeping with some of your own statements elsewhere concerning the call for delimitation and reduction due to certain experiences with the audience in the early works.

A later work such as n-cha(n)t might be called quite unique in this respect. This community of technical givers of names communicating more or less meaningfully among themselves almost appears as individual automata and a sociality qua automata of automata. Here interior and programmed technical complexity is perhaps at its highest level in your work, even more than in the twenty or so musical ‘personalities’ at play in VNS. Arguably, this move towards reduction of the complexity of choices faced by human interactants should put one at ease, or evoke boredom, neutrality, unawareness, ignorance, etc. But, somewhat paradoxically, something like the opposite might just as well ensue. Human interactants are set free in such a way and to such a degree that uncertainty, fear and anxiety, anger and aggression are unleashed in the face of the other and otherness, perhaps not least due to the uncanniness of strongly autonomous automata endowed with the (human) capacity to produce and recognize meaningful languages.

In your work on installations, how do you proceed with respect to the internalization versus the externalization of the complexity of decisions respecting interaction with the other and otherness?

DR: n-cha(n)t is not so much an outlier as it might seem. n-cha(n)t is definitely the installation of mine with the most emergent / collective / autonomous behavior. Very Nervous System does not come close… was always much more constrained and human-centric. n-cha(n)t related to the human participant mostly as a source of interesting noise… I, as the artist, use them (the human visitors) as witnesses, and they often get frustrated by the fact that they feel under-affirmed.

I should mention also Petite Terre, which I did with the French artist Erik Samakh back in the early 90s.[13] We decided to work together because we both felt that there was a serious problem with the ‘user as God’ paradigm of most interactivity. In Petite Terre we created a miniature world with embedded quasi-natural sound behaviors (crickets, frogs, birds, etc. working in responsive choruses). The piece responded to itself, building, when uninterrupted, into a full Amazonian chorus of sounds. But if anyone approached the world, you would hear the sound of stones dislodged and a splash or two, then dead silence. It would take several minutes of being undisturbed before the virtual creatures would gradually start singing again.

This is very much a precursor to n-cha(n)t and was very much not user-focused. These works are both the result of me being tired of creating mirrors, and of living up to the audience’s expectation of affirmation and clear relationship.

UE: It is interesting that this led in the direction of internal technical complication and reduction of human undecidability.

DR: I turned, somewhat unwillingly, towards reduction around 1987, as I followed the rich vein of interaction that existed in the sweet spot, and this was because I was intentionally pursuing the question of how we humans respond to the kind of scenario I was proposing.

In 1990, I told myself I was tired of creating mirrors (“Transforming Mirrors” was written in 1991),[14] and tired of creating limited interactions that were largely predefined by me.

The Giver of Names and then later n-cha(n)t were projects that sought to understand what kind of experience a computer could have, and more importantly, eventually, how wide the coverage of the interface could be (meaning how well the interface could map each unique input scenario to a unique output.) 

n-cha(n)t is a bit of an end of the line kind of piece. It was the ultimate product of a certain train of thought and experimentation. The whole Giver of Names / n-cha(n)t project dominated 10 years of my life.

I did not turn away from it… I did not reject it. I did turn my attention elsewhere. Part of this is the task of broadening my reach and my understanding. I want to keep growing as an artist and as a human, and it seemed in 2002 that the best options for this were to be found in addressing the art context directly. It is also true that I love n-cha(n)t deeply as a work, but people often find it very frustrating (not just because they are being rejected).

Each work has a unique internalization/externalization and viewer active/viewer passive balance, based on what my intent was, what worked best, etc. It was a major step for me in 1989 to make my first non-interactive piece (Liquid Language).[15] I did also experiment with invisible interactions, where the function of the interaction was to simply improve the odds that the viewer would have the kind of experience I was hoping they would. I created n-cha(n)t at the same time as Machine for Taking Time,[16] and they are extremely different.

UE: I certainly agree with your earlier remark that context-awareness appears to be a major roadblock on the path towards context-aware machinic intelligence.

But I think I can hear a slight hesitation in your remarks. You withdraw a bit from the notion of further development of technical context-awareness for human-oriented computing (e.g., as in the current goals for ubicomp, pervasive computing, and ambient intelligence). This withdrawal seems to take place partly in favor of a delicate balance between human and machine and partly in favor of the complexity of human context-awareness (e.g., what you call its deep layerings and recursiveness). Nonetheless, current technical and cultural developments move on in the other direction, and although it is much too early to say where this will lead or stop, I wondered whether we could stay with this issue.

Since I was first trained as a computer scientist and worked for a number of years as a programmer and systems planner, and since you have ventured into much the same kinds of territory in your work from the very beginning, we both know that extended variability and flexibility, deep layerings, and recursion pose no uncircumventable problems for programming. So, technical context-awareness can go there with the right kind of effort and care. Things might get considerably more difficult once we reach some of the other traits mentioned in your gestures towards human context-awareness: invention, creativity, and self-adaptation to contexts.

I was wondering, however, why you did not mention another part of the roadblock on the path towards intelligent technical context-awareness: meaning. Perhaps I was surprised because this is something you have opened up in technically inventive and aesthetically quite sublime ways – in The Giver of Names and n-cha(n)t.

In your work on installations, how do you approach the rather difficult divide between technical context-awareness qua registration and production of data akin to informational sensations and, on the other hand, something very often attributed more or less singularly to humans: context-awareness qua registration and production of sense, that is, something meaningful on a semantic, semiotic, and perhaps linguistic plane?

DR: I think my hesitation stems from a kind of ambivalence… It could also be creeping conservatism as I get older… The optimal design path now is a recursive one in which we let the computer design the computer that designs the computer that designs… There is an accelerating momentum which will tend to generate a new design agenda. Design aims will be born out of the recursive momentum itself. There is certainly something exciting about that, and there is an undeniable utility, but it causes goal-shifting at a faster pace than we can absorb. I guess I am an unabashed humanist when it comes down to it. The human agenda may be awkward, inefficient, and contradictory, but I must admit that I am still attached to it. I guess I want to be sure that what we call “human-oriented” computing is oriented to our benefit and not merely aimed at us. Cats display mouse-oriented behaviors that are not beneficial to mice.

We can certainly agree, however, that increasingly context-aware systems are the trend.

And so, of course, this leads to the ‘meaning’ question… Yes, the question of how meaning is constructed, felt, communicated, is certainly one of my main preoccupations. 

Can we create autonomous systems that are able to grasp the (human) significance of a certain confluence of data, sensor readings, etc.?

UE: This seems to be precisely the kind of difficult question at stake in The Giver of Names and n-cha(n)t!?

DR: I think my most profound experience of meaning-making came when I spent several days refining an installation of n-cha(n)t in a gallery in Canada. The piece was a few years old at the time, and I was experiencing it with the benefit of distance for perhaps the first time. I found that the installation’s behavior ‘moved’ me in a way that I had not expected and could not quite figure out. I spent some time pondering this and came to what felt like an interesting conclusion.

There are seven computers in this installation. Each has a current idea or word it is focused on, and each wanders its internal knowledge base somewhat aimlessly, following relational links between words. They utter grammatically correct phrases or sentences referring to their current focus, creating a rambling stream of connected utterances. Each computer sends its current item of interest to the others over a network, and this communal input subtly guides each machine’s own wandering. If left alone, they approach and then attain a consensus. This results in a sort of chanting, where they tend to all say the same things. Each system has a microphone, and the utterances of visitors to the gallery are analyzed and act as distractions, pulling individual machines away from the group chanting behavior. This disruption often causes the community to completely fragment into individual, unrelated commentary.
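The dynamics described here can be sketched in miniature. The following toy simulation is a reconstruction from this description, not Rokeby’s code, with an invented eight-word knowledge base: seven machines drift toward a communal focus, and an injected ‘utterance’ fragments the consensus.

```python
# A toy reconstruction of the described dynamics: each machine wanders
# a small word-association graph, is nudged toward the communal focus,
# and is pulled away by a visitor's "utterance" injected as noise.
import random
from collections import Counter

ASSOCIATIONS = {  # a hypothetical miniature knowledge base
    "water": ["river", "rain"], "river": ["stone", "water"],
    "rain": ["cloud", "water"], "stone": ["river", "hand"],
    "cloud": ["rain", "sky"], "sky": ["cloud", "bird"],
    "hand": ["stone", "bird"], "bird": ["sky", "hand"],
}

def step(focuses, distraction=None):
    consensus = Counter(focuses).most_common(1)[0][0]
    new_focuses = []
    for focus in focuses:
        if distraction and random.random() < 0.3:
            new_focuses.append(distraction)   # a visitor pulls this machine away
        elif random.random() < 0.6:
            new_focuses.append(consensus)     # drift toward the communal focus
        else:
            new_focuses.append(random.choice(ASSOCIATIONS[focus]))  # keep wandering
    return new_focuses

focuses = random.choices(list(ASSOCIATIONS), k=7)
for t in range(20):
    focuses = step(focuses, distraction="hand" if t == 10 else None)
    print(t, focuses)  # watch consensus form, fragment, and re-form
```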

My first impression was that the moments before the consensus was attained, as each machine apparently groped towards the others, were among the most beautiful things I had ever been responsible for. You would hear the consensus forming as a spatial play of language. But what was more surprising to me was that while the sentences and phrases were clearly nonsensical, the gathering to consensus had a feeling of meaningfulness. The passing back and forth of what were, to me, nonsensical groups of word-tokens was establishing what felt like a shared frame of reference. And it felt like the process of building a shared frame of reference that allowed for substantive communication among the group was a very significant component of the mechanism by which meaning is generated.

What did this mean to me? It made me think about meaning-generation as a communal act, requiring the careful construction of shared reference points and the ability then to torque this construction of references in a way that is expressive to everyone that shared them.

UE: This is very interesting in itself, just as it reinvokes the kind of strikingly beautiful emergence of ‘sense’ one witnesses in the company of this installation. But it also momentarily puts in the background human ways of making sense of a context, just as it parenthesizes how computers might make sense of how humans make sense of a context…

DR: In terms of computers coming to understand what is meaningful to or about us humans in a shifting field of information and data, a lot depends on the comprehensiveness of the information field and the degree to which it maps successfully to what we actually find meaningful. Both The Giver of Names and n-cha(n)t sometimes say strings of words that I might consider meaningful, but this is largely a consequence of my own ability to project into those words, and it is very inconsistent. Random combinations of words in grammatically correct patterns are also sometimes wise and profound.

To some degree, we humans are susceptible to formulaic meaning. Tear-jerker movies use well understood narrative constructions to induce us to cry. But part of the problem of getting machines to understand how to comprehend and convey human meaning is that we are not terribly good at stepping outside our built-in meaning-making systems enough to ‘explain’ them comprehensively to a machine (which is kind of what programming is). 

Another approach is to give machines a wide playing field but also give them sensors of some sort that let them know when we feel meaning, pleasure, etc., and to allow them to make quasi-random responses and tune themselves over time. But clearly the danger is that we would perhaps be creating the perfect sycophant…
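A minimal sketch of such a tuning loop, with all names and numbers invented for illustration, might look like this: the machine tries quasi-random responses and reinforces whichever ones a stand-in ‘meaning sensor’ rewards.

```python
# A minimal, hypothetical sketch of the tuning loop imagined above:
# quasi-random responses, reinforced by a stand-in "meaning sensor".
import random

responses = ["story", "question", "silence", "song"]
weights = {r: 1.0 for r in responses}

def felt_meaning(response):
    # Stand-in for a real sensor of human response (affect, attention...).
    return {"story": 0.8, "question": 0.5, "silence": 0.1, "song": 0.6}[response]

for _ in range(200):
    choice = random.choices(responses, weights=[weights[r] for r in responses])[0]
    weights[choice] += felt_meaning(choice)  # reinforce whatever pleases us

print(weights)  # the system drifts toward what we reward: a perfect sycophant
```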

UE: What did you do in your own installation in this regard?

DR: In The Giver of Names and n-cha(n)t I am depending in some significant part on the very aggressive sense-making machine in each of us. Any statement that is syntactically digestible will send us looking for meaning. In The Giver of Names, there is a shared referent… the object(s) that both you and the system are looking at. This makes it even easier for a human observer to dig for sense in the system’s utterance. I think of this as a sort of stereoscopy… since human and machine are engaged in somewhat parallel activities, it is easier for the viewer to tease out a sense of ‘depth’ in the system’s response to the objects, despite the lack of any real intelligence on the machine’s part.

I have always taken advantage of the human intelligence that the viewer makes available to me… This leads us to part of the quandary that an artist involved in a research-like process finds him- or herself in. Unlike classic scientific research, artistic research is inherently speculative and is more about generating questions than answering them. I play with stuff and note what surprises, troubles, or thrills me. Then I tease out that thing which caught my attention so that I can share the experience with others in the form of an artwork, to let them digest the experience for themselves.

You could say that I feel comfortable inducing questions with methods that I might question were I using them to give answers. This has its dangers of course. Joseph Weizenbaum’s Eliza passed my own personal Turing test when I first encountered it at 13 years old.[17] I could not figure out how the person on the other end was typing so fast. Weizenbaum’s intent of course was to show that you could create a demonstrably unintelligent program that could perhaps pass the Turing test. Unfortunately it was so successful that the right wing in the USA started suggesting that deploying Eliza in disadvantaged neighborhoods would be an excellent way to reduce mental health care costs.

In The Giver of Names and n-cha(n)t I did my absolute best at trying to give the system effective perceptual, linguistic, and associative abilities. My programming skills are not unlimited… others, I am sure, could do much better. But in fact, the point with The Giver of Names was simply to allow me to play at AI research. To let me get my fingers dirty so that I understood what it means to practically pursue the creation of machine intelligence, to understand the kinds of questions it would draw one to ask. It was of course very serious and committed play… But it was a questioning process, not so interested in finding an answer.

UE: On its own, this associative and questioning approach would lean towards an almost exponential rise in undecidability as to the sense of the context, and a massive complexification on the side of computational intelligence. Anyone who has met with The Giver of Names and n-cha(n)t will have recognized that other and much more reductive and non-randomizing things are going on…?

DR: One thing that I did in the programming of The Giver of Names was to do my best to achieve maximum coverage. That is to say, I tried to create a responsive system that provided as close to a one-to-one mapping of input to output as possible, so that there would be as many non-random responses as there are possible inputs… and so that there would nonetheless be a coherence to the responsive system. One important thing I tried to do wherever possible was to defer decision-making (on the part of the machine). Robust responses to complex input require continuous feedback and feedforward mechanisms, and during this feeding back and forth, ideally, nothing is crystallized by making a definitive decision until absolutely necessary. This sort of deferral is fundamental to the relative robustness of our perceptual systems. The Giver of Names tries to maintain as much of the nuance in the input data as possible. This goes against the “forces of gravity” on a computer, as each decision you make reduces the dimensionality of the problem and makes each successive programming stage much easier.
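The principle of deferral can be illustrated with a small sketch (an illustration under stated assumptions, not the installation’s actual code): each stage passes forward weighted candidate readings instead of committing to one, and nothing is crystallized until an answer is finally demanded.

```python
# A small illustration of deferred decision-making: stages pass forward
# weighted candidates, and only the final step collapses to one answer.
def sense(_input):
    # Early stage: keep every plausible reading, with weights.
    return {"round": 0.6, "angular": 0.4}

def associate(hypotheses):
    # Later stage reweights but never discards, preserving nuance.
    bias = {"round": 1.2, "angular": 0.9}
    return {label: w * bias[label] for label, w in hypotheses.items()}

def decide(hypotheses):
    # Only at the last possible moment do we collapse to one answer.
    return max(hypotheses, key=hypotheses.get)

print(decide(associate(sense(None))))  # -> "round"
```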

Another meaning-related experience with the programming of The Giver of Names came as I finished the software mechanisms that implemented the grammar generation. It became clear to me what a great multitude of decisions we make every time we formulate a sentence. Some decisions are clear (do we add “no” to make this negative?). Others are much more subtle questions of tone. (Do we express this actively or passively? Which of the adjectives that describe a certain property is most appropriate to the intended tone of the sentence?) Each of these decisions has an impact on the shades of meaning that the sentence unfolds in the reader’s or listener’s mind.
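A toy generator, again hypothetical rather than the installation’s grammar code, shows how such decisions pile up: each parameter below shifts the tone or shade of meaning of the same underlying sentence.

```python
# A toy grammar generator (hypothetical): each keyword argument is one
# of the many small decisions that shape the tone of a sentence.
def render(subject, verb, obj, negative=False, passive=False, adjective=None):
    obj_phrase = f"{adjective} {obj}" if adjective else obj
    if passive:
        aux = "is not" if negative else "is"
        return f"The {obj_phrase} {aux} {verb}ed by the {subject}."
    verb_phrase = f"does not {verb}" if negative else f"{verb}s"
    return f"The {subject} {verb_phrase} the {obj_phrase}."

# The same content, rendered under different decisions:
print(render("camera", "watch", "object", adjective="red"))
print(render("camera", "watch", "object", passive=True))
print(render("camera", "watch", "object", negative=True, adjective="reddish"))
```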

UE: I wonder how you approach and evaluate the interrelation of human and technical context-awareness and sense-making here. The Giver of Names and its community in n-cha(n)t constitute another kind of individuation of intelligent context-awareness, but this is also incredibly close and intimately well-known to us. We are noisy and disturbing sense-making individuations to it, but also deeply incorporated in it in so many ways. Is some kind of co-individuation at stake?

DR: After repeated exposure to The Giver of Names, I started to find the peculiarities of its language usage more and more natural-sounding, as though it were speaking an unfamiliar dialect rather than making mistakes. This was partly anthropomorphic projection and partly that I was starting to have a more comprehensive sense of the peculiar subjective world view of the system. I was coming, in a sense, to a point where I knew it well enough to trust it, to assign some credibility to its observations even if they were in conflict with my own or even seemingly nonsensical. I guess at this point we find ourselves considering the question of ‘Otherness’ and cross-cultural communication.

After exhausting myself trying to understand how machine intelligence would inherently be different from human intelligence, I found myself at this point in my relationship to The Giver of Names thinking about the real value of alienness. If we acknowledge an alien viewpoint (recognizing that it is different from our own but valid within some other subjective cognitive environment), then we can begin to build an environment of trust in which the sharing of meaning is possible. 

This challenges notions of transparency and user-friendliness in interfaces, perhaps suggesting that a computer system should not attempt to mask its alienness, so that we can forge a relationship with it on shared terms, specific to neither the human nor the computer world. I am a long-time Macintosh user, but I am fascinated and somewhat compelled by the idea that user-friendliness might be dangerous. How do we trust something that we know relates to us through a simulation specifically designed by a corporate entity to appeal to us? On the other hand, how could we not be susceptible to it?

When I talk about achieving a delicate balance between human and computer, I am talking about something that should take time. The situation we are in is that the computer is changing many orders of magnitude faster than we are, despite the remarkable ability of children to adapt to new interfaces. How do we establish meaningful relationships with something that is shape-shifting all the time?

Notes:

[1] Milgram, Paul, and Fumio Kishino. "A Taxonomy of Mixed Reality Visual Displays." IEICE Transactions on Information and Systems E77-D, no. 12 (1994): 1321-29. Also, Azuma, Ronald T. "A Survey of Augmented Reality." Presence: Teleoperators and Virtual Environments 6, no. 4 (1997): 355-85.

[2] For further information on Very Nervous System, see http://www.davidrokeby.com/vns.html

Presentations of all Rokeby’s installations can be found here: http://www.davidrokeby.com/installations.html

[3] For the reader interested in the developments in technics and media art in this respect, see these sources: Bolchini, Cristiana, Carlo A. Curino, Elisa Quintarelli, Fabio A. Schreiber, and Letizia Tanca. "A Data-Oriented Survey of Context Models." SIGMOD Rec. 36, no. 4 (2007): 19-26.

Dey, Anind K. "Understanding and Using Context." Personal and Ubiquitous Computing 5, no. 1 (2001): 4-7.

Dourish, Paul. "Seeking a Foundation for Context-Aware Computing." Human-Computer Interaction 16, no. 2-4 (2001): 229-41.

———. "What We Talk About When We Talk About Context." Personal and Ubiquitous Computing 8, no. 1 (2004): 19-31.

Gellersen, Hans W., Albrecht Schmidt, and Michael Beigl. "Multi-Sensor Context-Awareness in Mobile Devices and Smart Artifacts." Mobile Networks and Applications 7, no. 5 (2002): 341-51.

Loke, Seng. Context-Aware Pervasive Systems.  Boca Raton, FL: Auerbach Publications, 2006.

Moran, Thomas, and Paul Dourish. "Introduction to This Special Issue on Context-Aware Computing." Human-Computer Interaction 16, no. 2 (2001): 87-96.

Paul, Christiane. "Contexts as Moving Targets." In Throughout: Art and Culture Emerging with Ubiquitous Computing, edited by Ulrik Ekman, 399-418. Cambridge, Mass.: MIT Press, 2012.

Schilit, Bill N., Norman Adams, and Roy Want. "Context-Aware Computing Applications." In Workshop on Mobile Computing Systems and Applications: Proceedings, December 8-9, 1994, Santa Cruz, California, edited by Luis-Felipe Cabrera and M. Satyanarayanan, 85-90. Los Alamitos, Calif.: IEEE Computer Society Press, 1995.

Schmidt, Albrecht, Michael Beigl, and Hans-W. Gellersen. "There Is More to Context Than Location." Computers & Graphics 23, no. 6 (1999): 893-901.

Strang, Thomas, and Claudia Linnhoff-Popien. Location- and Context-Awareness: First International Workshop. Lecture Notes in Computer Science, 3479. Berlin: Springer, 2005.

[4] Cf., Rokeby, David. "Body Language."  http://www.davidrokeby.com/body.html.

[5] Cf., Rokeby, David. "Gathering."  http://www.davidrokeby.com/gathering.html.

[6] Cf., Rokeby, David. "Dark Matter."  http://www.davidrokeby.com/Dark_Matter.html.

[7] Cf., Rokeby, David. "International Feel."  http://www.davidrokeby.com/int_feel.html.

[8] Cf., Rokeby, David. "The Giver of Names."  http://www.davidrokeby.com/gon.html.

[9] Cf., Rokeby, David. "n-cha(n)t."  http://www.davidrokeby.com/nchant.html.

[10] Cf., Rokeby, David. "Reflexions."  http://www.davidrokeby.com/reflex.html.

[11] Cf., Krueger, Myron W., Thomas Gionfriddo, and Katrin Hinrichsen. "Videoplace: An Artificial Reality." SIGCHI Bull. 16, no. 4 (1985): 35-40.

[12] Cf., Krueger, Myron W. Artificial Reality.  Reading, Mass.: Addison-Wesley, 1983.

[13] Cf., Rokeby, David. "Petite Terre."  http://www.davidrokeby.com/pt.html.

[14] Cf., Rokeby, David. "Transforming Mirrors: Subjectivity and Control in Interactive Media." In Critical Issues in Electronic Media, edited by Simon Penny, 133-58. Albany: State University of New York Press, 1995.

See also Rokeby, David. "Transforming Mirrors."  http://www.davidrokeby.com/mirrors.html.

[15] Cf., Rokeby, David. "Liquid Language."  http://www.davidrokeby.com/ll.html.

[16] Cf., Rokeby, David. "Machine for Taking Time."  http://www.davidrokeby.com/machine.html.

[17] Cf., Weizenbaum, Joseph. "Eliza: A Computer Program for the Study of Natural Language Communication between Man and Machine." Communications of the ACM 9, no. 1 (1966): 36-45.