The following is a reconstruction of a paper I wrote in 1990-1991. This article was subsequently published in:
Glanville, Ranulph, and Gerard de Zeeuw (eds.), Interactive Interfaces and Human Networks, Amsterdam: Thesis Publishers, 1993 [ISBN 90-5170-199-3], pp. 119-135.
© 1993 Ranulph Glanville and Gerard de Zeeuw
Interactional models are of increasing concern in the design of information technology (IT), and they will become even more important as systems are progressively targeted to support groups rather than individual users. Most models applied to date derive from a rationalistic, structuralist tradition which, although easily translated into computing specifications, is being supplanted in linguistics. The trend toward more action-oriented views in linguistic research, coupled with the trend toward collective support systems, requires that new viewpoints be explored. I propose that the autopoietic theory of Maturana and Varela provides a source of potentially useful insights. Deriving from their theory are two models of social systems which are employed to analyze IT systems and the design processes by which they are constructed. These models are employed to suggest ways for tailoring collective support systems to the character of the workspace into which they are to be inserted.
Collective support systems are those which are designed to facilitate group activities. Since social (group) networks are implemented and maintained by communicative interactions, design models must be based on models of interaction. 'Natural' language interactions are of increasing importance in the ongoing debates over the proper relations between computers and their users. Pragmatic issues of context (Silverstein, 1988) and 'language games' (Winograd & Flores, 1986; Ehn, 1988) are invoked in critiques of prevailing cognitivistic (Woolgar, 1987) or rationalistic (Winograd & Flores, 1986) traditions. Conversational analogies have been applied for some time in describing human-computer interplay, and Suchman suggests '...we might productively take human-machine interaction to be an extreme form of resource-limited communication, applying essentially the same methods to its analysis as those used in the study of human conversation.' (1990, p. 43)
Conversational models have not yet been widely integrated into design practice, although much work addresses their potential. We should exercise caution in evaluating and incorporating such models into support systems. One reason pertains to theory: there is no universally accepted model of language or discourse. Probably due to convenience, previous natural language processing (NLP) research concentrated on structuralist models deriving from the same period and intellectual traditions as computing itself (e.g., Morris, 1938). While such structuralist models are demonstrably incapable of capturing contextual aspects of human interactions (Silverstein, 1987), alternatives emphasizing pragmatics rather than syntax or semantics have not yet been widely applied. Searle's (1969; 1975) speech act theory is the only such alternative reflected in products to date, but it is itself a structuralist formalization of Austin's (1962) original insights into pragmatics. Curiously, the nascence of speech act theory in computing applications coincides with its decline in linguistics; it has been criticized as limited (Ballmer & Brennenstuhl, 1981; Hancher, 1979; Levinson, 1983), and it is being supplanted by more empirical discourse analyses (Bowers et al., 1988). Furthermore, speech act theory conflicts with other ideas supported by some of its IT proponents -- in particular Wittgenstein's notion of open-ended 'language games' (Whitaker, 1991).
A second reason for caution pertains to practice: a more enlightened approach is no guarantee of escape from the inflexibility of current practices. Simply projecting interactionally oriented analyses onto novel structured models may perpetuate rather than cure problems when those models are subject to the same limitations they were intended to overcome (Woolgar, 1987). Searle's speech act theory is purportedly flexible and highly reflective of natural interactions through capturing pragmatic aspects of collective activity (Flores & Ludlow, 1981; Flores et al., 1988). However, systems embodying a speech act framework have been criticized for imposing undue restrictions onto workspace interactions (Carasik & Grantham, 1988), and some researchers have elected to seek models more consistent with the aforementioned discourse analyses (Bowers et al., 1988).
The theory of autopoiesis, developed by the Chilean biologists Humberto Maturana and Francisco Varela, has engendered a considerable body of literature, outlining a phenomenological and epistemological framework applicable to fields as diverse as linguistics, systems theory, management, sociology, law, and family therapy. Autopoietic theory is a promising candidate for the badly needed common substrate connecting disparate points espoused by proponents of IT alternatives (Whitaker, 1991). I would like to sketch some aspects of this theory relevant to interaction (in general) and their implications for human-computer interaction (HCI). Many (if not all) of the points put forward will not be new to the reader; they are consistent with recently nascent viewpoints of 'user-centered design' (Norman & Draper, 1986) and 'cognitive engineering' (Rasmussen, 1986; Norman, 1987). The novelty lies in the fact that one (autopoietic) perspective generates these points as necessary derivatives, while proceeding from a distinct phenomenologically-based stance. For the sake of brevity, I must proceed under the assumption that the reader is familiar with the theory and its attendant terminology at a level concomitant with the introductory literature (Maturana & Varela, 1980 or 1987; Varela, 1979).
A key tenet of autopoietic theory is that the nervous system is organizationally closed. Ongoing behaviors, which to an observer seem linked to the external 'world', are enacted solely with regard to the closed state of the nervous system; any reference to entities in the environment is therefore indirect. This ascribes a status of artificiality to the means by which an observer can reflect or interact about the medium, and it implies that cognitive systems can only interact via their descriptions -- hence always at least one step removed from the phenomena they may naively believe to portray or convey. In an autopoietic framework, interaction cannot be a process of transferring discrete symbols back and forth, as it is typically portrayed in information theory (Shannon & Weaver, 1949) via a 'conduit' (Reddy, 1979) or 'pipeline' (Maturana & Varela, 1987) metaphor. Instead, language is a venue for action -- a coupling among two or more actors undertaken within a '...domain of descriptions [which serves] as a metadomain that exists only in a consensual domain in reference to another domain.' (Maturana, 1978, p. 48) The primary function of language is therefore the mutual orientation of conversants within such a consensual domain. Maturana reinterprets language as the archetypal illustration of a human consensual domain; in fact, he labels all such interactional domains as linguistic. This permits him (1975; 1978) to subsume types of 'communication' not exhibiting finite lexical, syntactical, and semantic elements; it also provides a conceptual base relevant to a broad range of interactors, human and otherwise -- subsuming the stereotyped interchanges typical of human-machine interactions.
But how does Maturana account for seemingly standardized lexical units and syntactic conventions? He claims such a question is biased in presuming such conventions are persistent manifestations of some separate language system. This presumption of an a priori, systematized communications schema has been all too easily adopted in computing, where command interaction is conducted via such formal patterned codes. '...(T)he superficial syntax can be any, because its determination is contingent on the history of consensual coupling and is not a necessary result of any necessary physiology...' (Maturana, 1978, p. 52). Structuralist approaches to natural language are therefore inherently inaccurate, even though regularities in linguistic usage may give an impression of some immutable system in operation. '(T)he 'universal grammar' of which linguists speak as the necessary set of underlying rules common to all human natural languages can refer only to the universality of the process of recursive structural coupling.' (Ibid.)
This shift from participants' manipulations of mutually held coding schemata to behavioral coupling is radical, insofar as it does not determine meaning from form. 'For an observer, linguistic interactions appear as semantic and contextual interactions. Yet what takes place in the interactions within a consensual domain is strictly structure-determined, interlocked concatenations of behavior. In fact, each element of the behavior of one organism operating in a consensual domain acts as a triggering perturbation for another.' (Ibid., p. 52) What may appear to an observer as a situation in which the 'listener' is deterministically influenced by the 'speaker' -- a 'semantic coupling' (Maturana, 1975, pp. 326 - 327) -- is valid only within the domain of descriptions for that observer.
This provides the first clue as to how the autopoietic viewpoint treats the use of electronic media: there is no such thing as 'direct' cognitive engagement with the world -- it is always mediated by referential descriptions. As such, there is no requirement that these descriptions conform precisely to their referents in the world. Hallucinations, dreams, and the like must be accorded a degree of cognitive 'validity' on a par with any other phenomena (Maturana & Varela, 1980). It is this fact which underlies the demonstrated power of direct manipulation interfaces (Shneiderman, 1982; 1983) and the potential power of virtual realities (Kelly, 1989; Rheingold, 1990). In effect, it also explains the variety of ways in which humans interact among themselves -- all social systems can be seen as virtual realities.
This brings us to the second point on which autopoietic theory affects HCI: any presumptive interpretation of 'language' in terms of structured coding schemata is a fiction projected by an analytical observer onto interactional behavior. It is certainly true that much of human interaction is regularized to the point that some 'grammar' serves as a convenient descriptive device. It is not, however, justifiable to believe that all interactional behavior is symptomatic of conversants' internal manipulation of such 'grammars'. This has not always been recognized in HCI, where highly stereotyped symbol sets with fixed representational mappings (e.g., Morse code, Unix commands) are exchanged between a human and a mechanical partner (e.g., a computer), in which a formal 'grammar' is functionally embedded. The effect of symbol interchange on the machine is deterministic with regard to that 'grammar' and is effected through its physical and functional configuration. This is in stark contrast to the human, for whom identical states of the machine may (e.g., in different contexts) induce wholly distinct neurostructural (cognitive) states. Such artifacts (machines plus their determining 'grammars'), no matter how stereotyped or stylized, are never sole determinants of the user's cognitive state(s) -- they are, in effect, ongoing sources of 'stimulation' which are at most conventionally interpreted via goal-directed associations. This is well illustrated by the compulsive, even hypnotic, engagement afforded by the better computer games.
An autopoietic analysis leads to the conclusion that disjunction between the behavioral domains of humans and computers is unavoidable. This is of extreme pertinence to the state of current HCI research. The operational states of both the human nervous system and the digital computer are commonly described as structured composites, resulting in a transference of descriptive features from one to the other -- e.g., the human information processing (HIP) paradigm in psychology and the computational metaphor of cognitive science (Cherniak, 1988; West & Travis, 1991). This structured view, though warranted for the computer, is certainly not warranted for the human. Any apparent similarities between them lie solely within the same domain as their individual descriptions (i.e., that of an observer), and they are illusions deriving from ontological confusion: the structurally-described coupling ascribed to human behavior is only a description imposed upon manifest reality, while the allegedly analogous behavior in a digital machine is a manifestation realizing a description framed entirely in terms of constituent elements and rigidly formal relations among them.
This functional boundary irrevocably divides the individual user's cognitive domain from the consensual domain within which she shares task actions with others. Acceptance of this disjunction leads us to some preliminary conclusions. First, the representational surface at which a user engages an information system need not have a direct mapping onto the consensual appearance of the task domain being supported. In other words, users may accomplish tasks (e.g., controlling production processes) through interfaces portraying something quite distinct from any feature of that task (e.g., game-like interfaces). This disjunction between interface representation and functionality is the basis for applying metaphors to interface design (Laurel, 1986).
We are witnessing a shift of focus in computer-based support systems from individual end users to groups of such users. This is the basis for interest in computer-supported cooperative work (CSCW) (De Michelis, 1990) and 'organizational interfaces' (Malone, 1985). There is a coalescing consensus that the key success factors for such collective support systems are of a social or organizational nature, and that the design of collective support systems must incorporate the expertise and insights from the social sciences and other fields (e.g., Moran & Anderson, 1990). We must accordingly shift our viewpoint from individual ergonomics and cognitive dynamics toward workplace social groups and interaction dynamics.
The points developed above (regarding relations between a single human and a support system) ignore the consensual domain(s) among humans in a workplace. However, shifting from task-directed applications (i.e., single-user-to-system relations) to interaction-directed applications (i.e., user-to-user-via-IT relations), we can address this expanded scope with a corollary: the representational surface at which users engage an information system need not have a direct mapping onto those users' consensual interactional domain. This applies in two distinct ways. First, the representational surface through which any interactor communicates with another need not bear any (representational) correspondence to an observer's description of the (consensual) domain in which they interact. For example, email could be exchanged via an arcade-game format. Second, the individual representational surfaces through which any two users interact need not be identical. To extend the preceding example, the arcade-styled email system could be accessed via multiple game facades.
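This corollary can be sketched concretely. The following illustration is not from the original argument and all names in it are hypothetical; it simply shows one shared message exchange (the consensual medium) rendered through two distinct, non-identical representational surfaces, echoing the arcade-styled email example:

```python
# A hypothetical sketch: the same underlying message exchange rendered
# through two different representational surfaces (facades).

from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    text: str


class MailStore:
    """The shared medium: the record of exchanged messages."""
    def __init__(self):
        self.messages = []

    def post(self, msg: Message):
        self.messages.append(msg)


class PlainFacade:
    """A conventional mail-style surface onto the exchange."""
    def render(self, store: MailStore) -> list:
        return ["From %s: %s" % (m.sender, m.text) for m in store.messages]


class ArcadeFacade:
    """A game-styled surface onto the very same exchange."""
    def render(self, store: MailStore) -> list:
        return [">>> %s fires: '%s'" % (m.sender, m.text) for m in store.messages]


store = MailStore()
store.post(Message("ann", "meeting at 10"))
print(PlainFacade().render(store)[0])   # From ann: meeting at 10
print(ArcadeFacade().render(store)[0])  # >>> ann fires: 'meeting at 10'
```

The point of the sketch is only that neither surface is privileged: both are conventional mappings onto the same interactional record, and neither need resemble the task domain itself.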
In addressing collective support systems, the appropriate scope of interest is the entire workspace -- a set of people (and resources) who through their interactions accomplish some task. The focus of IT support is taken to be the coordination of these people and their interconnected activities. We will in effect be treating the workspace as a social system, and the effectiveness of autopoietic (or any) theory will depend on the theory's treatment of such social systems. Although originally a view of individual phenomenology, autopoietic theory has been extended to address social systems. There have been two distinct lines of descent from Maturana and Varela's own views. One emphasizes the formal aspects favored by Varela (1979), seeing social systems as systems in themselves. The other emphasizes the phenomenological aspects favored by Maturana (1980), viewing social systems as emergent networks of interaction among individuals. Both views are well-developed, but due to their recency neither has yet been explicitly applied in modelling workspaces for IT system design.
I shall begin with the application of autopoiesis to social systems (in general) and the legal system in particular by Niklas Luhmann (1988). Luhmann's work explores the possibility that social systems are themselves autopoietic, starting from his own 'general theory of social systems' and its assertion that '(s)ocial systems can only reproduce themselves by (always self-referential) communication.' (1988, p. 16) He posits communicative acts as the defining components of a social system, describing the system in terms of operational characteristics independent of the specific participants. Luhmann's usage of autopoiesis is confined to the abstract social system, with no reference to the participants or observers as autopoietic themselves.
Luhmann (1988) offers an illustration of such an autopoietic social entity by applying his framework to the field of law. He portrays a system of laws as necessarily exhibiting closure with regard to referential links, both within and without the codex. Within the body of law, terms (referents) are interconnected with regard only to each other; without, those referents must be linked to real world events, persons, and objects. As such, the legal system is a unity defined by its persistent organization, simultaneously 'closed' in terms of organization among self-referenced items 'pointing' to actual subjects, yet 'open' in terms of the links made between internal referents and world objects. Luhmann terms this 'normative closure' and 'cognitive openness' (1988, p. 21), based on the closed self-referential nature of the norms induced by law, versus the potential for observers to map actual phenomena onto those norms via referential linkages.
In contrast, the German sociologist Peter M. Hejl (1980; 1984) offers a 'bottom-up' view of social systems, incorporating more of the phenomenological aspects of autopoietic theory. He accords primacy to the individual, defining a social system to be '...a group of living systems which are characterized by a parallelization of one or several of their cognitive states and which interact with respect to these cognitive states.' (1984, p. 70). This definition explicitly constrains this parallelization to participants' cognitive domains, grounding his vision of social domains in individual phenomenology and mutual interaction among the individuals thus described.
In other words, what has since Durkheim been considered a stable or evolving structural entity (i.e., society as a unit object of which individuals are merely members) is redefined as an emergent effect of individuals' mutually held consensus. Hejl defines social systems neither in terms of their composite identity nor in terms of the individual participants, but as syn-referential -- '...constituted by components, i.e., living systems, that interact with respect to a social domain. Thus the components of a syn-referential system are necessarily individual living systems, but they are components only inasmuch as they modulate one another's parallelized states through their interactions in an operationally closed way.' (1984, p. 75)
Syn-referentiality allows an autopoietic view of interaction which accounts for social domains in a manner fundamentally different from earlier approaches. To delineate social systems as independent unities is inherently erroneous because properties are thereby projected onto the social system as a monolithic whole, without regard for the individually-realized properties of the participants. Although Hejl claims that a social system can exhibit operational closure, he frames such closure (as well as its resultant effects) with strict regard to the autopoietic nature of the participants (1984). Hejl's perspective on social systems is therefore entirely consistent with Maturana and Varela's emphases on individual phenomenology and their warning that social systems are to be considered exclusively from that perspective (Maturana, 1980; Maturana & Varela, 1987).
Hejl's syn-referential approach is more consistent with the stated positions of Maturana and Varela; it is the more epistemologically and ontologically well-grounded; and it is supportive of the phenomenological and interactional positions embraced by alternative IT adherents. On the other hand, Luhmann's viewpoint (hereafter called sys-referential) is admittedly useful if we are willing to take one step away from autopoietic theory's 'ontological ground zero' (i.e., from the observer to something apparently observed); it is more readily embraced by those already inculcated in prevailing structuralist tradition; and it maps very nicely onto advanced IT applications of the day, particularly knowledge-based systems.
The disjunction between these viewpoints demarcates many dilemmas in IT today. Just as it is impossible to simultaneously address a social domain as a whole and as a collection of participants, it is impossible to simultaneously analyze an IT support system as a formalized system and as an interactional milieu. Any Luhmann-like application of autopoietic theory to entire IT support systems can only be undertaken with the explicit proviso that any results may not be informative regarding users and relations among them. Any Hejl-like 'bottom up' view is constantly forced to qualify itself by reference to the eyes of some observer, thus undermining the degree of certainty with which work tools and procedures may be specified, constructed, and installed.
Luhmann's emphasis on communicative acts connected into an organizationally closed unity maps readily onto a vision of (e.g.) rules, data, and messages maintained in a computerized base. Human participants form an 'interface', drawing linkages between key referential points in the system and external objects or events. The relations among key referential points are specified wholly within the organizationally closed system; once links are drawn between world entities and those points, specification of productions over those entities follows. Competence is defined with regard to conformance to the structured mappings among key referential points. The primary user interaction is between individual humans and an IT artifact serving as active agent for some phase(s) of the task at hand. The interactional vehicle is Robinson's (1989) 'formal level' of language. The design metaphor most consistent with this view is the tool. Current IT products which suggest themselves as general examples are databases and expert systems.
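The sys-referential pattern just described can be caricatured in a few lines of code. In this sketch (every rule, referent, and binding is hypothetical, introduced only for illustration), internal referents relate solely to one another, while human-supplied bindings link them to world events; productions then follow mechanically within the closed system:

```python
# Hypothetical sketch of a sys-referential rule base: internal referents
# point only at other internal referents (organizational closure); humans
# supply the links between referents and world events (the 'interface').

rules = {
    # each internal referent implies another internal referent
    "overdue": ("implies", "penalty"),
    "penalty": ("implies", "notice"),
}

# the human 'interface' role: binding an internal referent to a world event
bindings = {"overdue": "invoice unpaid for 30 days"}


def derive(referent: str) -> list:
    """Follow the closed chain of internal implications from a referent."""
    chain = []
    while referent in rules:
        _, referent = rules[referent]
        chain.append(referent)
    return chain


# once 'overdue' is bound to a world event, productions follow mechanically
if "overdue" in bindings:
    print(derive("overdue"))  # ['penalty', 'notice']
```

Note that the chain of consequences is fixed entirely within the rule base ('normative closure'); only the bindings, supplied from outside, connect it to the world ('cognitive openness').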
Table 1: Comparison of Luhmann's and Hejl's Models of Social Systems
ISSUE:                        Luhmann's Model              Hejl's Model
                              (Sys-Referential)            (Syn-Referential)
-----------------------------------------------------------------------------------
Focal aspects:                Formal aspects               Phenomenological aspects
Focal issues:                 (closure, autonomy)          (observer, consensual domains)
Basic unit of analysis:       Entire social system         Individuals
Elementary constituent:       Communicative act            Individual participants
Status of social system:      Autonomous and autopoietic   Emergent and syn-referential
Main functional interaction:  Human-to-system              Human-to-human
Interactional vehicle:        Formal level of language     Cultural level of language
CSCW becomes:                 CScw                         csCW
IT analogies:                 Expert systems, databases    Conference systems, hypermedia
Now let us consider Hejl's vision of social systems. The workspace (and its IT embodiment) is seen as emerging from a network of interactions among participants. Humans determine the relations among referential points, which are subject to ongoing negotiation. An IT artifact appropriately serves as an intermedium for conducting acts by which interactors orient themselves to each other. Competence is defined with regard to performance in cooperative interactions. The primary user interaction is among humans, via an IT artifact which is at its most reified a passive reflection of the interactional network's state. The interactional vehicle is Robinson's (1989) 'cultural level' of language. The design metaphor most pertinent is that of a communications medium (Whitaker & Östberg, 1988; Whitaker, 1991). The current IT products which suggest themselves as analogies are hypermedia and conferencing systems. The two positions are summarily compared in Table 1.
Fundamental to autopoietic theory is the phenomenology of the observer, who brings forth her world by distinguishing entities from the milieu. The contrasts between Luhmann's and Hejl's social system prescriptions derive directly from contrasts in their perspectives: to Luhmann, the social system is a whole whose character is independent of its members; to Hejl, the social system is a continual fabrication whose character emerges from the interactors. An observer may adopt either perspective on a workspace, but she cannot maintain both simultaneously. Viewing any set of interactors (as a 'set') is to adopt the sys-referential viewpoint; to take a syn-referential position would, in effect, be to enter into the network of engagements as a participant without regard to the network as a discriminable entity.
Is there a resolution to this dilemma? One approach is to simply select one of the alternatives, as in the divergence of computer-supported cooperative work (CSCW) research into differential foci on computer support (CScw) and cooperative work (csCW) (Whitaker, Östberg & Essler, 1989). With regard to HCI, work continuing under rationalistic / cognitivistic traditions still attempts to subsume everything at once -- either describing the information system as a human-like participant or describing the human user as a mechanical unit. The former is not justified given the limited sophistication of our most 'intelligent' systems, and the latter is not justified on the grounds that it is dehumanizing and degrading. Another approach is to accept the disjunction, but embrace both perspectives as necessary and complementary facets of collective activity, as in Robinson's (1989) 'double-level languages'. Let me sketch three issues fundamental to pursuing such a balanced view.
Second, the model does not cover everything. When attention turns from the model in isolation to the model within a context (user-to-model context to evaluate usability; task domain-to-model context to conduct analysis), a sys-referential approach becomes opaque. Again, the Woods et al. (1990) study illustrates this. Their discussion of expert system usage is wholly within a context of 'joint man-machine systems'. From this vantage point (machine + user together), they describe dysfunctions in terms of 'control' -- a concept which has meaning only in a context where the 'controller' and the 'controlled' are jointly subsumed.
The sys-referential perspective cannot address usability without considering the interactivity of machines and users (i.e., a syn-referential evaluation). Due to this observational shift, the domain of analysis is distinct from the domain of the problem, and often neither is isomorphic with the domain of prescribed solution(s). Cognitivists sometimes do no better than to throw the burden of guilt onto designers (Norman & Draper, 1986) or confront issues within a framework wherein the user is reduced to a system component (Rasmussen, 1986; Robertson, Zachary, and Black, 1990).
On the other hand, a sys-referential perspective cannot be completely abolished where the workspace and/or the designed artifact is addressed via collective representation, model, etc. Regardless of the interactions leading to the collection and synthesis of opinions, there will inevitably be some common reference point generated. Such common references are sys-referential -- generalized, abstract depictions of the issue(s) and reifications of the patterns of interaction emergent from the syn-referentially realized design team social system. In any case, we are discussing the construction of computer-based tools -- reifications of a formal model strictly defined in machine language and strictly interpretable by the CPU at run time. The artifact itself is therefore addressable as 'system' linked (in practice) to, but not dependent upon, the individual phenomenologies of its users -- a sys-referential automaton analogous to Luhmann's model of the legal system.
The essential point is then to delineate the scope of sys-referentiality applied in a design setting. Support tools should not constrain the development and/or continuation of syn-referentially realized workspace social networks. These networks cannot be captured in design models; they can only be allowed for through recognizing and delineating boundaries on the work model implemented in a collective support system. Within those systemic boundaries, support artifacts can function as referentially closed units, surrounded (outside those boundaries) by human users. This is a more enlightened form of Luhmann's model, ascribing well-specified functions to the closed systems and leaving the rest to the humans. It is (by definition) a sys-referential model, but it does not subsume the humans within the system itself. Such an approach circumscribes support systems' functionality and the topology of their interfaces within the same context where (e.g.) evaluations of 'control' can be done.
The goal for interface development should then be to increase the 'plasticity' of the machine's domain of behaviors. The machine would thereby be subject to structural change (modifications to the knowledge base and/or procedures) based on interaction with the user. Specific means for implementing this would range from (e.g.) users overriding machine behaviors (interfaces cast in roles of 'assistants') to users interactively programming the knowledge base (software being intended as a user 'toolbox' rather than as a hermetically sealed artifice). These approaches have, of course, already been promoted for some time; the Woods et al. study recommends corrective measures of just this type. The novelty here is that application of autopoietically-inspired social system models (1) yields these remedies as necessary conclusions rather than arbitrary judgments and (2) provides one framework for evaluating group interfaces' initial applicability (i.e., workspace analysis) and their resulting efficacy. Furthermore, by distinguishing sys- / syn-referential positions assumed (as well as other distinctions in referential domains), autopoietic theory offers the means to analyze and critique those evaluations.
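A minimal sketch may make the intended 'plasticity' concrete. The illustration below is hypothetical throughout (the class, method, and behavior names are my own, not drawn from any cited system); it shows an 'assistant'-style agent whose behavior repertoire can be overridden by the user at run time, rather than being sealed at design time:

```python
# Hypothetical sketch of a 'plastic' machine partner: its repertoire of
# behaviors is subject to structural change through interaction with the
# user, instead of being fixed as a hermetically sealed artifice.

class Assistant:
    def __init__(self):
        # default, designer-supplied behavior for routing documents
        self.behaviors = {"route": lambda doc: "default-queue"}

    def act(self, verb: str, doc: str) -> str:
        """Perform a named behavior on a document."""
        return self.behaviors[verb](doc)

    def override(self, verb: str, fn):
        """The user overrides or extends the behavior repertoire."""
        self.behaviors[verb] = fn


a = Assistant()
print(a.act("route", "urgent memo"))   # default-queue

# the user reprograms the machine's behavior interactively
a.override("route",
           lambda doc: "urgent-queue" if "urgent" in doc else "default-queue")
print(a.act("route", "urgent memo"))   # urgent-queue
```

In this toy form the machine remains a closed formal system at any instant, yet its closure is renegotiable through use -- the 'toolbox' rather than the sealed artifice.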
This brief overview demonstrates the utility of autopoietic theory and derivative work in delineating strategic issues in the design of collective support systems. Autopoietic theory affords us a cogent set of well-defined terminology and concepts consistent with those basic positions espoused by Winograd and Flores (1986) and promoted thereafter by adherents of IT alternatives. It offers a conceptual substrate binding together what has heretofore been a fragmentary theoretical pastiche, spanning phenomenology, epistemology, and interactional (hence, social) issues. The Luhmann and Hejl alternatives are framed within a common conceptual framework, affording the ability to compare them (and hypotheses concerning them) within a single 'domain of reference'. I suggest this utility can be further exploited, perhaps leading to concepts providing credible alternatives to the 'rationalistic' and 'cognitivistic' views dominating IT today.
Ballmer, T., & W. Brennenstuhl, Speech Act Classification, Heidelberg: Springer-Verlag, 1981.
Bednarz, J., Autopoiesis: The Organizational Closure of Social Systems, Systems Research, 5 (1988), no. 1, pp. 57-64.
Benseler, F., P. Hejl, & W. Köck (eds.), Autopoiesis, Communication, and Society: The Theory of Autopoietic Systems in the Social Sciences, Frankfurt: Campus Verlag, 1980.
Bowers, J., J. Churcher, & T. Roberts, Structuring Computer-Mediated Communications in COSMOS, in Speth, Rolf (ed.), EUTECO '88: Research into Networks and Distributed Applications, Amsterdam: North Holland/Elsevier, 1988.
Carasik, R., & C. Grantham, A Case Study of CSCW in a Dispersed Organization, in Proceedings of the CHI '88 Conference on Human Factors in Computing Systems, Washington DC: ACM, 1988, pp. 61-66.
Cherniak, C., Undebuggability and Cognitive Science, Communications of the ACM, 31, no. 4 (April 1988), pp. 402-412.
Ehn, P., Work-Oriented Design of Computer Artifacts, Stockholm: Arbetslivcentrum, 1988.
Ehn, P., & M. Kyng, A Tool Perspective on Design of Interactive Computer Support for Skilled Workers, in Sääksjärvi, M. (ed.), Proceedings from the Seventh Scandinavian Research Seminar on Systemeering, Helsinki, 1984.
Flores, F., & J. Ludlow, Doing and Speaking in the Office, in Fick, G., and R. Sprague (eds.), Decision Support Systems, Issues and Challenges, New York: Pergamon Press, 1981, pp. 95-118.
Flores, F., M. Graves, B. Hartfield, & T. Winograd, Computer Systems and the Design of Organizational Interaction, ACM Transactions on Office Information Systems, 6 (1988), no. 2, pp. 153-172.
Hancher, M., The Classification of Cooperative Illocutionary Acts, Language in Society, 8 (1979), pp. 1-14.
Hejl, P., The Problem of a Scientific Description of Society, in Benseler et al. (eds.), 1980, pp. 147-161.
Hejl, P., The Definition of System and the Problem of the Observer: The Example of the Theory of Society, in Roth, G., & H. Schwegler (eds.), Self-Organizing Systems: An Interdisciplinary Approach, Frankfurt: Campus Verlag, 1981, pp. 170-185.
Hejl, P., Towards a Theory of Social Systems: Self-Organization and Self-Maintenance, Self-Reference and Syn-Reference, in Ulrich, H., & G. Probst (eds.), Self-Organization and Management of Social Systems, Berlin: Springer, 1984, pp. 60-78.
Kelly, K., Virtual Reality: An Interview with Jaron Lanier, Whole Earth Review, no. 64 (Fall 1989), pp. 108-119.
Levinson, S., Pragmatics, Cambridge UK: Cambridge University Press, 1983.
Luhmann, N., The Unity of the Legal System, in Teubner, G. (ed.), 1988, pp. 12-35.
Malone, T., Designing Organizational Interfaces, in Proceedings of the CHI '85 Conference on Human Factors in Computer Systems, San Francisco: ACM, 1985, pp. 66-71.
Maturana, H., The Organization of the Living: A Theory of the Living Organization, International Journal of Man-Machine Studies, 7 (1975), pp. 313-332.
Maturana, H., & F. Varela, Autopoietic Systems, Urbana IL: University of Illinois Biological Computer Laboratory Research Report 9.4, 1975.
Maturana, H., Biology of Language: The Epistemology of Reality, in Miller, G., & E. Lenneberg (eds.), Psychology and Biology of Language and Thought: Essays in Honor of Eric Lenneberg, New York: Academic Press, 1978, pp. 27-64.
Maturana, H., Man and Society, in Benseler et al. (eds.), 1980, pp. 11-32.
Maturana, H., & F. Varela, Autopoiesis and Cognition: The Realization of the Living, Dordrecht: D. Reidel, 1980.
Maturana, H., & F. Varela, The Tree of Knowledge: The Biological Roots of Human Understanding, Boston: Shambhala, 1987.
Moran, T., & R. Anderson, The Workaday World as a Paradigm for CSCW Design, in CSCW '90 Proceedings, Los Angeles: ACM, 1990, pp. 381-393.
Morris, C., Foundations of the Theory of Signs, in Neurath, O., R. Carnap, & C. Morris (eds.), International Encyclopedia of Unified Science, Chicago: University of Chicago Press, 1938, pp. 77-138.
Norman, D., & S. Draper (eds.), User Centered System Design, Hillsdale NJ: Lawrence Erlbaum Associates, 1986.
Norman, D., Cognitive Engineering -- Cognitive Science, in Carroll, J. (ed.), Interfacing Thought: Cognitive Aspects of Human-Computer Interaction, Cambridge MA: MIT Press, 1987, pp. 325-336.
Rasmussen, J., Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, New York: North-Holland (Elsevier Science Publishers), 1986.
Reddy, M., The Conduit Metaphor: A Case of Frame Conflict in Our Language about Language, in Ortony, A. (ed.), Metaphor and Thought, Cambridge: Cambridge University Press, 1979, pp. 284-324.
Rheingold, H., Travels in Virtual Reality, Whole Earth Review, no. 67 (Summer 1990), pp. 80-87.
Robertson, S., W. Zachary, & J. Black (eds.), Cognition, Computing, and Cooperation, Norwood NJ: Ablex, 1990.
Robertson, S., & W. Zachary, Conclusion: Outlines of a Field of Cooperative Systems, in Robertson, Zachary, and Black, 1990, pp. 399-414.
Robinson, M., Double Level Languages & Co-operative Working, Cosmos Information Exchange Network, Issue 6 (November 1989), pp. 42-84.
Searle, J., Speech Acts: An Essay in the Philosophy of Language, London: Cambridge University Press, 1969.
Searle, J., A Taxonomy of Illocutionary Acts, in Gunderson, K. (ed.), Language, Mind and Knowledge, Minneapolis: University of Minnesota Press, 1975, pp. 344-369.
Shannon, C., & W. Weaver, The Mathematical Theory of Communication, Urbana IL: University of Illinois Press, 1949.
Shneiderman, B., The Future of Interactive Systems and the Emergence of Direct Manipulation, Behavior and Information Technology, 1 (1982), pp. 237-256.
Shneiderman, B., Direct Manipulation: A Step beyond Programming Languages, IEEE Computer, 16 (1983), pp. 57-69.
Silverstein, M., The Three Faces of 'Function': Preliminaries to a Psychology of Language, in Hickmann, M. (ed.), Social and Functional Approaches to Language and Thought, Orlando FL: Academic Press, 1987, pp. 17-38.
Suchman, L., What is Human-Machine Interaction?, in Robertson, Zachary, & Black, 1990, pp. 25-55.
Teubner, G. (ed.), Autopoietic Law: A New Approach to Law and Society, Berlin: Walter de Gruyter, 1988.
Varela, F., Principles of Biological Autonomy, New York: Elsevier (North-Holland), 1979.
Varela, F., Autonomy and Autopoiesis, in Roth, G., & H. Schwegler (eds.), Self-organizing Systems: An Interdisciplinary Approach, Frankfurt: Campus Verlag, 1981, pp. 14-23.
Varela, F., & H. Maturana, Mechanism and Biological Explanation, Philosophy of Science, 39 (1972), pp. 378-382.
West, D., & L. Travis, The Computational Metaphor and Artificial Intelligence: A Reflective Examination of a Theoretical Falsework, AI Magazine, Spring 1991, pp. 64-79.
Whitaker, R., Toward Intermedia: Interaction, Autopoiesis and the Design of Information Systems (working title), doctoral dissertation, Umeå University (Sweden), 1991.
Whitaker, R., & O. Östberg, Channeling Knowledge: Expert Systems as Communications Media, AI & Society, 2 (1988), pp. 197-208.
Whitaker, R., O. Östberg & U. Essler, Communications and the Coordination of Concerted Activity, Human Interface News & Report, 4 (1989), pp. 325-338.
Winograd, T., & Flores, F., Understanding Computers and Cognition, Norwood NJ, Ablex, 1986.
Woods, D., Roth, E., & K. Bennett, Explorations in Joint Human-Machine Cognitive Systems, in Robertson, Zachary & Black, 1990, pp. 123 - 158.
Woolgar, S., Reconstructing Man and Machine: A Note on Sociological Critiques of Cognitivism, in Bijker, W., T. Hughes, & T. Pinch (eds.), The Social Construction of Technological Systems, Cambridge MA: MIT Press, 1987, pp. 311-328.
Zeleny, M., & N. Pierre, Simulation of Self-Renewing Systems, in Jantsch, E., & C. Waddington (eds.), Evolution and Consciousness, Reading MA: Addison-Wesley, 1976.
Zeleny, M. (ed.), Autopoiesis, Dissipative Structures, and Spontaneous Social Orders, AAAS Selected Symposium 55, Boulder CO: Westview Press, 1980.
Zeleny, M. (ed.), Autopoiesis: A Theory of Living Organization, New York: North Holland, 1981.