
The Observer

Number 6: 22 April, 1993

"Everything said is said by an observer"

An electronic forum
Autopoiesis & Enactive Cognitive Science

Randy Whitaker




Dear Randy:

... As to the way The Observer is going: I like it so far, but it's very early days. I think you are right to keep the size per issue down to a manageable level, and your editorial work is very helpful. My only concern is whether you can possibly keep up the pace you have started out with! As to the "linearising" of the responses on the C-life stuff, I have not quite made up my mind yet: it is useful, but it does make it hard to see the integrity of the original contributions. There are pros and cons here, but on balance I think you should keep going with the editorial interventions...

Keep up the good work,

Barry McMullin

Dear Randy

I am very happy that you have started to use e-mail to develop an 'autopoietic society'. It is my dream, but I do not have the technical means. May I introduce myself to explain why I think it is so important to launch this idea.

....All my experience (psychology, technology, economy, sociology) helped me to become grounded in the theory of autopoiesis. I recognized my own behavior in the works of Maturana, Varela, Hejl, Luhmann, Zeleny and others. Last year I succeeded in meeting Hejl, Luhmann and Zeleny, but none of them uses e-mail. You are, for me, the first person who is familiar with both autopoiesis and e-mail. I think it is very important for our 'autopoietic' communication that we vocalize our thoughts through this medium. It will improve the quality of our understanding of autopoiesis, increase speed and reduce cost. If there are at least a few scholars who agree to start a teleconference about economic, political, social, technical and other problems from an autopoietic point of view, it will be very useful for the whole USENET community. In fact there are many who behave in autopoietic ways but do not know that they know. I would like to receive, in ASCII, as much as possible of anything that you have. My interest is to develop a theory (and practice!) of freedom combining human, organizational, economic and cultural development.

....My project now is to design an autopoietic society as a challenge to the chaos in which we live in Croatia -- especially by developing an electronic university, where everybody could learn using e-mail, BBS, WWW, gopher, etc.

Ekonomski fakultet,
54000 Osijek,

[Greetings, Ante! Your letter is very interesting, and we would like to welcome you to The Observer. As you know by now, I am sending you all 5 back issues of The Observer, as well as the ASCII bibliography resources. We look forward to addressing the social and other issues which you mention. -- Randy]

[JACOB GALLEY at the University of Chicago has been corresponding with Ye Olde Editor about autopoietic theory and language. Jacob (Jake) is interested in the similarities between enactive cognitive science and the work of Mark Johnson (cf. The Body in the Mind, Univ. of Chicago, 1987). Jake explains his work as follows:]

The basis for my analysis is the work of Lev Vygotsky, especially as presented to me by David McNeill, who is my B.A. advisor. (McNeill wrote Psycholinguistics (1987) and Hand and Mind (1992), among others. He has been studying how people gesticulate when they talk, investigating utterance production, communication and meaning in general.) Anyway, Vygotsky realized that the psychologists of his day really had no good way to think about the mechanics of the mind --- no conceptual model analogous to the gene in heredity, or the atom in chemistry. Thus, he prescribed some constraints on how we should go about analyzing the mind into smaller, simpler units. (See Zinchenko (1985)*.) He hinted at the concept of self-organization, but did not himself manage to go very far with this before his death.

McNeill (1992) presents a theory under Vygotsky's constraints, which explains the production of communicative utterances in terms of self-organization and microgenesis. Each communicative act (i.e., verbalization + gesture) arises from a "growth point" (akin to Vygotsky's "psychological predicate"). The growth point builds itself into a communicative act through interactions with pragmatic knowledge (e.g., the context of the conversation) and the linguistic competence of the speaker. McNeill does not invoke M&V's autopoietic theory explicitly, but he is aware of it. (He cites it in (1987), and I have talked to him about it.)

So, in the same vein, I am outlining a system that will account for the manifestation of Johnson's image-schemata in autopoietic terms. I intend to explain that all perceptions (and realizations?) arise in the same way --- that all are instantiations of image-schemata. I think that the key here is to think of each act of perception as an autopoietic unity unto itself, within the nervous system of the perceiver. This is indicated in the long passage by Merleau-Ponty quoted near the middle of The Embodied Mind, and it is analogous to McNeill's analysis of utterance production.


* V.P. Zinchenko, "Vygotsky's ideas about units for the analysis of mind", in Culture, Communication, and Cognition: Vygotskian Perspectives, James Wertsch (ed.), Cambridge University Press, 1985.

Dear Randall,

I'm a student in linguistics and cognitive sciences and I'm interested in language as a self-organizing system. Some time ago I discovered the works of Maturana and Varela. Now I'm looking for someplace where I can do an apprenticeship or PhD on language as a self-organizing system.

I thought that you might know someone, or some place where I can get more information. Thank you very much,

Jorn Veenstra

[I've been corresponding with Jorn about autopoiesis / enaction and language. I don't know of any specific suggestions regarding where Jorn could pursue his interests in language as a self-organizing system, particularly from an autopoietic viewpoint. DOES ANYONE OUT THERE HAVE ANY IDEAS OR SUGGESTIONS FOR JORN? -- R. ]

Hello Randy!

I am currently working on my dissertation on communication problems in the genetic engineering debate in Germany, from the perspective of radical constructivism as a model for perception and understanding (my reference is Ernst von Glasersfeld's theories), and I am very interested in this mailing list.

Juergen Geer

Hello from Michigan State University:

As a psychologist located within a department of educational psychology, I have been especially interested in understanding the implications of Maturana and Varela's more expansive conception of cognition (especially its sociocultural dimensions) for how we think about pedagogy. I have read Autopoiesis and Cognition and The Tree of Knowledge, and I and several colleagues are currently reading and discussing Varela et al.'s The Embodied Mind. In any event, I look forward to following and participating in the ongoing discussions in The Observer.

Jim Gavelek


The following was forwarded from FRANCIS HEYLIGHEN, because of the Spencer-Brown connection. These messages were originally posted by Prof. Heylighen in another electronic forum. I have done some minimal editing to present them for The Observer's audience. The references listed in these messages have been added to the on-line ASCII bibliography. I would like to thank Prof. Heylighen and others for forwarding this material to The Observer --R.

THE ORIGINAL QUERY WAS: Does anyone know anything about the book Laws of Form by G. Spencer-Brown?


Spencer-Brown's Laws of Form is a rather obscure book with an unusual viewpoint. Stafford Beer reviewed it in Nature when it was first published, in the late 60s or early 70s. It has attracted the attention of people who have absorbed all the typical flotsam and jetsam of several disciplines and still find something left to be desired. What he does in this little book is so strange that it has been left more or less in oblivion by academic humanity. The book is dense in the sense of intensely cogent, and apparently a bit "mystical." In the original hardcover he cites a line from the Tao Te Ching, in Chinese ... which is just the sort of thing that will turn many a head in another direction. If you like only what is familiar to you, you may not like this little book. You must be willing to follow him out into his rather intense world to develop a decent appreciation of his efforts.

I don't have the time at the moment to amplify Mr Spencer-Brown's atypical opus ... but I'll return to it later. A few other thoughts are right at hand, though: L.L. Whyte refers to this book with some frequency in his later writings. John Lilly, the dolphin fellow, also picked up on it. One of John Wheeler's students, Daniel Rolick, wrote a master's or a Ph.D. thesis on it called, as I recall, "The Paradox of the World Knot." There is perhaps a reference to this in Wheeler's big book on Gravitation with Kip Thorne. Spencer-Brown also wrote a book on Probability and Scientific Inference, and contributed as well to the literature of scientific parapsychology -- regarding the statistical interpretation of parapsychological data, as I recall. Steven Eberhard III, a mathematician at Washington State, went into Spencer-Brown's work quite deeply and was quite absorbed in it in the late 70s, when I was last in touch with him. Mr Eberhard met Spencer-Brown in London and said that Spencer-Brown told him he had continued where Boole had left off, feeling in this way like Dali, who said that he had continued where Vermeer had left off. Spencer-Brown was a colleague of R.D. Laing, whom he influenced. He apparently knew B. Russell, who thanked him for resolving the miasma of disallowed logical types. Since Spencer-Brown arrives so much from left field, as it were, he has been ignored and in no way assimilated by a basically conservative academia.


I would like to thank Mr Taylor for posting his question about George Spencer-Brown's Laws of Form. When I first encountered Spencer-Brown's book I thought it highly interesting and provocative, but I could hardly find a soul who shared my interest. That was in 1972, I think. I took the book to people I knew then in the Mathematics department here at Indiana University, and also to people in Physics and Philosophy. Not a soul was familiar with it. The typical, near universal response was: if this book had content I should already know of it; since I do not, the book must lack content.

I found that only what in my milieu counted as an atypical personality could be made interested in this author or his book, and there were nearly no atypical types in my milieu. I had a friend who was interested, but he was most atypical, I should think, in any milieu; and I did eventually meet a few other people who had found merit and interest in it. I mentioned Mr Eberhard, whom I met at a conference on multi-valued logic held here at Indiana University in 1975. But Mr Eberhard was also an anthroposophist -- nothing in the least against him from my point of view, but again in the pattern of an atypical personality. John Lilly, too, who had worked with dolphins with atypical experimental methodologies and an atypical weltanschauung, had voiced his interest in Spencer-Brown's book; but Mr Lilly too was an atypical personality. The same could be said of Lancelot Law Whyte, who had also shown an interest. Stafford Beer, who reviewed Laws of Form in Nature, seemed to me the furthest outside this pattern of odd-balls who took an interest in Spencer-Brown; but he says in that review that he suspected he was reviewing the work of a genius, and saying such a thing seems to cut against some resentment and animus among all those who identify themselves with a certain idea of the establishment. New ideas, new points de depart, must crawl against a kind of ancestral miasma.

Well, I was younger then and had no profound investments in academic dogmatics -- so I was always surprised and disappointed that Spencer-Brown's little book did not arouse interest and curiosity.

Spencer-Brown begins his book saying:

"The theme of this book is that a universe comes into being when a space is severed or taken apart. The skin of a living organism cuts off an outside from an inside. So does the circumference of a circle in a plane. By tracing the way we represent such a severance, we can begin to reconstruct, with an accuracy and coverage that appear almost uncanny, the basic forms underlying linguistic, mathematical, physical, and biological science, and can begin to see how the familiar laws of our own experience follow inexorably from the original act of severance. The act is itself already remembered, even if unconsciously, as our first attempt to distinguish different things in a world where, in the first place, the boundaries can be drawn anywhere we please. At this stage the universe cannot be distinguished from how we act upon it, and the world may seem like shifting sand beneath our feet."

It is a remarkable statement and viewpoint.

Spencer-Brown also wrote a book called Only Two Can Play This Game (1972). He wrote it under the pseudonym of James Keys. It is dedicated "To his Coy Mistress." It begins with a "Prescript" which itself begins: "If like me you were brought up in a western culture, with the doctrine that everything has a scientific explanation, there will be certain ideas you will not be allowed to know.

These ideas are in fact as old and as widespread as civilization itself. But your education will have programmed you so that whenever you hear or read about any of them, its sets off a built-in reflex that shouts 'mystical nonsense' or 'crazy rubbish'."

Spencer-Brown says that this book, from which I quote, is the feminine counterpart to his Laws of Form. The dust jacket of my copy of Only Two ..., which I have dated 14 May 1974, quotes from somewhere in the interior of the book James Keys's remark:

"Man has nothing to say on his own, but what he says he gets from the woman. The woman has it complete but she has no voice to say it, the man is her voice; he is the poet and she is his muse. And if the voice of man divorces from its feminine source and thinks it can say it all on its own, then we get this nightmare -- or comedy if you like to look at it that way, ending up with man sitting on the hydrogen bomb."

Today we can be disappointed that Spencer-Brown did not say "the man is the poet and the woman is the muse" rather than what he did say; but then, if he had said that, perhaps he would not have had to write the book in which he says this, perhaps not even the other book, on the laws of form.

So, because of all that Spencer-Brown says, and because of the vacuum into which all of it seemed to me so mysteriously to fall, I was surprised to encounter Mr Taylor's question and even more surprised by the informed and thoughtful responses of John Collier, Eric Oberle, and Dr Francis Heylighen. Do you happen to know, Jeff, how it was that your psychologist friend knew of Spencer-Brown?


I don't know much about the impact this book has had in philosophy, but it is referenced very frequently in cybernetics (especially by the constructivist epistemologists of the "new" or "second-order" cybernetics school); see e.g. Heylighen F., Rosseel E. & Demeyere F. (eds.) (1990): Self-Steering and Cognition in Complex Systems: Toward a New Cybernetics (Gordon and Breach Science Publishers, New York).

I have myself been very impressed by Spencer-Brown's concept of "distinction", which I have elaborated in what I call a "distinction dynamics": a theory of how new distinctions arise out of combinations of existing ones. I have applied this among other things to an epistemological analysis of physical theories (quantum mechanics, relativity theory, thermodynamics, ...). See for example:

Heylighen F. (1990): Representation and Change. A Metarepresentational Framework for the Foundations of Physical and Cognitive Science, (Communication & Cognition, Gent).

Heylighen F. (1989): "Causality as Distinction Conservation: a theory of predictability, reversibility and time order", Cybernetics and Systems 20, p. 361-384.

Heylighen F. (1990): "Classical and Non-classical Representations in Physics I", Cybernetics and Systems 21, p. 423-444.

Heylighen F. (1990): "Classical and Non-classical Representations in Physics II: Quantum mechanics", Cybernetics and Systems 21, p. 477-502.

Heylighen F. (1992): "Non-Rational Cognitive Processes as Changes of Distinctions", in: New Perspectives on Cybernetics. Self-Organization, Autonomy and Connectionism, G. Van de Vijver (ed.), (Synthese Library v. 220, Kluwer Academic, Dordrecht), p. 77.

I haven't gotten through the book yet, but apparently Spencer-Brown is developing some sort of calculus which solves the self-referential paradoxes that Russell and Whitehead attempted to solve with the Theory of Types. He does so by introducing, in addition to the truth-values True, False, and Meaningless, the value Imaginary, which functions analogously to the way the square root of -1 functions in algebra.

My own use of the distinction concept does not address the issue of self-reference. Several "second-order cyberneticists" have focused specifically on this self-referential aspect, though, extending Spencer-Brown's calculus. Some names: Francisco Varela & Joe Goguen, Ranulph Glanville, Louis H. Kauffman... Some references:

Glanville R. (1979) "Beyond the Boundaries", in Ericson R (Ed) Improving the Human Condition: Quality and Stability in Social Systems ,London, Springer

Glanville R and Varela F. (1980) "Your Inside is Out and Your Outside is In", in Lasker G (Ed) Applied Systems and Cybernetics Vol II, Oxford, Pergamon

Goguen JA, and Varela, FJ (1979): "Systems and Distinctions: Duality and Complementarity", Int. J. Gen. Sys. 5:1, p. 31-43.

Kauffman L.H. (1987): "Self-reference and recursive forms", J. Social Biol. Struct. 10, p. 53-72.

Varela F.J. (1979): Principles of Biological Autonomy, (North Holland, New York).

This book was recommended to me by a psychologist, but I have been unable to find any mention of it in the Philosopher's Index, and have never heard of it anywhere else. It looks like it should have been an important work. Has it had any effect in philosophy?

I would also be quite curious to know the reactions to Spencer-Brown's book outside of cybernetics. Any references are highly welcome.


Brown has had an impact reaching far beyond cybernetics. He has been invoked in recent debates about biological and psychological autopoiesis (as in the works of Varela and Maturana, Autopoiesis and Cognition: The Realization of the Living, Dordrecht, 1980, and Heinz von Foerster, Observing Systems, Seaside, California, 1981).

Brown has also been incorporated into Parsonian sociology in some interesting (and sometimes perverse) ways by Niklas Luhmann. See Essays on Self-Reference (Columbia University Press, 1990), and two further books by Luhmann whose publication years I don't have: Ecological Communication and The Politics of the Welfare State. If you read German, see Soziale Systeme: Grundriss einer allgemeinen Theorie (Frankfurt, 1984).

Luhmann's notes will carry you further if you are interested. For a response to Luhmann see Habermas, The Philosophical Discourse of Modernity (MIT Press, 1991), pp. 368-390.


Laws of Form is equivalent to the propositional calculus. Spencer-Brown showed LoF -> PC in his book. B. Banaschewski showed the opposite entailment in the Notre Dame Journal of Formal Logic, 3 (1977): 507-509.

I always suspected that the Laws of Form and the Propositional Calculus (or Boolean algebra) were equivalent, and have treated them as such in my papers on distinction dynamics. It is nice to see that somebody actually proved this result, though.

Since people will by now start to wonder why we need Spencer-Brown at all, if his algebra is merely equivalent to something which is very well known, I want to remark that the axiom system of LoF is much simpler than the usual axiom systems for PC. Moreover, these axioms are derived from two extremely simple and intuitively understandable properties of the act of making a distinction.

The following quote from my book "Representation and Change" summarizes the basics of Spencer-Brown's system. I have replaced the special symbol used by SB for representing his basic connective (the "mark", symbolizing the act of distinction) by square brackets []. The original sign looks like a right angle, sloping from left-up to right-down, which may cover other symbols underneath it; symbols covered by the mark are replaced in my notation by symbols enclosed in square brackets.

The Boolean algebra, which forms the basic structure of classical logic, is defined by means of a collection of different axioms, relating the connectives (conjunction, disjunction, negation,...) to the variables, and constants (I and O) of the algebra. However, Spencer-Brown (1969) has shown that this algebra can be derived from an algebra of distinctions, which uses only one connective ( [] ), determined by two axioms. The axioms are:

1) [ [p] p] =

2) [ [pr] [qr] ] = [ [p] [q] ] r

This can be transcribed in the notation we used for Boolean algebras (section 4.4), by observing the conventions:

[p] = -p (NOT p)

pq = p v q hence: [ [p] [q] ] = p & q

The axioms then become:

1) -p & p = O (law of contradiction)

2) (p v r) & (q v r) = (p & q) v r (distributivity)
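Under this transcription O is just the Boolean constant False, so both axioms are ordinary propositional identities that can be checked mechanically over all truth assignments. A minimal Python sketch (my own, not part of the quoted text):

```python
from itertools import product

def axiom1(p):
    # 1)  -p & p = O   (law of contradiction); O is read as False
    return ((not p) and p) == False

def axiom2(p, q, r):
    # 2)  (p v r) & (q v r) = (p & q) v r   (distributivity)
    return ((p or r) and (q or r)) == ((p and q) or r)

# Exhaustive check: two cases for axiom 1, eight for axiom 2.
assert all(axiom1(p) for p in (False, True))
assert all(axiom2(p, q, r) for p, q, r in product((False, True), repeat=3))
```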

Spencer-Brown has shown how Sheffer's set of postulates for Boolean algebras, which is the least such set, can be derived from these two postulates.

The algebra of distinctions, however, is derived from an even simpler formal structure: the calculus of distinctions. In this calculus there are no variables denoting different distinctions. Only one distinction is considered, which could be interpreted as the distinction between "true" and "false", and which is represented by []. The rules of calculation are derived from two extremely simple axioms:

i) [ [] ] =

ii) [] [] = []

In our notation this could be transcribed by using one variable between brackets: (a), to indicate that it is not a specific distinction:

i) --(a) = (a) (double negation)

ii) (a) v (a) = (a) (idempotence)

The sign [] can be interpreted as the crossing of a boundary or distinction between two states: the marked state (represented by the inside or concave side of the [] sign) and the unmarked state (represented by the outside or convex side of the [] sign).

Axiom i) means that if you cross a boundary two times in sequence, from the outside to the inside and then from the inside to the outside, you come back to the state you started from: nothing has changed.

Axiom ii) signifies that if you cross a boundary two times in parallel, from the outside in and again from the outside in, then this is equivalent to crossing the boundary only once.
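As an illustration (my own toy model, not anything from Spencer-Brown or Heylighen), the variable-free calculus can be evaluated directly in Python: a mark is represented as the tuple of its contents, an expression as a tuple of juxtaposed marks, and a one-line valuation is consistent with both axioms:

```python
def is_marked(expr):
    """Evaluate a variable-free expression of the primary calculus.

    An expression is a tuple of juxtaposed marks; each mark is the
    tuple of its own contents (again an expression).  Returns True
    for the marked state [] and False for the unmarked (blank) state.
    """
    # A space is marked iff at least one mark standing in it is marked
    # (several marks condense to one: axiom ii), and a mark is marked
    # exactly when its own contents evaluate to blank (so [ [] ]
    # vanishes: axiom i).
    return any(not is_marked(contents) for contents in expr)

EMPTY_MARK = ()                              # the sign []

assert is_marked((EMPTY_MARK,))              # []    -> marked
assert not is_marked(((EMPTY_MARK,),))       # [[]]  -> blank      (axiom i)
assert is_marked((EMPTY_MARK, EMPTY_MARK))   # [][]  -> same as [] (axiom ii)
```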

The axioms i) and ii), or 1) and 2), can be used to prove a number of theorems (Spencer-Brown, 1969). These theorems can be used to simplify expressions consisting of combinations of connectives ( [] ), and variables ( a, b, c, ...). Since all the expressions of Boolean logic can be transcribed in such expressions of the algebra of distinctions, this formalism can be used to simplify expressions from classical logic. The ultimate simplification of an expression consists in reducing it to one of the two primitive expressions:

a) [] (i.e. "truth")

b) (empty expression, i.e. "falsity")

The sequence of steps leading to this simplification can be interpreted as a proof of the expression, or as a proof of its negation. In general the simplified expression will still contain variables. For example:

((p => q) & (r => s) & (q v s)) => (p v r)

can be transcribed as:

[ [ [ [p] q] [ [r] s] [qs] ] ] pr

and this can be simplified to (Spencer-Brown, 1969; p. 116):

[qs] pr

which can again be transcribed as:

(q v s) => (p v r)

The truth of such an expression will be contingent upon the truth or falsity of the variables q, s, p and r. However, checking the truth of the simplified expression for a given set of values of the variables is much easier than checking the truth of the original expression.
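Because each simplification step is an equality, the original proposition and its simplified form must agree on every assignment of the four variables, which is easy to confirm by brute force. A quick Python check (my own, not from the quoted book):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def original(p, q, r, s):
    # ((p => q) & (r => s) & (q v s)) => (p v r)
    return implies(implies(p, q) and implies(r, s) and (q or s), p or r)

def simplified(p, q, r, s):
    # the Boolean reading of the simplified form  [qs] pr
    return implies(q or s, p or r)

# The two propositions coincide on all 16 assignments.
assert all(original(*v) == simplified(*v)
           for v in product((False, True), repeat=4))
```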

This method of simplifying or proving propositions is essentially similar to the resolution method used in artificial intelligence and automated theorem proving (see e.g. Charniak and McDermott, 1985).


EDITOR'S NOTE: Many of those who came to this mailing list in the March surge of inquiries noted that their interests in our focal area either centered on, or had begun with, enactive cognitive science as introduced in The Embodied Mind (Varela et al., 1991).

TONY SMITH sends the following:

While aware of Maturana and Varela's earlier work, I am working from The Embodied Mind rather than backwards. My interests are definitely on the "enactive" side, although I am not enamoured of the word, so I would appreciate any material you have that fits with or extends The Embodied Mind sans the Buddhist bits.

...I attach an extract from my position paper "Towards a Knowledge Network" which formed part of a consultancy report to an Australian government ministerial review of an "investigation of the effectiveness and potential of state-of-the-art technologies in the delivery of higher education".

The Constructedness of Mind and World

The early directions of cognitive science applied the programmed logic of digital computers as a model of how the (human) mind works. In reality humans do poorly at what computers do easily and vice versa. So by the time MIT Professor Marvin Minsky (whose earlier work had dismissed the relevance of 'neural nets' in modelling the mind) elucidated his concept of 'The Society of Mind' [1986] the cutting edge of cognitive science had clearly moved from a symbolic to an emergent model.

Paris-based Professor of Cognitive Science and Epistemology, Francisco Varela, and two North American colleagues extend Minsky's work by integrating it with an updated theory of evolution by "natural drift" and countering what they see as Minsky's nihilism, to produce an optimistic account of the groundlessness of world and self.[1]

"[T]hese various forms of groundlessness are really one: organism and environment enfold into each other and unfold from one another in the fundamental circularity that is life itself." -- Varela et al 1991: 217

"The realization of groundlessness as nonegocentric responsiveness ... requires that we acknowledge the other with whom we dependently cooriginate." -- ibid.: 254

Varela's project is to bring cognitive science into interplay with human experience, so as to suggest that abandoning positivist assumptions -- of grounding in an external reality, and of a permanent self (ego) -- does not require a retreat into the negative grounding of nihilism but, rather, opens a new door for recognizing the genuine worth of co-operation. The authors do this by showing how autonomous minds are "enacted" by (emerge from) their histories of structural coupling with other embodied minds and the environment. Their model is enhanced by recognition of the need for each mind to enact a world which has enough in common with those of other minds to support viable coupling.

Cognitive science arose quite recently as an interdisciplinary common ground between artificial intelligence, neuroscience and cognitive psychology; while also drawing on linguistics and, in schools such as Varela's, philosophy. While bridging between technology, biological and social sciences, cognitive science is generally strong on the rigorous experimental methods of the 'hard' sciences. It quite rapidly succeeded in overturning the model of mind as a symbolic processor that had spelt disaster for early attempts at AI, and more recently provided rigorous mathematical analysis of why neural nets can in fact recognise patterns in ways that are unconnected to symbolic approaches.

An insight into the workings of evolution and of the mind is provided by Varela's rejection of any notion that optimal solutions are provided or needed, reminding us instead that organisms and ideas just need to be viable. This approach gets away from the (often anthropocentric) idea that the pursuit of optimality prescribes singular ideals, but substitutes the proscription of that which can neither sustain nor reproduce itself -- at every step of the journey. Ecosystems do not sustain optimal organisms. Postmodern culture does not sustain totalising narratives. Recognition of the lack of absolute ground in, of the constructedness of, world and mind does not lead to the rejection of our human experience of living in the world. It unburdens us of the false optimism of teleological causes, gets us past the reactionary alternative of nihilism, and brings us face to face with a genuine optimism in the face of the opportunities offered by our unending quest for the unknown.

Note [1] They also invoke an interpretation of Buddhism as their inspiration to this path, but for this writer who discovered the other connections independently some time earlier, that invocation adds nothing.


Minsky, Marvin (1986) The Society of Mind, Simon & Schuster, New York

Varela, Francisco J, Evan Thompson & Eleanor Rosch (1991) The Embodied Mind: Cognitive Science and Human Experience, MIT Press, Cambridge, Mass.


C-life Questions: Some Clarifications...


Well, first let me thank Conor, Hans-Eric and Randy himself, for participating in such a positive way so far. And please, if you have not yet done so, join in now!

My comments, at this point, will be aimed solely at clarifying my original questions. As I said in the first note, I was leaving many things deliberately ambiguous, to see what way people would take them. That was an interesting exercise, but let me now reveal at least some of the hidden agenda...

First, Hans-Eric Nissen (ably echoed by Randy) loudly wondered why I should be so perverse as to start with C-Life rather than the tessellation automata which have been specifically designed to support autopoietic "simulation" (or is that "realisation", but not in the "physical" space? Anyone care to go to bat on that one? No matter, we'll get back to it...). I'll call these the Varela-Uribe-Maturana, or VUM, models---my two typing fingers tire easily (and I'll include the work of Zeleny and Pierre in the same bucket). Hans-Eric's point is all the stronger, since Varela himself has emphasised the distinction between VUM model(s) and things like C-life, as in the passage from Principles which is quoted.

It's a fair cop, so I'll confess.

The reason for my peculiar behavior is this: the answers to my questions would be already "known" for the VUM models, for they are already written down in the good book (Principles ;-), and I could reasonably be told simply to go and look it up. But of course, my problem is that I don't understand the good book. So what I'm after here is precisely to clarify what it is that critically distinguishes things like C-life from VUM. And my analysis of C-life was (is) intended as a prologue to a discussion of VUM. Of course (?) C-life is much simpler than VUM, so I wanted to start by getting some agreement on the terms of the discussion, within C-life.

(There, incidentally is my reply to Hans-Eric on the question of definitions. Good Popperian that I am, I do not believe in "What is?" questions either: definitions, as such, certainly do not solve problems. But, equally, discourse relies on, or takes place within, a "consensual domain"; what I'm about here is precisely to probe the "consensuality" of certain terms or signs of enactive cognitive science---at least within the severely restricted context of C-life. Hey, if we can't agree on what we're talking about in that context, is there any hope elsewhere? But yes Hans-Eric, please do take it your way: "By what interventions could I arrive at a structure and an organization to assign to [block, blinker, glider]?" That is the question all right, but what are the answers?)

I will throw up the quote Hans-Eric takes from Principles again:

"...It (the tessellation example in chapter 3) is fundamentally distinct from other tessellation models, such as Conway's well-known game of "life". (Gardner, 71) and other lucid games proposed by Eigen and Winkler (1976), because in these models the essential property studied is that of reproduction and evolution, and not that of individual self-maintenance. In other words the process by which a unity maintains itself is fundamentally different from the process by which it can duplicate itself in some form or another. Production does not entail reproduction, but reproduction does entail some form of self-maintenance or identity. In the case of von Neumann, Conway, and Eigen, the question of the identity or self-maintenance of the unities they observe in the process of reproducing and evolving is left aside and taken for granted; it is not the question these authors are asking at all." (p. 22)

I stipulate all this.

But just because the C-life (and other) models were not designed to allow investigation of "self-maintenance" or "identity" does not mean they cannot be used for that purpose. Maybe they cannot (I'm not convinced --yet...); but if so, I want to know why not. And the answer cannot simply refer to the motives or interests of von Neumann, Conway etc. It must refer to something concrete about their models (which does not apply to the VUM models). There are, of course, many differences between the models: but I want to know which of these are (alleged to be) the relevant or critical ones in this context. Note, in particular, that even though it has been claimed that "evolvable" automata, in the original von Neumann sense, can be embedded in C-life, such "automata" are not the subject of my questions on C-life. I am asking questions about much simpler---even "trivial"---objects in C-life...

OK, let's move on a bit here. In his initial comments, Randy makes a valiant attempt to clear away some of the distractions that the reference to C-life potentially raises: are we talking about dots on a computer screen, or what? (etc.)

Well, yes these are problematic issues, and I will not try to address them (yet). Instead let me try to take something Randy had started to do and simply incorporate it back into my original questions.

Briefly, in my original formulation I made no reference to the particular "mechanism" or "substrate" for C-life. Randy assumed (?) I had the conventional arrangement of a suitably-programmed general purpose computer and display screen in mind, and sought to clarify some implications of this. In fact, I had no particular implementation in mind. Indeed, I hope that, before we are finished, we can get back to that level of generality. But, in the meantime, it seems that the precise implementation might have some relevance to our discussions, so let me stipulate a very particular implementation which (I hope) will then minimize distractions.

So forget about conventional implementations of C-life on a conventional computer. I can hack up a single C-life cell pretty quickly with a handful of TTL chips.


TTL stands for "transistor-transistor-logic", and identifies a standard family of electronic integrated circuits performing simple "combinatorial" functions such as logic (AND, OR, NOT etc.), and also offering "sequential" devices such as FLIP-FLOPS, counters etc. which have some finite set of distinct states which are stable, and which they will stay in until disturbed (perturbed?) in a specified manner. TTL "signals" are conceptually binary (just two possible values) and represented by voltages. Anything below about 0.8V is a "low" or "zero"; anything above about 2.4V is a "high" or "one" (and anything in between is indeterminate in its effects).
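The three voltage regions just described can be summarised in a toy classifier. This is purely illustrative (the function name `ttl_level` is mine), and real TTL input and output specifications differ slightly between device families; the thresholds below are the approximate figures given above.

```python
def ttl_level(volts):
    """Classify a TTL signal voltage into the three regions described
    in the text: below ~0.8V is "low", above ~2.4V is "high", and the
    band in between is indeterminate in its effects."""
    if volts < 0.8:
        return "low"            # logical zero
    if volts > 2.4:
        return "high"           # logical one
    return "indeterminate"      # effects undefined in this band
```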

LED stands for light emitting diode. You've all seen them---those little round lights, usually red or green, or maybe orange, you see everywhere on consumer electronics these days.

A cell in C-life is what we engineers (;-) call a finite-state machine; it has 8 inputs, one output, and two possible internal states. In a C-life system, we attach (literally wire up) the 8 inputs of each cell to the outputs of its 8 neighbours (and route its output, in turn, back to each of these neighbours). When a cell receives a "clock" signal it changes state, the new state being uniquely determined by the old state and the state of the inputs---in our case, the outcome will be designed to be as per the usual C-life rules.
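For the software-minded, that state-transition function can be sketched as follows. This is a hypothetical illustration, not part of the original text: `next_state` is my own name, and the transition table is the standard C-life birth/survival rule.

```python
def next_state(state, neighbours):
    """One C-life cell as a finite-state machine: the current state
    (0 or 1) plus the 8 neighbour inputs uniquely determine the next
    state, per the usual C-life rules."""
    live = sum(neighbours)                  # how many of the 8 inputs are "high"
    if state == 1:
        return 1 if live in (2, 3) else 0   # survival needs 2 or 3 live neighbours
    return 1 if live == 3 else 0            # birth needs exactly 3
```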

A single C-life cell is a very very simple machine, and can be easily realised with a few standard TTL components. So imagine a single cell of my C-life machine as a little grey box---not much bigger than a matchbox. If you look inside you'll see a couple of standard chips on a tiny little bit of circuit board. On the outside is the LED, a switch with three positions, and a connector block. The connector block has a pair of power inputs (0V, 5V), one output (signalling the current state of the cell), and 8 inputs (which we will connect to the 8 neighbours in the standard C-Life arrangement), plus a clock input which synchronises the whole system.

With a little bit of extra work I can design the electrical/mechanical interface so that the cells can literally plug together, a bit like L*GO blocks, but making both electrical and mechanical connections. Now imagine that I lay out the entire array of 1000 x 1000 (arbitrary numbers) cells flat at first, all plugged together. Then I roll up the top and plug it into the bottom, all the way along, forming a cylinder (all the LEDs and switches are pointing outward of course). I assume that the mechanical connections have a degree of flexibility (rubber L*GO blocks?). Then I bend the tube around and plug the two ends together. So now I have something shaped like a tyre from an enormous V*LVO construction truck or something, but festooned with little lights and switches, and plugged into a power supply and an oscillator/clock generator. Or for the Arthur C. Clarke fans out there, remember the space station in "2001: A Space Odyssey"? Well imagine I have a scale model prototype hanging from my ceiling...

["...I can hack up a single C-life cell pretty quickly with a handful of TTL chips."]

In fact, let me just fetch my tools, and I'll do it right now. ..

OK I've got one cell wired up and working (I'm using genuine GEDANKEN-brand components throughout: full mil. spec., no gedanken-expense spared...). I had to add a power supply, and a clock, of course. I put on an LED so I could see the state of the cell, and a three position switch so I could let it run or force its state either way. The clock also has a run/stop/single-step switch, and an analog dial to wind the frequency up and down. Well, that wasn't too difficult.

Now I'll just run off a 1000 x 1000 cell array...

Fine, that's all working now. And damn fast she runs too!

I wrapped the edges around in a toroid, so I don't have to worry about boundary conditions (I tried to make a projective plane first, but somehow it just wouldn't zip up around the edges :-).

Hmmm. Something as pretty as this deserves a name. I shall call her Twinkle (and I'm not even going to pretend that that's an acronym for anything!).

(Coincidentally, I was talking to Noel Murphy about Twinkle over coffee, and he mentioned that he had a very similar project in hand himself. Quite remarkably, he's come across some previously unknown blueprints, mis-shelved in a musty corner of the DCU library, which turn out to be a design for a purely mechanical implementation of something very suspiciously like C-life, using steam for motive power. The print is very faint, but the machine seems to be called an "Analytical Lifeform"; we can just discern the initials C.B. in the title block, and the last revision is dated September '70 and signed by someone called "Lovelace". Anyway, Noel happened to have an old steam engine he wasn't using for anything so now he is hard at work in the office next door on the construction of this Analytical Lifeform. Hope he finishes soon, the noise is horrendous...)

[Ed's. note: Even if it's only marginally successful, Noel can offer it as a prop to Hollywood -- I hear they're going to make a film of Glibson and Slurling's Victorian Biopunk novel The Difference Lifeform. ;-) ]

So now I've got Twinkle sitting (or rather draped) on the bench in front of me, twinkling away (she's currently running at 10gHz---that's gedanken hertz of course---so it's a bit of a blur, but I can slow that down any time). Please feel free to construct your own clones of Twinkle for yourselves.

I have invented Twinkle simply to bypass or shortcut some potential distractions in our discussion. With a conventional implementation of C-life, the "physical substrate" of any single cell is a rather amorphous and diffuse thing, not very well defined. In Twinkle, on the other hand, a cell is a clearly delineated physical entity (believe me: I can reach in and touch each individual cell). I think maybe this was the general kind of conceptual simplification Randy wanted to achieve, so Twinkle just represents going the whole hog.

Be that as it may: whatever about the structure/organisation of Twinkle herself, our concerns here are really directed at what Randy calls C-life beasties. In the specific case where Twinkle is the "substrate", I'll call them T(winkle)-beasties---T-block, T-blinker, T-glider etc. They are (informally) recurrent "patterns" or "configurations" of Twinkle cell "states", distributed in both "time" and "space".

I will say no more at this point: let's hear all the responses to the original questions first. But, wherever someone feels that the character of the "substrate" is relevant, then please now consider my questions rephrased specifically in terms of Twinkle...

Happy Twinkling,



(i) What is its structure?
(ii) What is its organization?
(iii) What is its boundary?
(iv) Is it organizationally closed?
(v) Is it autopoietic?

I should also be interested in a more general prior question: do these questions have definitive answers at all? And if not, why not?


Really enjoyed issue 5, which considerably deepened my understanding of the relationship between biological autopoietic systems and computer Alife. After reading your responses, I feel that what goes on in computers which happen to be running, for example, genetic algorithm search programs optimising neural network architecture phenotypes does not correspond very well to the kind of referential ontology which characterises unicellular and multicellular organisms. As a result, of course, I've got even more 'asks'.

OK, we've got computer hardware organisation which does nothing until it is switched on, an operating system is booted, and software is run whose operation of transforming input data to output data only makes sense in a referential frame provided by a programmer who connects the organisation to relevant peripherals. Frequently the programmer is unaware that it is she, interacting with the hardware and operating system manufacturers, who implicitly provides an observational systemic framework in which organisation may be ascribed to a computer.

[RANDY NOTES: I just want to inject a cautionary note re: the transformations of inputs into outputs. The existence of "inputs" and/or "outputs" (i.e., internal / external transfers, and vice versa) is definitely ascribed by the observer. Inputs and outputs are also characteristic of an allopoietic machine / system, not an autopoietic one (cf. Autopoiesis & Cognition, pp. 72; 81). If you want to define an autopoietic machine / system, you won't do it in terms of what comes and goes -- you must concentrate on what persists.]


"This reduction does not, however, permit me to "pin down" the C-Life "system". I have managed to restrict the scope of the space in which it is manifested (by removing the graphical / display extensions), but I have still not determined in which space to address it. I can address the system in an abstract, "conceptual" space as a network of (e.g.) data structures for the cells and finite state transition networks for the "program" itself. I can also address it in the physical space as a network of electrical states (e.g., states of the registers and RAM locations) manifested in a particular physical architecture..."


So we have two minimal observational spaces: physical electric currents whose temporal and spatial sequence supports a logical space. This is the whole point of compilers. They transform logical specifications into enormously complex electrical sequences. But as every schoolboy knows, compilers can transform logical specifications into tinker toy operations or any other physical substrate which can implement the required AND, OR or NOT gates. As you correctly point out, and as Hilary Putnam argues in `Reason, Truth and History', logical spaces have no transcendental objective existence independent of observers. The details of Putnam's brand of `internal realism' are complex, but essentially it seems to me to ground consensual linguistic practices which determine radial categories in the structural, functional, and experiential similarities of human bodily architecture.
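The substrate-independence point can be made concrete: any substrate that realises even one universal gate (NAND, say) can realise AND, OR and NOT, and hence any logical specification at all. A minimal sketch, with function names of my own choosing, purely for illustration:

```python
def nand(a, b):
    """One universal gate.  Any substrate that realises this (TTL,
    tinker toys, steam...) can realise the rest by composition."""
    return 0 if (a and b) else 1

def not_(a):
    # NOT is NAND with both inputs tied together.
    return nand(a, a)

def and_(a, b):
    # AND is NAND followed by NOT.
    return not_(nand(a, b))

def or_(a, b):
    # OR via De Morgan: a OR b = NOT(NOT a AND NOT b).
    return nand(not_(a), not_(b))
```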

So how do we communicate with aliens made out of jam living in clouds? Dunno.


Now, in both the stated cases, the manifestation (and continuance) of the network of relations is itself dependent on a subsuming "system" -- my cognitive domain in the "conceptual" case, and the computer in the "physical" case. Interruption or forgetfulness disintegrates the first version, while a fault in the underlying hardware or operating system disintegrates the second. In both cases, there is an implied agency...


I'm not so sure about the forgetfulness part. I can leave the CA and go off to lunch, come back and see some progress, read the paper etc. Surely the subsuming system is at a more abstract level than an actual present observer. It encompasses the observational contract, so to speak, between the hardware manufacturer and the programmer who buys the contract by reading the manuals. Is it not more of an observational stance developed during programming and debugging than something requiring me to sit glued to oscilloscopes or monitors? More on the reproduction of components and autopoiesis anon.


In my cited comments, I was addressing a "conceptual" space as the setting for addressing a C-Life beastie (program, pattern, whatever...). I was therefore concentrating on the one-time recognition sort of case. Continued functioning of a systemic unity (once recognized), which is what I think you're pointing at here, is something else. Now that I think of it, though, I would not take the presumed persistence of the recognized network of relations as gospel. Consider the point made by Stafford Beer in his preface to Autopoiesis and Cognition (Maturana & Varela, 1980, pp. 66-67), where he is talking about the transcendental character of objects a la Kant (i.e., their "it"-ness):

"[Maturana & Varela's]... 'it' is notified precisely by its survival in a real world. You cannot find it by analysis, because its categories may all have changed since you last looked. There is no need to postulate a mystical something which ensures the preservation of identity despite appearances. The very continuation is 'it'. ....and I note with some glee that this means that Bishop Berkeley got the precisely right argument precisely wrong. He contended that something not being observed goes out of existence. Autopoiesis says that something that exists may turn out to be unrecognizable when you next observe it."


OK it is now clear to me that at no level does an ALife program have autopoietic closure because of the intentional contribution of the programmer.


Hmmmm. I'm not sure about this, so let me make a few remarks (I mean only to be cautious, not pedantic). The intentional contribution of a programmer might determine the applicability of allopoietic vs. autopoietic status (the former having as a "purpose" something other than the maintenance of their own organization). Designed machines are allopoietic. The pertinent notions of "closure" are organizational and operational. I can't think of a basis in the literature for claiming that these are precluded by a designer's intentionality -- they are features of the network of relations making up the discerned system. In issue no. 5, I was driving mainly at the point (which has bedeviled me for ages...) that it is the "implied agency" (of the supportive artifacts) which precludes software from being considered self-reproducing, and hence autopoietic. This agency implies that any autonomous system (present to inspection) is not the programmed (i.e., allopoietic) one. This does not, however, close the issue; it may well be that the allopoietic (programmed) unity is but a component of a larger network which exhibits autonomy, if not autopoiesis. I can't shake the feeling that we're focusing too much on the standard subsets of "hardware", "software", etc. If there's an autonomous entity to be found in an A-Life application, I suspect it will be discerned in some intersection / union / creative permutation of the "hard" and "soft" spaces.


I'm sure I'm wandering down well beaten tracks about the ontological distinction between simulation and reality. Most artificial life, apart from robotic Brooks-type stuff, seems to be about dreaming up some highly complex exotic graphical simulation scape, some finite state automata given some cuddly name like `critters', some random or directed search algorithms for modifying the FSA transition rules, and run like hell folks Frankenstein is around the corner as critters fight it out to see who will work out the transition rules to hypnotise their programmer to build them a robot body to escape the low rent CPU they currently have to put up with :-) Silliness aside, as you pointed out elsewhere in ...[issue no. 5]... this is heading straight for the bottomless pit that macho AI fell into. What's the difference between a simulated pistol (i.e. got full set of Newtonian equations describing most aspects of it, a nice ray-tracing picture etc.) and someone pointing a real one at you? Seems obvious to me. One's made out of molecules whose arrangement can kill you; the other isn't. Ah ha, but the mind is different, it's just information or pattern, so it's just a technological problem to do a fine enough resolution scan on all the molecules in my brain, i.e. get a perfect information snapshot of it, load it into a CM-5000 and, well hey, AI as dumb as me has arrived.

My point is: everyone on this list probably now buys the enactive, embodied cognition argument that mind is more than some set of recursive operations on information which are hardware independent. So even if you had a complete theory of brain function and could instantaneously get a complete state snapshot, if you ran this on a digital computer, would intentionality still remain?


Ahhhh -- the old brain snapshot conundrum!! I've seen this same issue posed in several places on the Net recently, and I've yet to jump into any of those discussions. Let me take a shot at it now.

First, both the snapshot (some sort of artificial representation of brain state) and the digital simulation (enacting the representation) are manifested in different domains than the domains in which the original intentionality was manifest. This holds for both senses of intentionality:

(1) Colloquial / pre-Brentano sense. This is intentionality as "to intend, to plan" -- pretty much as the term is used in everyday speech.

(2) Brentano / Husserl, etc., sense. This one is different, and it always drives me nuts trying to elucidate the distinction. As I understand it, Brentano shifted the focus of the term "intentionality" from direction of action to the direction of attention or apprehension (e.g., concentrating on a concept or image). It is my understanding that this second usage of "intentionality" is the one typically meant by Husserl and later phenomenologists, as well as being the sense belabored by Searle in his book Intentionality.

In both cases, intentionality is a description of a manner in which the observer is coupling with her own cognitive domain. In the first case, she is coupling with descriptions of non-present (imagined, potential) events; in the second case she is coupling with descriptions of entities / objects. Now, such recursive coupling (the hallmark of the observer) is determined by the connectivity of the organizationally closed nervous system. The state of the nervous system (e.g., taken as a set of variables), in and of itself, does not reflect this connectivity. I would claim that any digital simulation of such a state description would necessarily lose the connectivity aspects which help determine intentionality in both senses.

Furthermore, I would claim that a complete state / connectivity simulation would have no claim to being faithful to the ontogeny of the cognitive system being modeled. By this I mean that a full-blown state / connectivity replication of my planning to ship this issue of Observer tonight (intentionality-1) may result in an artifact directed toward action (or frantic thrashing), but without any specific direction toward this action I'm now doing. A similar replication of my focused contemplation of X's face (intentionality-2) might similarly result in an artifact directed toward some discernible visual object, but not necessarily with the emotional attachments.

I'm not trying to insert any of the vitalistic elements which Maturana and Varela sought to eject from cognitive studies. Instead, I'm just pointing out that the domain of manifestation for the replicant is not the same as the domain of interactions for the replicated. You might capture the state of me trying to get Observer on its way, but taken out of this situation ("involvement whole" per Heidegger), that state might be devoid of potential for continuation. The prior history of ontogenic coupling, no matter how well simulated, cannot be expected to continue on identical trajectories in disparate domains. Think of awakening suddenly from a bad dream -- e.g., limbs thrashing as if fighting, sweating, screaming, etc. You are then in a state "replicating" that of your "dream domain". Once you find yourself in the "real world" (safe and sound), you "couple" anew with the warm, dark sanctity of your own bedroom, often entirely forgetful of the frightening situation.


Seems to me that Alife runs slap bang into the same problems, but is useful to sharpen them because it approaches systems like viruses which are considered on the boundary between animate and inanimate. Even such simple structures display incredible structural and organisational complexity, the prime example being that cute retromutator, HIV. This suggests a question. Are viruses autopoietic?


Yep. I have no trouble committing myself on that issue.


Could a computer virus be autopoietic? They are simple programs which can reproduce themselves and manipulate hardware and humanware to colonise whole computer labs! They run frequently without users observing them and can have consequences far beyond the intentionality of their original programmer. Since I've asked the question, I'll also try and answer it.

The autopoietic concepts of structure, organisation, boundary and closure have been derived from examination of self-maintaining cells. It seems to me that examination of such a cell, e.g., a neuron, and its environment (other cells, energy and waste disposal pathways) highlights the importance of physical spatial proximity and structure for information processing. Let's spawn off the debate about the referential status of the information as a background job for the moment. It seems from neuroscientific data that:

"Software and hardware are one [and] the same in the nervous system. ... Extensive evidence indicates that the brain is not an immutable series of circuits of invariant elements; rather it is in constant structural and functional flux. The digital computer analogy is fatally misleading."

(Black, I.B. Information in the brain: A molecular perspective. Cambridge, MA: MIT Press, 1988, pp. 2-3)

So right away it's problematical ascribing mappings of a logical calculus (bound to an observational system as described) to an arbitrary (manufacturer's decisions) CPU, bus, RAM system as possessing the same kinds of structure, organisation, boundary and closure. Sure, hardware has organisation and is bounded in a box, and programs have structure and closure in the programmer's representational scheme (float*float = float), but don't the differences between such an allopoietic system and autopoietic systems mean that Alife can never get off the ground as autopoietic? Brooks' subsumption architectures, while not containing explicit representational assumptions, are nonetheless designed and built by his grad students and do not adapt to the environment. So they are allopoietic systems too.


I don't think A-Life will "get off the ground as autopoietic" so long as we are attempting to ascribe autopoiesis to the hardware or the software in isolation. If you look only at the hardware, you've succeeded in specifying a space (the physical space) and obtaining a measure of structural determination. Fine. However, this is a fairly static picture, and it gets tricky to explain function and change. If you switch to looking only at the software (by this I mean e.g., the code, the data structures, the logical flow, etc.), it is no longer clear within what space you are delineating the system. Furthermore, the "agency" of the hardware, operating system, etc., seems to cry out for incorporation or inclusion before you can make any claims about autonomy.

Now that I think about it, Conor, you never got around to telling us whether a computer virus is autopoietic!




Conor, I can't find where you got the phrase "pattern of relations of structural reproduction". Please clarify what you're pointing at; I think it's something I'd like to point at, too.


Well, many people in AI and Cog Sci believe that the essential characteristic of mind is pattern. Here pattern is more generic than information because it seems to reduce the need for an observational representational frame. Autopoiesis is a theory of structural reproduction for self-maintenance which, it seems to me, attempts to provide an incremental theory of intentionality, from the inside out of cells and bottom up in evolution, so that we can get rid of the homunculus bedevilling most representational schemes. But the success of such a program requires precise definitions of base terms such as pattern and relation. Many systems are involved in production and reproduction of their internal state which permits continued production. A pattern of relations of structural reproduction is, to me, a description of such a system. So we can have pattern1 = bacteria type a, pattern2 = bacteria type b, etc. Possibly it's just tautology.


All that is true. However, I would like to go back to your original query (issue no. 5) --


This issue of patterns and relations has been causing me to lose sleep for a long time. If a relation is discerned with respect to one or more objects, then what is a "pattern of relations" discerned with respect to???