- NOTES FROM THE EDITOR
- AUTOPOIESIS AND ARTIFICIAL LIFE (A-LIFE): RESPONSES & DISCUSSION
- I: Specific Responses to Barry's Line of Questioning
- II: General Responses to Barry's Line of Questioning
- CLOSING COMMENTS
Welcome to issue no. 5 of The Observer. In this issue, we'll get started with responses to Barry McMullin's questions from issue 3, meet David Vernon, and air some questions. First, though, I am pleased to report that in the wake of remarks in a Net news group and a couple of targeted announcements in related mailing lists, there has been a surge in new subscribers to our autopoiesis / enaction forum. Last week alone, our 'population' more than doubled. The new subscribers range around the planet and across many research fields. Many express prior work or interests in our focal area, and I would like to welcome their experience as well as solicit their contributions to the forum. Some are newcomers to autopoiesis and/or enactive cognitive science, and I would like to welcome their interest and express my hope that they will find the forum informative and useful.
Finally, an unexpectedly large proportion of the new subscribers have commented that they thought they were the only one(s) interested in this, that they were working in isolation, etc. I have known that feeling well. I guess that's why I brought up the subject of an ongoing autopoiesis forum at the 1992 Dublin conference on Autopoiesis & Perception. When Francisco Varela said there wasn't (nor had there ever been) such a forum, it confirmed my fear that isolation was the norm for the autopoiesis aficionado. His suggestion that I try doing something about it reminded me of the dangers inherent in opening one's big mouth [ ;-) ] and made me question the wisdom of having brought it up. Now that The Observer is gathering momentum and the isolation is dissipating, I trust it need no longer prevail -- and I'm glad I opened my big mouth. -- R.
Personal / professional introductions are a good way of getting to know each other and to outline the range of interests in this forum. All such introductions are welcome. Today's subject is:
- Having completed a Ph.D. on robot vision in 1985, I could not help but feel somewhat disenchanted with the emerging computational theories of vision at the time. This disenchantment arose not from a frustration with the usefulness of the discipline but with its (apparent) shallowness: we could do simple things well but the approaches did not scale well when attempting to deal with much more complex things (flexible objects, ill-defined environments, uncertainty, natural variability, and so on). In the same year, I was given a copy of Varela's "Principles of Biological Autonomy" and there began a lengthy and enjoyable, if sometimes confounding, quest to understand, work with, and develop the theory.
- What happened next.
- Although I didn't know it at the time (or didn't have the wherewithal to articulate it), it was the representationalism inherent in conventional computer vision which jarred so much. Autopoiesis seemed to offer, not so much an alternative, but an approach premised on a sounder foundation: specifically, the concept of self-organization. This foundation is, if nothing else, less pejorative in that it seems to assume less about the domain of discourse (the system environment) than did (and does) representational vision.
Since then, my research has proceeded along two parallel paths. On the one hand, I have been developing an understanding of the philosophical (ontological) foundations of autopoiesis and this has led me directly (almost) to phenomenology and away from idealism and realism. On the other hand, I have been attempting to develop a computational simulator for autopoietic systems which is grounded in the "real" world, i.e. it should interact with the environment with which you and I are familiar. In more specific terms, my work is concerned with identifying, in a prescriptive manner, the requirements for any instantiation of autopoietic organization, i.e., to specify the structural conditions necessary for the actualization of an autopoietic system.
Significantly, it is the symbiosis of the two paths that has been the most satisfying (and potentially fruitful) aspect of the work in that I wish to see what "additional" considerations must be addressed if we are to go beyond autopoiesis to more sophisticated autonomous systems.
All of the work has been founded squarely on Varela's theories; I have tried to exploit Spencer Brown's Calculus of Indications and Varela's extensions as my working formalism, and I have tried to incorporate Bennett's "Systematics" of multi-term systems to develop the work. So far, I have made no fundamental breakthroughs but at least now I think I know what I am trying to do!
- All of the work I outline above has been done with Dermot Furlong in the Department of Microelectronics and Electrical Engineering, Trinity College, Dublin.
- Who I am.
- I have been a lecturer in the Department of Computer Science, Trinity College, Dublin since 1983 and I am at present on a career break in the Commission of the European Communities (DGXIII).
My e-mail address is email@example.com -- David Vernon
SOME RESPONSES TO BARRY MCMULLIN'S QUESTIONS
In issue no. 3 of The Observer, Barry McMullin (Dublin City University: personal summary in issue no. 2) offered some questions to get a thread started on how to apply the principles of autopoiesis to self-* (* = organizing; reproducing) automata realised in software. I have rearranged and blended the responses so far into the following. If any respondent feels I have damaged his contribution, please contact me for a heartfelt apology. -- R.
A BRIEF RECAP OF BARRY'S QUERIES (for the full account, see issue 3):
The notions of organization and structure are fundamental to autopoietic theory; yet I find I am not always clear on their meaning. So I should like to consider a simple framework in which I feel unsure of how these terms should be interpreted [John Conway's so-called Game-of-Life (C-Life)], and ask you for your views. In the C-Life universe we can recognise and identify a variety of entities (unities? systems?). There are the individual cells, or cell-automata.
[...Barry's description of these cells deleted...]
Now, for each of these kinds of entity or system, I would like to know the answers to the following questions:
- (i) What is its structure?
- (ii) What is its organization?
- (iii) What is its boundary?
- (iv) Is it organizationally closed?
- (v) Is it autopoietic?
I should also be interested in a more general prior question: do these questions have definitive answers at all? And if not, why not?
CONOR DOHERTY offers the following direct responses to Barry's questions (i) - (v):
An observer may ascribe "systemhood" to a manifested structure which is only a subregion within a subsuming organizational whole. This may be due to the observer's limited ability to couple with the whole (e.g., a limit to the intersection of the domains in which (a) the observer operates as such; and (b) the whole manifests its organization). It may also result from the observer's ontogeny (e.g., the bias of prior "learned" categories). I grant this is a sloppy summary, but it leads to the point: systems are ascribed by observers. The systems delineated depend on both (1) the intersection of the observer's cognitive domain and the domain of manifestation for any system; and (2) the manner in which the observer "slices up" that domain of intersection.
Now, Conor's responses rely on organization being mapped onto the transition rules (the regularities of occurrence of the graphical cells), and structure being mapped onto the graphical cells (singly or as a set of conventionalized composites). Given that the system of interest includes the graphical component / aspect, I go along with this. Now let me disconnect the monitor. I no longer see the patterns, but the "program" continues operating in a regular fashion. Assume some alternative means of inspection, e.g. a numerical printout, as assurance of continued operation. I can still apply Conor's mappings by shifting my application of "structure" from the screen to the printer.
Now let's shut off all the input / output peripherals. Something's still going on in the circuitry, and it presumably still manifests regularity. The observer now has to shift the "horizon" for discerning structure to (e.g.) RAM, registers, etc. Now we're getting down to the "minimal case" -- paring away the structural (display) extensions to (hopefully) leave only the most basic kernel of this C-Life beastie for further analysis. It's still the same beastie Barry offered up for inspection, but now stripped down to its "innards".
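The "stripped down" beastie can be pictured concretely. Here is a minimal sketch (mine, not Barry's, assuming only the standard Conway rules: birth on three live neighbours, survival on two or three) in which the transition rules -- the organization -- operate on nothing but a bare set of live-cell coordinates -- the structure -- with every display peripheral pared away:

```python
# C-Life with all peripherals removed: the rules act on a bare set of
# live-cell coordinates; any "display" is a separate act of observation.
from itertools import product

def neighbors(cell):
    """The eight Moore neighbors of a cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """One application of Conway's rules to a set of live cells."""
    candidates = live | {n for c in live for n in neighbors(c)}
    return {c for c in candidates
            if len(neighbors(c) & live) == 3
            or (c in live and len(neighbors(c) & live) == 2)}

# A "blinker" oscillates with period 2 -- a regularity an observer can
# discern from a numerical printout as readily as from a screen.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

The point of the sketch is that the regularity persists whether or not anyone renders it; what counts as "structure" depends on where the observer chooses to look.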
This reduction does not, however, permit me to "pin down" the C-Life "system". I have managed to restrict the scope of the space in which it is manifested (by removing the graphical / display extensions), but I have still not determined in which space to address it. I can address the system in an abstract, "conceptual" space as a network of (e.g.) data structures for the cells and finite state transition networks for the "program" itself. I can also address it in the physical space as a network of electrical states (e.g., states of the registers and RAM locations) manifested in a particular physical architecture (e.g., the buses; the connectivity / transition constraints of the circuitry).
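The multiplicity of spaces can itself be sketched. In the following illustration (my own, not from Barry's original posting), the same pattern is manifested in two different "structures" -- a set of coordinates and a packed integer bit-pattern (a stand-in for RAM states) -- related only by an observer-supplied mapping:

```python
# One pattern, two manifestations: a set of (x, y) cells ("conceptual"
# space) and a packed integer ("physical" space, a stand-in for RAM).
N = 8  # an 8x8 bounded grid

def pack(cells):
    """Encode a set of (x, y) cells as one integer bit-pattern."""
    bits = 0
    for x, y in cells:
        bits |= 1 << (y * N + x)
    return bits

def unpack(bits):
    """Decode the integer back into a set of (x, y) cells."""
    return {(i % N, i // N) for i in range(N * N) if bits >> i & 1}

blinker = {(1, 0), (1, 1), (1, 2)}
assert unpack(pack(blinker)) == blinker
# The two manifestations are interconvertible, yet nothing in the bare
# integer announces "blinker": the pattern is recovered only through
# the observer's mapping.
```

Neither encoding is the C-Life system "itself"; each is a structural realization an observer may elect to address.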
(Intermediate pause to note: I don't mean to seem needlessly pedantic -- I'm just trying to suggest that Barry's queries are more complex than they might initially seem. Furthermore, I don't think either of these (or any of many) alternative "interpretations" is necessarily "correct" in any absolute sense.)
Now, in both the stated cases, the manifestation (and continuance) of the network of relations is itself dependent on a subsuming "system" -- my cognitive domain in the "conceptual" case, and the computer in the "physical" case. Interruption or forgetfulness disintegrates the first version, while a fault in the underlying hardware or operating system disintegrates the second. In both cases, there is an implied agency which ensures the persistence of both the network of relations (organization) and its specific manifestations (structure). As such, I would dispose of Barry's question (v) by saying that in neither case is the C-Life beastie (distinguished from its supportive agency) autopoietic, because it does not reproduce its components. Furthermore, I would claim that in neither case is the C-Life beastie autonomous (in Varela's specific usage, cf. Principles of Biological Autonomy), because it does not (in and of itself) maintain its defining network of relations.
(NOTE: Since autopoiesis is a special case of autonomy, I suggest that the more general case be the focus for further analysis and discussion.)
For all 3 instances (Barry's a, b, c), taken with regard to the "stripped down" or "minimal" C-Life beastie, I would guess the following:
Case I: "Conceptual" Space
Case II: "Physical" Space
Closing Remarks: I never have been able to convince myself on these issues when it comes to software. If I had written this some other day, I probably would have come up with marginally different answers. I hope at least to have illustrated the necessity of trying to pinpoint the combination of space and observer involved.
Perhaps more importantly, I've never come up with a satisfactory account of what to do with the "supportive agency" which influences software's manifested "structure". Perhaps I've created my own problems here, but I keep coming back to this issue again and again. More specifically, I wonder:
I guess all this has more to do with Barry's last, general questions (Do these questions have definitive answers at all? And if not, why not?) than with C-Life per se.
Conor, I can't find where you got the phrase "pattern of relations of structural reproduction". Please clarify what you're pointing at; I think it's something I'd like to point at, too.
"Here I want to interject the semantic point that such words as life, purpose, and soul are grossly inadequate to precise scientific thinking. These terms have gained their significance through our recognition of the unity of a certain group of phenomena, and do not in fact furnish us with any adequate basis to characterize this unity. Whenever we find a new phenomenon which partakes to some degree of the nature of those which we have already termed 'living phenomena', but does not conform to all the associated aspects which define the term 'life', we are faced with the problem whether to enlarge the word 'life' so as to include them, or to define it in a more restrictive way so as to exclude them. We have encountered this problem in the past in considering viruses, which show some of the tendencies of life - to persist, to multiply, and to organize - but do not express these tendencies in a fully-developed form. Now that certain analogies of behavior are being observed between the machine and the living organism, the problem as to whether the machine is alive or not is, for our purposes, semantic and we are at liberty to answer it one way or the other as best suits our convenience. ...
"...It is, in my opinion, therefore, best to avoid all question-begging epithets such as "life," "soul," "vitalism," and the like, and say merely in connection with machines that there is no reason why they may not resemble human beings in representing pockets of decreasing entropy in a framework in which the large entropy tends to increase." (ibid. pp. 31-32)
AI also assumed that the essence of its focus (intelligence) was "information", and that information-processing was the model for human intelligence (a faculty undefined then, and still undefined now). The problem was that:
To be sure, there was a diversity of opinions (and ambitions), leading to the distinction between "soft" and "hard" AI -- the former claiming only to use computer models to understand the real thing, and the latter claiming the computerized version was the real thing. In neither case did (or could) AI researchers claim that they (or anyone else) had an adequate grasp of the "intelligence" or "cognition" they sought to replicate. Imagine the pain of recognition when I read of "weak" versus "strong" a-life -- the former seeking "...to illuminate and understand more clearly the life that exists on earth...", and the latter aiming "...toward the long-term development of actual living organisms whose essence is information." (Levy*, pp. 5-6) Deja vu in the extreme!
AI's problem was a lack of balance between a theoretical understanding of intelligence and its attempted simulation in software. Preventing a similar imbalance in a-life research requires attention to theoretical understanding of the "real thing". Autopoiesis originated as such a theory -- a systemic framework for delineating those entities to which we attribute life. The questions Barry raises will hopefully serve as a beginning toward matching the ambitions of a-life research with the understanding of living systems afforded by autopoietic theory. Then let's go back and straighten out what's left of AI. ;-)
* Levy, S., Artificial Life: The Quest for a New Creation, New York: Pantheon, 1992.
"...It (the tessellation example in chapter 3) is fundamentally distinct from other tessellation models, such as Conway's well-known game of "life". (Gardner, 71) and other lucid games proposed by Eigen and Winkler (1976), because in these models the essential property studied is that of reproduction and evolution, and not that of individual self-maintenance. In other words the process by which a unity maintains itself is fundamentally different from the process by which it can duplicate itself in some form or another. Production does not entail reproduction, but reproduction does entail some form of self- maintenance or identity. In the case of von Neumann, Conway, and Eigen, the question of the identity or self-maintenance of the unities they observe in the process of reproducing and evolving is left aside and taken for granted; it is not the question these authors are asking at all." (ibid. p. 22)
At least it might prove worthwhile to investigate your questions with this example in mind. Kind regards, Hans-Erik
Zeleny, Milan, and Norbert A. Pierre. "Simulation of Self-Renewing Systems", in Jantsch, Erich, and Conrad H. Waddington (eds.), Evolution and Consciousness: Human Systems in Transition, Reading, MA: Addison-Wesley, 1976.
Zeleny, Milan. "Self-Organization of Living Systems: A Formal Model of Autopoiesis", International Journal of General Systems, Vol. 4 (1977), pp. 13-28.
Well, that's about it for issue no. 5. The forum is growing, and we're starting to get some discursive "momentum". The foregoing responses to Barry's queries certainly do not exhaust the topic. Many more of you no doubt have ideas, comments, answers, suggestions, etc. -- share them with everyone else. In addition, there have been as many or more questions generated in this issue as have been answered -- meaning there should be a corresponding expansion of the volume and scope of discussion.
COMING ATTRACTIONS: This'n'that on: The Embodied Mind, Spencer Brown and his "calculus of indications", the conundrums of social systems, etc.