COMPUTATIONAL CONCEPTS IN SECOND-ORDER CYBERNETICS
CONSTRUCTIVIST FOUNDATIONS vol. 9, N°1

Ashby's Passive Contingent Machines Are Not Alive: Living Beings Are Actively Goal-Directed

Tom Froese
Universidad Nacional Autónoma de México
t.froese/at/gmail.com

Upshot • Franchi argues that Ashby's homeostat can be usefully understood as a thought experiment to explore the theory that life is fundamentally heteronomous. While I share Franchi's interpretation, I disagree that this theory of life is a promising alternative that is at odds with most of the Western philosophical tradition. On the contrary, heteronomy lies at the very core of computationalism, and this is precisely what explains its persistent failure to construct life-like agents.

Introduction

« 1 » Stefano Franchi's article is focused on a neglected aspect of Ross Ashby's theoretical framework, which characterizes the phenomenon of life in terms of passivity and contingency. In other words, the primary condition of the organism is conceived of as an equilibrium of inactivity, achieved by random convergence. Franchi argues that this puts Ashby's cybernetics in tension with contemporary trends in philosophy of biology, which emphasize intrinsic teleology, autonomy, agency, and enactive perception (e.g., Weber & Varela 2002; Thompson 2007). Given Ashby's explicit influence on these trends (e.g., Di Paolo 2003; Ikegami & Suzuki 2008; Froese 2009), his theory of life and this implicit tension deserve a closer analysis.

« 2 » Although I concur with Franchi's interpretation, I strongly disagree with his conclusions about how we should resolve the tension he has identified. Whereas Franchi wants a fuller development of Ashby's theory, I see an opportunity for exploring genuinely alternative possibilities.
Briefly, aspects of Ashby's passive contingent theory of life have already been implicitly tested in computationalism (passive, non-contingent machines) and dynamical systems approaches (passive, contingent machines). For example, one passive non-contingent mechanism that is currently receiving a lot of high-profile interest is predictive coding based on Bayesian inference. Perhaps expectedly, following William Grey Walter's joke about the homeostat's "sleepiness" (§10), this approach is faced by a so-called "dark room" problem, i.e., the question of why an agent should be motivated to do anything at all if it can just shut itself away (Clark 2013). Similar problems of overcoming passivity are also encountered by approaches that follow Ashby more closely, such as mobile robot designs that replace non-contingent cognitivist architectures with embodiment and situatedness (Dreyfus 2007).

« 3 » I think that any theory that sees the phenomenon of life as essentially a quest for eternal stasis, be it in a contingent or non-contingent manner, is misguided. At best such a theory only accounts for pathological behavior (Froese & Ikegami 2013). Organisms are intrinsically active and their behavior is non-contingent. The real challenge therefore lies in the development of a formal framework that can do justice to a conception of life as a non-equilibrium, self-producing and self-transforming phenomenon that is guided by its own emergent goals (Froese et al. in press).

A tension between Ashby and enaction

« 4 » Franchi states that "the Ashbian organism will always be trying to accommodate itself to its environment by whatever means necessary: it is essentially a passive machine whose activity is a by-product of its search for non-action" (§10). I have reached a similar interpretation of Ashby's ideas in their scientific and historical context (Froese 2010).
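The "dark room" problem mentioned above can be given a toy illustration. The following sketch is purely hypothetical (the room names, sensory values, and the use of squared prediction error as a stand-in for Bayesian surprise are my own illustrative assumptions, not drawn from Clark 2013); it shows why an agent whose only imperative is to minimize prediction error ends up preferring the most predictable, stimulus-free state:

```python
# Toy illustration of the "dark room" problem (hypothetical numbers):
# an agent that only minimises prediction error prefers whichever state
# makes its sensations most predictable -- here, the dark room.

rooms = {
    "dark": [0.0, 0.0, 0.0],      # no stimulation: perfectly predictable
    "lively": [0.3, -0.8, 0.5],   # varied stimulation: hard to predict
}

def surprise(observations, prediction=0.0):
    # Squared prediction error as a crude stand-in for Bayesian surprise.
    return sum((o - prediction) ** 2 for o in observations)

# The error-minimising agent shuts itself away in the dark room.
best = min(rooms, key=lambda r: surprise(rooms[r]))
print(best)  # -> dark
```

Nothing in this objective ever gives the agent a reason to leave, which is exactly the motivational deficit the commentary attributes to passive conceptions of life.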
Moreover, I have also found that Ashby's theory creates an implicit tension in the writings of later proponents of systems biology. For example, his general systems theory significantly influenced the development of the theory of autopoiesis by Humberto Maturana and Francisco Varela (1980), but since Ashby had in fact viewed life as essentially passive, his influence undermined Maturana and Varela's intended aim to provide a systematic account of the self-asserting autonomy of the living (Froese & Stewart 2010).

« 5 » It is therefore no surprise that Maturana (2011) has distanced himself from my Ashbyan interpretation of his work, although he still fails to properly overcome Ashby's implicit influences. By rejecting Varela's eventual turn toward a Kantian interpretation of autopoiesis (Weber & Varela 2002), Maturana stays committed to a theory of life as passive-contingent. Indeed, classical autopoietic theory can be interpreted as a well-developed instance of what Franchi has called Ashby's generalized homeostasis thesis (Froese & Stewart 2010). The growing tension between Maturana's autopoietic theory and ongoing developments in enactive cognitive science (Villalobos 2013) can therefore be usefully understood as an echo of the tension originally provoked by Ashby's work.

« 6 » Nevertheless, despite these frictions, I believe that the enactive approach can provide crucial help in defending Maturana's guiding intuition that there is a qualitative difference that distinguishes the living from the non-living (Froese & Stewart 2013), a difference that does not even exist from Ashby's point of view.

Practical failures, theoretical shortcomings

« 7 » Why is it implausible to think of life as essentially passive? We can derive some insights from the practical failure of attempts to engineer artificial agents on the basis of this principle.
Already, Ashby's own failure to take the homeostat work further with his follow-up DAMS project could be taken as an indication of the theory's shortcomings (Pickering 2010). Subsequently, all of symbolic AI shared his view of life as a passive phenomenon (albeit non-contingent passivity, since goals are explicitly represented). Just like Ashby's homeostat, a computer only reacts to commands, either from the user or from software triggers, until it reaches yet another resting state. This fundamental heteronomy at the heart of symbolic AI can be used to explain its well-known practical failures to construct life-like artificial agents, and thus motivates an enactive approach to AI that places autonomy at the core of life and agency (Froese & Ziemke 2009).
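The homeostat's logic of reacting "until it reaches yet another resting state" can be sketched as a minimal ultrastable system in the spirit of Ashby's design. The dynamics, gains, bounds, and trial limits below are illustrative assumptions of my own, not Ashby's actual circuit: essential variables evolve under random linear couplings, and whenever one of them transgresses its viability limit, the couplings are re-drawn at random (Ashby's contingent step-function change) until a wiring is found under which all activity dies away.

```python
import random

# Minimal sketch of Ashby-style ultrastability, a "passive contingent
# machine": run the dynamics freely, and re-randomise the couplings
# whenever an essential variable leaves its viability bounds.
# All gains, bounds and limits here are illustrative assumptions.

def ultrastable(n=4, limit=10.0, rest=1e-2, steps_per_trial=500,
                max_trials=200, seed=1):
    rng = random.Random(seed)
    for trial in range(max_trials):
        # Contingent step-change: draw a completely random wiring.
        w = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
        x = [rng.uniform(-1, 1) for _ in range(n)]
        for _ in range(steps_per_trial):
            # Linear update of the essential variables under wiring w.
            x = [sum(w[i][j] * x[j] for j in range(n)) for i in range(n)]
            if max(abs(v) for v in x) < rest:
                return trial  # equilibrium of inactivity reached
            if max(abs(v) for v in x) > limit:
                break         # viability violated: re-wire at random
    return None               # no resting wiring found within the budget

trials_needed = ultrastable()
```

Note what the sketch makes vivid: once a stable wiring is found, the machine's variables decay to rest and nothing in the mechanism can ever set it in motion again. Its activity really is, as Franchi puts it, a by-product of its search for non-action.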