Complexity and its observer
Does complexity increase in the course of evolution?

Manfred Füllsack

Paper presented at the 11th Congress of the Austrian Philosophical Society (OeGP), 2-4 June 2011, at the University of Vienna

Abstract

The paper endeavors to gain a better understanding of the role of the observer in attempts to answer questions like "what is complexity?", "can it be measured?" and "does it increase?". Following Heinz von Foerster and others in considering complexity observer-dependent, conceptions of Spencer-Brown, of Luhmann, and of Varela and Maturana are taken to draft a formal conception of observation and, on its basis, to discuss suggestions such as informational (Shannon), algorithmic (Solomonov, Kolmogorov, Chaitin), statistical (Crutchfield) and physical (Adami) complexity. Finally, in order to further illuminate the concept of the observer, the presumably indispensable aspect of complexity reduction is illustrated with the help of a Genetic Algorithm.

Does complexity increase in the course of evolution? Or does it decrease? According to the Second Law of Thermodynamics, the latter should be the case. But looking even superficially at the richness of nature, one comes to believe in an ongoing and open-ended emergence of increasingly complex structures that stabilize further and further from thermodynamic equilibrium - with humans and their creations possibly being the latest manifestations of this. But is this apparent rise in complexity an objective feature of the world? Or is it just the impression of an observer whose capacity to cope with complexity is categorically limited? And if so, can we learn something about complexity by observing the observer? Among the numerous considerations on complexity, there are several suggestions on how to measure it and thereby "objectively" prove it.
Some biologists, for example, suggest counting the number of structural components and functional properties of organisms, such as the number of limbs or the number of possible behaviors, and deducing from rising numbers an evolutionary increase in complexity (McShea 1995, 2000).1 Information theorists and physicists propose to measure the predictability of an event happening in a certain possibility space, or of a symbol being chosen from a sequence of symbols in a given alphabet (Shannon 1948). Computer scientists suggest capturing complexity in terms of the size of the shortest algorithm needed to reproduce complex phenomena (Solomonov 1964, Kolmogorov 1965, Chaitin 1966), and further refine this suggestion in regard to the runtime of the shortest algorithm when applied to a Universal Turing Machine (Bennett 1988), or in regard to the "thermodynamic depth", which would inform about how hard such algorithms are to build (Lloyd/Pagel 1988, Crutchfield/Shalizi 1999; for further suggestions see Lloyd 2001, Biggiero 2001, Mitchell 2009, and below).

As interesting and inspiring as these suggestions are, they all seem to run up against a fundamental epistemological (and thus philosophical) problem. As Foerster (1982), Casti (1994), Gell-Mann (1995, 1996) and others emphasized, methods to measure complexity depend on the knowledge and the understanding of the one who perceives a phenomenon as complex and

1 Complexity theorists, however, object that the multiplicity of structural or functional parts might indicate complicatedness rather than complexity. In terms of informational complexity measures - Shannon entropy or algorithmic complexity, for instance (see below) - small and weakly coupled systems can exhibit more complexity than large and strongly coupled ones. Even an increase in fitness cannot unambiguously be taken as an increase in complexity, as bacteria, for instance, might be quite fit in their niches compared to more complex multi-cellular organisms.
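The two measures sketched above can be made concrete in a few lines of code. The following is an illustration of mine, not taken from the paper: Shannon entropy is computed directly from symbol frequencies, while the length of a zlib-compressed sequence serves as a common practical stand-in for the uncomputable algorithmic (Kolmogorov) complexity; the function names are my own.

```python
# Illustrative sketch (not from the paper) of two complexity measures.
import math
import zlib
from collections import Counter

def shannon_entropy(sequence: str) -> float:
    """Shannon (1948) entropy in bits per symbol: H = sum p_i * log2(1/p_i),
    where p_i is the relative frequency of symbol i in the sequence."""
    n = len(sequence)
    return sum((c / n) * math.log2(n / c) for c in Counter(sequence).values())

def compressed_size(sequence: str) -> int:
    """Length of the zlib-compressed sequence, a practical proxy for the
    uncomputable algorithmic (Kolmogorov) complexity: the more regular the
    sequence, the shorter its compressed description."""
    return len(zlib.compress(sequence.encode()))

# A maximally regular sequence carries zero bits per symbol and compresses
# to almost nothing; an alternating one carries exactly one bit per symbol.
print(shannon_entropy("a" * 1000))   # 0.0
print(shannon_entropy("ab" * 500))   # 1.0
print(compressed_size("a" * 1000))   # far smaller than the 1000 raw bytes
```

Note that Shannon entropy already illustrates the observer-dependence the paper discusses: it is defined only relative to a chosen alphabet and possibility space, i.e. relative to how an observer partitions the phenomenon into symbols.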