Implementing Antifragiles: Systems That Get Better under Change

Andreas Tolk, PhD, Chief Scientist, SimIS, Inc., 200 High St. Suite 305, Portsmouth, VA 23704, USA
John J. Johnson IV, Old Dominion University, 22340 Belle Terra Dr., Ashburn, VA 20148

Abstract

Just as human bones get stronger when subjected to stress and tension, and rumors or riots intensify when someone tries to repress them, many things in life benefit from stress, disorder, volatility, and turmoil. Antifragiles are a category of systems that not only survive in such environments but actually get better. Current systems engineering methods focus on survivability and stability, but how can antifragile systems be architected and implemented? The answer proposed in this paper is the use of the agent metaphor to enable a fuller treatment of systems' functionality. Instead of defining system functionality as fixed functions, systems engineers define agents that provide this functionality but, in addition, can learn and improve in an agile environment. The paper presents a taxonomy of learning agents applicable in this context, as well as the related utility functions required to enable learning in an environment where the engineer may not even anticipate what the system will learn in the changing future.

Keywords

Antifragile, Agility, Fragility, Robustness, Sustainability, Survivability

Introduction

The traditional systems engineering process is rooted in the demand to deal with the large and expensive systems that became technically feasible in the second half of the 20th century. Systems engineering started with an emphasis on methodologies derived from operations research, putting great emphasis on decision making, problem solving, and the analysis of alternatives.
Systems engineering evolved into an effort supporting life cycle management that brought more and more specialists together to support the creation, maintenance, and retirement of system solutions. The IEEE 1220 Standard on Application and Management of the Systems Engineering Process (IEEE, 1998) therefore defined systems engineering as "an interdisciplinary collaborative approach to derive, evolve, and verify a life-cycle balanced system solution which satisfies customer expectations and meets public acceptability" (p. 11). The governing idea was that a group of experts is able to understand the overall life cycle, including the customers' expectations, normally captured as requirements, and the system's public acceptability. The ideal system solution would address technical needs, business models, governance, security, and the conceptual alignment of components in a coherent way that made a system robust and sustainable enough to survive changes in its environment. The well-known waterfall model, Vee model, and spiral model are familiar to most traditional systems engineers. At the heart of all these ideas was the assumption that it is the task of the systems engineer to come up with a concept; create a design; develop the system and implement it; address operation, administration, and maintenance; and finally retire it. If all that was not enough, systems engineers must also honor the constraints of costs, risks, and governance, and the list of system "ilities" that describe the quality attributes.

In 2007, Taleb published the first edition of his book "The Black Swan," which initiated a paradigm shift in the view of systems. A black swan is a positive or negative event that was deemed improbable until it occurred, but that has massive consequences once it happens.
Examples of black swan events are the rise of the personal computer and the Internet on the positive side, or the attacks of September 11, 2001 and the Lehman Brothers bankruptcy in 2008 on the negative side. While systems were robust enough to withstand the preconceived challenges, black swan events produced more change than they could handle.
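The agent metaphor sketched in the abstract can be made concrete in a few lines: instead of implementing a system capability as a single fixed function, the capability is wrapped in an agent that keeps alternative strategies and a utility function, so that disturbances become opportunities to learn rather than merely events to survive. The following is a minimal illustrative sketch under those assumptions; the class, method names, and scoring rule are hypothetical and are not the taxonomy or architecture developed later in the paper.

```python
class AdaptiveAgent:
    """Illustrative sketch of an agent wrapping a system capability.

    Hypothetical example: the agent holds several candidate strategies
    (interchangeable implementations of the same functionality) and a
    utility function that scores their outcomes. Each disturbance in the
    environment triggers re-evaluation, so the agent can end up better
    after stress than before it.
    """

    def __init__(self, strategies, utility):
        self.utility = utility                      # scores an outcome; guides learning
        self.scores = {s: 0.0 for s in strategies}  # accumulated utility per strategy

    def act(self, stimulus):
        # Exploit the currently best-scoring strategy.
        best = max(self.scores, key=self.scores.get)
        return best(stimulus)

    def learn(self, stimulus):
        # A stress or volatility event: try every strategy against the
        # stimulus and credit each one with the utility of its outcome.
        for strategy in self.scores:
            self.scores[strategy] += self.utility(strategy(stimulus))


# Usage: two toy strategies and a utility that simply prefers larger output.
agent = AdaptiveAgent(
    strategies=[lambda x: x, lambda x: 2 * x],
    utility=lambda outcome: outcome,
)
for disturbance in range(1, 5):   # a run of stress events
    agent.learn(disturbance)
result = agent.act(3)             # the better strategy now dominates
```

The key point of the sketch is that the engineer specifies only the utility function, not which strategy will win; what the agent "learns" is determined by the disturbances it actually encounters, which matches the paper's premise that the engineer may not anticipate what the system will need to learn.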