Evaluating Performance in a Complex Adaptive System
July 5, 1998

Eoyang, G. & Berkas, T. (1999). Evaluating Performance in a Complex Adaptive System. In _Managing Complexity in Organizations_. Lissack, M. & Gunz, H. (eds.) Westport, Connecticut: Quorum Books, an imprint of Greenwood Publishing Group, Inc.

Glenda H. Eoyang
Chaos Limited
50 East Golden Lake Road, Suite 210
Circle Pines, MN 55014
Fax 612-379-3924
Eoyang@chaos-limited.com

Thomas H. Berkas, Ph.D.
Search Institute
700 3rd Avenue South
Minneapolis, MN 55415-1138
TomB@Search-Institute.org

Abstract

Evaluation is a central issue in all organizations. Many standard evaluation tools, techniques and methods rely on basic assumptions about linear organizational dynamics (predictability, low dimensionality, system closure, stability and equilibration). Some of these assumptions are not valid when a system enters the regime of a complex adaptive system (CAS). New strategies are required to evaluate complex adaptive human systems, and new tools, techniques and methods must integrate assumptions about the dynamical and complex nature of human systems. This chapter summarizes the characteristics of CASs from an organizational perspective, identifies properties of an evaluation system that are consistent with the nature of a CAS, describes tools and techniques that promise more effective evaluation, and outlines the emergent role of the evaluator in a complex environment.

Introduction

Individuals, programs and teams at all levels of an organization are expected to assess and report on their performance. Groups choose to evaluate performance for a variety of reasons. Evaluation data establish a foundation for continuous improvement and build frameworks for fact-based decision making. Such data also establish individual and group accountability and support the effective use of resources.
Organizations in education, non-profit public service, government and business recognize the need for effective formative and summative evaluation. Funders, participants, elected leaders, stakeholders and other constituencies expect organizations to be able to evaluate performance. Most evaluation processes measure performance against predicted goals, and institutions that cannot provide such basic evaluative information increasingly risk losing the support of their funders and other stakeholders.

Historically, evaluation programs were developed for organizations that were assumed to be closed, stable and predictable. In many situations, linear, low-dimension evaluation systems provided adequate data to represent organizational performance approximately; such approaches were close enough to meet the needs of organizations and their supporters. To be effective, however, an evaluation program must match the dynamics of the system to which it is applied. Recent research in organizational management, behavior and psychology indicates that human systems behave as complex adaptive systems. Organizational systems that were once stable are moving outside the range of linear, predictable behaviors and entering into the regime of chaotic or complex