ORIGINAL ARTICLE

Relevance versus generalization in cognitive engineering

Alex Kirlik

Received: 10 May 2011 / Accepted: 10 November 2011
© Springer-Verlag London Limited 2012
Cogn Tech Work, DOI 10.1007/s10111-011-0204-5

A. Kirlik
Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, IL, USA
e-mail: kirlik@illinois.edu

A. Kirlik
Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA

A. Kirlik
Department of Industrial Engineering, University of Illinois at Urbana-Champaign, Champaign, IL, USA

A. Kirlik
The Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Champaign, IL, USA

Abstract  The purpose of this article is to describe how research at the intersection of cognition, technology, and work can be generalized beyond the source context of scientific inquiry and confirmation. Special emphasis is given to resolving confusion about the use of terms such as "ecological validity" and the "real world." The ultimate goal is to foster a more productive dialog on the merits of where and how research on important cognitive engineering topics, such as cognitive adaptation to change and uncertainty, should be conducted.

Keywords  Cognition · Representative design · Scientific generalization · Ecological validity

1 Introduction

Anyone who is seriously involved in trying to solve socially relevant problems in human factors and cognitive engineering quickly learns that guidance for understanding how sociotechnical systems function and how interactive technologies should be designed is not easy to find in the academic literature in psychology or cognitive science. This situation recently prompted the noted Carnegie Mellon cognitive scientist Roberta Klatzky to implore her colleagues to "Find me the Apps!" (Klatzky 2009, p. 524), and to "teach applications, not promissory notes" (p. 528). What is it about the social, behavioral, and cognitive sciences that makes it so difficult to translate theory into practice? What are we to make of this situation?

I once spent a considerable period of time seeking an answer to this question in an analysis of the differences between the content and logical structure of theories that would best provide guidance to those involved in designing the environment (for people), versus the content and logical structure of theories that would best provide guidance to those involved in understanding what goes on in people's heads within a designed environment—designed, say, to conduct psychological research in a laboratory task or any other contrived setting (Kirlik 1995). At that time, I concluded that it seemed fairly straightforward that one would be naïve to search for much guidance for human factors and cognitive engineering in psychological research that viewed its central aim as solving, as an engineer would put it, a "system identification" problem. That is, in any area of psychology in which the goal was to conduct research to inform theory about what goes on between the eye and hand, so to speak. As any engineer knows, one solves system identification problems by subjecting the system (a "black box") to a variety of inputs, or input signals (for dynamic systems); one records the outputs so generated, and then one works from knowledge of these inputs and outputs to define the nature and parameters of the function that maps one to the