BRIEF REPORT

Accuracy of Inferring Self- and Other-Preferences from Spontaneous Facial Expressions

Michael S. North · Alexander Todorov · Daniel N. Osherson

Department of Psychology, Princeton University, Green Hall, Princeton, NJ 08540, USA
Correspondence: mnorth@princeton.edu

J Nonverbal Behav (2012) 36:227–233. DOI 10.1007/s10919-012-0137-6
Published online: 10 July 2012
© Springer Science+Business Media, LLC 2012

Abstract  Participants' faces were covertly recorded while they rated the attractiveness of people, the decorative appeal of paintings, and the cuteness of animals. Ratings employed a continuous scale. The same participants then returned and tried to guess ratings from 3-s videotapes of themselves and other targets. Performance was above chance in all three stimulus categories, thereby replicating the results of an earlier study (North et al. in J Exp Soc Psychol 46(6):1109–1113, 2010) but this time using a more sensitive rating procedure. Across conditions, accuracy in reading one's own face was not reliably better than other-accuracy. We discuss our findings in the context of "simulation" theories of face-based emotion recognition (Goldman in The philosophy, psychology, and neuroscience of mindreading. Oxford University Press, Oxford, 2006) and the larger body of accuracy research.

Keywords  Face perception · Facial expressions · Accuracy · Self-accuracy · Social cognition

Introduction

How accurately can people read a casual emotion from a face whose owner does not suspect that s/he is under observation? Few quantitative studies have addressed this question. Although many valuable experiments have focused on the recognition of emotion, aside from North et al. (2010) none has involved low-level emotions decoded from dynamic, unwitting faces that are briefly encountered.

Research evaluating people's ability to infer emotions from the face typically relies on posed, still images. For example, studies incorporating Ekman's battery of basic emotions find reliable emotion categorization from photos of posing actors (e.g., Ekman 1989; Hess