Moving Faces: Categorization of Dynamic Facial Expressions in American Sign Language by Deaf and Hearing Participants

Ruth B. Grossman · Judy Kegl

© Springer Science+Business Media, LLC 2006
J Nonverbal Behav, DOI 10.1007/s10919-006-0022-2

Abstract  American Sign Language (ASL) uses the face to express grammar and inflection, in addition to emotion. Research in this area has mostly used photographic stimuli. The purpose of this paper is to present data on how deaf signers and hearing non-signers recognize and categorize a variety of communicative facial expressions in ASL using dynamic stimuli rather than static pictures. Stimuli included six expression types chosen because they share overt similarities but express different content. Hearing participants were more accurate in their categorizations but expressed overall lower confidence regarding their performance.

Keywords  ASL · Dynamic facial expressions · Categorization · Accuracy · Confidence

Introduction

American Sign Language (ASL) requires the use of the face not only to express the full range of emotional facial expressions (Ekman & Friesen, 1975, 1978), but also to mark a large variety of language-specific grammatical constructs, such as topics (Aarons, 1996), agreement (Bahan, 1996; MacLaughlin, 1997; Neidle, MacLaughlin, Bahan, & Kegl, 1996), and several different kinds of questions: wh-questions (questions using who, what, where, when, or why), yes/no questions (Baker-Shenk, 1983, 1986; Neidle, MacLaughlin, Bahan, Lee, & Kegl, 1997; Petronio & Lillo-Martin, 1997), and rhetorical questions (Hoza, Neidle, MacLaughlin, Kegl, & Bahan, 1997). Additionally, both spoken and signed languages use facial expressions, such as quizzical, doubtful, and scornful expressions, that accompany natural conversational

Author affiliations:
R. B. Grossman (corresponding author), Lab of Developmental Cognitive Neuroscience, Boston University School of Medicine, 715 Albany Street, L-814, Boston, MA 02118, USA. e-mail: ruthberg@bu.edu
J. Kegl, University of Southern Maine, Portland, ME, USA