Commentary: The Torrance Tests of Creative Thinking Already Overcome Many of the Perceived Weaknesses That Silvia et al.'s (2008) Methods Are Intended to Correct

Kyung Hee Kim
The College of William and Mary

Silvia et al.'s (2008) primary motivations for exploring and proposing their subjective scoring method are their perceived deficiencies of current divergent thinking tests. First, scores on divergent thinking tests frequently correlate highly with general intelligence. Second, the scoring of divergent thinking tests has changed little since the 1960s. Third, the necessity of instructing people to be creative prior to taking divergent thinking tests is integral to obtaining useful responses and needs to be reaffirmed. Fourth, and finally, the problems posed by uniqueness scoring (confounding with fluency, the ambiguity of rarity, and the seeming "penalty" imposed on large samples) need to be addressed.

First, Kim's (2005) meta-analysis indicated that the relationship between divergent thinking test scores and IQ (r = .17) is negligible, which supports the underlying belief that creativity and intelligence are separate constructs. According to Kim's (in press) meta-analysis, divergent thinking test scores predict creative achievement (r = .22) better than IQ does (r = .17). Further, 51.8% of the 274 correlation coefficients incorporated in the study used the Torrance Tests of Creative Thinking (TTCT; Torrance, 1966), and the TTCT predicted creative achievement (r = .33, p < .0001) better than other measures of creative potential (e.g., the Wallach & Kogan Divergent Thinking Tasks [Wallach & Kogan, 1965], the Guilford Divergent Thinking Tasks [Guilford, 1967], Sounds and Images [Torrance, Khatena, & Cunnington, 1973], Word Association Tests [Gough, 1976], etc.). In this meta-analysis, art, music, writing, science (including mathematics, medicine, and engineering), leadership, and social skills were used to measure creative achievement.
Among these different types of creative achievement, musical achievement was predicted better by IQ than by measures of creative potential, whereas art, science, writing, and social skills were predicted better by measures of creative potential than by IQ. This finding suggests that creativity test scores account for more variance in creative achievement than IQ does, and therefore may predict overall creative achievement better.

Second, in Silvia et al.'s (2008) objection, "Methods of administering and scoring divergent thinking tasks have changed little since the 1960s," the authors cited Torrance (1967) as one example. However, the scoring of the TTCT has changed and improved since the 1960s: The TTCT was renormed in 1974, 1984, 1990, 1998, and 2007. The 1984 TTCT-Figural manual simplified the scoring procedures and provided a detailed Scoring Workbook (Ball & Torrance, 1984) in addition to the Norms-Technical Manual. Two norm-referenced measures of creative potential, Abstractness of Titles and Resistance to Premature Closure, were added to Fluency, Originality, and Elaboration; Flexibility was eliminated because it correlated highly with Fluency (Hébert, Cramond, Neumeister, Millar, & Silvian, 2002).

Third, the TTCT does instruct people to be creative. For instance, the TTCT directions state, for Fluency, "Make as many different pictures or objects as you can and put as many ideas as you can in each one," and for Originality, "Try to think of things that no one else will think of" (Torrance, 1966, p. 2), points that Torrance specifically emphasized. When Torrance developed the TTCT in 1966, he disagreed with Guilford, who did not want to give clues concerning desired responses when test takers were asked to think of as many uses of each object as they could.
Guilford's instructions did not motivate participants toward divergent thinking, whereas the directions for the TTCT are intended specifically to motivate the test taker toward fluency and originality (Torrance, 1994). Torrance, like Silvia et al., considered the motivation given in the instructions to be very important. He explained that we would never attempt to measure jumping ability by measuring how high or far a person just happened to be jumping at a particular time; rather, we would try to motivate him or her to jump as high or as far as he or she can (Torrance, 1994).

Fourth, Silvia et al. (2008) correctly point out that, on most measures of creativity, many research findings support the existence of high correlations between originality and fluency (e.g., Chase, 1985; Dixon, 1979; Heausler & Thompson, 1988; Kim, 2006b; Torrance, 1979). Simonton (1990) indicated that a person's originality is a function of the number of ideas formulated. Torrance and Safter (1999) confirmed that a person who generates a large number of alternatives is more likely to produce original ideas. However, measures of originality usually predict creative behavior more accurately than do measures of fluency (Torrance, 1972a, 1972b, 1972c, 1974). Thus, although fluency increases the chance that original ideas will be produced, there is no guarantee that fluency will generate original ideas (Torrance, 1979). More importantly, however, Silvia et al. could not avoid high positive correlations between fluency scores and uniqueness scores in their own study, even though they intended to solve this perceived problem.

In Silvia et al.'s (2008) objection to the ambiguity of rarity, the authors stated,

    The objective 0/1 system is not as objective as it seems: It will tend to give 1s to weird responses and to common responses that raters would judge as uncreative. Some evidence for this claim comes from
Correspondence concerning this article should be addressed to Kyung Hee Kim, School of Education, Jones Hall, The College of William and Mary, PO Box 8795, Williamsburg, VA 23187. E-mail: KKim@wm.edu

Psychology of Aesthetics, Creativity, and the Arts, 2008, Vol. 2, No. 2, 97–99. Copyright 2008 by the American Psychological Association. 1931-3896/08/$12.00. DOI: 10.1037/1931-3896.2.2.97