Psychological Review 1987, Vol. 94, No. 4, 455-468
Copyright 1987 by the American Psychological Association, Inc. 0033-295X/87/$00.75

Optimal Timing and the Weber Function

Peter R. Killeen
Arizona State University

Neil A. Weiss
Department of Mathematics, Arizona State University

How is it that counting to ourselves helps us to estimate an interval of time? To address this question, we develop a generalized clock-counter model of duration discrimination that allows error in both the timing and the counting processes. We show that in order to minimize variability in temporal judgments, it is usually to the subject's advantage to segment the interval to be judged into subintervals. The optimal duration of the subintervals will depend on the parameters of the fundamental error equations that relate variance to the duration and number of the subintervals; in most cases, however, the optimal duration will be independent of the duration of the interval to be timed. The canonical form of the Weber function derived from our analysis takes as special cases the forms predicted by various other models of temporal discrimination. For long intervals it reduces to Weber's law, with the constant in that law solely a function of counting error.

When people must produce or estimate an interval of time, but find themselves without a chronometer, it is not uncommon for them to count "one thousand-one, one thousand-two, ..." and so on. Just as tapping movements of the foot help musicians to "keep time," pendular movements of people's vocal tract establish a consistent rhythm for counting. This increases the reliability of their evaluation of the interval (i.e., it decreases the variance of their estimates; Gilliland & Martin, 1940; Petrusic, 1984). But why should it do that? What must the timing process be like to make such a maneuver advantageous? This question motivates the present analysis.
Intuitively one can see that breaking a long interval into subintervals will improve accuracy in timing only if two conditions are satisfied: The sum of the variances of the durations of the subintervals must be less than the variance involved in estimating the interval as a whole, and the counting process should not itself add too much variance to the estimate. We assume that the subject behaves in the following way: If the task is to generate an interval of duration t, the subject indicates when n counts have been completed, each count marking the end of a subinterval of duration d, with n = t/d. If the task is to evaluate the interval, the subject responds "t," corresponding to t, when the count is interrupted by the end of the interval at a value of n, and t = nd. If the task is to judge which of two intervals is longer, the subject decides on the basis of whether n_1 > n_2. Such assumptions are common to most clock-counter models of timing. We will now examine several specific models to see what conclusions are forced upon us by the acknowledged fact that people count in order to time accurately.

This research was assisted by Award 1 RO1 MH 39496 from the National Institute of Mental Health to Peter R. Killeen. We thank R. Church and J. Gibbon for their comments on an earlier draft of this article. Correspondence concerning this article should be addressed to Peter R. Killeen, Department of Psychology, Arizona State University, Tempe, Arizona 85287.

Case 1: Assume that the counting is errorless and that the variance in timing is proportional to the duration of the subinterval to be timed, σ_D² = kd, where k is a constant of proportionality. One example of this is a clock-counter model in which the subintervals are generated by a Poisson process. The variance of the sum of the n subintervals, σ_T², will be

σ_T² = nkd = nkt/n = kt.
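The irrelevance of n in Case 1 can be checked with a short Monte Carlo sketch (ours, not the article's): each subinterval is drawn from a gamma distribution with mean d and variance kd, so the variance of the sum stays near kt however the interval is segmented. The gamma parameterization and the values of k, t, and n are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sum_variance(t, n, k=0.1, reps=200_000):
    """Variance of an estimate of a t-second interval timed as n
    subintervals of duration d = t/n.

    Each subinterval is gamma-distributed with mean d and variance k*d
    (shape d/k, scale k), one way to realize Case 1's assumption
    sigma_D^2 = k*d. Illustrative model, not the article's exact one.
    """
    d = t / n
    samples = rng.gamma(shape=d / k, scale=k, size=(reps, n)).sum(axis=1)
    return samples.var()

# The variance of the whole estimate stays near k*t (= 1.0 here)
# no matter how finely the interval is segmented:
for n in (1, 4, 20):
    print(n, round(sum_variance(10.0, n), 3))
```

Whatever n is chosen, the printed variances cluster around kt, which is the formal content of the claim that counting neither helps nor hurts under these assumptions.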
The last term shows that the variance is independent of n and thus is the same whether the subjects count or do not count (i.e., "count" once at the end of the interval). But the research cited above shows this conclusion to be counterfactual; we must therefore reject this set of assumptions.

Case 2: Assume that the counting is errorless and the variance in timing is proportional to the square of the duration of the subinterval to be timed, σ_D² = kd². This assumption is consistent with Weber's law, which states that the standard deviation (σ_D) is proportional to the magnitude of the stimulus to be evaluated. One example of this is a clock-counter model in which the subintervals are exponentially distributed. The variance of the sum of the subintervals will be

σ_T² = nkd² = nkt²/n² = kt²/n.

Here we see that the variance decreases as n increases, so it is to the subjects' advantage to count. Although these assumptions get the subjects counting, they unfortunately get them going too fast: Because timing error decreases uniformly as n increases, subjects should count as fast as their tongues will permit, and this is seldom observed.

We might proceed in this fashion to converge on other assumptions that might "save" the observed data. But it is convenient to do so only if we believe the counting process is errorless. That may seem a reasonable assumption, but it is generally incorrect. Even in simply counting to 100, people make mistakes (the expected error rate is about 0.2% per digit counted; Healy & Nairne, 1985); error rates may be higher when attention is