fMRI analysis with the general linear model: removal of latency-induced amplitude bias by incorporation of hemodynamic derivative terms

V.D. Calhoun (a,b,c,*), M.C. Stevens (a,b), G.D. Pearlson (a,b,c), and K.A. Kiehl (a,b)

(a) Olin Neuropsychiatry Research Center, Institute of Living, Hartford, CT 06106, USA
(b) Department of Psychiatry, Yale University, New Haven, CT 06520, USA
(c) Department of Psychiatry, Johns Hopkins University, Baltimore, MD 21205, USA

Received 5 August 2003; revised 11 December 2003; accepted 12 December 2003

Abstract

Functional magnetic resonance imaging (fMRI) data are often analyzed using the general linear model employing a hypothesized neural model convolved with a hemodynamic response function. Mismatches between this hemodynamic model and the data can be induced by spatially varying delays or slice-timing differences. It is common practice to desensitize the analysis to such delays by incorporating the hemodynamic model plus its temporal derivative. The rationale often given is that additional variance will be captured and regressed out of the data. Though this is true, it ignores the potential for amplitude bias induced by small model mismatches due to, for example, variable hemodynamic delays, and it is not helpful for "random effects" analyses, which typically do not account for the first-level variance at all. Amplitude bias arises from using only the nonderivative portion of the model in the final test for significant amplitudes. We propose instead testing an amplitude value that is a function of both the nonderivative and the derivative terms of the model. Using simulations, we show that the proposed amplitude test does not suffer from delay-induced bias and that a model incorporating temporal derivatives is a more natural test for amplitude differences. The proposed test is applied in a random-effects analysis of 100 subjects.
It reveals increased amplitudes in areas consistent with the task, with the largest increases in regions with greater hemodynamic delays.

© 2004 Elsevier Inc. All rights reserved.

Keywords: fMRI; Functional; Brain; Temporal derivative; Delay; General linear model

Introduction

The traditional approach for analyzing functional magnetic resonance imaging (fMRI) data utilizes the general linear model with a hypothesized neural model convolved with a canonical hemodynamic response function. Mismatches of the data to the specified hemodynamic model can be induced by, for example, small hemodynamic delays or slice-timing differences. The use of a hemodynamic model and its temporal derivative for fMRI analysis was proposed in Friston et al. (1998) as a parsimonious model with additional flexibility to address delay-induced modeling mismatches. In Friston et al. (1998), it was suggested to calculate the amplitude parameter estimates from the nonderivative terms only. The effects modeled by the derivative terms were interpreted as a shift of the hemodynamic model in time. In subsequent work, it was concluded that the hemodynamic response function plus temporal derivative produced the most sensitive analyses for event-related fMRI (Hopfinger et al., 2000). It has since become common practice to fit the full model (nonderivative and derivative together) but to use only the nonderivative terms as estimates of hemodynamic amplitude and to test for amplitude differences using a t test (e.g., Bunge et al., 2002; Cabeza et al., 2003; Kiehl et al., 2001; McGonigle et al., 2002). Such an approach to estimating the amplitude ignores the potential for an amplitude bias induced by a delay difference between the hemodynamic model and the data. This amplitude bias is due to the use of only the nonderivative portion of the model in the test for significant amplitudes.
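The common practice described above can be sketched as follows: build a canonical regressor and its temporal derivative, fit both, but compute a t statistic only for the nonderivative coefficient. This is an illustrative NumPy-only sketch, not the authors' code; the double-gamma HRF parameters, the block design, and the noise level are assumptions for demonstration.

```python
import numpy as np
from math import gamma

TR, n = 1.0, 200  # repetition time (s) and number of scans (assumed values)

def hrf(t):
    """Double-gamma canonical HRF (illustrative SPM-like shape); zero for t <= 0."""
    t = np.asarray(t, dtype=float)
    pos = t ** 5 * np.exp(-t) / gamma(6)          # main response, peaking near 5 s
    und = t ** 15 * np.exp(-t) / gamma(16) / 6.0  # undershoot, peaking near 15 s
    return np.where(t > 0, pos - und, 0.0)

# Block design: 20 s on / 20 s off
stim = (np.arange(n) % 40 < 20).astype(float)
h = hrf(np.arange(0, 32, TR))

x = np.convolve(stim, h)[:n]   # nonderivative (canonical) regressor
dx = np.gradient(x, TR)        # temporal-derivative regressor
X = np.column_stack([x, dx, np.ones(n)])

# Simulated voxel time series: true amplitude 1.0 plus Gaussian noise
rng = np.random.default_rng(0)
y = 1.0 * x + 0.5 * rng.standard_normal(n)

# Ordinary least squares fit of the full model
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)

# Conventional test: t statistic on the nonderivative coefficient only
t_nonderiv = beta[0] / np.sqrt(cov[0, 0])
```

Note that the derivative column absorbs delay-related variance, but under this convention it contributes nothing to the amplitude estimate itself — which is the source of the bias the paper addresses.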
This effect has been observed previously, but was used to justify not using the temporal derivative (Della-Maggiore et al., 2002). We propose instead testing an amplitude estimate that is a function of both the nonderivative and the derivative terms of the model. Using simulations, we show that the proposed amplitude test does not suffer from delay-induced bias, is a more natural test for amplitude differences when using a model incorporating temporal derivatives, and improves the fit of the model to the data (when compared to a model not using the temporal derivative term). We apply the proposed test in a random-effects analysis of 100 subjects and reveal increased amplitudes in areas consistent with the task, with the largest increases in regions consistent with greater hemodynamic delays.

Theory

In the simplest case, the data (assumed to be zero mean) at a given voxel are modeled as:

$y_t = \hat{\beta}_0 + \hat{\beta}_1 x_t + e_t$   (1)

* Corresponding author. Olin Neuropsychiatry Research Center, Institute of Living, 200 Retreat Avenue, Hartford, CT 06106. E-mail address: vince.calhoun@yale.edu (V.D. Calhoun).

NeuroImage 22 (2004) 252–257. doi:10.1016/j.neuroimage.2003.12.029
1053-8119/$ - see front matter © 2004 Elsevier Inc. All rights reserved.
Available online on ScienceDirect (www.sciencedirect.com). www.elsevier.com/locate/ynimg
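The delay-induced bias in a model of the form of Eq. (1) can be demonstrated by generating data from a delayed hemodynamic response and fitting the undelayed regressor plus its temporal derivative. The combined estimate below, sign(b1)·sqrt(b1² + b2²) with the derivative column rescaled to the norm of the nonderivative column, is one way of folding both terms into a single amplitude, in the spirit of the proposal; it is an illustrative sketch, with the HRF shape, the 2 s delay, and the block design all assumed for demonstration.

```python
import numpy as np
from math import gamma

TR, n, delay = 1.0, 200, 2.0  # scan interval (s), scans, true delay (s); assumed values

def hrf(t):
    """Double-gamma canonical HRF (illustrative parameters); zero for t <= 0."""
    t = np.asarray(t, dtype=float)
    pos = t ** 5 * np.exp(-t) / gamma(6)
    und = t ** 15 * np.exp(-t) / gamma(16) / 6.0
    return np.where(t > 0, pos - und, 0.0)

stim = (np.arange(n) % 40 < 20).astype(float)  # 20 s on / 20 s off blocks
tgrid = np.arange(0, 32, TR)

x = np.convolve(stim, hrf(tgrid))[:n]                 # model regressor (no delay)
y = 1.0 * np.convolve(stim, hrf(tgrid - delay))[:n]   # "data": amplitude 1.0, delayed

d = np.gradient(x, TR)
d = d * np.linalg.norm(x) / np.linalg.norm(d)  # rescale derivative to x's norm

X = np.column_stack([x, d, np.ones(n)])
b, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

amp_nonderiv = b[0]                                  # conventional estimate; biased low
amp_combined = np.sign(b[0]) * np.hypot(b[0], b[1])  # uses both terms
```

Here the derivative coefficient soaks up the shift, so the nonderivative coefficient alone underestimates the true amplitude of 1.0, while the combined estimate recovers it much more closely.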