IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 17, NO. 6, DECEMBER 1998 1089

Gaussian priors. The application of Bayesian error-bound analysis to various prior functions, and its application to experimental data from EEG recordings, are promising topics for future research. Our methods also offer the capability to evaluate the information content of EEG recordings as a function of the number of sensors used. However, our methods cannot establish an upper bound on the useful number of sensors, because the amount of information continues to increase asymptotically as the number of sensors increases. In actual practice, the limit on the maximum number of sensors is set by practical difficulties such as sensor size, mechanical clearances, the creation of "salt bridges" and other leakage paths between sensors, and cost. Our simulation addresses only the limited question of whether 128 sensors might provide additional information over 64 sensors, and our conclusion is specific to the particular signal-to-noise ratio that was selected. Our example should therefore be viewed as an illustration of the possible use of our technique, rather than as a generally conclusive result. Again, this is a potentially interesting topic for future experimental research.

REFERENCES

[1] S. Baillet and L. Garnero, "A Bayesian approach to introducing anatomo-functional priors in the EEG/MEG inverse problem," IEEE Trans. Biomed. Eng., vol. 44, pp. 374–385, May 1997.
[2] B. Buck and V. Macaulay, "Entropy and sunspots: Their bearing on time-series," in Maximum Entropy and Bayesian Methods, C. R. Smith, G. J. Erickson, and P. O. Neudorfer, Eds. Dordrecht, The Netherlands: Kluwer, 1992.
[3] C. J. S. Clarke and B. S. Janday, "The solution of the biomagnetic inverse problem by maximum statistical entropy," Inverse Problems, vol. 5, pp. 483–500, 1989.
[4] B. R. Frieden, "Restoring with maximum likelihood and maximum entropy," J. Opt. Soc. Amer., vol. 62, no. 4, pp. 511–518, 1972.
[5] A. A. Ioannides, J. P. R. Bolton, and C. J. S. Clarke, "Continuous probabilistic solutions to the biomagnetic inverse problem," Inverse Problems, vol. 6, pp. 523–542, 1990.
[6] D. MacKay, "Bayesian interpolation," in Maximum Entropy and Bayesian Methods, C. R. Smith, G. J. Erickson, and P. O. Neudorfer, Eds. Dordrecht, The Netherlands: Kluwer, 1992.
[7] R. D. Pascual-Marqui, C. M. Michel, and D. Lehmann, "Low resolution electromagnetic tomography: A new method for localizing electrical activity in the brain," Int. J. Psychophysiol., vol. 18, pp. 49–65, 1994.
[8] J. W. Phillips, R. M. Leahy, and J. C. Mosher, "MEG-based imaging of focal neuronal current sources," IEEE Trans. Med. Imag., vol. 16, no. 3, pp. 338–348, 1997.
[9] J. Skilling, "Fundamentals of MaxEnt in data analysis," in Maximum Entropy in Action, B. Buck and V. Macaulay, Eds. Oxford, U.K.: Clarendon, 1991, pp. 19–40.
[10] J. Skilling and R. K. Bryan, "Maximum entropy image reconstruction: General algorithm," Monthly Notices Roy. Astronom. Soc., vol. 211, pp. 111–124, 1984.
[11] R. Srinivasan, P. Nunez, D. Tucker, R. Silberstein, and P. J. Cadusch, "Spatial sampling and filtering of EEG with spline Laplacians to estimate cortical potentials," Brain Topogr., vol. 8, pp. 355–366, 1996.
[12] D. M. Tucker, "Spatial sampling of head electrical fields: The Geodesic Sensor Net," Electroencephalogr. Clin. Neurophysiol., vol. 87, pp. 154–163, 1993.
[13] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. London, U.K.: Pitman, 1984.
[14] J. C. Mosher, M. E. Spencer, R. M. Leahy, and P. S. Lewis, "Error bounds for EEG and MEG dipole source localization," Electroencephalogr. Clin. Neurophysiol., vol. 86, pp. 303–321, 1993.

Optimization and FROC Analysis of Rule-Based Detection Schemes Using a Multiobjective Approach

Mark A. Anastasio,* Matthew A. Kupinski, and Robert M.
Nishikawa

Abstract—Computerized detection schemes have the potential of increasing diagnostic accuracy in medical imaging by alerting radiologists to lesions that they initially overlooked. These schemes typically employ multiple parameters, such as threshold values or filter weights, to arrive at a detection decision. For the system to perform well, the values of these parameters need to be set optimally. Conventional optimization techniques are designed to optimize a scalar objective function. The task of optimizing the performance of a computerized detection scheme, however, is clearly a multiobjective problem: we wish to simultaneously maximize the sensitivity and minimize the false-positive rate of the system. In this work we investigate a multiobjective approach to optimizing computerized rule-based detection schemes. In a multiobjective optimization, multiple objectives are optimized simultaneously, the objective now being a vector-valued function. The multiobjective optimization problem admits a set of solutions, known as the Pareto-optimal set, which are equivalent in the absence of any information regarding the preferences of the objectives. The performances of the Pareto-optimal solutions can be interpreted as operating points on an optimal free-response receiver operating characteristic (FROC) curve, which lies on or above every FROC curve achievable for a given dataset and detection scheme. It is demonstrated that generating FROC curves in this manner eliminates several known problems with conventional FROC curve generation techniques for rule-based detection schemes. We employ the multiobjective approach to optimize a rule-based scheme for clustered microcalcification detection that has been developed in our laboratory.

Index Terms—Computer-aided diagnosis, free-response receiver operating characteristic (FROC) analysis, multiobjective optimization.

I. INTRODUCTION

Computerized detection schemes have the potential of substantially increasing diagnostic accuracy in radiological imaging [1]–[4]. Complicated image features, eye fatigue, and low conspicuity are factors that may cause a radiologist to miss a lesion in a mammogram or chest radiograph. One method for reducing the number of misses is to have two radiologists read the same image separately. This double-reading method is not performed routinely because of the added expense and logistical difficulties of clinical implementation. A computerized detection scheme may provide the advantages of having a second reader when one would otherwise be absent.

As with any complicated pattern recognition system, computerized detection schemes typically use multiple parameters, such as threshold values, filter weights, and region of interest (ROI) sizes, to arrive at a detection decision. For the scheme to have high performance (high sensitivity and a low false-positive rate), the values of these parameters need to be set optimally. In general, the optimal set of parameters may change when a component of the imaging chain is modified or changed. When the number of parameters is large, it

Manuscript received September 13, 1998; revised November 20, 1998. This work was supported in part by the U.S. Army Medical Research and Materiel Command under Grant 17-97-1-7202 and in part by the USPHS under Grants CA24806 and RR11459. The Associate Editor responsible for coordinating the review of this paper and recommending its publication was M. W. Vannier. Asterisk indicates corresponding author.
*M. A. Anastasio is with the Department of Radiology, MC2026, The University of Chicago, Chicago, IL 60637 USA (e-mail: anastasi@jedi.bsd.uchicago.edu).
M. A. Kupinski and R. M. Nishikawa are with the Department of Radiology, The University of Chicago, Chicago, IL 60637 USA.
Publisher Item Identifier S 0278-0062(98)09748-1.

0278–0062/99$10.00 © 1999 IEEE
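The Pareto-optimal set described in the abstract can be made concrete with a small sketch. This is not the authors' implementation; the candidate operating points below are hypothetical. Each parameter setting of a detection scheme yields an operating point (false positives per image, sensitivity); a point is Pareto-optimal if no other point is at least as good in both objectives and strictly better in one, and the surviving points trace the optimal FROC curve:

```python
def pareto_front(points):
    """Return the nondominated (fp_per_image, sensitivity) operating
    points, sorted by false-positive rate. These points trace the
    optimal FROC curve for the candidate set."""
    front = []
    for fp, sens in points:
        # A point is dominated if some *other* point has no more
        # false positives and no less sensitivity.
        dominated = any(
            (fp2 <= fp and s2 >= sens) and (fp2, s2) != (fp, sens)
            for fp2, s2 in points
        )
        if not dominated:
            front.append((fp, sens))
    return sorted(set(front))

# Hypothetical operating points from six candidate parameter settings.
candidates = [(0.5, 0.60), (1.0, 0.80), (1.0, 0.70),
              (2.0, 0.85), (3.0, 0.85), (0.5, 0.55)]
print(pareto_front(candidates))  # -> [(0.5, 0.6), (1.0, 0.8), (2.0, 0.85)]
```

Note that (1.0, 0.70), (3.0, 0.85), and (0.5, 0.55) are each dominated by another candidate, so they fall below the optimal FROC curve and are discarded; the three remaining points are mutually incomparable without a stated preference between sensitivity and false-positive rate.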