Medical Image Analysis 51 (2019) 169–183
Disease quantification on PET/CT images without explicit object
delineation
Yubing Tong a, Jayaram K. Udupa a,∗, Dewey Odhner a, Caiyun Wu a, Stephen J. Schuster b, Drew A. Torigian a,b

a Medical Image Processing Group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
b Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, PA 19104, United States
Article info
Article history:
Received 30 July 2018
Revised 17 October 2018
Accepted 9 November 2018
Available online 10 November 2018
Keywords:
Image segmentation
Object recognition
PET/CT
Total lesion glycolysis (TLG)
Cancer
Disease quantification
Quantitative radiology
Abstract
Purpose: The derivation of quantitative information from images in a clinically practical way continues to face a major hurdle due to image segmentation challenges. This paper presents a novel approach, called automatic anatomy recognition-disease quantification (AAR-DQ), for disease quantification (DQ) on positron emission tomography/computed tomography (PET/CT) images. The approach decouples DQ methods from explicit dependence on object (e.g., organ) delineation by using only the object recognition results from our recently developed automatic anatomy recognition (AAR) method to quantify disease burden.
Method: The AAR-DQ process starts with the AAR approach for modeling anatomy and automatically recognizing objects on the low-dose CT images of PET/CT acquisitions. It incorporates novel aspects of model building that relate to finding an optimal disease map for each organ. The parameters of the disease map are estimated from a set of training image data sets that includes normal subjects and patients with metastatic cancer. The result of recognition for an object on a patient image is the location of a fuzzy model for the object, optimally adjusted for that image. The model is used as a fuzzy mask on the PET image to estimate a fuzzy disease map for the specific patient and subsequently to quantify disease based on this map. This process handles the blur arising in PET images from the partial volume effect entirely through accurate fuzzy mapping, which accounts for heterogeneity and gradation of disease content at the voxel level without explicitly performing partial volume correction. Disease quantification is performed from the fuzzy disease map in terms of total lesion glycolysis (TLG) and standardized uptake value (SUV) statistics. We also demonstrate that the method of disease quantification is applicable even when the “object” of interest is recognized manually with a simple and quick action, such as interactively specifying a 3D box ROI. Depending on the degree of automaticity of object and lesion recognition on PET/CT, DQ can be performed at the object level either semi-automatically (DQ-MO) or automatically (DQ-AO), or at the lesion level either semi-automatically (DQ-ML) or automatically.
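As a hedged illustration of the final quantification step only (not the authors' implementation), the TLG, SUVMean, and SUVMax statistics described above can be computed from a fuzzy disease map by membership-weighted sums over the PET SUV image. The function and array names below, and the voxel-volume parameter, are assumptions for the sketch:

```python
import numpy as np

def quantify_disease(suv, disease_map, voxel_volume_ml):
    """Sketch of disease quantification from a fuzzy disease map.

    suv            : array of SUV values from the PET image.
    disease_map    : same-shaped array of fuzzy disease memberships in [0, 1].
    voxel_volume_ml: volume of one voxel in milliliters.
    """
    weights = disease_map.astype(float)
    total_membership = weights.sum()
    if total_membership == 0:
        return {"TLG": 0.0, "SUVMean": 0.0, "SUVMax": 0.0}
    # Fuzzy "metabolic volume" in ml: sum of memberships times voxel volume.
    volume_ml = total_membership * voxel_volume_ml
    # Membership-weighted mean SUV over the diseased region.
    suv_mean = float((weights * suv).sum() / total_membership)
    # TLG = mean SUV x metabolic volume.
    tlg = suv_mean * volume_ml
    # Maximum SUV among voxels with nonzero disease membership.
    suv_max = float(suv[weights > 0].max())
    return {"TLG": tlg, "SUVMean": suv_mean, "SUVMax": suv_max}
```

Because the memberships are fractional, partial volume gradation at object boundaries contributes proportionally to the totals rather than being thresholded in or out, which is the point of avoiding explicit delineation.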
Results: We utilized 67 data sets in total: 16 normal data sets used for model building, and 20 phantom data sets plus 31 patient data sets (with various types of metastatic cancer) used for testing the three methods DQ-AO, DQ-MO, and DQ-ML. The parameters of the disease map were estimated using the leave-one-out strategy. The organs of focus were the left and right lungs and the liver, and the disease quantities measured were TLG, SUVMean, and SUVMax. On phantom data sets, the overall errors for the three parameters were approximately 6%, 3%, and 0%, respectively, with TLG error varying from 2% for large “lesions” (37 mm diameter) to 37% for small “lesions” (10 mm diameter). On patient data sets, for non-conspicuous lesions, these overall errors were approximately 19%, 14%, and 0%; for conspicuous lesions, they were approximately 9%, 7%, and 0%, respectively, with estimation errors generally smaller for the liver than for the lungs, although without statistical significance.
Conclusions: Accurate disease quantification on PET/CT images without explicit delineation of lesions is feasible following object recognition. Method DQ-MO generally yields more accurate results than DQ-AO, although the difference is not statistically significant. Compared to current methods from the literature, almost all of which focus only on lesion-level DQ and not organ-level DQ, our results were
∗ Corresponding author. E-mail addresses: jay@pennmedicine.upenn.edu, jay@mail.med.upenn.edu (J.K. Udupa).
https://doi.org/10.1016/j.media.2018.11.002
1361-8415/© 2018 Published by Elsevier B.V.