
Classification and evaluation strategies of auto‐segmentation approaches for PET: Report of AAPM task group No. 211

Overview of attention for article published in Medical Physics, May 2017

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Above-average Attention Score compared to outputs of the same age and source (62nd percentile)

Mentioned by

3 X users

Citations

170 Dimensions

Readers on

178 Mendeley
Published in
Medical Physics, May 2017
DOI 10.1002/mp.12124
Authors

Mathieu Hatt, John A. Lee, Charles R. Schmidtlein, Issam El Naqa, Curtis Caldwell, Elisabetta De Bernardi, Wei Lu, Shiva Das, Xavier Geets, Vincent Gregoire, Robert Jeraj, Michael P. MacManus, Osama R. Mawlawi, Ursula Nestle, Andrei B. Pugachev, Heiko Schöder, Tony Shepherd, Emiliano Spezi, Dimitris Visvikis, Habib Zaidi, Assen S. Kirov

Abstract

The purpose of this educational report is to provide an overview of the present state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on helping the user understand the challenges and pitfalls associated with selecting and implementing a PET-AS algorithm for a particular application. A brief description of the different types of PET-AS algorithms is provided, using a classification based on method complexity and type. The advantages and limitations of current PET-AS algorithms are highlighted based on current publications and existing comparison studies. A review of the available image datasets and contour evaluation metrics is provided, in terms of their applicability for establishing a standardized evaluation of PET-AS algorithms. The performance requirements for the algorithms, and their dependence on the application, the radiotracer used, and the evaluation criteria, are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary roles of manual and auto-segmentation, are addressed.

A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed more advanced image analysis paradigms to perform semi-automated delineation of PET images. However, the level of algorithm validation is variable, and for most published algorithms it is either insufficient or inconsistent, which prevents recommending a single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol. Available comparison studies suggest that PET-AS algorithms relying on advanced image paradigms generally provide more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for lesions with simple shapes in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms that employ some type of consensus or automatic selection among several PET-AS methods have the potential to overcome the limitations of the individual methods when appropriately trained.

In either case, accuracy evaluation is required for each PET scanner and each scanning and image reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location through parameter optimization is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to evaluating both existing and future PET-AS algorithms, needs to be designed to aid clinicians in evaluating and selecting PET-AS algorithms and to establish performance limits for their acceptance for clinical use. The initial steps towards designing and building such a standard are being undertaken by the task group members.
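To make the thresholding families mentioned in the abstract concrete, below is a minimal, hypothetical Python sketch of the two simplest PET-AS approaches it describes: a fixed percentage-of-SUVmax threshold and a background-corrected adaptive threshold, together with the Dice similarity coefficient, a common contour evaluation metric. The 42% fraction, the adaptive formula, and all function names here are illustrative assumptions, not the task group's recommended method or parameters.

```python
import numpy as np

def fixed_threshold_mask(suv, fraction=0.42):
    # Fixed-threshold PET-AS: keep voxels above a fixed fraction of
    # SUVmax. The 42% value is illustrative; published fixed-threshold
    # methods often use fractions in the 40-50% range.
    return suv >= fraction * suv.max()

def adaptive_threshold_mask(suv, background, fraction=0.5):
    # One common adaptive variant (an assumption here, not the report's
    # prescription): place the threshold a fixed fraction of the way
    # between the local background uptake and SUVmax, so the contour
    # adapts to source-to-background contrast.
    threshold = background + fraction * (suv.max() - background)
    return suv >= threshold

def dice(mask_a, mask_b):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|), a standard
    # overlap metric for comparing a PET-AS contour to a reference.
    intersection = np.logical_and(mask_a, mask_b).sum()
    denominator = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / denominator if denominator else 1.0

# Toy example on a synthetic 1D "image" (a real PET volume would be 3D):
suv = np.array([0.5, 0.6, 2.0, 7.5, 9.8, 8.1, 1.2, 0.4])
print(dice(fixed_threshold_mask(suv),
           adaptive_threshold_mask(suv, background=0.6)))
```

Even this toy pair illustrates why the report stresses per-scanner, per-protocol validation: both masks shift whenever noise, reconstruction, or background estimation changes SUVmax or the background value.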

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 178 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    178      100%

Demographic breakdown

Readers by professional status    Count    As %
Researcher                         37      21%
Student > Ph.D. Student            31      17%
Other                              15       8%
Student > Master                   15       8%
Student > Postgraduate             10       6%
Other                              22      12%
Unknown                            48      27%

Readers by discipline             Count    As %
Medicine and Dentistry             35      20%
Physics and Astronomy              31      17%
Engineering                        19      11%
Computer Science                   15       8%
Mathematics                         4       2%
Other                              19      11%
Unknown                            55      31%
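For readers reproducing the tables above, the "As %" columns appear to be each reader count divided by the 178-reader total and rounded to the nearest whole percent; the sketch below assumes that rounding rule.

```python
def as_percent(count, total=178):
    # Share of all Mendeley readers, rounded to the nearest percent.
    # The rounding rule is an assumption that matches the table values.
    return round(100 * count / total)

# e.g., the "Researcher" row: 37 of 178 readers
print(as_percent(37))  # -> 21, matching the "21%" shown above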
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 25 January 2017.
All research outputs: #15,986,161 of 25,377,790 outputs
Outputs from Medical Physics: #4,832 of 7,984 outputs
Outputs of similar age: #185,150 of 326,960 outputs
Outputs of similar age from Medical Physics: #53 of 145 outputs
Altmetric has tracked 25,377,790 research outputs across all sources so far. This one is in the 36th percentile – i.e., 36% of other outputs scored the same or lower than it.
So far Altmetric has tracked 7,984 research outputs from this source. They receive a mean Attention Score of 3.5. This one is in the 38th percentile – i.e., 38% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 326,960 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 43rd percentile – i.e., 43% of its contemporaries scored the same or lower than it.
We can also compare this research output to 145 others from the same source that were published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 62% of its contemporaries.
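The percentiles quoted above can be roughly reconstructed from the rank-versus-total figures listed earlier. The sketch below assumes percentile ≈ the share of tracked outputs ranked below this one; it lands within about a point of Altmetric's reported 36/38/43/62 values, with the small differences presumably due to tie handling and rounding on their side, which are not documented on this page.

```python
def percentile_from_rank(rank, total):
    # Approximate percentile: fraction of tracked outputs that this one
    # outranks (rank 1 = highest attention). The exact formula Altmetric
    # uses (e.g., how ties are handled) is an assumption here.
    return 100 * (total - rank) / total

contexts = {
    "All research outputs": (15_986_161, 25_377_790),
    "Outputs from Medical Physics": (4_832, 7_984),
    "Outputs of similar age": (185_150, 326_960),
    "Outputs of similar age from Medical Physics": (53, 145),
}
for name, (rank, total) in contexts.items():
    # Prints ~37, ~39, ~43, ~63 vs. the reported 36, 38, 43, 62.
    print(f"{name}: ~{percentile_from_rank(rank, total):.0f}th percentile")
```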