Letter

Radiomics Approach Fails to Outperform Null Classifier on Test Data

J.B. Colby
American Journal of Neuroradiology November 2017, 38 (11) E92-E93; DOI: https://doi.org/10.3174/ajnr.A5326
Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California

It is with great pleasure that I read the recent article, the accompanying commentary, the wide popular press coverage, and the lively ongoing discussion in the community regarding “Computer-Extracted Texture Features to Distinguish Cerebral Radionecrosis from Recurrent Brain Tumors on Multiparametric MRI: A Feasibility Study” by Tiwari et al.1

With increasingly ubiquitous, cheap computing infrastructure and the commoditization of high-quality machine-learning algorithms, multivoxel and multimodal pattern classification techniques, so-called “radiomics,” are increasingly being used to incorporate subtle but useful imaging features into our routine clinical decision-making as radiologists. To this end, I commend the authors on their well-thought-out design and implementation of a machine-learning classifier for differentiating tumor recurrence from radiation necrosis in treated primary and metastatic human brain tumors.

The authors used a state-of-the-art but straightforward method incorporating the following: 1) image texture-based feature extraction, 2) feature selection/reduction via minimum redundancy maximum relevance, 3) generalization performance estimation of a support vector machine classifier, and most important, 4) an external layer of cross-validation to ensure that the feature selection and performance estimation steps were unbiased (ie, not overfit).
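For readers less familiar with this design, the following is a minimal sketch of such a nested (external) cross-validation layout, written in Python with scikit-learn on toy data. It is not the authors' code: a generic univariate filter (SelectKBest) stands in for their minimum redundancy maximum relevance step, which scikit-learn does not provide, and all parameter grids are illustrative. The essential point is that feature selection is refit inside every training fold, so the held-out fold never influences which features are chosen.

```python
# Sketch (not the authors' code) of the nested cross-validation layout
# described above, using scikit-learn on toy data. SelectKBest stands in
# for the minimum redundancy maximum relevance feature selection step.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in for a cases x texture-features matrix and binary labels
# (recurrence vs radiation necrosis).
X, y = make_classification(n_samples=22, n_features=100, random_state=0)

# Feature selection sits INSIDE the pipeline, so it is refit on each training
# fold and never sees the corresponding held-out fold (avoiding optimistic bias).
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif)),
    ("svm", SVC(kernel="linear")),
])

# Inner loop: tune the number of retained features and the SVM regularization.
inner_cv = GridSearchCV(
    pipe,
    param_grid={"select__k": [5, 10, 20], "svm__C": [0.1, 1, 10]},
    cv=StratifiedKFold(5, shuffle=True, random_state=0),
)

# Outer (external) loop: the generalization-performance estimate.
outer_scores = cross_val_score(
    inner_cv, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=1)
)
print(f"estimated generalization accuracy: {outer_scores.mean():.0%}")
```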

The authors readily acknowledged that this is, indeed, an early feasibility study using the limited retrospective data at hand, and this point has already been further explored by other commenters.2 Additionally, however, there was no discussion of base rate effects or inclusion of a null classifier. A discussion of these issues will hopefully be useful both in understanding the authors' specific results and in looking ahead, as we aim to target the specific clinical scenarios where these advanced techniques may have their greatest clinical utility.

Diagnostic testing can be framed in the Bayesian sense of a pretest (prior) probability, which is updated by some new evidence to yield a posttest (posterior) probability.3 The pretest probability is often informed by general knowledge about the background prevalence (ie, base rate) in the community; the new evidence typically takes the form of a test result for the individual. Consider the extreme cases: On the one hand, we can imagine a clinical scenario where the base rate is 50%. Under this regimen, similar to the authors' training data, incorporation of individual data, even of marginal reliability, will nudge us in favor of 1 group and improve clinical diagnostic accuracy. On the other hand, as the base rate approaches 0% or 100%, even the best diagnostic tests will be useless in practice. For example, consider the challenge of identifying an uncommon disease in a hypothetical population of 1000 individuals. If we examined the whole population, given a supposed baseline prevalence of 0.001 (1 in 1000), even a terrific “rule out” screening test with 100% sensitivity and 95% specificity would result, on average, in 1 true-positive test result and 49.95 false-positive test results, for an overall positive predictive value of only approximately 2%.
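To make that arithmetic explicit, here is a minimal worked check of the screening example (a sketch using only the numbers quoted above, not anything from the original study):

```python
# Worked check of the screening example: 1000 people, prevalence 0.001,
# a test with 100% sensitivity and 95% specificity.
n = 1000
prevalence = 0.001
sensitivity = 1.00
specificity = 0.95

diseased = n * prevalence                # 1 person (in expectation)
healthy = n - diseased                   # 999 people
true_pos = diseased * sensitivity        # 1 true-positive result
false_pos = healthy * (1 - specificity)  # 49.95 false-positive results
ppv = true_pos / (true_pos + false_pos)  # positive predictive value

print(f"TP = {true_pos:g}, FP = {false_pos:.2f}, PPV = {ppv:.1%}")  # PPV ~ 2%
```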

The crux of the issue then lies in the middle gray zone: As the base rate slides more in favor of 1 group, the bar rises for any additional candidate predictors/features to be “worth it” in terms of the marginal discriminating information they provide with respect to their inherent variability and accompanying measurement error. In the present feasibility study, tumor recurrence was only slightly more prevalent than radiation necrosis among the primary brain tumor training data (12 of 22 cases, or 55%). Therefore, a null classifier incorporating only this information would perform with 55% accuracy on average (the null information rate) and would be beaten handily by the authors' imaging-based classifier, which achieved 75% estimated generalization accuracy via cross-validation-based resampling on the training data. This 75% number is the benchmark we would like to compare against human performance; however, such an analysis was not performed in the present study.
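As an illustration of that benchmark (a sketch based only on the class counts reported above, not the authors' code), the null information rate is simply the accuracy of a trivial majority-class model:

```python
# Null information rate for the training set described above:
# 12 recurrence vs 10 radiation necrosis cases (22 total).
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

y_train = np.array([1] * 12 + [0] * 10)  # 1 = recurrence, 0 = radiation necrosis
X_train = np.zeros((len(y_train), 1))    # features are irrelevant to a null classifier

null_model = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
nir = accuracy_score(y_train, null_model.predict(X_train))
print(f"null information rate: {nir:.0%}")  # ~55%, vs 75% cross-validated accuracy
```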

In the holdout test sample, however, the recurrence group was much more heavily enriched. Therefore, while it may seem impressive that the imaging-based classifier attained 91% accuracy (10/11 cases), and while this was the main headline widely publicized in the popular press, we would in fact have attained exactly the same diagnostic accuracy by ignoring all of the machine-learning algorithms, relying solely on our general knowledge of the base rate (that tumor recurrence is more common), and assigning every holdout test case to the “recurrence” class label without looking at a single image.
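Again as a sketch (assuming, as the argument above implies, that 10 of the 11 holdout cases were recurrences), the same trivial strategy reproduces the headline number:

```python
# If 10 of the 11 holdout cases are recurrences, labeling everything
# "recurrence" already yields 91% accuracy without any imaging features.
y_test = [1] * 10 + [0]          # 1 = recurrence, 0 = radiation necrosis
predictions = [1] * len(y_test)  # assign every case to "recurrence"
accuracy = sum(p == t for p, t in zip(predictions, y_test)) / len(y_test)
print(f"null-classifier accuracy on the holdout set: {accuracy:.0%}")  # 91%
```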

This leads to several important discussion points: Because there are so many subtle ways for classification experiments to be methodologically invalidated, there is a strong intuitive desire to see the methods tested on a truly independent test set, held out from the get-go, as was done here. This approach does have the desired effect of making us feel more comfortable with the methods; however, it also has negative effects. Not only does it make less data available for training, thereby decreasing the quality of the classifier, but, as we see here, the test set itself may be biased due to small sample size or other effects. This then provides yet another argument in favor of data-sharing and “reproducible research” in neuroimaging,4 whereby the community could easily validate the authors' methods, confirm their cross-validated results, and obviate a separate holdout test.

It is also worth revisiting the uncomfortable fact that humans are useful but flawed statistical machines (and hence clinical decision-makers), subject to a variety of cognitive biases that have been explored in the psychology of decision-making literature during the past half-century.5 We underestimate the value of base rate information compared with individual information, overestimate the generalizability of our talents, overestimate the confidence/precision of our estimates, and can be systematically nudged by a variety of factors, including the arbitrary sequence in which cases are presented. In particular, as the validity of a task decreases (ie, the signal gets smaller, subtler, or more complex) or the accompanying uncertainty increases, the consistency of our intuitive reasoning suffers, and the net effects of these underlying biases may dominate.6 Although it was not investigated here, we may speculate that some (or all) of these effects may help explain why the “experts” performed even more poorly than would be expected on the test data. On the bright side, ample data suggest that the performance of expert intuitive reasoning under this regimen can be successfully augmented by the introduction of even simple algorithms,7 as evidenced in our field by the success of the Breast Imaging Reporting and Data System, the Liver Imaging Reporting and Data System, and so forth.

In summary, while widely publicized, the presented radiomics approach fails to outperform a null classifier on the given test set. Conversely, we are unable to compare the classifier's cross-validated performance estimates on the training set with human performance because this analysis was not performed. Looking forward, this interesting article describes a state-of-the-art radiomics classifier, but it also highlights the importance of base rate effects and other cognitive biases when evaluating the usefulness of such a classifier, and it again argues in favor of both enhanced data-sharing in neuroimaging and enhanced incorporation of our expert intuitive reasoning into more structured frameworks for clinical decision-making.

References

  1. Tiwari P, Prasanna P, Wolansky L, et al. Computer-extracted texture features to distinguish cerebral radionecrosis from recurrent brain tumors on multiparametric MRI: a feasibility study. AJNR Am J Neuroradiol 2016;37:2231–36 doi:10.3174/ajnr.A4931 pmid:27633806
  2. Schweitzer AD, Chiang GC, Ivanidze J, et al. Regarding “Computer-Extracted Texture Features to Distinguish Cerebral Radionecrosis from Recurrent Brain Tumors on Multiparametric MRI: A Feasibility Study.” AJNR Am J Neuroradiol 2017;38:E18–19 doi:10.3174/ajnr.A5019 pmid:27908871
  3. Elstein AS, Schwartz A. Clinical problem solving and diagnostic decision making: selective review of the cognitive literature. BMJ 2002;324:729–32 pmid:11909793
  4. Poldrack RA, Baker CI, Durnez J, et al. Scanning the horizon: towards transparent and reproducible neuroimaging research. Nat Rev Neurosci 2017;18:115–26 doi:10.1038/nrn.2016.167 pmid:28053326
  5. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974;185:1124–31 doi:10.1126/science.185.4157.1124 pmid:17835457
  6. Kahneman D, Klein G. Conditions for intuitive expertise: a failure to disagree. Am Psychol 2009;64:515–26 doi:10.1037/a0016755 pmid:19739881
  7. Grove WM, Zald DH, Lebow BS, et al. Clinical versus mechanical prediction: a meta-analysis. Psychol Assess 2000;12:19–30 pmid:10752360
© 2017 by American Journal of Neuroradiology