Prediction of Reader Estimates of Mammographic Density using Convolutional Neural Networks

Citation formats

External authors:

  • Georgia Valeria Ionescu
  • Adam R Brentnall
  • J Cuzick
  • Susan Astley

Standard

Prediction of Reader Estimates of Mammographic Density using Convolutional Neural Networks. / Ionescu, Georgia Valeria; Fergie, Martin; Berks, Michael; Harkness, Elaine; Hulleman, Johan; Brentnall, Adam R; Cuzick, J; Evans, D Gareth; Astley, Susan.

In: Journal of Medical Imaging, Vol. 6, No. 3, 031405, 2019.

Research output: Contribution to journal › Article

Bibtex

@article{a0a9a03973a14f6e8070c5cbfab1d502,
title = "Prediction of Reader Estimates of Mammographic Density using Convolutional Neural Networks",
abstract = "Mammographic density is an important risk factor for breast cancer. In recent research, percentage density assessed visually using visual analogue scales (VAS) showed stronger risk prediction than existing automated density measures, suggesting readers may recognize relevant image features not yet captured by handcrafted algorithms. With deep learning, it may be possible to encapsulate this knowledge in an automatic method. We have built convolutional neural networks (CNN) to predict density VAS scores from full-field digital mammograms. The CNNs are trained using whole-image mammograms, each labeled with the average VAS score of two independent readers. Each CNN learns a mapping between mammographic appearance and VAS score so that at test time, they can predict VAS score for an unseen image. Networks were trained using 67,520 mammographic images from 16,968 women and for model selection we used a dataset of 73,128 images. Two case-control sets of contralateral mammograms of screen detected cancers and prior images of women with cancers detected subsequently, matched to controls on age, menopausal status, parity, HRT and BMI, were used for evaluating performance on breast cancer prediction. In the case-control sets, odd ratios of cancer in the highest versus lowest quintile of percentage density were 2.49 (95{\%} CI: 1.59 to 3.96) for screen-detected cancers and 4.16 (2.53 to 6.82) for priors, with matched concordance indices of 0.587 (0.542 to 0.627) and 0.616 (0.578 to 0.655), respectively. There was no significant difference between reader VAS and predicted VAS for the prior test set (likelihood ratio chi square, p =0.134). Our fully automated method shows promising results for cancer risk prediction and is comparable with human performance.",
keywords = "breast cancer, mammographic density, deep learning, risk, visual analogue scale, neural network",
author = "Ionescu, {Georgia Valeria} and Martin Fergie and Michael Berks and Elaine Harkness and Johan Hulleman and Brentnall, {Adam R} and J Cuzick and Evans, {D Gareth} and Susan Astley",
year = "2019",
doi = "10.1117/1.jmi.6.3.031405",
language = "English",
volume = "6",
journal = "Journal of Medical Imaging",
issn = "2329-4302",
publisher = "S P I E - International Society for Optical Engineering",
number = "3",

}
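
To illustrate the approach described in the abstract, the sketch below shows a minimal CNN regressor in PyTorch that maps a single-channel mammogram to a continuous VAS score. This is not the authors' implementation: the architecture, the 256 x 256 input size, and the loss and optimizer settings are assumptions chosen for demonstration only.

import torch
import torch.nn as nn

# Minimal sketch (not the paper's architecture): a CNN that regresses a
# continuous VAS density score from a single-channel mammogram.
class VASRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling: head size is independent of input resolution
        )
        self.head = nn.Linear(64, 1)  # single scalar output: predicted VAS score

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = VASRegressor()
criterion = nn.MSELoss()  # regression against the averaged reader VAS label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on dummy data; real training would iterate
# over preprocessed mammograms paired with averaged two-reader VAS scores.
images = torch.randn(4, 1, 256, 256)  # stand-in for a batch of mammograms
vas = torch.rand(4) * 100             # stand-in for VAS labels on a 0-100 scale
optimizer.zero_grad()
loss = criterion(model(images), vas)
loss.backward()
optimizer.step()

Global average pooling before the linear head is one way to handle the varying sizes of full-field mammograms without cropping.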

RIS

TY - JOUR

T1 - Prediction of Reader Estimates of Mammographic Density using Convolutional Neural Networks

AU - Ionescu, Georgia Valeria

AU - Fergie, Martin

AU - Berks, Michael

AU - Harkness, Elaine

AU - Hulleman, Johan

AU - Brentnall, Adam R

AU - Cuzick, J

AU - Evans, D Gareth

AU - Astley, Susan

PY - 2019

Y1 - 2019

N2 - Mammographic density is an important risk factor for breast cancer. In recent research, percentage density assessed visually using visual analogue scales (VAS) showed stronger risk prediction than existing automated density measures, suggesting readers may recognize relevant image features not yet captured by handcrafted algorithms. With deep learning, it may be possible to encapsulate this knowledge in an automatic method. We have built convolutional neural networks (CNNs) to predict density VAS scores from full-field digital mammograms. The CNNs are trained using whole-image mammograms, each labeled with the average VAS score of two independent readers. Each CNN learns a mapping between mammographic appearance and VAS score so that, at test time, it can predict the VAS score for an unseen image. Networks were trained using 67,520 mammographic images from 16,968 women, and for model selection we used a dataset of 73,128 images. Two case-control sets, comprising contralateral mammograms of screen-detected cancers and prior images of women with cancers detected subsequently, matched to controls on age, menopausal status, parity, HRT and BMI, were used to evaluate performance on breast cancer prediction. In the case-control sets, odds ratios of cancer in the highest versus lowest quintile of percentage density were 2.49 (95% CI: 1.59 to 3.96) for screen-detected cancers and 4.16 (2.53 to 6.82) for priors, with matched concordance indices of 0.587 (0.542 to 0.627) and 0.616 (0.578 to 0.655), respectively. There was no significant difference between reader VAS and predicted VAS for the prior test set (likelihood ratio chi-square, p = 0.134). Our fully automated method shows promising results for cancer risk prediction and is comparable with human performance.

AB - Mammographic density is an important risk factor for breast cancer. In recent research, percentage density assessed visually using visual analogue scales (VAS) showed stronger risk prediction than existing automated density measures, suggesting readers may recognize relevant image features not yet captured by handcrafted algorithms. With deep learning, it may be possible to encapsulate this knowledge in an automatic method. We have built convolutional neural networks (CNNs) to predict density VAS scores from full-field digital mammograms. The CNNs are trained using whole-image mammograms, each labeled with the average VAS score of two independent readers. Each CNN learns a mapping between mammographic appearance and VAS score so that, at test time, it can predict the VAS score for an unseen image. Networks were trained using 67,520 mammographic images from 16,968 women, and for model selection we used a dataset of 73,128 images. Two case-control sets, comprising contralateral mammograms of screen-detected cancers and prior images of women with cancers detected subsequently, matched to controls on age, menopausal status, parity, HRT and BMI, were used to evaluate performance on breast cancer prediction. In the case-control sets, odds ratios of cancer in the highest versus lowest quintile of percentage density were 2.49 (95% CI: 1.59 to 3.96) for screen-detected cancers and 4.16 (2.53 to 6.82) for priors, with matched concordance indices of 0.587 (0.542 to 0.627) and 0.616 (0.578 to 0.655), respectively. There was no significant difference between reader VAS and predicted VAS for the prior test set (likelihood ratio chi-square, p = 0.134). Our fully automated method shows promising results for cancer risk prediction and is comparable with human performance.

KW - breast cancer

KW - mammographic density

KW - deep learning

KW - risk

KW - visual analogue scale

KW - neural network

U2 - 10.1117/1.jmi.6.3.031405

DO - 10.1117/1.jmi.6.3.031405

M3 - Article

VL - 6

JO - Journal of Medical Imaging

JF - Journal of Medical Imaging

SN - 2329-4302

IS - 3

M1 - 031405

ER -
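
For readers unfamiliar with the statistics quoted in the abstract, the odds ratio of cancer in the highest versus lowest density quintile is, in its textbook form, the cross-product ratio of a 2 x 2 table, with a 95% Wald confidence interval computed on the log scale. The sketch below uses invented counts; it does not reproduce the paper's data or results, and the paper's matched design would in practice use conditional logistic regression rather than this unadjusted estimate.

import math

# Hypothetical counts: cases/controls in the highest and lowest density quintiles.
a, b = 40, 20  # highest quintile: cases, controls (invented numbers)
c, d = 25, 35  # lowest quintile: cases, controls (invented numbers)

odds_ratio = (a * d) / (b * c)                # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Wald method
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI: {low:.2f} to {high:.2f})")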