Robust learning at noisy labeled medical images: Applied to skin lesion classification

Cheng Xue, Qi Dou, Xueying Shi, Hao Chen, Pheng Ann Heng

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

37 Citations (Scopus)


Deep neural networks (DNNs) have achieved great success in a wide variety of medical image analysis tasks. However, these achievements rely indispensably on accurately annotated datasets. When noisy-labeled images are present, the training procedure immediately encounters difficulties, leading to a suboptimal classifier. This problem is even more critical in the medical field, where high-quality annotation requires great expertise. In this paper, we propose an effective iterative learning framework for noisy-labeled medical image classification, to combat the lack of high-quality annotated medical data. Specifically, an online uncertainty sample mining method is proposed to eliminate the disturbance from noisy-labeled images. Next, we design a sample re-weighting strategy to preserve the usefulness of correctly-labeled hard samples. Our proposed method is validated on the skin lesion classification task and achieves very promising results.
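The abstract describes two components: uncertainty-based sample mining to discard likely noisy-labeled images, and loss re-weighting to retain correctly-labeled hard samples. The paper's exact formulation is not reproduced in this record; the following is a minimal NumPy sketch of that general idea, where `mine_and_reweight`, the drop fraction, and the loss-proportional weighting are illustrative assumptions rather than the authors' method.

```python
import numpy as np

def mine_and_reweight(losses, uncertainties, drop_frac=0.2):
    """Illustrative sketch (not the paper's exact algorithm):
    drop the most uncertain samples in a batch (likely noisy labels),
    then up-weight the remaining high-loss samples (hard but clean)."""
    n = len(losses)
    n_drop = int(drop_frac * n)
    # keep the (n - n_drop) samples with the lowest predictive uncertainty
    keep = np.argsort(uncertainties)[: n - n_drop]
    # re-weight kept samples proportionally to their loss,
    # so correctly-labeled hard samples still contribute strongly
    weights = np.zeros(n)
    kept_losses = losses[keep]
    weights[keep] = kept_losses / kept_losses.sum()
    return keep, weights
```

In practice the per-sample uncertainties could come from, e.g., Monte Carlo dropout variance over several stochastic forward passes, and the resulting weights would multiply the per-sample cross-entropy terms in the training loss.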

Original language: English
Title of host publication: ISBI 2019 - 2019 IEEE International Symposium on Biomedical Imaging
Publisher: IEEE Computer Society
Number of pages: 4
ISBN (Electronic): 9781538636411
Publication status: Published - Apr 2019
Externally published: Yes
Event: 16th IEEE International Symposium on Biomedical Imaging, ISBI 2019 - Venice, Italy
Duration: 8 Apr 2019 – 11 Apr 2019

Publication series

Name: Proceedings - International Symposium on Biomedical Imaging
ISSN (Print): 1945-7928
ISSN (Electronic): 1945-8452


Conference: 16th IEEE International Symposium on Biomedical Imaging, ISBI 2019


Keywords

  • Melanoma
  • Noisy-labels
  • Robust learning
  • Uncertainty
  • Weighted loss

ASJC Scopus subject areas

  • Biomedical Engineering
  • Radiology, Nuclear Medicine and Imaging
