Retinal fundus images reveal the condition of the retina, blood vessels, and optic nerve, and are becoming widely adopted in clinical work because subtle changes to the structures at the back of the eye can affect vision and indicate overall health. Recently, machine learning, in particular deep learning with convolutional neural networks (CNNs), has been increasingly adopted for computer-aided detection (CAD) of retinal lesions. However, a significant barrier to high performance in CNN-based CAD approaches is the lack of sufficient labeled image samples for training. Unlike fully supervised learning, which relies on pixel-level annotation of pathology in fundus images, this paper presents a new approach that discriminates the locations of various lesions from image-level labels alone via weakly supervised learning. More specifically, the proposed method leverages multilevel feature maps and classification scores to cope with both bright and red lesions in fundus images. To enhance the capability of learning less discriminative parts of objects (e.g., small blobs of microaneurysms as opposed to the bulk of exudates), the classifier is regularized by refining images with their corresponding labels. Experimental results from performance evaluation and benchmarking at both image level and pixel level on the public DIARETDB1 dataset demonstrate the feasibility and strong potential of the method for practical use.
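The abstract describes localizing lesions from image-level labels by combining CNN feature maps with classification scores. The paper's exact architecture is not reproduced here, but the general idea is similar in spirit to class-activation-style localization: weighting the final convolutional feature maps by the classifier's weights to produce a heatmap of discriminative regions. The function below is a minimal NumPy sketch of that idea; all names, array shapes, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Combine CNN feature maps into a per-class localization heatmap.

    feature_maps : (C, H, W) activations from the last conv layer (assumed shape)
    class_weights: (C,) classifier weights for one image-level class
    Returns a (H, W) heatmap normalized to [0, 1].
    """
    # Weighted sum of feature maps over the channel axis -> (H, W)
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)        # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1]
    return cam

# Toy example: 4 feature maps of size 8x8 (illustrative only)
rng = np.random.default_rng(0)
fmap = rng.random((4, 8, 8))
w = np.array([0.5, -0.2, 0.8, 0.1])
heatmap = class_activation_map(fmap, w)
print(heatmap.shape)
```

In a weakly supervised setting like the one described, such a heatmap would then be thresholded or refined to obtain pixel-level lesion locations, even though only image-level labels were used for training.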
Title of host publication: Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition
Number of pages: 8
Publication status: Published - 2020
Series: Proceedings - International Conference on Pattern Recognition
Subject: Computer Vision and Pattern Recognition