Ophthalmic images, along with their derivatives such as retinal nerve fiber layer (RNFL) thickness maps, play a crucial role in detecting and monitoring eye diseases such as glaucoma. For computer-aided diagnosis of eye diseases, the key technique is to automatically extract meaningful features from ophthalmic images that can reveal the biomarkers (e.g., RNFL thinning patterns) associated with functional vision loss. However, representation learning from ophthalmic images that links structural retinal damage with human vision loss is non-trivial, mostly due to large anatomical variations between patients. This challenge is further amplified by the presence of image artifacts, which commonly result from image acquisition and automated segmentation issues. In this paper, we present an artifact-tolerant unsupervised learning framework called EyeLearn for learning ophthalmic image representations in glaucoma cases. EyeLearn includes an artifact correction module that learns representations which optimally predict artifact-free images. In addition, EyeLearn adopts a clustering-guided contrastive learning strategy to explicitly capture the affinities within and between images. During training, images are dynamically organized into clusters to form contrastive samples, which encourage learning similar representations for images in the same cluster and dissimilar representations for images in different clusters. To evaluate EyeLearn, we use the learned representations for visual field prediction and glaucoma detection with a real-world dataset of ophthalmic images from glaucoma patients. Extensive experiments and comparisons with state-of-the-art methods confirm the effectiveness of EyeLearn in learning optimal feature representations from ophthalmic images.
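To illustrate the clustering-guided contrastive idea described above, the following is a minimal sketch of an InfoNCE-style loss in which images assigned to the same cluster are treated as positives and all other images as negatives. The function name, the temperature parameter, and the exact formulation are assumptions for illustration; the paper's actual loss and cluster-update scheme may differ.

```python
import numpy as np

def cluster_contrastive_loss(embeddings, cluster_ids, temperature=0.5):
    """Hypothetical clustering-guided contrastive loss (InfoNCE-style).

    embeddings  : (n, d) array of image representations
    cluster_ids : (n,) array of dynamically assigned cluster labels
    """
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature              # pairwise similarity matrix

    n = len(cluster_ids)
    mask_self = np.eye(n, dtype=bool)
    # positives: same cluster, excluding the anchor itself
    positives = (cluster_ids[:, None] == cluster_ids[None, :]) & ~mask_self

    # row-wise log-softmax, excluding self-similarity from the partition
    sim = np.where(mask_self, -np.inf, sim)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # average negative log-probability of positives per anchor,
    # skipping anchors whose cluster has no other member
    losses = [-log_prob[i, positives[i]].mean()
              for i in range(n) if positives[i].any()]
    return float(np.mean(losses))
```

With well-separated clusters, correctly grouped images yield a lower loss than a shuffled cluster assignment, which is exactly the signal that pushes same-cluster representations together and different-cluster representations apart.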