Optical coherence tomography imaging provides a crucial clinical measurement for diagnosing and monitoring glaucoma through the two-dimensional retinal nerve fiber layer (RNFL) thickness (RNFLT) map. Researchers have increasingly used neural models to extract meaningful features from the RNFLT map, aiming to identify biomarkers for glaucoma and its progression. However, accurately representing the RNFLT map features relevant to glaucoma is challenging because substantial variations in retinal anatomy among individuals confound the pathological thinning of the RNFL. Moreover, artifacts in the RNFLT map, caused by segmentation errors under degraded image quality and defective imaging procedures, further complicate the task. In this paper, we propose a general framework called RNFLT2Vec for unsupervised learning of vectorized feature representations from RNFLT maps. Our method includes an artifact correction component that learns to rectify RNFLT values at artifact locations, producing a representation reflecting the RNFLT map without artifacts. Additionally, we incorporate two regularization techniques to encourage discriminative representation learning. First, we introduce a contrastive learning-based regularization to capture the similarities and dissimilarities between RNFLT maps. Second, we employ a consistency learning-based regularization to align pairwise distances of RNFLT maps with their corresponding thickness distributions. Through extensive experiments on a large-scale real-world dataset, we demonstrate the superiority of RNFLT2Vec in three different clinical tasks: RNFLT pattern discovery, glaucoma detection, and visual field prediction. Our results validate the effectiveness of our framework and its potential to contribute to a better understanding and diagnosis of glaucoma.
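To make the two regularizers concrete, the sketch below shows one possible reading of the abstract in PyTorch: an InfoNCE-style contrastive loss over two augmented views of the same RNFLT maps, and a consistency loss that aligns pairwise distances in embedding space with pairwise distances between thickness distributions (approximated here by normalized thickness histograms). This is not the authors' implementation; all function names, the histogram binning, and the loss weighting are illustrative assumptions.

```python
# Illustrative sketch only; not the RNFLT2Vec reference code.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss over two augmented views of the same RNFLT maps.

    z1, z2: (batch, dim) embeddings of the two views; matching rows are positives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def consistency_loss(z, thickness_maps, bins=32, max_um=350.0):
    """Align pairwise embedding distances with pairwise distances between
    thickness distributions (here, per-map thickness histograms)."""
    hist = torch.stack([
        torch.histc(t.flatten(), bins=bins, min=0.0, max=max_um)
        for t in thickness_maps
    ])
    hist = hist / hist.sum(dim=1, keepdim=True)        # normalize to distributions
    d_emb = torch.cdist(z, z)                          # pairwise distances of embeddings
    d_thk = torch.cdist(hist, hist)                    # pairwise distances of distributions
    # Encourage agreement up to scale by comparing mean-normalized distance matrices.
    return F.mse_loss(d_emb / (d_emb.mean() + 1e-8),
                      d_thk / (d_thk.mean() + 1e-8))
```

In a training loop, these terms would be added (with assumed weights) to the reconstruction and artifact-correction objectives, e.g. `loss = recon + lambda_c * contrastive_loss(z1, z2) + lambda_s * consistency_loss(z1, maps)`.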
Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA.