Editors Selection IGR 7-3

Clinical examination methods: Machine classifiers, neural networks and progression

Michael Bach

Comment by Michael Bach on:

13171 Effects of input data on the performance of a neural network in distinguishing normal and glaucomatous visual fields, Bengtsson B; Bizios D; Heijl A, Investigative Ophthalmology and Visual Science, 2005; 46: 3730-3736




Interdisciplinary work is laudable, but difficult, because one competes with the 'local' experts in each discipline involved. Bengtsson et al. (943) conclude that "the choice of data input had important effects on the performance of the neural networks […]." The results are both surprising and unsurprising. Their feature vector (= the input data to the artificial neural network (ANN), consisting of 74 points corresponding to the field locations) was either (1) raw threshold intensities, (2a) Total Deviations, (2b) the same on a probability scale, (3a) Pattern Deviations, or (3b) the same on a probability scale. Thus age information was missing from feature vector (1) but 'built in' to the other four. For comparison the authors commendably calculated the receiver operating characteristic (ROC) curves of their five ANNs. Since their normal training group had an average age of 52 years and the glaucoma group of 75 years, it is unsurprising that a lower ROC area was found for feature vector (1), the one without age information, and a higher one for feature vectors (2a/b). What surprised me is that feature vectors (3a/b) performed just like feature vector (1). Unsurprising is the finding that the decibel and probability scales compared likewise, since this is just a non-linear rescaling which any ANN should be able to cope with (especially since theirs had two hidden layers; there is no information on how other ANN dimensions performed).

The ROC areas were all between 0.94 and 0.99, very high values. This may be due to the fact that many glaucoma cases were in advanced stages, allowing easy discrimination from normal. It may also be caused by training cases among the test cases; the manuscript is somewhat cryptic here. Leave-one-out training should avoid 'overtraining' (against which undisclosed stopping rules were invoked).
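The ROC-area comparison underlying these results has a simple probabilistic reading: the area under the ROC curve equals the probability that a randomly chosen patient receives a higher classifier score than a randomly chosen normal (the Mann-Whitney statistic). A minimal pure-Python sketch, using made-up toy scores rather than the study's data:

```python
def roc_area(neg_scores, pos_scores):
    """ROC area as the Mann-Whitney statistic: the fraction of
    (patient, normal) pairs ranked correctly, ties counted as 1/2."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier outputs, for illustration only.
normals  = [0.1, 0.2, 0.3, 0.4]
glaucoma = [0.35, 0.6, 0.8, 0.9]
print(roc_area(normals, glaucoma))  # -> 0.9375 (15 of 16 pairs ranked correctly)
```

With perfect separation of the two groups the area is 1.0; chance-level scores give 0.5, which is why areas of 0.94-0.99 indicate near-complete separation.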
What always leaves me disappointed with ANNs: they may do their work, but we do not understand why; their 'wisdom' here is hidden in 74 × 74 × 75 numbers between -1 and +1.
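The leave-one-out scheme the comment calls for can be sketched in a few lines: each case is scored by a model fitted on all other cases, so no test case ever appears in its own training set. The sketch below substitutes a toy nearest-class-mean classifier on one-dimensional "fields" for the authors' ANN; all data and names here are illustrative.

```python
def nearest_mean_score(train, x):
    """Signed score for one case: positive favours label 1 ('glaucoma'),
    negative favours label 0 ('normal'), by distance to each class mean."""
    m0 = sum(f for f, y in train if y == 0) / sum(1 for _, y in train if y == 0)
    m1 = sum(f for f, y in train if y == 1) / sum(1 for _, y in train if y == 1)
    return abs(x - m0) - abs(x - m1)

def leave_one_out_scores(data):
    """Score every case with a model fitted on the remaining cases only."""
    return [(nearest_mean_score(data[:i] + data[i + 1:], x), y)
            for i, (x, y) in enumerate(data)]

# Toy data: four 'normal' (label 0) and four 'glaucoma' (label 1) cases.
cases = [(0.0, 0), (0.1, 0), (0.2, 0), (0.3, 0),
         (1.0, 1), (1.1, 1), (1.2, 1), (1.3, 1)]
for score, label in leave_one_out_scores(cases):
    print(label, round(score, 3))
```

Because the held-out case contributes nothing to its own training set, scores obtained this way cannot profit from memorization, which is exactly the 'overtraining' risk the comment raises about test cases overlapping the training set.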





