Editors' Selection, IGR 19-2

Clinical Examination Methods: Head-Mounted Perimetry

Vincent Michael Patella

Comment by Vincent Michael Patella on:

111826 Validation of the Iowa Head-Mounted Open-Source Perimeter, Heinzman Z; Linton E; Marín-Franch I et al., Translational Vision Science & Technology, 2023;12:19


The authors' stated goal was to assess the validity of visual field (VF) test results from their Iowa Head-Mounted Display Open-Source Perimeter (HMD). More importantly, they have also reminded us of the many steps that must be taken when assessing the performance of any diagnostic device.

A few details of interest:

  1. In this study, both the HMD and the reference device – an Octopus 900 perimeter – were configured as open-source testing systems,1 allowing both devices to perform VF testing using the same control software, testing algorithm, test pattern, stimulus duration, and background intensity.
  2. The authors have provided a detailed description of how they calibrated stimulus and background intensities, as well as stimulus location and stimulus size.
  3. They have described how they re-positioned the HMD's stimulus intensity range to make threshold testing with Size V stimuli possible.
  4. They have introduced a repeatability coefficient (RC) in which findings are presented in decibels – a clinically relevant metric that avoids potential complications associated with use of correlation coefficients.2 (A brief illustrative sketch follows this list.) They calculated mean sensitivity and pointwise RCs and reported the effects of eccentricity on testing variability. They directly compared HMD repeatability results to those of the Octopus device and also to published values for the Humphrey perimeter.
  5. The authors promise a more complete evaluation of their device soon and emphasize that final validation of any new perimeter must include ensuring that the dynamic range is adequate, repeat variability is low, agreement with comparable standards is good, and sensitivity and specificity for disease detection are useful. They observe that, while many new head-mounted perimeters have been developed in recent years, associated validation studies are sparse and mostly focus on agreement of summary statistics without evaluating other relevant performance metrics.
  6. The corresponding author reminds me that the Iowa HMD is fully open source and that the team will publish the software once the current study is complete. The authors continue to enroll normal subjects and hope to release initial normative limits soon.
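
A minimal sketch, assuming the Bland-Altman formulation of the repeatability coefficient discussed by Vaz et al. (reference 2), of how a pointwise RC in decibels might be computed from paired test-retest sensitivities. The function name and sample values below are hypothetical illustrations, not data from the study:

    import numpy as np

    def repeatability_coefficient(test_db, retest_db):
        # Bland-Altman convention: RC = 1.96 * sqrt(2) * s_w, where s_w is
        # the within-subject SD. With two measurements per subject,
        # s_w^2 = sum(d^2) / (2n), so RC reduces to 1.96 * sqrt(mean(d^2)).
        d = np.asarray(test_db, dtype=float) - np.asarray(retest_db, dtype=float)
        return 1.96 * np.sqrt(np.mean(d ** 2))

    # Hypothetical test-retest sensitivities (dB) at one VF location,
    # one pair per subject; illustrative numbers only.
    test = [28, 30, 26, 31, 27, 29]
    retest = [27, 31, 25, 30, 29, 28]
    print(f"RC = {repeatability_coefficient(test, retest):.1f} dB")  # RC = 2.4 dB

Because the RC is expressed in the same decibel units as the measurements themselves, it can be read directly as the expected test-retest spread at a location, which is what makes it more clinically interpretable than a unitless correlation coefficient.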

Comment: Preliminary studies like this, performed early and often during instrument development, are fundamental to device success. However, before commercial release, a complete and final validation is also required, with the product configured exactly as it will be used clinically, including final normative data. The reasons for this are many, but one critical point is that results from a new device can be more accurately compared to those from existing devices if test results are compared relative to each device's specific and empirically derived normative limits.3 Another critical reason is that scotoma depths measured with modern, quick algorithms tend to be shallower than those found by older, slower legacy strategies; this is acceptable as long as inter-subject normal sensitivity ranges shrink commensurately. Regardless, the only way to establish that diagnostic performance has been preserved is to compare new and old test results using their respective normative limits.4,5
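
To make the normative-limits point concrete, here is a minimal, entirely hypothetical sketch: each device's sensitivities are judged against that device's own empirically derived lower normative limit (a 5th-percentile cutoff is assumed here for illustration), so two devices can report systematically different raw decibel values yet still agree on which locations are abnormal. All names and numbers are invented:

    import numpy as np

    def abnormal_points(sensitivities_db, normative_p5_db):
        # Flag locations falling below the device-specific 5th-percentile
        # normative limit; the limits must come from each device's own
        # empirically derived normative data.
        s = np.asarray(sensitivities_db, dtype=float)
        return s < np.asarray(normative_p5_db, dtype=float)

    # The same eye measured on two hypothetical devices: raw sensitivities
    # differ, but each result is compared to its own device's limits.
    device_a = {"result": [30, 22, 28], "p5": [26, 25, 24]}
    device_b = {"result": [27, 19, 25], "p5": [23, 22, 21]}
    print(abnormal_points(device_a["result"], device_a["p5"]))  # [False  True False]
    print(abnormal_points(device_b["result"], device_b["p5"]))  # [False  True False]

Both devices flag the same location despite different raw values, which is the sense in which comparison relative to each device's own normative limits preserves diagnostic agreement across instruments.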

The authors note that the present study is limited to assessment of dynamic range, retest variability, and pointwise threshold comparisons to the Octopus. I would also suggest that the number of subjects examined is limited – 20 controls and nine glaucoma patients. Thus, this is only the beginning of discussions on this topic. Nevertheless, I congratulate and thank these authors for reminding us of what is required to properly evaluate the performance of new diagnostic devices.

References

  1. Marín-Franch I, Swanson WH. The visualFields package: A tool for analysis and visualization of visual fields. J Vis. 2013;13(4):10, 1-12.
  2. Vaz S, Falkmer T, Passmore AE, et al. The case for using the repeatability coefficient when calculating test-retest reliability. PLoS One. 2013;8:e73990.
  3. Heijl A, Bengtsson B, Patella VM. Glaucoma follow-up when converting from long to short perimetric tests. Arch Ophthalmol. 2000;118:489-493.
  4. Bengtsson B, Heijl A. Comparing significance and magnitude of glaucomatous visual field defects using the SITA and Full Threshold strategies. Acta Ophthalmol Scand. 1999;77:143-146.
  5. Heijl A, Patella VM, Chong LX, et al. A New SITA Perimetric Threshold Testing Algorithm: Construction and a Multicenter Clinical Study. Am J Ophthalmol. 2019;198:154-165.

