Editorial IGR 10-1

Robert N. Weinreb

The Impact of Glaucoma
R.N. Weinreb, MD
La Jolla

Perhaps some of you ponder, as I do often, how to effectively measure the impact of one's scientific contributions. I have been thinking about it even more since the demise in January of one of my heroes, Judah Folkman MD. In 1971, he had suggested that angiogenesis was a vital biologic process, and that cancers could be controlled by shrinking their blood supply. I met him that year during my first year at Harvard Medical School and attended a weekly seminar that he organized. In the ensuing years, I marveled at the scientific successes that cascaded from his laboratory. It was astonishing, however, how slowly the general medical community aligned with his ideas and how long it took to translate some of them into clinical therapies. "If your idea succeeds, everybody says you're persistent," he would joke. "If it doesn't succeed, you're stubborn." His obituary in the Boston Globe quoted him and commented on his free-thinking style and public openness. These were contrary to what was practiced in conservative Boston, and often set him up as a target for criticism. But his new ideas and revolutionary science prevailed and, particularly for cancer and age-related macular degeneration, his laboratory breakthroughs eventually were not only introduced, but also have been sustained in clinical practice.

I had no contact with Professor Folkman for a lengthy period until 2002, a time when anti-angiogenesis for macular degeneration was still new. As President of ARVO that year, I invited him to be the keynote speaker at the annual meeting in Ft. Lauderdale, and he flew down from Boston to join us. Not surprisingly, he mesmerized the overflowing lecture hall with a scholarly discourse on his research and talked about the potential for anti-angiogenesis. But, interestingly, what I best remember from the lecture is his humility and humor while citing what must have been his very few manuscript and grant proposal rejections, something to which each of us can relate. I also remember how he generously recognized some of the vision researchers with whom he collaborated. And then in January 2008, he died at age 74 in an airport while travelling to a meeting.

Although there is no posthumous Nobel Prize, Professor Folkman certainly received a plethora of accolades during his career. And like those scientists who are awarded a Nobel Prize, the impact and relevance of his research is unquestionable. Among the rest of us, though, how does one measure the cumulative impact and relevance of our scientific contributions? And why is measuring it even important? Measurement of scientific impact, even if it is distasteful for some, provides an important dimension for evaluation of a scientist and comparison with their peers. In one form or another, it is used to judge the suitability of appointment to an academic position, career advancement and tenure, and awarding of grants or prizes. At the outset, let me emphasize that there is no method that can fully capture the complexity of scientific success or predict it. It is clear that scientific impact and success are only one aspect of evaluating a scientist.

Among the criteria that have been used to measure impact is the number of published manuscripts. This assesses productivity, but does not indicate the importance of the published work. Total number of citations also can be tracked. It does measure total impact, but does not weight the contribution of each author to a manuscript, disproportionately values highly cited manuscripts, and also may disproportionately value a review article in comparison with an original scientific contribution, as both are weighted similarly. Citations per manuscript allows comparisons among scientists independent of age or stage in career, but it can penalize high productivity. Among other measures, the impact factor seems to be the prevailing one used by most institutions to judge the value of one's scientific contributions.
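The three count-based measures above can be illustrated with a toy citation record; the numbers below are invented for illustration, not drawn from any real scientist's record:

```python
# Hypothetical citation counts, one entry per published manuscript
# (figures invented purely for illustration).
citations_per_paper = [0, 2, 3, 5, 8, 150]  # one highly cited review dominates

num_manuscripts = len(citations_per_paper)          # productivity
total_citations = sum(citations_per_paper)          # total impact, dominated by the review
mean_citations = total_citations / num_manuscripts  # citations per manuscript

print(num_manuscripts, total_citations, mean_citations)  # 6 168 28.0
```

The toy record shows the distortions described in the text: a single highly cited review inflates both the total and the mean, while the mean would fall if the same scientist published several additional, modestly cited original papers.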

The impact factor was created in 1955 by Eugene Garfield as a means to "evaluate the significance of a particular work and its impact on the literature and thinking of the period."1 Ostensibly, he did not appreciate that it would be used fifty years later to rank journals, evaluate institutions and judge individuals. A journal's impact factor is calculated annually using citation and publication data from the previous two years.2 For example, the impact factor for 2007 is: (Citations in 2007 to articles published in 2005 and 2006)/(Number of citable articles published in 2005 and 2006). While the numerator count comprises citations to any article published by that journal in the previous two years, the denominator of citable articles comprises research articles and reviews only. It excludes editorials, letters, news items and meeting abstracts. The impact factor is readily measurable and objectively calculated.
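The two-year arithmetic can be made concrete with hypothetical figures (the counts below are invented and belong to no real journal):

```python
# Impact-factor arithmetic for 2007, using invented figures.
citations_2007_to_2005_2006 = 1200  # citations in 2007 to any 2005-2006 items
citable_items_2005_2006 = 400       # research articles and reviews only;
                                    # editorials, letters, news items and
                                    # meeting abstracts are excluded

impact_factor_2007 = citations_2007_to_2005_2006 / citable_items_2005_2006
print(impact_factor_2007)  # 3.0
```

Note the asymmetry the text describes: citations to editorials or letters still count in the numerator, but those items never enter the denominator, which can nudge the ratio upward.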

There are a number of problems associated with use of the impact factor to measure scientific contributions, particularly when comparing those of a glaucoma researcher with contributions from someone outside of glaucoma. The number of peer-reviewed publications relating to glaucoma that might cite another glaucoma publication is small in comparison to fields such as cardiovascular medicine and cancer, for example. Hence, the journals for these larger fields typically have higher impact factors than those in vision research or ophthalmology. Should a manuscript published in a high-impact general medical or scientific journal necessarily be considered to have higher impact in glaucoma than one published in a lower-impact ophthalmic or vision research journal? I do not think so. There are other general factors that can influence the impact factor, as well.

For example, an editor can enhance a journal's impact factor by publishing more widely cited review articles, by media promotion and, more recently, by making content available online.2 Moreover, it is thought that the two-year time span of the impact factor favors dynamic research fields such as the basic sciences, rather than clinical medicine. Perhaps most significantly, the two-year period for citation counts also tends to discount the scientific contributions that are the most enduring, particularly if they are not cited widely within this brief period.

There are still other limitations of the use of the impact factor. Although citation counts do correlate to some extent with quality and proposed hierarchies of evidence, a poorly conducted study or research from a low-impact journal can be cited frequently. Bias may also arise from author or journal self-citation. One fifth of citations in the diabetes literature have been shown to be author self-citations unrelated to the quality of the original article!3 It is clear that we do not have a single and unassailable method for measuring the impact and relevance of one's scientific contributions. Each of the above factors has a role and contributes to the texture of scientific significance, even though each provides just a glimpse at the complexity of the task. Interpreting the significance of the science, in many respects, is as challenging as elucidating the science. As Professor Folkman described it, "The ideas are simple, but getting them figured out is very complicated."

References

  1. Garfield E. Citation indexes to science: a new dimension in documentation through association of ideas. Science 1955; 122: 108-111.
  2. Chew M, Villanueva EV, Van Der Weyden MB. Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994-2005) and their Editors' views. J R Soc Med 2007; 100: 142-150.
  3. Gami AS, Montori VM, Wilczynski NL, Haynes RB. Author self-citation in the diabetes literature. CMAJ 2004; 170: 1925-1927.
