Philip Ball wonders how to give credit where it's due
Top ten lists are guaranteed to start an argument. That's why, when in 2005 physicist Jorge Hirsch of the University of California at San Diego devised a way of ranking scientists' research impact [1], he was essentially making an incendiary device. Now it seems that everyone knows and monitors their Hirsch index or h-index: it's the academic equivalent of furtively Googling your own name, an exercise in vanity that is impossible to resist.
But there's more than pride at stake. Your h-index matters. It's now often used informally as a rough guide for assessing job applicants - some helpfully put it on their CVs. The 'top ten' lists that the index generates are indisputably filled with first-rate scientists. All living chemists with an h-index over 50 are ranked on Chemistry World's website, and the current leader is the illustrious George Whitesides of Harvard University, with an h-index of 135.
What does that number mean? Here's the formula: your h-index is the largest number n such that you have n publications that have each been cited at least n times. An index of 20 means you have 20 papers that have each been cited at least 20 times. This is better than a bare count of publications, which fails to take account of how significant they are: they could all be salami slices that barely anyone read, let alone found useful enough to cite. Counting the number of papers you have published in high-impact journals is arguably better, but still somewhat invidious: those journals don't always get the best work, and some of their papers are inevitably trendy but short-lived. The h-index instead measures how your publications have been judged by your peers, regardless of where they appeared.
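The calculation is simple enough to spell out. Here is a minimal sketch in Python; the citation counts are invented purely for illustration:

```python
def h_index(citations):
    """Return the h-index: the largest n such that n papers
    have at least n citations each."""
    # Sort citation counts in descending order, then find the last
    # rank at which the count still matches or exceeds that rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's papers:
print(h_index([48, 30, 22, 15, 9, 6, 6, 3, 1, 0]))  # prints 6
```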
Your career trajectory can be read in your h-index. It inevitably increases over time, but in a manner that varies from one person to another: tracking a meteoric rise and early burn-out, say, or a slow build to eminence. Hirsch has recently argued that the h-index is not just a measure of past performance, but also a good predictor of future potential - a distinction that matters if it is being used for making appointments.
But as many have pointed out, the h-index is not perfect. It goes without saying that no single number can gauge the creativity, originality, and short- and long-term significance of a research oeuvre. That's why no one is advocating its use as a sole assessment tool. Even on its own terms, it has pitfalls. It tends to discriminate against female scientists who take career breaks for family reasons, since their h-index may then fall irretrievably behind those of their contemporaries. While it clearly identifies the best scientists, there is some debate about how discerning it is among the vast majority of middle-rankers. Others worry about the possibility of inflating an h-index by self-citation [2]. Alternative indices have been proposed that claim to ameliorate these drawbacks, but the simplicity and transparency of the h-index surely count in its favour.
One of the big challenges for any bibliometric scheme is how to deal with multi-author papers. Particularly in large collaborations, not all authors contribute equally. Hirsch argues that the h-index does afford some sort of proportionate weighting, but this may often be unfair.
Traditionally, a crude measure of authorship priority has been to list the principal contributor first. That also means this individual is named in citations when the others become 'et al.' Qualifying footnotes to the effect that 'A and B contributed equally to this work' do little to alter that. But last place in the list also tends to be awarded more significance: that's often where the senior, supervisory author goes. None of this is universal or explicitly codified, but a recent study [3] by Jonathan Wren of the Oklahoma Medical Research Foundation and his coworkers shows that it is generally assumed. They canvassed opinion among 87 promotion committees in North American medical schools, and found that names in the middle of an author list are regarded as having made less of a contribution than those at the beginning or the end. The longer the list, the more the middle positions are discounted, and so the less the paper helps those authors' prospects for tenure.
Here's one facet of life, then, where it pays to come last. The moral seems to be that you should either limit yourself to a single collaborator or put everyone at the end of the list. Speaking for myself, though, I quite like the alphabetical approach.
References
1 J E Hirsch, Proc. Natl Acad. Sci. USA, 2005, 102, 16569 (DOI: 10.1073/pnas.0507655102)
2 M Schreiber, Ann. Phys., 2007, 16, 640 (DOI: 10.1002/andp.200710252)
3 J D Wren et al, EMBO Reports, 2007, 8, 988 (DOI: 10.1038/sj.embor.7401095)