A recent issue of Nature - 17 June 2010 - has an editorial on "assessing assessment", wherein the Editor has a number of thought-provoking things to say.

Such as the opening sentence:

"The use of metrics to measure and assess scientific performance is a subject of deep concern, especially among younger scientists." 

He could say that again - although we older folk also have our concerns.  And further:

"Many researchers say that, in principle, they welcome the use of quantitative performance metrics because of the potential for clarity and objectivity. Yet they also worry that the hiring, promotion and tenure committees that control their fate will ignore crucial but hard-to-quantify aspects of scientific performance such as mentorship and collaboration building, and instead focus exclusively on a handful of easy-to-measure numbers related mostly to their publication and citation rates."

Ye-e-es...well, it's hard to quantify community interactions, for example, and certain things to do with teaching, given the stellar ratings students seem to give to people who spoon-feed them rather than to those who challenge them - itself the partial subject of a recent article. A retroidal colleague did once try, though, in our very own local Nature, the SA J Sci: he came up with a very interesting graphical approach (GG Lindsey, South African Journal of Science 101, May/June 2005, pp. 211-212) which is worth reproducing here.

[Figure: Lindsey's graphical scheme for rating teaching, from S Afr J Sci 101, pp. 211-212]

Simple and clear...one wonders how many Distinguished Teacher awards could be reconsidered in the light of this?  However, this is not the point; let us return to the distinguished Editor for more illumination:

"Most institutions seem to take a gratifyingly nuanced approach to hiring and tenure decisions, relying less on numbers and more on wide-ranging, qualitative assessments of a candidate's performance made by experts in the relevant field.

Yet such enlightened nuancing cannot be taken for granted. Numbers can be surprisingly seductive, and evaluation committees need to guard against letting a superficial precision undermine their time-consuming assessment of a scientist's full body of work."

Yes, the Kollectiv has heard terms such as "impact factor" and "h-index" being bandied about in connection with evaluating people, and it is now surprisingly easy to get such information, thanks to our very excellent Library and its electronic database access <note librarians: kudos on offer>.  But the good Editor has more to say:
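For those who have not met it, the h-index is simple enough to compute yourself: a researcher has index h if h of their papers have been cited at least h times each. A minimal sketch in Python, with the citation counts invented purely for illustration:

    def h_index(citations):
        # Largest h such that at least h papers have >= h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    papers = [48, 23, 12, 9, 5, 2, 0]  # hypothetical citation counts
    print(h_index(papers))             # -> 5: five papers cited at least 5 times each

Note that the single number says nothing about what those papers actually taught anyone - which is rather the Editor's point.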

 "Academic administrators, conversely, need to understand what the various metrics can and cannot tell them. Many measures — including the classic 'impact factor' that attempts to describe a journal's influence — were not designed to assess individual scientists. Yet people still sometimes try to apply them in that way." 

And further:

 "...transparency is essential: no matter how earnestly evaluation committees say that they are assessing the full body of a scientist's work, not being open about the criteria breeds the impression that a fixed number of publications is a strict requirement, that teaching is undervalued and that service to the community is worthless. Such impressions do more than breed discontent — they alter the way that scientists behave. To promote good science, those doors must be opened wide." 

Amen, brother Editor, amen.  And for all who wish further enlightenment, there are two articles in the same issue - "Do metrics matter?" and "A profusion of measures" - which discuss the issue in serious detail.  They include the finding that Google Scholar may in fact be the best - if clunkiest - means of accurately assessing just how many people cite one's work, beating out ISI's Web of Science and even Scopus.

Come on, Google yourself: you know you want to...B-)