A Metric for Academics: A Personal Suggestion

Every year in the US, and at various intervals in other countries, academics must pull together what they have done to provide administrators with the data required for their indicators of performance. Just as metrics gave baseball teams a new tool for choosing players more systematically, based on their stats, as portrayed in the popular film Moneyball, so universities hope to improve their performance and rankings by relying more on metrics than on the intuitions of faculty. Metrics are indeed revolutionizing the selection, promotion, and retention of academics, and of units within universities. Arguably, they already have done so. The recruitment process increasingly looks at various scores and stats about any given candidate for any academic position.

Individual academics can’t do much about it. And increasingly, the metrics will be collected without the academic even doing any data gathering, as data on citations, publications, and teaching ratings get generated in the course of being an academic. Academic metrics are becoming one more mountain of big data ready for computational analysis.

I am too senior (old) to be worried about my own metrics. They are not great, but they are as good as they will ever be. My concern is more with administrators tending to count everything that can be counted, rather than trying to develop indicators that get to the heart of academic performance. Of course, this is extremely difficult, since academics seldom agree on the rating of their colleagues: a scholar who is a superstar to one academic is conceptually dead from another’s perspective. That very disagreement is one of the many factors driving academia towards more indicators, or hard evidence, of performance. Because the judgments of scholars vary so dramatically, counting what can be counted at least yields some harder evidence that might be indicative of what we are trying to measure – quality.

So what can we count? It varies by university, but I’ve been in universities that count publications, of course, and every kind of publication at that, from refereed journal articles to blogs. And each of these might be rated, such as by the status of the journal in which an article appears, or the prestige of the publisher of a book. But that is only the beginning. We count citations, conference papers, talks, committees, awards, and more. Therefore, we perennially worry about whether we have published enough in the right places and done enough of whatever else is counted.

In the UK, there has been an effort to measure the impact of an academic’s work. Entire conferences and publications have been devoted to what could be meant by impact and how it could be measured. Arguably, this is a well-intentioned move toward measuring something more meaningful. Rather than simply counting the number of publications (output), why not try to gauge the impact (outcomes) of the work? The trouble is that impact is difficult to measure reliably and validly, given that the lag between academic work and its impact can be years or decades. Take Daniel Bell’s work on the information society, whose huge impact went well beyond what might have been expected in the immediate aftermath of the publication of The Coming of Post-Industrial Society. Nevertheless, indicators of impact will inevitably be added to the growing number of other indicators, even though universities will spend an unbelievable amount of time trying to document this metric.

In this environment, being senior in academia, I sometimes get asked how a colleague should think about these metrics. Where should they publish? How many articles should they publish? To which publisher should they submit their book? It goes on and on.

I try to give my opinion, but my most general response, when I feel it will be taken as advice and not criticism, is this: focus on contributing something new to your field. Rather than thinking about numbers, think about making a contribution to how people think about your field.

This must go beyond the topic of one’s research. It is fine to know what topics or areas an academic works in, but what has he or she brought to that field? Is it a new way of doing research on a topic, a new concept for the area, or a new way of thinking about the topic?

In sum, if an academic’s career were considered by another academic familiar with their work, could that person say the academic had made an original, non-trivial contribution to the study of their field? This is very subjective and difficult to answer, which may be why administrators move to hard indicators. Presumably, if someone has made an important new contribution, their work will be published and cited more than the work of someone who has not. That’s the theory.

However, the focus on contributing new ideas can give academics a more constructive motivation and an aim to guide their work. Rather than feeling that your future rests on getting x number of journal articles published, you make publication a means to a more meaningful end: furthering progress in your field of study. If you accomplish this, the numbers, reputation, and visibility of your work will take care of themselves. What would be a new contribution to your field? That is exactly the right question.


Comments are most welcome