Apologies - the end part of this blog, so carefully crafted by Eve Gray - did not make it to the first post! So here is the rest (Cheryl)
The national policy for the reward of
scholarly publication pays substantial sums of money for publication in
‘accredited’ publications. This means that the emphasis is on authorship of
individual articles, linked to what is largely a pre-determined list of
publications. In this system it is where
something is published that counts, rather than what is published.
We thus learned the perhaps obvious lesson that
what is recorded and tracked in university systems reflects the
(financial) value systems enshrined in national policy. Support for publication activities does not
attract immediate financial rewards and so is not tracked. In
conversation, however, the realisation dawned that the lack of support for South African
journals and presses was having a prejudicial effect on the ability of authors
to get published professionally and speedily in South African publications, and
on the ability of locally-focused research to find sufficient outlets, particularly when it
came to new fields of knowledge.
The second set of questions had to do with
the social responsiveness of the university’s research activities. We knew,
from our own contacts and enquiries, that UCT has a rich record of socially
responsive research projects that provide considerable social and economic benefits. We
also know that the national government is calling on universities to
demonstrate the impact of publicly-funded research efforts on national
development targets – economic growth, human resource development, employment,
public health, and education, to name but a few. Here the financial rewards are
indirect: as Martin Hall put it, bluntly,
the reward for the university is survival.
Governments everywhere are reducing unconditional support for higher education
institutions, promoting responses that range from near privatization (through
escalating fees) to marketization (requiring an ever-increasing emphasis on
directly marketable research outputs).
In these circumstances, as Hall insists, it
is wise for universities to ensure that their research is effectively
disseminated and its impact demonstrated.
We found here, too, that there was no
comprehensive record of the research output that could deliver social impact.
In this case, though, UCT, in the context of its transformation agendas, has
started to build up records of research projects and innovations that are
having development impact, through the recent creation of a Social Responsiveness website
(see also the UCT Transformation
blog). However, this kind of research output is not yet reflected in
the IRMA database. Thus, if one asks the question, ‘What is UCT doing, by way
of development-related research?’ there can be an answer, drawn from the new
website, but as yet this reflects only a fraction of what is being done.
All of this obviously raised questions
about the relationship between a university’s mission and how well this is
articulated through its administrative records. From this perspective we
realised the potential importance of a system like IRMA and the political
sensitivity of the way such a system is used.
Once at the IRMA workshop, it became clear
that a descriptive case study can be worth many pages of policy documents, and
so I listened with great interest as Merrill Bouckley, the Reports,
Systems and Data Manager in the Research
Office of the University of Sydney, described
how research records are kept there.
Sydney is one of the 'Group of Eight' – the leading Australian universities.
These institutions are fiercely competitive (like UCT) when it comes to their
ability to attract research funding and to the
international profile they gain from their research outputs. Thus far, this
sounds very like the South African situation, in which the pressure to produce
publications that qualify for Department of Education subsidy and attract
citations and international rankings is a key competitive driver in university
research departments. However, Merrill's description of how the University of Sydney implements the Australian Research
Quality Framework reveals key differences in policy approach, philosophy and
implementation processes. In particular, the system relies less on the prior
identification of 'accredited' publications and more on a qualitative judgement
on the value of the particular publication output.
As Merrill described the system, the
university research office records all outputs – not only journal articles,
conference proceedings and books, but also research reports, media articles,
posters, presentations, creative works and computer programmes. In the first
instance, they are not interested primarily in what is accredited but want to
capture as much as possible of what is being disseminated. They do this
progressively throughout the year and scan and store the publications in their
D-Space repository. This information is then available to be used to profile
the university and its individual researchers and to enable it to respond to
changes in national policy (for instance, after a recent change in government – something not
without relevance to South Africans, given the current state of ANC politics).
The way the system of research rewards
works in Australia differs from South Africa's
in significant ways. First of all, the emphasis is – beyond metrics and
numerical counts - on the qualitative
value of what is published, and not on the vehicle through which it is published.
In other words, it is what is
published that is important, more than where
it is published. It is up to the university to tag and then sift through the
output of books, book chapters, journal articles and conference proceedings to evaluate
what it regards as outputs that are of sufficient quality to qualify for
government subsidy. Because Sydney
records publications progressively throughout the year, it can blind-review
where it feels this is necessary as soon as publications are recorded on the
system. If this review process rejects an output as not being of sufficient
value (and this can happen even in the case of publication in prestigious
international journals) then there is an appeal system that goes to a committee
whose judgement is final. If a journal is not of well-established quality, then
the article will inevitably go off to review. And if something emerges as
being of exceptional research value, but is published in a non-peer-reviewed
publication, then the university has the option of reviewing it and including
it in the accredited list.
There is an element of risk in all this: the
university submits a very brief report
to the Department of Education, Science and
Training (DEST), simply enumerating the publications proposed for
accreditation, and DEST then carries out a rolling audit
through the different universities from year to year. (There is, by the way, no
need to send container-loads of paper copies to the government office, as
everything is accessible online in the institutional repository.) If there is a
variance of more than 10% between the university’s evaluation of its
publications and the government’s, the university suffers a proportional cut in
its research funding (where the publication component is not, as in South Africa,
per publication, but is proportional across a number of funding fields).
The value allocated to different kinds of
publication, by the way, is also weighted differently from South Africa's:
journal articles and conference proceedings get 1 point, books get 5 points, and
chapters in books are weighted by a more complicated computation.
While this system does seem to mean a lot
of work for the university in entering and evaluating publications, it does
seem to have significant advantages. First of all, the university has control
over what it considers to be valuable research. This means that
locally-relevant research can be valued, as can new fields of knowledge
development. There seems to be a better chance of consistent and
appropriate values being applied to research output. The system enables extensive
tracking and analysis of trends in research output and also enables the
institution to leverage this information to promote and position the
university’s reputation. For example, the kind of question I suggested above,
about socially responsive research, could be answered at the push of a button
(as the system potentially has fine granularity and can break down what kind of
research has been done, with what kinds of socio-economic implications).
Jean-Claude Guedon sums up his positive
evaluation of the Australian system in a recent article:
One country, Australia, has
found an interesting and intelligent way of promoting the creation of open
archives in its universities and getting them to fill up effectively through
coupling the creation of these archives with the national procedures for the
evaluation of research (the Research Quality Framework, or RQF). The
fundamental idea is to base the evaluation of university research uniquely on
what is to be found in the institutional repository (and these in turn
determine the level of financial support to the university in question),
because, it is argued, efficiency requires that one relies on digital documents
that are easily accessible. The complementary idea is to make available a
substantial budget to help the universities to set up institutional
repositories rapidly…: this is the Australian Scheme for Higher Education
Repositories, or ASHER.
This system arose, in fact, out of an earlier study designed to demonstrate the
importance of research for the Australian economy: Public Support for Science
I suspect that there are some lessons to be
learned here for South African institutions and national policy-makers.