Empathy and grit – not just publication records – should be considered in researcher assessment
Is this the future of metrics in academia?
12 May 2020
How should researchers be assessed in a way that’s fair and accurate, and not over-reliant on publication metrics?
Critics of current methods for evaluating researchers’ work – including Nobel Prize-winners and journal editors – say a system that relies on bibliometric parameters favours a ‘quantity over quality’ approach, and undervalues achievements such as social impact and leadership.
A recent workshop organised by the Australian Early- and Mid-Career Researcher (EMCR) Forum, a representative network for emerging scientists in Australia, discussed how fairer metrics can become more widespread.
Attendees, more than half of whom were senior academics and section heads, were asked to nominate the most important skill of an effective researcher. The top answers were resilience, perseverance, curiosity, empathy, and flexibility – qualities that go far beyond what bibliometrics alone can measure.
Workshop organiser Joanne Bartley, principal of culture and change at the Australian Nuclear Science and Technology Organisation (ANSTO), says such qualities are rarely considered in academic hiring and promotion, yet they are what makes a technically capable person more likely to succeed.
“It’s the differentiating factor,” she says.
How to measure success
The first calls to improve how research is assessed sounded almost a decade ago with the San Francisco Declaration on Research Assessment (DORA) and the Science in Transition initiative. An independent review of the role of metrics in research assessments followed in 2015, the same year that The Leiden Manifesto was published, which recommended 10 principles for responsible research evaluation.
Since then, efforts to correct the balance and improve research practice have been taking shape.
China, for example, this year removed cash incentives for publishing papers in an effort to curb the ‘publish or perish’ culture among researchers.
In the UK, a recent proposal has recommended that ‘knowledge exchange’ activities, such as intellectual property commercialisation and voluntary museum work, should be included in new impact indicators used to allocate funding to universities, and funders have been called on to exclude institutions that set grant capture targets.
Young researchers from the University Medical Center Utrecht in the Netherlands – who were involved in the Science in Transition initiative – have introduced a new evaluation method for PhD candidates designed to promote professional diversity and growth.
But widespread change has yet to occur.
How to evaluate leadership
Structural biologist Jenny Martin, deputy vice-chancellor of research and innovation at the University of Wollongong in Australia, says sponsorship is an emerging way for researchers to demonstrate effective leadership.
Examples of sponsorship include nominating a colleague or student for an award or providing advice on grant applications or papers without expecting to be listed as an author or chief investigator.
“The biggest role of a leader is to create other leaders, to make sure they are bringing somebody up to replace them when they move on,” says Martin.
In 2017, ANSTO revised its salary increase scheme to give equal weighting to research leadership and academic contributions. In addition to measures of research outputs and impact, staff are now scored on their efforts to grow and develop their scientific area through student supervision, user communities, and research partnerships across academia and industry, says Bartley.
In short, the scheme values how someone conducts their work just as much as what they produce.
In a similar move, the University of Glasgow in the UK began rating ‘collegiality’ in its professorial promotions in 2019 to “recognise the way in which candidates support their colleagues to succeed”. This includes award nominations.
While the strategy has been lauded by some, other researchers say ‘collegiality’ could present a challenge to diversity and inclusion initiatives if it is used as an excuse for perpetuating dominant cultural groups.
“I do think that collegiality poses some huge dangers for women and minorities,” says Janet Hering, director of the Swiss Federal Institute of Aquatic Science and Technology. “‘Collegiality’ can be an excuse for perpetuating the old boys’ club.”
Recognising subjectivity in assessment
Stephen Curry, professor of structural biology at Imperial College London and chair of the DORA steering committee, says academia needs to free itself from the idea that every aspect of performance can be objectively measured.
“We need to recognise the subjectivity inherent in the process, build in as many safeguards against bias as we can, and acknowledge that there is no perfect way to do this,” says Curry.
He told Nature Index that he welcomes more holistic evaluations that include structured narrative assessments asking candidates to express in their own words their contributions across multiple domains.
“[We need to] find out if they have a vision for their research that goes beyond their own career advancement,” he says.
Bartley is encouraged by organisations that are already experimenting with different forms of assessment and reporting on their outcomes.
“Cultural change across an industry will happen if you have people making small changes to the environments that they have control over,” she says.