News

Rethinking research assessment: 7 sources of bias to watch out for at your institution

Recognizing the signs of systemic bias is key to ensuring that hiring, promotion and tenure decisions are fair for everyone.

16 March 2021

Gemma Conroy

Credit: erhui1979/Getty Images

While diversity initiatives and anti-bias training programs have become commonplace at universities, they have done little to level the playing field.

So says Ruth Schmidt, who studies behavioural design and communication theory at the Illinois Institute of Technology’s Institute of Design in Chicago.

“Research assessment is a systemic issue, not an individual issue,” she says. “Unless you embed debiasing into institutional systems and structures, you’re only solving a little piece of the problem.”

With Anna Hatch, program director for the Declaration on Research Assessment (DORA) in Rockville, Maryland, Schmidt published a four-part blog series highlighting seven common biases that stand in the way of equitable decision-making in hiring, review, promotion and tenure.

While these biases can be difficult to overcome, the first step is self-awareness, says Schmidt. Speaking up and being an advocate for diversity in your group or department can also help change things for the better.

Here are seven biases to watch out for in research assessment, and how they manifest in practice.

1. Campbell’s Law

Campbell’s Law states that once a metric is adopted as a measure of success, it tends to lose its value as a measure, for example because people compete by “gaming” it.

In research assessment, when traditional metrics such as the h-index and Journal Impact Factor are used in isolation to assess academic achievement, they can cement biases towards certain groups – male researchers, for example.

Researchers whose strengths are not captured by these metrics, such as mentoring and collaboration, may be overlooked.

What it looks like: Researchers resort to self-citations to boost their h-index, and hiring committees pick the candidates with the highest h-index.
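To make the gaming concrete: the h-index is simply the largest number h such that a researcher has at least h papers with at least h citations each. The sketch below is a minimal, hypothetical example in Python, with made-up citation counts, showing how one self-citation added to each paper can be enough to raise the metric, the kind of gaming Campbell’s Law describes.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers.
papers = [12, 9, 7, 6, 5, 5, 2, 1]
print(h_index(papers))  # 5: five papers have at least 5 citations each

# One self-citation added to every paper is enough to bump the metric,
# even though the underlying work is unchanged.
padded = [c + 1 for c in papers]
print(h_index(padded))  # 6
```

The numbers here are illustrative only; the point is that a small, self-administered change can move the score without any change in the quality of the underlying work.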

2. Matthew effect

The Matthew effect describes the phenomenon whereby those who start with advantages accumulate more advantages over time compared with those who are disadvantaged.

“The idea is that resources are more likely to flow to those who already have them,” says Hatch.

Applied to research assessment, it suggests that candidates with high grant allocations and high citation numbers are more likely to be rewarded.

For example, says Hatch, evaluators tasked with processing hundreds of applications and proposals may prioritize candidates who fit their preconceived notions of success.

What it looks like: Funding agencies may award more grants to researchers who already have a long funding track record.

3. Anchoring

Relying on the first piece of information we see as a reference point for future decision-making is known as anchoring.

“It’s like using the first price you see on a menu to set what you consider as normal,” says Schmidt. “It can hugely impact how we think about other people’s experiences and the value they may bring.”

What it looks like: In hiring and tenure decisions, committees may unintentionally judge candidates against the first person they interview.

4. Halo effect

When hiring or funding committees focus on ‘shiny’ attributes such as a candidate’s collaboration with a Nobel laureate or a strong track record of publishing papers in high-impact journals, the halo effect is in action, says Schmidt.

This can reinforce inequitable norms and cause more well-rounded candidates to be overlooked, she says.

“If someone has graduated from a certain institution or published in a particular journal, you’re more likely to excuse things that are not so great.”

What it looks like: Evaluators may give preferential treatment to a candidate from a prestigious institution without considering the intrinsic quality of their work.

5. Availability

Falling back on impressive stories or easy-to-recall information when assessing researchers is a sign that availability bias is at play. This can favour certain types of researchers, such as those who are more extroverted or well-connected.

“Certain things are more likely to make a dent than others, such as having a memorable conversation with somebody,” says Schmidt. “This can dictate how somebody thinks about you, which means they’re not actually looking at [the assessment] in an objective way.”

What it looks like: Hiring committees may focus too much on memorable anecdotes, such as a candidate having won a well-known grant or worked with an evaluator’s colleague.

6. Confirmation bias

Prioritizing information that fits within existing belief systems is known as confirmation bias. This creates a narrow-minded approach to decision-making and can cause evaluators to overlook certain red flags.

“We tend to be very good at building on and bolstering the things we already know,” says Schmidt. “If we’ve already made a strong judgement, it can be incredibly difficult to look at things equitably and objectively.”

What it looks like: Evaluators may cherry-pick information that confirms their view when assessing applications.

7. Status quo bias

Challenging deeply entrenched practices can be tough, but taking the path of least resistance can result in status quo bias, where people and institutions rely on familiar processes rather than adopting new ones.

“Stability in a system is often a really useful thing, but it can also be dangerous,” says Schmidt. “If there’s not an incredibly strong reason to change, you are going to behave the way you always have.”

What it looks like: Evaluators continue to rely on narrow metrics such as citation counts, despite their drawbacks, to gauge research impact or quality, rather than considering other indicators that may be less well established or harder to measure.