Want to avoid discrimination? Then check your IT systems, urgently

Access your own AI brain and deliver cost-effective solutions, time-saving resources and targeted patient outcomes.

“Covid-19 is a great leveller.”

 

So said many politicians back in March, when we were still trying to understand a disease which seemingly infected everyone from prime ministers to paupers, sending us all into literal lockdown.

 

That lasted only a few days, until we emerged from our collective stupor and realised this lazy conclusion was offensive and myopic: another generality espoused by a ruling class unaware of everyday realities at the bottom of the social pyramid.

 

“They tell us the coronavirus is a great leveller. It’s NOT. It’s much, much harder if you’re poor,” observed BBC newsreader Emily Maitlis, in a widely praised effort to debunk the myths propagated by politicians. She went on to point out that those in manual labour jobs were more likely to be out of work, or more exposed to the virus if they were still working, and that lockdown is far harder if you live in a cramped basement with no outdoor space. We later learned that the death rate among black victims was three times that of white people.

 

What at first appears indiscriminate rarely is: there are always some who suffer more than others.

 

The future isn’t safe

Today we are embracing a technology that, on the face of it, seems fair. We call it Artificial Intelligence.

 

Unaffected by privilege, unconcerned with race or gender, such systems look impartial – so it apparently makes sense to give increasing power and capabilities to the computers that are replacing humans in charge of our world.

 

Unfortunately, they’re not fair either.

 

A new article in Nature examines how the decisions made by AI are increasingly biased not simply by race or gender, but by something that could be far more insidious: influence and profit. That’s because many AI implementations are brought in by those who are already captains of society, in an effort to exert even greater control (often innocently referred to as ‘management’).

 

In the pharmaceutical industry, we are using AI to find new molecules, but we are also increasingly using it to enhance internal processes: guiding our sales forces, indicating where to place our marketing budget and recruiting for clinical trials.

 

Typically, a senior executive finds it straightforward to encourage the use of AI – as from a management perspective, it makes complete sense.

 

Little thought is given to the fate of data subjects – those at lower levels of the hierarchy, who must rely on the instructions given. There is no way for these people to investigate AI, to contest it, to influence it or to even dismantle it. The computer is a black box. One which must be fiercely obeyed – or else you’ll be out of a job before you can say “transparency”.

 

Used like this, AI is little more than a tool for tyranny, one that suppresses the intelligence and agency of a workforce. It provides short-term utility, but ultimately breeds long-term inertia by disempowering teams across the company. It reinforces inequality.

 

As socially conscious companies, we must make sure our computers – not just our people – are embracing diversity and fairness.

 

The true meaning of fairness

Fairness is what we’re all looking for. Fairness “implies a commitment to ensuring equal and just distribution of both benefits and costs, and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatisation. If unfair biases can be avoided, AI systems could even increase societal fairness,” according to the High-Level Expert Group on Artificial Intelligence (AI HLEG).

 

Loubna Bouarfa, OKRA’s Founder and CEO, was one of the 52 experts appointed to the group by the European Commission and tasked with setting ethical guidelines for artificial intelligence and machine learning businesses across Europe.

 

In healthcare, it is unfair for people not to receive the right treatment for their condition for socio-economic reasons such as geography, ethnicity or social status.

 

Therefore, we need to make sure that AI systems prioritise access to treatment according to prevalence and impact rather than socio-economic factors. For instance, an AI system should make sure that a single mother has the same access to healthcare whether she lives in rural Scotland or Central London.

 

At OKRA, we have developed AI systems that guide pharma commercial and medical field teams towards the highest-priority doctors. The priority score takes into account disease prevalence, access and demographic data, indicating the practices where patients most in need of a particular treatment have the greatest opportunity to avoid disease regression.
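To make that concrete, here is a minimal, hypothetical sketch of how such a priority score might be put together. The field names, weights and example figures are illustrative assumptions rather than OKRA’s actual scoring model; the point is simply that unmet need, access and demographics drive the ranking, not expected sales.

```python
# Hypothetical sketch of a practice-level priority score built from prevalence,
# access and demographic signals. Names, weights and figures are illustrative
# assumptions, not OKRA's actual model.
from dataclasses import dataclass


@dataclass
class Practice:
    name: str
    expected_prevalence: float   # estimated cases of the condition per 1,000 local patients
    treated_rate: float          # share of those patients already on appropriate treatment
    access_score: float          # 0-1, how easily local patients can reach care
    deprivation_index: float     # 0-1, higher means a more deprived area


def priority_score(p: Practice) -> float:
    """Rank practices by unmet patient need rather than commercial convenience."""
    unmet_need = p.expected_prevalence * (1.0 - p.treated_rate)
    # Weight unmet need up where access is poor or deprivation is high, so remote
    # or disadvantaged areas are not pushed down the visit list.
    need_weight = 1.0 + 0.5 * (1.0 - p.access_score) + 0.5 * p.deprivation_index
    return unmet_need * need_weight


practices = [
    Practice("Rural Scotland clinic", 12.0, 0.4, access_score=0.3, deprivation_index=0.7),
    Practice("Central London surgery", 12.0, 0.8, access_score=0.9, deprivation_index=0.2),
]
for p in sorted(practices, key=priority_score, reverse=True):
    print(f"{p.name}: priority {priority_score(p):.1f}")
```

Ranked this way, the under-served rural clinic comes out ahead of the well-served city surgery, even though both have the same underlying prevalence.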

 

To do that right, we must focus on the right metrics. While senior executives may be interested in short-term sales, our AI systems focus on identifying where under-treated patients are located, guiding field teams to the areas where treatments are needed most and providing a more authentic long-term opportunity.

 

Explainability beyond correlation

In this context, how can we change the behaviour of people? How can we make sure they understand the true meaning of fairness?

 

For a professional in the field, it is easier to visit doctors in their own urban district than to travel miles to a remote rural area. AI is here to convince professionals to go that extra mile, to make every effort to ensure that medicines are distributed fairly across society.

 

But how? Well, with explainability.

 

At OKRA, explainability is baked in. So, as well as the usual focus on accuracy and reliability of output, we invest just as much in providing transparency and reasoning. Our aim is to build a system that’s trusted, one that’s seen as a companion rather than a dictator.

 

The result is the hybrid explainability engine, a built-in function that expresses in plain language why every output was given.

 

OKRA’s engine combines causality and correlation using causal inference. In simple terms, causal inference tries to work out why something happened in the past in order to explain why something is expected to happen in the future.

 

By navigating the past, a machine is able to explain the future.
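As a rough illustration of the underlying idea (not OKRA’s hybrid engine, just a generic sketch with made-up variable names and effect sizes), the example below contrasts a correlation-only estimate with a confounder-adjusted, causal one, and then turns the result into a plain-language explanation:

```python
# Generic illustration of correlation vs. causal inference on synthetic data.
# Assumed causal structure: access -> visits -> treated, with access also
# directly affecting treated (so access confounds the visits -> treated link).
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

access = rng.normal(size=n)
visits = 0.8 * access + rng.normal(scale=0.5, size=n)
treated = 0.6 * visits + 0.4 * access + rng.normal(scale=0.5, size=n)

# Naive, correlation-only estimate of the effect of visits on treatment uptake
# (biased upwards because access drives both visits and treatment).
naive = np.polyfit(visits, treated, 1)[0]

# Adjusting for the confounder recovers the causal effect of visits.
X = np.column_stack([visits, access, np.ones(n)])
causal = np.linalg.lstsq(X, treated, rcond=None)[0][0]

print(f"Correlation-only effect of extra visits: {naive:.2f}")
print(f"Confounder-adjusted (causal) effect:     {causal:.2f}")
print(
    "Explanation: past data suggests each additional field visit raises "
    f"treatment uptake by about {causal:.2f} units once differences in local "
    "access are accounted for, which is why more visits are recommended for "
    "under-served practices."
)
```

The correlation-only slope overstates the benefit of visits because better access drives both visits and treatment; adjusting for that confounder recovers the causal effect, which is the figure worth explaining to a field team.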

 

Explainability encourages enquiry and builds trust. It meets broader ethical standards: AI by and for the people.

 

Redressing the balance

OKRA’s focus – on crucial health and patient outcome challenges – puts extra emphasis on the key requirements of diversity, non-discrimination and fairness. The AI HLEG ethics guidelines tackle this through three key requirements: (i) avoidance of unfair bias, (ii) accessibility and universal design, and (iii) stakeholder participation.

 

In practice, this means that OKRA fosters a heterogeneity of opinions and perspectives in the working environment and has created an effective tool for recognising inconsistencies and biases. In some instances this tool may challenge commonly held perceptions of who the user or patient is, confronting normative ideas around ethnicity and gender, to name just two.

 

Through ongoing engagement with stakeholders, feedback and dialogue with both clients and users are placed at the heart of its systems. According to the AI HLEG guidelines, “for AI systems to be trustworthy, it is advisable to consult stakeholders who may directly or indirectly be affected by the system throughout its lifecycle” – that means far more than just the bosses and architects of the system.

 

You have no obligation to work with OKRA – but please, as you advance your technical ambitions, avoid inadvertently steam-rollering the people you depend on.

Choose the right kind of intelligence

Request a demo

Please get in touch and let us walk you through what the AI brain can do for your team.

Contact the AI Experts