You bet your glass it’s a good idea

Remember that irritating guy at the front of your classroom? The one who got every question right, always handed his homework in on time, captained several sports teams and somehow still managed to be the most popular kid in the playground?


I still don’t know how he did it. Perhaps he was taking secret lessons after school. Perhaps he had a hidden style team to make his uniform fit better. And definitely a top comedian writing his witty one-liners.


Such high levels of performance have often been a mystery to the average person.

New kid on the block

Generally, the smarter someone is, the harder they are for the rest of us to understand. The average person doesn’t comprehend quantum physics or high-throughput genome sequencing.


The same thing happens, interestingly, in artificial intelligence: the smarter it is, the more opaque it becomes. Deep learning, where the system is given no hand-crafted rules or features and must learn its own internal representations from the data, is the most sophisticated form - but it quickly becomes a black box, with humans incapable of understanding the unique language it has developed for itself.

In an industry like pharmaceuticals, this is a problem. We need to be able to explain decisions that have life-or-death consequences, to query them and to understand them. Otherwise, irrespective of AI’s cleverness, it will never be fully trusted.


Does that mean we’re condemned to inferior, ‘rules-based’ intelligence for such critical industries as ours?



Excellence in explainability

As a young company with big plans to reimagine life sciences through AI, OKRA set out to use only the smartest technology. Anything less would be a disservice to professionals and patients.


But the company quickly realised that, having made this hardline decision, it would also need to dedicate an equivalent amount of energy to ‘explainability’.


Explainability isn’t easy. An explanation doesn’t just need to be accurate; it also needs to be understandable. It forces a translation from technical descriptions of algorithms into simple expressions of the supporting evidence for a prediction.
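
To make that translation concrete, here is a minimal sketch - illustrative only, not OKRA’s actual method - assuming a scikit-learn workflow. The ‘technical description’ is a set of permutation-importance scores; the ‘simple expression’ is a ranked, plain-language summary of the inputs the model leans on most. The dataset and feature names are just examples.

    # Minimal sketch: translating a model's internals into plain-language
    # evidence (illustrative only; dataset and wording are assumptions).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Technical description: importance scores per feature on held-out data.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    # Simple expression: rank the evidence and state it in the user's terms.
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda kv: -kv[1])
    for name, score in ranked[:3]:
        print(f"The model's predictions rely heavily on '{name}' "
              f"(importance {score:.3f})")

In a real product the evidence would be phrased in the user’s own domain language, but the translation step is the same.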


In fact, explainability can be broken down into seven ‘pillars’ [1]:

  1. Transparency: the machine learning algorithm, the model and its features should be understandable to the user of the system.
  2. Domain sense: the explanation should make sense for the type of user, the industry they work in and the language they typically use.
  3. Consistency: the explanation should be consistent across different models and across different runs of the same model.
  4. Parsimony: the explanation should be as simple as possible, to enhance understandability - though not so simple as to become unclear.
  5. Generalizability: models and explanations should be generalizable across problems whenever possible.
  6. Trust: users must be able to trust what they are acting on - that the explanation algorithm is capable and accurate.
  7. Fidelity: the explanation and the predictive model should align well with one another (a rough way to measure this is sketched after this list).
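
The fidelity pillar, in particular, can be checked numerically. Here is a minimal sketch assuming one common technique, a global surrogate - not anything OKRA-specific: train a simple, human-readable tree to mimic a black-box model, then measure how often the two agree on held-out data. The models and data are illustrative.

    # Minimal sketch of measuring 'fidelity' with a global surrogate
    # (an assumed technique for illustration, not OKRA's product code).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The opaque, high-performing model whose decisions need explaining.
    black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # A shallow, human-readable tree trained to mimic the black box's outputs.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, black_box.predict(X_train))

    # Fidelity: agreement between the explanation model and the predictive model.
    fidelity = accuracy_score(black_box.predict(X_test),
                              surrogate.predict(X_test))
    print(f"Surrogate reproduces the black box on {fidelity:.1%} of held-out cases")

A high agreement score gives some confidence that the simple story the surrogate tells is a faithful account of what the black box is actually doing.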


As AI comes into greater use, ethics and fairness become more important. The consequences of mistakes grow more far-reaching, and more costly, as we give automated systems greater scope. Meanwhile, compliance obligations, from GDPR to the right of individuals to question decisions made about them, are on the rise. It is clear that, as with any technology, a focus on pure performance is not enough.



The ‘glass box’ promise

For AI to thrive in any part of the Life Sciences industry, explainability must be a key consideration. In the absence of regulation in this area, it’s up to us as individuals to adopt a strong moral code.


Let’s make sure transparency and explainability are a minimum standard going forwards. OKRA has introduced a ‘glass box’ promise on every one of its products, and we call for everyone in this nascent industry to follow suit.


References

[1] Teredesai, A., Ahmad, M. A., Eckert, C. and Kumar, V. (2018) ‘Explainable Machine Learning Models for Healthcare AI’, available at: https://www.youtube.com/watch?v=4pgLsDzrlB8


