Remember that irritating guy at the front of your classroom? The one who got every question right, always handed his homework in on time, captained several sports teams and somehow still managed to be the most popular kid in the playground?
I still don’t know how he did it. Perhaps he was taking secret lessons after school. Perhaps he had a hidden style team to make his uniform fit better. And definitely a top comedian writing his witty one-liners.
Such high levels of performance have often been a mystery to the average person.
New kid on the block
Generally, the smarter someone is, the harder they are for the rest of us to understand. The average person doesn’t comprehend quantum physics or high-throughput genome sequencing.
It happens in human intelligence and, interestingly, exactly the same thing happens in artificial intelligence: the smarter it is, the more opaque it becomes. Deep learning, where a system is given no explicit rules and instead learns its own internal representations from data, is the most sophisticated form – but it quickly becomes a black box, with humans unable to understand the unique language it has developed for itself.
In an industry like pharmaceuticals, this is a problem. We need to be able to explain decisions that have life-or-death consequences, and to query and understand them. Otherwise, however clever the AI, it will never be fully trusted.
Does that mean we’re condemned to inferior, ‘rules-based’ intelligence for such critical industries as ours?
Excellence in explainability
As a young company with big plans to reimagine life sciences through AI, OKRA set out to use only the smartest technology. Anything less would be a disservice to professionals and patients.
But the company quickly realised that by making this hardline decision, it would also need to dedicate an equivalent amount of energy towards ‘explainability’.
Explainability isn’t easy. An explanation doesn’t just need to be accurate; it also needs to be understandable. It forces a translation from technical descriptions of algorithms into simple expressions of the evidence supporting a prediction.
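To make that translation concrete, here is a minimal sketch in Python (scikit-learn), with invented feature names and synthetic data, of turning a linear model’s coefficients into a plain-language statement of the evidence behind a single prediction:

```python
# Minimal sketch: turning a model's internals into plain-language evidence.
# The features, data and model here are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "dose_mg", "prior_events"]  # hypothetical features
X = rng.normal(size=(200, 3))
y = (X @ np.array([0.8, -0.5, 1.2]) > 0).astype(int)  # synthetic outcome

model = LogisticRegression().fit(X, y)

def explain(x: np.ndarray) -> str:
    """Translate coefficient-times-value contributions into a readable sentence."""
    contributions = model.coef_[0] * x          # per-feature contribution to the logit
    order = np.argsort(-np.abs(contributions))  # strongest evidence first
    evidence = ", ".join(f"{feature_names[i]} ({contributions[i]:+.2f})" for i in order)
    prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    return f"Predicted probability {prob:.0%}; supporting evidence: {evidence}"

print(explain(X[0]))
```

For a linear model this decomposition is exact; for deep models, attribution methods such as SHAP or LIME play the same translating role, at the cost of approximation.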
In fact, explainability can be broken down into seven ‘pillars’ [1] (one of them is illustrated in a short sketch after the list):
- Transparency: the machine learning algorithm, the model and its features should be understandable to the user of the system.
- Domain sense: the explanation should make sense for the type of user, the industry they work in and the language they typically use.
- Consistency: the explanation should be consistent across different models and across different runs of the model.
- Parsimony: the explanation should be as simple as possible, to enhance understandability – though not so simple as to be unclear.
- Generalizability: models and explanations should be generalizable across problems whenever possible.
- Trust: users must be able to trust what they are acting on; the explanation algorithm must be capable and accurate.
- Fidelity: the explanation and the predictive model should align well with one another.
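As one illustration, the ‘consistency’ pillar can be checked mechanically: re-run an explanation method with different random seeds and confirm the feature ranking is stable. A toy sketch, again on synthetic data, with permutation importance standing in for whatever explainer is actually used:

```python
# Toy check of the 'consistency' pillar: does the explanation method rank
# features the same way across repeated runs? The data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # only features 0 and 2 matter

model = RandomForestClassifier(random_state=0).fit(X, y)

rankings = [
    tuple(np.argsort(-permutation_importance(
        model, X, y, n_repeats=10, random_state=seed).importances_mean))
    for seed in range(5)  # five independent runs of the explainer
]

# A consistent explainer yields the same feature ranking on every run.
print("stable across runs:", len(set(rankings)) == 1)
```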
As AI comes into greater use, ethics and fairness matter more. The consequences of mistakes become more far-reaching, and their cost rises, as we give automated systems greater scope. Meanwhile, compliance obligations are growing, from GDPR to the rights of individuals to question decisions made about them. It is clear that, as with any technology, a focus on pure performance is not enough.
The ‘glass box’ promise
For AI to thrive in any part of the life sciences industry, explainability must be a key consideration. In the absence of regulation in this area, it’s up to us as individuals to adopt a strong moral code.
Let’s make sure transparency and explainability are a minimum standard going forward. OKRA has introduced a ‘glass box’ promise on every one of its products, and we call on everyone in this nascent industry to follow suit.
References
[1] Teredesai, A., Ahmad, M. A., Eckert, C. and Kumar, V. (2018) ‘Explainable Machine Learning Models for Healthcare AI’. Available at: https://www.youtube.com/watch?v=4pgLsDzrlB8