Explainable AI – Machine Learning Made Easy?

Trust is an important word in insurance, and never more so than when it comes to the industry’s ultimate purpose from a policyholder’s point of view – paying claims. Customers expect complete transparency as a minimum requirement, and if they don’t get it there is a strong possibility that a regulator or industry ombudsman will act on their behalf to ensure compliance with the spirit as well as the letter of the law.

There is also a requirement for AI technology to be transparent about how it uses data, in order to build trust in, and understanding of, its applications. This is where Explainable AI (XAI) comes in. XAI gives insight into the data, factors and decision points used to arrive at a recommendation, making the decision-making process both faster and more transparent. In short, XAI should do away with the proverbial black box and set out, in broad terms, how a decision was made.
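
To make that concrete, the sketch below shows one simple and widely used explanation technique, permutation importance, applied to a toy claims-triage model. Everything here is hypothetical: the feature names, the synthetic data and the model are illustrative stand-ins, not a real insurer’s system, chosen purely to show how the factors behind an automated decision might be surfaced.

```python
# A minimal, hypothetical sketch of one common XAI technique: permutation
# importance, which scores each input factor by how much the model's
# predictions degrade when that factor is randomly shuffled.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(seed=0)
feature_names = ["policyholder_age", "claim_amount",
                 "prior_claims", "policy_tenure_years"]

# Synthetic triage data: 1 = pay the claim automatically, 0 = refer for review.
X = rng.normal(size=(500, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Ask which factors the model's decisions actually depend on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

In a live deployment, a ranking like this would be translated into plain language for the customer or the regulator, for example: “the size of the claim and your prior claims history were the main factors behind this referral.”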

As the world becomes more interconnected and, in many ways, more complex, demand seems to be growing for applications that allow the layman to perform tasks that were traditionally the preserve of experts in the field. The movement to low-code or no-code development springs to mind as an equivalent technology trend.

Low-code enables developers of varied experience levels to create applications using a visual user interface combined with model-driven logic. A common benefit is that a wider range of people can contribute to an application’s development, not only those with formal programming skills. In much the same way, XAI gives the business owner direct control of the AI’s tasks, because the owner can understand what the machine is doing and why.

That is one way of educating AI users about the benefits of this increasingly useful technology. Another way of advancing understanding would be to let the technology itself do the talking, which has researchers thinking less about what AI can learn and more about what humans can learn from AI. One theory is that a computer that can solve the Rubik’s Cube should be able to teach people how to solve it, too.

That is either an exciting possibility or a very dangerous one, depending on where you sit on the AI glass-half-full-or-half-empty spectrum, but there is no denying the possibilities from an insurance perspective. The technology could teach underwriters or claims experts to think outside the “black box” and advance understanding of known, or even unknown, exposures and risks.

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” – Ray Kurzweil

In an idealistic worldview, this new form of collaboration between man and machine will shed light on previously unsolved problems in everything from chemistry to mathematics, leading to new solutions, intuitions and innovations that may otherwise have been out of reach. Those of a more lugubrious disposition will be inclined to take an alternative view and invest – if they have the funds – in their own doomsday bunkers, which is reportedly what some of Silicon Valley’s super-rich denizens and AI investors have done.

That is possibly a bit further down the road. For now, however, Explainable AI is all about improving the transparency of insurance decision making. How can insurers explain precisely why or how a decision was made to their customers and regulators? It is a fascinating subject and, for the moment, the possibilities seem endless. Perhaps one day an AI will explain to us humble humans just how endless those possibilities are!
