Ethics and Artificial Intelligence
The two big topics in machine learning and artificial intelligence (AI) at the moment are ethics and explainable AI. They concern how we should use, and limit the use of, AI, and why AI should not be an inscrutable machine that simply says yes or no.
These areas are relevant to insurance and need to be thought through carefully, because not all of the consequences associated with AI are necessarily positive. The potential opportunities are great, of course, but the negatives could include AI bias, the loss of certain jobs, a shift in human experience as machines take on more responsibilities, global regulatory battles as different regimes take different approaches to consent and transparency, accelerated hacking, and AI terrorism.
AI can clarify, but it can also obscure when employed to sow disinformation, or when applications such as facial recognition exploit personal data in ways that drive profitability at the expense of personal or public liberty.
Such concerns are causing companies to question the use of this technology. According to experts cited in this Fortune article, businesses are so concerned about the ethics of AI that they are killing projects involving the technology, or never starting them in the first place.
That is a shame because AI promises enormous potential rewards to businesses that are able to harness and control the technology in an ethical and, dare I say, smart way.
AI and machine learning have the potential to create an additional $2.6tn in value in marketing and sales in the next year, and up to $2tn in manufacturing and supply chain planning, says Forbes magazine, while Gartner predicts that the business value created by AI will reach $3.9tn in 2022.
Smart machines = smart ethics?
No one questions the value the technology adds, but when trillion-dollar numbers are involved it is probably sensible to question the ethics within a business before the legal authorities or regulators do it for you.
The ethical challenges posed by AI range from facial recognition to voter profiling, and from brain-machine interfaces to weaponised drones. However, bias in AI-powered facial recognition systems has been identified as a particular concern.
In 2018, an MIT study of three different facial recognition programmes found that the error rate in determining gender was 0.8% for light-skinned men, while darker-skinned women were misgendered 20% to 34% of the time.
According to a blog by KPMG’s Leanne Allen, published on the ABI website, regulators are on the ethics case. The Information Commissioner’s Office, the Financial Conduct Authority (FCA) and the Bank of England are all planning work and guidance on data ethics and AI. Within insurance, the FCA recently published an interim report examining pricing practices that leave long-standing home and motor insurance customers paying higher premiums in pursuit of higher margins.
The regulator fears that AI will help insurers to produce unfair outcomes. For example, the data that flows into insurers can tell them about a policyholder’s purchasing habits: how they shop, when they shop, what they shop for and how they pay.
So-called price optimisation is being widely adopted across insurance markets. Price optimisation is a controversial practice, as this post on the Nordisk försäkringstidskrift (Scandinavian Insurance Quarterly) website argues, and its claims variant even more so. Many see it as unethical for a risk-based business such as insurance.
Proceed with care
Insurers should therefore proceed with caution, particularly if we reach a point where we transition from largely human decision-making to largely algorithmic decision-making.
This then brings us to questions of accountability. Many insurance policyholders will not want their insurance to be priced by an algorithm; they will be suspicious of the technology.
Think of the insurance implications of an environment where underwriting decisions are derived from a mix of algorithmic and human involvement. Trust is an important word in insurance, and never more so than when it comes to its ultimate purpose from a policyholder’s point of view: paying claims.
Customers expect complete transparency as a minimum requirement, and if they don’t get it there is a strong possibility that a regulator or industry ombudsman will act on their behalf to ensure compliance with the spirit, as well as the letter, of the law.
Explainable AI
There is also a requirement for AI technology to be transparent about how it uses data, in order to build trust in, and understanding of, its applications. This is where explainable AI comes in.
Explainable AI (XAI) gives insights into the data, factors and decision points used to make a suggestion. XAI offers the opportunity to make the decision-making process both fast and transparent, enabling so-called data-driven decision making (see this article from Analytics Insight).
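To make that concrete, here is a minimal, purely illustrative sketch (in Python, using scikit-learn) of one common explainability technique, permutation feature importance, applied to a made-up claims-triage classifier. The feature names, data and model are all hypothetical and are not drawn from any real insurer’s system:

```python
# Illustrative sketch only: permutation feature importance on a
# made-up claims-triage model. Feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["claim_amount", "policy_age_years", "prior_claims", "days_to_report"]

# Synthetic data standing in for historical claims.
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance asks: how much worse does the model perform
# when each factor is shuffled? The ranking surfaces the data, factors
# and decision points that drive the model's suggestions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

The output is simply a ranked list of the factors the model leans on most, which is the kind of view of the decision process that XAI aims to provide.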
All in all, XAI ought to open up the proverbial black box and clarify, in broad terms, how a decision was made. As the world becomes more interconnected and, in many ways, more complex, demand is growing for applications that allow the layman to perform tasks that were traditionally the preserve of experts in the field.
The movement to low code or no code springs to mind as an equivalent technology development. Low-code enables developers of varied experience levels to create applications using a visual user interface in combination with model-driven logic. A common benefit is that a wider range of people can contribute to the application’s development – not only those with formal programming skills.
In a similar way to low code, XAI gives the business owner more direct control over AI’s tasks, because the owner can understand what the machine is doing and why (see also this Analytics Insight article).
That is one way of educating AI users about the benefits of this increasingly useful technology. Another way of advancing understanding would be to let the technology itself do the talking, which has got researchers thinking less about what AI can learn, and more about what humans can learn from AI.
One theory is that a computer that can solve the Rubik’s Cube should be able to teach people how to solve it, too.
That is either an exciting possibility or a very dangerous one, depending on where you sit on the AI glass-half-full or glass-half-empty spectrum, but there is no denying the possibilities from an insurance perspective.
The technology could teach underwriters or claims experts to think outside the “black box” and advance understanding of known, or even unknown, exposures and risks.
In an idealistic view of the world, this new form of collaboration between man and machine will shed light on previously unsolved problems in everything from chemistry to mathematics, leading to new solutions, intuitions and innovations that may otherwise have been out of reach.
That is possibly a bit further down the road. For now, however, Explainable AI is all about improving the transparency of insurance decision making, as this Aviana post explains.
How can insurers explain precisely why or how a decision was made to their customers and regulators?
These are questions that DOCOsoft is already considering in our own machine learning research and development. In our machine learning models aimed at reserve prediction and claims triage, we have carefully selected algorithms that strike a balance between explainability and complexity.
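By way of illustration only (a generic sketch, not DOCOsoft’s actual models), one simple way to strike that balance is to constrain a tree-based triage model so that its rules stay short enough for a claims expert to read and challenge. The features and data below are invented for the example:

```python
# Illustrative only: a deliberately shallow decision tree whose rules
# can be read and challenged by a human reviewer. Hypothetical data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["reserve_estimate", "claim_complexity", "litigation_flag"]

X = rng.normal(size=(300, len(features)))
y = (X[:, 0] + X[:, 2] > 0.5).astype(int)  # stand-in for "refer to expert"

# Capping max_depth trades some predictive power for rules a person can follow.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))
```

Deliberately capping the depth gives up a little accuracy in exchange for decision rules that an underwriter, a claims handler or a regulator can follow line by line.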
It is a fascinating subject and the possibilities are seemingly endless at the moment. Perhaps one day an AI will explain to us humble humans just how endless those possibilities are!