Is A.I. the Right Thing to Do?

(Not so) simple human ingenuity and our restless desire to explore new frontiers – whether across new geographies or within the boundaries of the mind – always raise the question: “should we be doing this?”

Should we be sending radio messages into outer space that could be picked up by potentially hostile alien aggressors (or even a foreign A.I.)? Should we dig up a corpse and attach electrodes to its neck to reanimate it, as Shelley imagined, in the name of science and a desire to defeat nature? Should we use a robot rather than a human to conduct invasive surgery on patients? Should we use machine learning to improve customer experiences by strengthening sales and marketing with greater insights?

These are not just questions for scientists, business people and governments; they are questions for philosophers. As the University of Oxford outlines, philosophers contributed to the development of medical ethics forty years ago, and a similar ethical intervention is required to answer the questions raised by the rise of A.I. These ethical challenges range from facial recognition to voter profiling, from brain-machine interfaces to weaponised drones.

Bias in A.I.-powered facial recognition systems is a major concern. In 2018, an MIT study tested how three different commercial facial recognition programs determined gender: the error rate for light-skinned men was at most 0.8%, while darker-skinned women were misgendered 20% to 34% of the time. Amid the Black Lives Matter protests of 2020, IBM withdrew its facial recognition technology, condemning the wider technology’s use for “mass surveillance, racial profiling, violations of basic human rights and freedoms”.

According to the ABI, regulators are on the ethics case. The Information Commissioner’s Office (ICO), the FCA and the Bank of England (BoE) are all planning work and guidance on data ethics and A.I. Within insurance, the FCA recently published an interim report examining how pricing practices focused on higher margins produce higher premiums for long-standing home and motor insurance customers.

The regulator fears that A.I. will help insurers produce unfair outcomes. For example, the data flowing into insurers can reveal a policyholder’s purchasing habits: how they shop, when they shop, what they shop for and how they pay. So-called price optimisation, which uses such data to set premiums according to what a customer will tolerate rather than the risk they present, is being widely adopted across insurance markets. It is a controversial practice, and its claims variant even more so; many see it as unethical for a risk-based business such as insurance.

“The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?” —Gray Scott

Insurers should therefore proceed with caution, particularly if we reach a point where decision making transitions from largely human to largely algorithmic. This then brings us to questions of accountability. Many insurance policyholders will not want their insurance to be priced by an algorithm; they will be suspicious of the technology.

No one wants a repeat of the English schools GCSE fiasco of 2020, when schoolchildren and their parents learnt that grades would effectively be assigned by a machine. The resulting outcry forced a swift U-turn from the UK Government.

Think of the insurance implications of an environment where underwriting decisions are derived from a mix of algorithmic and human involvement. Should the firm’s code of ethics apply only to the latter and not to the former? As insurance thought leader Duncan Minty asks: “Do your firm’s ethical values apply only to the human element and not to the algorithm element of ‘how things get done round here’? And if both, how is this being put into practice?”