Taking the pulse of AI and automation in the specialty claims space
Specialty claims are not like other claims. They tend to be more expensive, more complex, more sensitive, and more challenging in a host of other ways. And the growing uncertainty and unpredictability of today’s economic, environmental and commercial landscape only amplifies those distinctive traits. Inflation is pushing costs up. Emerging risks are multiplying. And claims teams are being asked to move faster, provide greater transparency, and do more to support risk and underwriting decision-making – while still ploughing through the painstaking behind-the-scenes work that complex claims demand.
Against that backdrop, DOCOsoft was delighted to partner with Insurance Insider to host a fascinating live webinar earlier this month, AI and complex claims – clearing a path from complexity to clarity. The session was moderated by Insurance Insider’s Meg Green, with Joe Shaw, Director of Claims at the International Underwriting Association (IUA), and Bethan Hills, Head of Claims Operations at AXA XL, joining our own Head of London Market, Ian Gibbard, on the panel.
The result was a practical, market-aware, one-hour conversation about where AI and automation can genuinely help, where the barriers lie, and what good looks like in the specialty claims context.
A shared language for AI and automation in claims
Early on, Ian helped frame the terms of the discussion around AI and automation. Noting that using the term AI indiscriminately can create confusion and impede progress, Ian broke it down into four main categories:
1) Automation – rules that say ‘When X happens, do Y’
Simple non-AI automation involves the consistent rules-based execution of routine tasks. It does not learn and it does not exercise judgement.
In claims, that might include things like:
- Pre-handling checks before a claim is picked up
- Routing to the right team based on pre-agreed criteria such as class, value, authority
- Preventing progression until required data or documents are present
- Automated reminders for chasing outstanding documents or diary actions
- Posting settlements or updates into finance and market systems without rekeying.
This is an area in which a lot of insurers are already seeing tangible gains – because it removes some of the administrative friction that accumulates around almost every claim.
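The rules-based pattern described above can be sketched in a few lines. This is a minimal illustration, not any real claims system: the claim fields, document names, and routing thresholds are all hypothetical stand-ins for the pre-agreed criteria a market firm would define.

```python
from dataclasses import dataclass, field

# Hypothetical claim record - field names are illustrative, not from any real system.
@dataclass
class Claim:
    claim_class: str               # e.g. "marine", "property"
    value: float                   # estimated claim value
    documents: set = field(default_factory=set)

REQUIRED_DOCS = {"policy_schedule", "loss_report"}  # assumed pre-handling checklist

def route(claim: Claim) -> str:
    """Rules-based routing: fixed 'when X happens, do Y' logic - no learning, no judgement."""
    # Prevent progression until required documents are present.
    missing = REQUIRED_DOCS - claim.documents
    if missing:
        return f"hold: missing {', '.join(sorted(missing))}"
    # Route to the right team on pre-agreed criteria: value, then class.
    if claim.value > 1_000_000:
        return "large-loss team"
    if claim.claim_class == "marine":
        return "marine team"
    return "standard queue"
```

Every outcome is deterministic and explainable by pointing at the rule that fired – which is exactly why this layer is the easiest place to bank early gains.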
2) Basic AI (machine learning) – prediction and pattern-spotting
Machine learning uses historical data to identify patterns and output probabilities, not certainties.
In claims, that can help to:
- Flag claims likely to become large or complex
- Categorise incoming emails and documents
- Screen and prioritise claims based on risk indicators (litigation, vulnerability, complexity)
- Surface similar past claims to support more consistent reserving and strategy
- Identify unusual patterns in delegated authority data.
This is about decision support. It needs to be transparent about its predictive confidence, its inputs and its limits – and it needs human oversight.
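That decision-support shape – probabilities plus visible inputs, with the decision left to a handler – can be sketched as follows. The weights here are hand-set for illustration; a real model would learn them from historical claims data, and the feature names are assumptions.

```python
import math

# Illustrative, hand-set weights standing in for a trained model;
# a real model would learn these from historical claims data.
WEIGHTS = {"litigation_flag": 1.8, "vulnerable_customer": 1.2, "reserve_over_100k": 1.5}
BIAS = -2.0

def complexity_score(features: dict) -> float:
    """Probability-like score that a claim becomes large or complex."""
    z = BIAS + sum(WEIGHTS[k] for k, v in features.items() if v and k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic: outputs a probability, not a certainty

def triage(features: dict, threshold: float = 0.5) -> dict:
    """Decision support: surface the score and the inputs used; a human decides."""
    p = complexity_score(features)
    return {
        "score": round(p, 2),
        "flag_for_review": p >= threshold,   # a recommendation, not an action
        "inputs_used": sorted(k for k, v in features.items() if v and k in WEIGHTS),
    }
```

Returning the inputs alongside the score is the transparency point made on the panel: a handler can see why a claim was flagged and challenge the recommendation.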
3) Generative AI – creating drafts, summaries, and structured outputs
Gen AI is particularly relevant in the world of specialty claims because the work is so document-heavy: big files, lengthy email chains, and a multitude of reports in a wide variety of formats (e.g. delegated authority bordereaux) – all combining to offer multiple versions of the truth.
Gen AI can be used, for example, to:
- Summarise long claim files or adjuster reports
- Create consistent claim summaries for handovers and large-loss reporting
- Draft responses to routine broker or adjuster queries
- Extract structured data from unstructured documents
- Answer natural-language questions about a claim by using the information already in the file.
In other words, Gen AI has a powerful role to play turning unstructured noise into decision-ready signals.
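One way that signal-extraction pattern plays out in practice is wrapping the model call in validation, so extracted fields are checked before they touch the claims system. This is a sketch only: the model call is a stub, and the prompt, field names, and JSON shape are all assumptions.

```python
import json

def call_llm(prompt: str) -> str:
    """Stub standing in for a Gen AI call - a real system would call a hosted model here."""
    # Canned response for illustration only.
    return '{"insured": "Acme Ltd", "loss_date": "2024-03-01", "reserve": 250000}'

REQUIRED_FIELDS = {"insured", "loss_date", "reserve"}  # hypothetical schema

def extract_claim_fields(document_text: str) -> dict:
    """Turn an unstructured document into structured data, then validate it.
    Model output is treated as a draft, not a fact."""
    raw = call_llm(f"Extract insured, loss_date and reserve as JSON from:\n{document_text}")
    data = json.loads(raw)                 # fails loudly if the output is not valid JSON
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"extraction incomplete: {sorted(missing)}")
    return data
```

The validation layer is where "multiple versions of the truth" gets reconciled into a decision-ready record a handler can trust or reject.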
4) Agentic AI – taking multi-step action within guardrails
At the most advanced end of the spectrum – and also the most sensitive operationally – lies Agentic AI. It combines AI with the ability to execute actions across systems – governed by rules, checks and audit trails.
Potential use cases discussed during the session included:
- Identifying missing information, requesting it, chasing it, and updating a diary
- Pulling up relevant policy wording sections and highlighting what matters for a coverage question
- Responding to natural language questions by scanning claim documents and surfacing relevant findings
- Preparing settlement information once conditions are met, then routing for review and approval.
There was general agreement among the panellists that, if it cannot be audited, explained, and controlled, it should not be operationalised in a regulated claims environment.
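The "guardrails plus audit trail" principle the panellists insisted on can be sketched as a multi-step flow where every action is logged and anything consequential is routed to a human rather than executed. The condition names and authority-limit check are hypothetical.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # every step the agent takes is recorded and reviewable

def log(action: str, detail: str) -> None:
    AUDIT_LOG.append({"at": datetime.now(timezone.utc).isoformat(),
                      "action": action, "detail": detail})

def settle_if_ready(claim: dict) -> str:
    """Agent-style multi-step flow within guardrails: check, chase, prepare, route for review."""
    missing = [c for c in ("coverage_confirmed", "quantum_agreed") if not claim.get(c)]
    if missing:
        log("request_info", f"chasing: {missing}")   # the agent may chase, but not settle
        return "waiting"
    if claim["amount"] > claim["authority_limit"]:
        log("escalate", "amount exceeds authority")  # hard guardrail: no discretion
        return "escalated"
    log("prepare_settlement", f"amount={claim['amount']}")
    return "routed_for_approval"                     # a human approves the payment
```

Nothing in this flow is opaque: the log explains every step, and the final action is always a hand-off, which is what makes it defensible in a regulated environment.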
Where the market really is today
The audience poll was telling.
- 30% reported no meaningful AI use in claims (beyond incidental tools)
- 38% said they were exploring (pilots and early trials not yet tied to core workflows)
- 23% were using AI in a limited range of live use cases (not consistent end-to-end)
- 7% said their firms have AI embedded across multiple workflows
- 0% described AI as fully core and continuously refined.
The panel agreed that these results fell pretty much where they would have expected. As Joe Shaw said, if you feel you’re possibly a bit behind, you may well be broadly in line with the market. The story right now is exploration, learning, and a careful transition from experiments and proofs of concept into business-as-usual – and that transition is the hard bit.
The real blockers are operational, not technological
Bethan Hills provided a valuable, grounded perspective: AI adoption in claims, she insisted, is not simply a tech change. It’s a people change, a governance change, and often a platform change.
The strategic challenges that came through most clearly in the session were:
- Cultural resistance and trust – especially among highly experienced specialty handlers who depend professionally on insight and judgement built up over years
- Data security and privacy – AI relies on sensitive information, and a failure here overwhelms any operational benefit
- Legacy systems and integration complexity – AI is difficult and costly to embed if core workflows remain fragmented
- Governance, explainability, and auditability – you need to know what data was used, what confidence was implied, and how recommendations can be challenged
- Cost and change management – training, oversight, and implementation effort are substantial, even when the long-term ROI is clear.
One particularly important nuance to emerge from the session was that trust is generated when the system is transparent about what it relied on, how confident it is, and where humans can override it. That transparency helps shift perceptions away from AI as one more overhead and towards an appreciation that it genuinely supports skilled claims professionals.
Quick wins
When the conversation turned to value measurement, the panel took a refreshingly pragmatic approach. Bethan’s message was clear: start with small high-impact areas – and measure closely. Early value often comes from removing friction rather than attempting ambitious end-to-end automation from day one.
Examples discussed included:
- Faster claims processing times
- Reduced manual errors
- Better data accuracy, quality, and completeness
- Reduced FNOL handling time through automated data entry
- Better visibility of backlogs and bottlenecks.
The KPIs that show change quickly were identified as:
- Average handling times and time-to-action
- End-to-end cycle time, broken down by process steps
- Speed of payment
- Reduced outstanding items and statics
- Reserving accuracy
- Client experience measures, including Net Promoter Score
- Fraud and subrogation outcomes expressed in hard savings.
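Several of the KPIs above fall straight out of a claim-event log. As a minimal sketch – with made-up event names and dates – time-to-action and end-to-end cycle time are just date differences between process steps, averaged across claims:

```python
from statistics import mean
from datetime import date

# Illustrative event log: (claim_id, step, date) tuples; names and dates are made up.
EVENTS = [
    ("C1", "notified", date(2024, 1, 1)), ("C1", "assigned", date(2024, 1, 2)), ("C1", "paid", date(2024, 1, 20)),
    ("C2", "notified", date(2024, 1, 5)), ("C2", "assigned", date(2024, 1, 9)), ("C2", "paid", date(2024, 2, 4)),
]

def step_durations(events, start_step, end_step):
    """Days between two process steps, per claim - the raw material for cycle-time KPIs."""
    by_claim = {}
    for claim_id, step, when in events:
        by_claim.setdefault(claim_id, {})[step] = when
    return {cid: (steps[end_step] - steps[start_step]).days
            for cid, steps in by_claim.items()
            if start_step in steps and end_step in steps}

avg_time_to_action = mean(step_durations(EVENTS, "notified", "assigned").values())  # time-to-action
avg_cycle_time = mean(step_durations(EVENTS, "notified", "paid").values())          # end-to-end
```

Because the same function works for any pair of steps, cycle time can be broken down process step by process step – which is where bottlenecks become visible quickly.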
A clear takeaway was that early gains free capacity – and that freed capacity enables claims teams to take on more value-adding work, like engaging in complex conversations with customers and partners.
The feedback loop that could change underwriting outcomes
One of the most strategically important topics covered was the role of AI and automation in strengthening the feedback loop from claims back into underwriting and wordings.
Both Joe and Ian highlighted how much value remains locked up because claims data is often too disparate, too late arriving, or too unstructured to support timely learning. AI can change that – not by replacing judgement, but by making insights more immediate and usable.
Use cases discussed included surfacing:
- Recurring causes of loss
- Links between causes of loss and downstream financial impact
- Weaknesses in wordings and clauses
- Patterns in delegated authority business, including “drift” in DA arrangements.
This is where AI is not simply acting as an efficiency lever, but enabling better risk selection, better pricing and better market behaviour – provided the underlying data can be structured and trusted.
Pilots are easy, operationalising is hard
One of the clearest takeaways for the specialty market is that it’s relatively easy to pilot AI within a narrow workflow, on a single line, in a controlled environment. What’s far harder is turning it into part of day-to-day claims operations when processes and data are spread across multiple tools, inboxes, spreadsheets, document stores and market systems.
This is where integrated claims platforms matter, because in practice:
- AI needs to sit inside the workflow to remove friction
- It needs to use live, governed data – not disconnected copies
- It needs to be auditable, controlled, and reviewable
- It needs to integrate with documents, emails, policy systems, and with finance and market systems.
That is what turns AI from a window open on an adjuster’s screen into an operational capability.
Potential pitfalls and how to avoid them
The final section of the webinar looked at some of the risks that come with the AI territory.
The pitfalls discussed were not abstract. They were the predictable failure modes of any major change programme:
- Doing nothing (and quietly falling behind on competitiveness, talent, and efficiency)
- Pursuing AI because of hype rather than a defined operational outcome
- Pilots that never scale into business-as-usual
- Over-reliance on AI outputs that erodes human capability over time
- Governance failures that create regulatory or reputational exposure.
Preserving and transmitting human expertise
Joe noted that if AI removes the routine work that juniors have historically learned on, the market needs to rethink training and knowledge transfer. If we don’t, we run the risk of creating a capability gap, where expertise is not developed and shared in the way it needs to be.
Conclusion
As the session wrapped, Ian raised the tantalising possibility that AI could help solve the problems around extracting data from delegated authority bordereaux that have frustrated the market for 25 years.
The panellists agreed that AI is here to stay and offers enormous benefits – along with one or two downside risks that will need careful handling in the months and years ahead. Done properly, AI and automation can have a transformative role in reducing friction, improving data confidence, strengthening governance, and connecting claims insight back into underwriting and wordings more quickly.
The objective with AI, all agreed, was not to elbow expert claims professionals aside but to empower claims teams and offer them a clearer path from complexity to clarity.