
The insurance industry is undergoing a massive, tech-driven shift, and the next decade will be crucial in deciding the sector's future. Industry leaders have a central role to play, particularly in adopting disruptive technologies across the value chain, from underwriting to policy servicing and claims settlement.
According to Data Bridge Market Research, the AI in insurance market is expected to reach $6.92 billion by 2028, growing at a CAGR of 24.05 percent over the forecast period of 2021 to 2028. The sector's growth is expected to be fueled by AI technologies, including machine learning, deep learning, natural language processing (NLP) and robotic automation.
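As a quick sanity check on these figures (the back-calculation below is ours, not a number from the report), a 24.05 percent CAGR compounded over the seven-year window implies a 2021 base of roughly $1.5 billion:

```python
# Back-of-the-envelope check of the forecast: a CAGR compounds the
# base value once per year over the forecast window.
cagr = 0.2405          # 24.05 percent, per the Data Bridge figure
value_2028 = 6.92      # USD billions, per the Data Bridge figure
years = 2028 - 2021    # seven-year forecast window

implied_2021_base = value_2028 / (1 + cagr) ** years
print(f"Implied 2021 market size: ${implied_2021_base:.2f}B")  # ~ $1.53B
```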
Below, we discuss how insurance companies are leveraging AI, along with some use cases, challenges, and solutions.
AI, NLP adoption
For any business working in the insurance space, the first step is to list all the sub-processes within the value chain, rather than attempting to solve the complete value chain or a large chunk of processes at once. Each sub-process should then be assessed on parameters such as size (volume), wider applicability, and complexity. Based on these parameters, the right processes should be prioritised for a minimum viable product (MVP).
For example, a use case that involves extraction from two to three document types, such as email submission intake in underwriting, can offer volume, complexity, and wider applicability.
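One illustrative way to operationalise this prioritisation is a weighted scorecard over candidate sub-processes. The sketch below is hypothetical: the candidate names, scores and weights are placeholders that a real programme would calibrate with business stakeholders.

```python
# Illustrative weighted scorecard for choosing the first MVP use case.
# Candidates, scores (1-5) and weights are hypothetical placeholders.
candidates = {
    "email submission intake (underwriting)": {"volume": 5, "applicability": 4, "complexity": 3},
    "claims first notice of loss triage":     {"volume": 4, "applicability": 3, "complexity": 4},
    "policy endorsement processing":          {"volume": 3, "applicability": 2, "complexity": 2},
}
weights = {"volume": 0.4, "applicability": 0.35, "complexity": 0.25}

def score(params: dict) -> float:
    """Weighted sum of the prioritisation parameters."""
    return sum(weights[k] * v for k, v in params.items())

# Rank the candidates; the top entry becomes the MVP.
for name, params in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(params):.2f}  {name}")
```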
It is important to ensure the first use case is successful, as it paves the way for those that follow. Once the first MVP implementation has succeeded, a roadmap should be created for multiple AI-based proofs of value (POVs), integrating these use cases to deliver enhanced efficiency, effectiveness and customer experience.
Challenges in deploying AI at scale
Many global insurance companies' technology and data science teams are exploring multiple generic products to solve structured-data problems. However, such products tend to reach a saturation point after a few easy, quick wins. Due to the limited capabilities of these generic products, some leading companies are struggling to deploy AI at scale and are now looking to solve the next set of business challenges involving unstructured, handwritten, video and voice data.
The major roadblocks in deploying AI at scale include:
- Continuous upgrades and modifications in dependent systems
- Limited business domain knowledge of tech teams
- Lack of a human-in-the-loop design (a minimal sketch of this pattern follows the list)
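The human-in-the-loop pattern itself is straightforward: instead of forcing the model to decide every case, low-confidence outputs are routed to a reviewer, whose corrections can also feed retraining. A minimal sketch, assuming an extraction model that reports a confidence score (the threshold and field names here are illustrative, not from any particular product):

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # illustrative cut-off; tuned per use case in practice

@dataclass
class Extraction:
    field: str
    value: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

def route(extraction: Extraction) -> str:
    """Auto-accept confident extractions; queue the rest for human review."""
    if extraction.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-accept"   # straight-through processing path
    return "human-review"      # reviewer corrects; corrections can feed retraining

# Example: a confident sum-insured extraction passes straight through,
# while an ambiguous insured-name extraction goes to a reviewer.
print(route(Extraction("sum_insured", "USD 2,500,000", 0.97)))  # auto-accept
print(route(Extraction("insured_name", "Acme Hldgs", 0.62)))    # human-review
```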
Building comprehensive solutions to address these challenges is easier said than done. An end-to-end AI implementation touches many tech systems, from ingestion out of document management systems to final posting into business applications such as the policy admin system (PAS). While developing solutions, it is best to plan for upgrades or changes in all dependent systems to avoid last-minute hurdles. Timing and system flexibility are therefore critical for smooth AI implementation.
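One way to build in that flexibility is to isolate every dependent system behind a narrow interface, so that an upgrade to the document management system or the PAS touches a single adapter rather than the whole pipeline. A minimal sketch under that assumption (all class and system names here are illustrative):

```python
from abc import ABC, abstractmethod
from typing import Callable

class DocumentSource(ABC):
    """Narrow interface over the document management system."""
    @abstractmethod
    def fetch(self, submission_id: str) -> bytes: ...

class PolicyAdminSink(ABC):
    """Narrow interface over the policy admin system (PAS)."""
    @abstractmethod
    def post(self, submission_id: str, fields: dict) -> None: ...

def process(submission_id: str, source: DocumentSource,
            extract: Callable[[bytes], dict], sink: PolicyAdminSink) -> None:
    """End-to-end flow: ingest -> extract -> post. Swapping a dependent
    system means swapping one adapter, not rewriting the pipeline."""
    document = source.fetch(submission_id)
    fields = extract(document)  # AI/NLP extraction step
    sink.post(submission_id, fields)

# Stub adapters to show the wiring; real ones would call live systems.
class InMemorySource(DocumentSource):
    def fetch(self, submission_id: str) -> bytes:
        return b"sample submission email"

class PrintSink(PolicyAdminSink):
    def post(self, submission_id: str, fields: dict) -> None:
        print(submission_id, fields)

process("SUB-001", InMemorySource(), lambda doc: {"insured_name": "?"}, PrintSink())
```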
Moreover, successful AI implementation requires contributions from a range of roles: AI and NLP data scientists, data and tech engineers, and business and project managers. However, as teams move on to the next level of challenges, it is important that tech teams upskill and learn business nuances, such as understanding underwriters' instructions. A deep understanding of these nuances enables solutions that address business complexity and multi-user functionality.
Today, the market expects 100 percent automation, or straight-through processing (STP), from AI solutions, and current generalised products have been able to deliver it, albeit only for simple problems. In my opinion, this expectation of 100 percent automation is precisely why such products remain limited to straightforward cases.
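One way to see why the 100 percent expectation pushes products toward simple cases: if each extracted field is correct with some probability, the share of cases that can pass straight through with no human touch shrinks rapidly as cases get richer. A rough illustration, assuming (simplistically) independent per-field accuracy:

```python
# Why demanding 100% straight-through processing favours simple cases:
# if each extracted field is correct with probability p (assumed
# independent here, a simplification), the chance an entire case needs
# no human touch falls geometrically with the number of fields.
p = 0.95  # illustrative per-field extraction accuracy

for n_fields in (3, 10, 20):
    stp_rate = p ** n_fields  # probability the whole case is fully automated
    print(f"{n_fields:>2} fields -> {stp_rate:.0%} of cases fully automated")
# 3 fields -> 86%, 10 fields -> 60%, 20 fields -> 36%
```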
