With the growing popularity of large language models such as Bard and ChatGPT, an increasing number of companies in the financial services industry are looking into the possible uses of artificial intelligence in their business.
The insurance sector is no exception. It sees the potential of the technology in facilitating business processes and enhancing customer experience, e.g., in the processing and settlement of claims.
Research conducted by Accenture, a digital solutions and cloud services provider, finds that four in five claims executives believe that AI and machine learning-based data analytics will add value to their business. At the same time, about 60% of the firms surveyed also plan to invest more than US$10 million in AI tools between 2023 and 2025.
This is not surprising, considering that administrative procedures account for 40% of the underwriting business, with inefficiencies in the system expected to result in losses of as much as US$160 billion between 2022 and 2027, according to Accenture.
Risk evaluation is often a meticulous and complicated process, involving customized terms and conditions for different claims. AI can take over repetitive administrative work, such as data entry and other manual processes, freeing underwriters to focus on more complex tasks. It can even go beyond clerical functions.
Daido Life Insurance, for example, has adopted an AI prediction model that enables the insurer to incorporate AI recommendations into the revision of terms and conditions in an insurance contract. Ant Financial is leveraging AI to determine payouts to policyholders, while online insurance platform Lemonade uses natural language processing and machine learning to chat with customers and process their requests.
Greater benefits can be derived from AI if it is coupled with data analytics, which can dig into historical insurance data for patterns and structures that can be used for forecasting. This function, in fact, has enabled some insurers to build scenarios that incorporate incidents such as cyberattacks which are difficult to insure, The Geneva Association, an insurance think tank, says in a commentary.
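To give a concrete flavor of the pattern-based forecasting described above, the sketch below fits a simple linear trend to historical annual claim counts and extrapolates one year ahead. All figures and function names are invented for illustration; real actuarial forecasting relies on far richer models and data than a straight-line fit.

```python
# Minimal sketch: fit a linear trend to hypothetical annual claim
# counts and extrapolate. Illustrative only, not an actuarial model.

def fit_trend(years, counts):
    """Ordinary least-squares slope and intercept for counts vs. years."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(counts) / n
    ss_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, counts))
    ss_xx = sum((x - mean_x) ** 2 for x in years)
    slope = ss_xy / ss_xx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def forecast(years, counts, target_year):
    """Project the fitted trend to a future year."""
    slope, intercept = fit_trend(years, counts)
    return slope * target_year + intercept

# Hypothetical historical data: claims filed per year.
years = [2018, 2019, 2020, 2021, 2022]
claims = [1200, 1350, 1280, 1500, 1620]

print(round(forecast(years, claims, 2023)))
```

The same idea scales up in practice: instead of one trend line, insurers combine many historical signals to build scenarios, including for hard-to-insure events such as cyberattacks.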
Swiss Re has adopted an AI model that can predict flight delays, which it has incorporated into its flight delay compensation system. The solution makes use of more than 200 million historical data points, including data from over 90,000 flights per day, to automatically adjust rates, enabling passengers who have purchased insurance with their ticket to receive a payout even without filing a claim.
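The parametric logic behind such a product, a payout triggered automatically by observed delay data rather than by a filed claim, can be sketched roughly as follows. The thresholds, amounts, loading factor, and function names are invented for illustration and do not reflect Swiss Re's actual system.

```python
from dataclasses import dataclass

# Hypothetical parametric flight-delay cover: the payout is triggered
# automatically when the observed delay crosses a threshold, with no
# claim filed by the passenger. All figures are invented.

@dataclass
class Policy:
    flight_id: str
    premium: float          # priced from a delay-probability model
    threshold_minutes: int  # delay that triggers the payout
    payout: float

def price_premium(delay_probability: float, payout: float,
                  loading: float = 1.2) -> float:
    """Expected loss times a loading factor; a stand-in for the rate
    adjustment a predictive model would drive per flight."""
    return delay_probability * payout * loading

def settle(policy: Policy, observed_delay_minutes: int) -> float:
    """Return the automatic payout, or zero if the trigger is not met."""
    if observed_delay_minutes >= policy.threshold_minutes:
        return policy.payout
    return 0.0

# Example: the model estimates an 8% chance of a 120+ minute delay.
premium = price_premium(delay_probability=0.08, payout=200.0)
policy = Policy("LX318", premium, threshold_minutes=120, payout=200.0)

print(round(premium, 2))
print(settle(policy, observed_delay_minutes=150))
print(settle(policy, observed_delay_minutes=45))
```

Because settlement depends only on observable flight data, the insurer can pay out the moment the delay is recorded, which is what lets passengers receive compensation without ever filing a claim.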
AI-enabled data can provide insights into complicated claims to assist human decision-making, thereby facilitating settlement with greater efficiency and accuracy.
A report by consultancy firm McKinsey predicts that while claims processing will remain a primary function of carriers in the near to medium term, more than half of related activities will be replaced by automation by 2030.
The report presents scenarios where the widespread use of connected devices can revolutionize the insurance business. “In the case of an auto accident, for example, a policyholder takes streaming video of the damage, which is translated into loss descriptions and estimate amounts. Vehicles with autonomous features that sustain minor damage direct themselves to repair shops for service while another car with autonomous features is dispatched in the interim,” the report says.
The Geneva Association insists that human decision-making will always be a critical part of the insurance business. “While there are differences between AI and non-AI insurance models, the main concerns surrounding AI are not new,” it says. “The human in the loop is a vital component of the extensive redress mechanisms that exist within the insurance sector, either internally or through external entities like an ombudsman.”
Like any other technology finding its way into the market, AI must come with adequate guardrails to ensure that ethical considerations prevail over coded biases. Regulators must closely monitor AI developers and construct a solid compliance framework that keeps decision-making balanced between humans and technology, the think tank says.