13/03/20
Responsible use of artificial intelligence (AI) is essential for creating future-proof insurers. Insurers are fully aware of this and are increasingly applying AI in their business processes. But what are the limits of responsible use? To what extent is the use of data still ethical and fair? Eugénie Krijnsen, industry leader for the financial sector at PwC, talks with Ruud Wetzels and Rian de Jonge, both experts in data analysis at PwC.
“If insurers want to apply AI in a way that does not undermine customer confidence, they will have to think very carefully about what they do and do not consider responsible”, says Krijnsen.
“The application of AI is in any case an absolute necessity for insurers”, according to Wetzels. “AI offers many opportunities to increase customer convenience, reduce costs and strengthen the trust that insurers depend on. However, AI is also complex and creates new risks.”
Insurers are already using artificial intelligence in call centres: technology registers callers’ use of language and emotional state to help determine how the call is routed. To enable automated claims handling, algorithms search in the background for signs of fraud. AI is also used to set insurance premiums dynamically and to develop new insurance services for specific target groups.
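To make the call-centre example concrete, here is a minimal sketch of sentiment-based call routing. The sentiment score, thresholds and queue names are all assumptions for illustration; in practice the score would come from a speech-analytics model, not be hand-supplied.

```python
# Minimal sketch of sentiment-based call routing. The sentiment score is
# assumed to come from an upstream speech-analytics model; thresholds and
# queue names below are hypothetical, not an actual product.

from dataclasses import dataclass

@dataclass
class Call:
    caller_id: str
    sentiment: float   # -1.0 (very negative) .. 1.0 (very positive)
    topic: str         # e.g. "claim", "premium", "general"

def route_call(call: Call) -> str:
    """Pick a queue based on the caller's emotional state and topic."""
    if call.sentiment < -0.5:
        return "senior-agent"    # distressed callers go to experienced staff
    if call.topic == "claim":
        return "claims-desk"
    return "self-service"

print(route_call(Call("c-123", sentiment=-0.7, topic="claim")))  # senior-agent
```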
According to Wetzels, each insurer will have to make a fundamental choice that is in line with its own strategy and culture, taking into account the choices made by competitors. “The discussion about what constitutes responsible AI has not yet taken shape, and insurers are making different choices in this respect. Some insurers focus on the short term and go as far as legislators and regulators allow. Other insurers are mainly trying to set up AI properly for the long term.”
When making these choices, Krijnsen believes that insurers are well advised to take into account the three factors that determine their relevance to stakeholders: convenience, low costs and trust.
Krijnsen: “By convenience we mean, among other things, the extent to which customers are satisfied with their experience of insurers’ products and services, but also the speed, stability and simplicity they experience when interacting and communicating with insurers through the channel of their choice.”
In addition, continuing to strive for low costs is a prerequisite for insurers’ continued existence. “It is necessary to be able to provide products and services at a competitive price that customers experience as fair”, says Krijnsen. Automation has already reduced costs by making manual processes redundant. AI also promises, once it has been implemented at sufficient scale, to make day-to-day processes less dependent on human judgement. Algorithms are already better and faster than people in certain decision-making processes, and in the long run they will also be cheaper.
Finally, the trust factor indicates the extent to which customers find their insurer reliable, predictable, transparent and sincere. “By trust we also mean the extent to which insurers make a useful and valued contribution to society, of which they are of course themselves a part. It is the foundation of insurers’ relevance. The application of AI inevitably involves dilemmas that directly affect this trust, and that can touch some of the most important foundations of insurance: risk sharing and solidarity”, says Krijnsen.
One of the dilemmas in AI is the use of data. Customers, sometimes without being fully aware of it, hand over a lot of sensitive personal data about their preferences and, for example, their health or driving behaviour. Based on that data and AI, insurers are able to develop personalised products and services. Personalisation is, however, at odds with solidarity: in exchange for handing over data, premium discounts can be offered, but will customers who are not willing to share their data end up paying a higher premium?
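The trade-off becomes tangible in a simple calculation. The sketch below shows a premium with a discount for data sharing; all figures and the risk factor are hypothetical, chosen purely to make the solidarity question concrete.

```python
# Illustrative sketch of the personalisation dilemma: a premium that is
# discounted when the customer shares behavioural data. All figures and
# factor names are hypothetical.

BASE_PREMIUM = 600.0   # yearly premium before personalisation
DATA_DISCOUNT = 0.15   # discount for customers who share their data

def yearly_premium(risk_factor: float, shares_data: bool) -> float:
    """risk_factor > 1.0 means higher-than-average assessed risk."""
    premium = BASE_PREMIUM * risk_factor
    if shares_data:
        premium *= 1.0 - DATA_DISCOUNT
    return round(premium, 2)

# A low-risk driver who shares telematics data pays less...
print(yearly_premium(0.9, shares_data=True))    # 459.0
# ...while an identical driver who declines to share effectively pays more.
print(yearly_premium(0.9, shares_data=False))   # 540.0
```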
“To what extent does an insurer still consider the application of AI ethical and fair? And do customers feel the same way? Wrong choices or unfortunate communication about these kinds of dilemmas can seriously damage confidence in insurers”, says De Jonge.
PwC has developed a Responsible AI Toolkit to structure thinking about the responsible use of AI, and to manage the risks associated with its use. Using a series of customized tools, companies can explore five dimensions of Responsible AI:
Ethics and regulation: where do insurers draw the line for themselves and where do legislators and regulators draw the line?
Bias and fairness: how do you carefully weigh up these complex, partly socio-cultural concepts, which are sometimes at odds with each other? Even after excluding any form of input bias, algorithms can still generate outcomes that are perceived as unfair; a minimal check for this is sketched after this list.
Interpretability and explainability: no customer or supervisor accepts 'computer says no' as an explanation for an unfavourable decision. Insurers must be able to understand, explain and defend why their algorithm came to that decision.
Robustness and security: how can you prevent small input variations and malicious outsiders from influencing the outcomes produced by AI?
Governance: where lies the responsibility within the insurer for the content of an algorithm and the decisions it makes? At the data team, at the business owner or at the board of directors?
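As promised under bias and fairness above, here is a minimal sketch of one common check: comparing outcome rates across groups. The groups, decisions and the parity-style comparison are illustrative assumptions; real fairness assessments use richer metrics and context.

```python
# Minimal sketch of an outcome-level fairness check: even when no protected
# attribute is used as model input, approval rates can still differ per
# group. Groups and decisions below are hypothetical.

from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label, approved?) pairs from a model's output."""
    totals: dict[str, int] = defaultdict(int)
    approved: dict[str, int] = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(approval_rates(decisions))
# roughly {'A': 0.67, 'B': 0.33} -> a gap worth investigating
```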
“Anyone who wants to deal with AI in a responsible manner will in any case have to think very carefully about these five aspects, about the position they choose, and about the way of working they want to set up”, says Wetzels. “Many insurers are already investigating how AI can generate business value for them, but often they still find it difficult, quite understandably, to determine how, for example, fairness or explainability can contribute to this.”
“The risk is that thinking about strategic issues, such as 'what exactly does responsible AI mean for us?', will remain separate from the question of how you put this into practice”, De Jonge adds. “How do you make sure you have enough data scientists in-house? How do you build an AI platform? How do you validate your models? How do you set up the governance around AI? The strategic side and the practical side have to fit together in order to really get responsible AI off the ground and make it profitable. AI is certainly promising, but our message is that you need to know exactly what you are doing in order to deliver on the promises you make to your customers in a responsible way.”