New European AI law helps navigate complexities of AI adoption

AI Act gives financial sector opportunity to promote trust

  • Blog
  • 13 Aug 2024
Anthony Kruizinga

Partner, PwC Netherlands

On 1 August 2024, the European Artificial Intelligence Act (the 'EU AI Act') came into force. As artificial intelligence (AI) rapidly evolves, the strategic deployment of AI technologies can help to keep banks’ business models competitive and sustainable. At the same time, the financial sector must tread cautiously to mitigate potential risks that could undermine trust in the financial system or be harmful to customers. The newly adopted EU AI Act may help resolve this trade-off. Despite the challenges of regulatory compliance, the AI Act could play a pivotal role in fostering both innovation and trust in the banking industry, says PwC expert Anthony Kruizinga.

The AI trade-off: innovation versus caution 

In today's rapidly evolving technology landscape, the conversation around AI, and generative AI in particular, is increasingly prominent, especially in the financial services sector, where its possibilities are immense. AI adoption in financial services can revolutionise customer engagement, streamline operations, and enhance predictive analysis and risk management.

For instance, machine learning algorithms can detect fraud patterns in real time, significantly reducing financial losses and boosting customer confidence in account security. Natural language processing powers intelligent chatbots and virtual assistants that provide personalised support and financial advice, anticipating customer needs and preferences. Predictive modelling tools analyse vast datasets to deliver risk assessments that inform loan approvals, investment strategies, and portfolio management decisions.

Banks and financial institutions now have an unprecedented opportunity to tailor their products and services to evolving client demands. Failure to innovate and to leverage these technological advancements can result in a significant loss of competitiveness and business model viability.

Leveraging AI responsibly to sustain customer trust 

Yet, with great power comes great responsibility. The integration of AI must be approached with a commitment to maintaining trustworthiness. Trust is the foundation upon which the financial industry is built. Customers rely on financial institutions to protect their assets and personal data. As banks harness AI to enhance efficiency and offer personalised services, they must ensure that AI is deployed responsibly and ethically. This presents a critical trade-off: should financial institutions fully embrace AI systems' capabilities, or remain cautious due to the potential for AI errors, bias, hallucination, vulnerability to hacking, and lack of transparency in deep-learning models? What measures should be implemented to ensure AI systems are trustworthy, and AI is used responsibly and reliably? We believe the EU AI Act can help solve this seemingly impossible conundrum.

The EU AI Act: navigating the trust landscape  

Given the inherent risks associated with AI adoption in financial services, particularly around safeguarding sensitive customer data and protecting human rights, strategic measures are essential. Compliance with the new EU AI Act provides a robust answer to this trade-off. Though it may initially be seen as a regulatory challenge, and yet another implementation burden, the AI Act ensures that AI systems respect fundamental rights, safety, and ethical principles, fostering trustworthy AI.

The EU AI Act, the first comprehensive legal framework on AI in the world, aims to build trust by addressing risks and ensuring AI systems adhere to stringent standards. This is particularly crucial for the financial sector, where the trust and safety of AI applications directly impact customer confidence and regulatory compliance. The Act tackles risks created by AI applications, prohibits AI practices that pose unacceptable risks, identifies high-risk applications, and sets clear requirements for AI systems used in high-risk applications. It defines obligations for deployers and providers of high-risk AI applications, requires a conformity assessment before AI systems are put into service, establishes enforcement mechanisms, and creates a governance structure at European and national levels. 

The EU AI Act has several key impacts on the financial sector: 

  • Increased transparency and accountability: the AI Act mandates transparency in AI operations, requiring banks to be able to explain AI decision-making processes. This transparency builds trust as clients can understand how their data is used and safeguarded.  
  • Risk mitigation: by classifying certain AI applications as high-risk, the Act ensures that stringent measures are in place to mitigate potential harms. This is crucial for financial services that deal with sensitive personal and financial data.  
  • Innovation within ethical boundaries: the Act provides guidelines for ethical AI development, reducing uncertainties and encouraging innovation within established boundaries. This accelerates the adoption of new technologies while ensuring ethical considerations are met.  
  • Operational efficiency: by streamlining compliance processes and integrating them into AI development, banks can reduce the time and resources spent on trial-and-error phases, speeding up the deployment of AI innovations. This can result in faster time-to-market for AI-driven products and services, providing a competitive edge. 

More importantly, by adhering to the EU AI Act, banks can demonstrate their commitment to deploying AI responsibly and ethically. This compliance not only mitigates risks but also enhances customer trust, ensuring AI innovations are seen as tools for enhancing security and personalisation rather than as threats to privacy and fairness. Customers who understand how their data is used and the safeguards in place to protect it are more likely to trust and engage with AI-driven services. This can strengthen customer loyalty and lead to increased adoption of AI-powered offerings.

EU AI Act: tackling the challenge to enhance trust 

In conclusion, compliance with the EU AI Act is certainly a challenge, but it is also a means of cultivating trust in the financial industry. The Act provides 'guard rails' within which it is safe to innovate, much as traffic lights, signs and rules help create a safer driving environment. By embracing this regulatory framework, banks can confidently navigate the complexities of AI adoption, ensuring their clients’ trust is not only maintained but strengthened in this new era of technological advancement.

Do you want to know more about the implications of the EU AI Act?

About the author

Anthony Kruizinga

Partner, PwC Netherlands

Anthony leads the Risk & Regulatory consulting practice in the Netherlands and is part of the EMEA Financial Services team. He also leads PwC's advisory activities on the Single Supervisory Mechanism (SSM) of the ECB in Europe. He focuses on risk management, risk appetite frameworks, capital management, stress testing, liquidity, SREP, risk modelling and TRIM. Anthony is an engineer and studied Business Management & Administration at Eindhoven University.