The launch of ChatGPT by OpenAI in November 2022 was followed by rapid technological advances, during which GenAI use cases evolved from simple question-answering chatbots into virtual agents that can generate video, audio and code and handle more complex tasks. This makes it increasingly difficult to determine what is real and what is not, especially when original documents or images have been partly altered by GenAI. Research by the Dutch government highlighted this challenge, revealing that most individuals struggle to recognise a deepfake voice.
With the growing number of publicly available GenAI tools, companies and organisations face heightened risk from both internal and external actors who can now harness the power of GenAI to commit fraud. In 2025, for example, the Association of Certified Fraud Examiners listed fake identity fraud, document fraud and deepfake attacks among the most prominent schemes in which GenAI can be leveraged. Furthermore, a 2025 analysis by Feedzai across multiple financial institutions found that more than 50% of encountered external fraud involved the use of GenAI.
The ongoing wave of GenAI developments has reshaped the fraud risk landscape. While traditional fraud schemes such as misappropriation of assets, financial reporting fraud or intellectual property theft continue to exist, the rise of consumer-friendly GenAI tools has lowered the barriers to executing these schemes, enabling individuals to perpetrate sophisticated fraud more easily and without technological expertise.
Below, we outline some examples of fraud risks amplified by GenAI, as well as new risks arising from its growing use and integration into business processes.
Fraudsters can use GenAI to credibly manipulate or fabricate (numerical) data, financial records, and supporting documents (such as invoices), even without deep financial expertise. For example, models such as GPT-5, Claude 4.5 and DeepSeek-R1 can solve equations, analyse spreadsheet data and, when prompted, generate or fabricate entire financial statements. These risks manifest in various forms of fraudulent activity, such as fraudulent financial reporting, misappropriation of assets, and theft of intellectual property.
Overall, data fabrication threatens any organisation or external stakeholder that relies on third-party documents and datasets. The ease of producing credible fake materials increases the need for enhanced verification of source data to ensure data reliability, as well as for robust procurement workflows and checks on the validity of documents and their sources.
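As a minimal illustration of one building block for document verification, the sketch below compares a received file against a checksum shared by the issuing party over a trusted channel. The SHA-256 scheme and the `verify_document` helper are assumptions for this example, not a prescribed standard:

```python
import hashlib

def verify_document(content: bytes, expected_sha256: str) -> bool:
    """Compare a received document's SHA-256 digest against the
    checksum the issuing party shared through a trusted channel."""
    actual = hashlib.sha256(content).hexdigest()
    return actual == expected_sha256.lower()
```

A matching checksum only shows the file was not altered after the checksum was produced; it does not prove the issuer itself is legitimate, which still requires procurement and vendor-onboarding controls.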
GenAI can both generate entirely new images and seamlessly modify existing photos, leaving no visible signs of alteration. Recent advances in model performance even allow individuals to alter their appearance in live streams, such as video calls, on a single consumer-grade laptop. While all GenAI-generated or altered content is considered fake, the term ‘Deepfake’, a combination of 'deep learning' and 'fake', specifically refers to GenAI-created or modified images, videos or audio used to imitate a real person. For example, in June 2025, a deepfake video of former Prime Minister Schoof was used by fraudsters in Facebook advertisements to promote a fake investment platform, reaching over 250,000 views. This is only one of many examples of this type of fraud.
The use and integration of GenAI in business processes also increases the risk of other types of fraud. For example, without proper data classification and governance, embedded GenAI tools can inadvertently expose information that would otherwise be inaccessible to specific users, creating new avenues for fraud.
The rise of Generative AI has heightened fraud risks, but it also presents many opportunities to strengthen procedures and implement GenAI-driven controls to prevent and detect fraudulent attempts. Raising awareness of how GenAI can be misused, how to identify manipulated content, and how to use GenAI responsibly serves as the first line of defence against such attempts.
A fraud risk management framework can significantly enhance the understanding of fraud risks and contribute to their mitigation. A key component of this framework should be assessing how GenAI influences fraud risk. This requires an understanding of the GenAI technologies being adopted internally and those used by threat actors, and mapping the current processes vulnerable to GenAI exploitation. Updating the assessment regularly as technology and tactics evolve is key to identifying new risks early and prioritising mitigation efforts.
To apply this in practice:
To ensure employees keep pace with the evolving capabilities of GenAI, the fraud awareness and training programme should be enhanced to cover GenAI-enabled fraud so that employees can recognise warning signs early. Develop and strengthen employees' practical skills in using GenAI and verification tools in regular business workflows. This may include training them to accurately validate digital and handwritten signatures, critically review metadata, and use corporate-approved GenAI solutions.
Fraudsters often exploit routine weaknesses – such as approving payments via a single channel or skipping verification checks on urgent requests. Integrating GenAI into the anti-fraud strategy can transform the way organisations detect and prevent fraud.
GenAI-enhanced tools can analyse transaction patterns, documents, and communications in real time and flag anomalies. For example, they can detect when payment details change within invoices or highlight suspicious vendor activity outside the 'normal' profile. Other solutions specialise in e-commerce fraud prevention, using GenAI to evaluate the legitimacy of transactions at checkout and spot signs of stolen credentials or synthetic identities before payments are processed.
When organisations harness GenAI correctly, they not only boost operational efficiency but also strengthen their defences.
Generative AI is here to stay and will have a permanent impact on fraud risks. The threats it introduces affect nearly every aspect of an organisation and can undermine business integrity and stakeholder trust. However, a combination of robust workflows, stringent internal controls, and employees trained in both the power and pitfalls of GenAI can significantly mitigate GenAI-enhanced fraud risks. By doing so, businesses can turn the tide on the GenAI fraud wave and secure their future in the AI era.