The double-edged sword: How generative AI fuels fraud

  • 07 May 2026

By: Lenda Pacaj (Technical Office), Bas Castelijns (Advisory), and Stefan van Deelen (Advisory)

In today's rapidly evolving digital landscape, generative artificial intelligence (GenAI) is transforming how businesses operate and innovate. From creating content to automating customer services and business processes, the possibilities seem limitless. However, the widespread adoption of GenAI has also substantially increased the possibilities for individuals to commit fraud. As GenAI becomes more sophisticated, fraudsters are exploiting its potential to enhance deceptive tactics. Understanding how GenAI fuels fraud, and how organisations can respond, is crucial to staying ahead of these new threats.

Rise of AI-driven fraud

The launch of ChatGPT by OpenAI in November 2022 was followed by rapid technological advances, during which GenAI use cases evolved from simple chatbots that answer questions into virtual agents that can generate video, audio and code, as well as handle more complex tasks. This makes it increasingly difficult to determine what is real and what is not, especially for original documents or images that are partly altered by GenAI. For example, research by the Dutch government highlighted this challenge, revealing that most individuals struggle to recognise a deepfake voice. 

With the growing number of publicly available GenAI tools, companies and organisations face heightened risk from both internal and external actors who can now harness the power of GenAI to commit fraud. For example, in 2025 the Association of Certified Fraud Examiners listed fake-identity fraud, document fraud, and deepfake attacks among the most prominent schemes in which GenAI can be leveraged. Furthermore, a 2025 analysis by Feedzai across multiple financial institutions found that more than 50% of the external fraud encountered involved the use of GenAI.

Fraud techniques enhanced by GenAI

The ongoing wave of GenAI developments has reshaped the fraud risk landscape. While traditional fraud schemes such as misappropriation of assets, financial reporting fraud or intellectual property theft continue to exist, the rise of consumer-friendly GenAI tools has lowered the barriers for executing these schemes. It enables individuals to perpetrate sophisticated fraud more easily and without technological expertise. 

Below, we outline some examples of fraud risks amplified by GenAI, as well as new risks arising from its growing use and integration into business processes.

Data fabrication or manipulation

Fraudsters can use GenAI to credibly manipulate or fabricate (numerical) data, financial records, and supporting documents (such as invoices), even without deep financial expertise. For example, models like GPT-5, Claude 4.5, and DeepSeek-R1 are able to solve equations, analyse spreadsheet data, and, when prompted, generate or fabricate entire financial statements. These risks manifest in various forms of fraudulent activity, such as fraudulent financial reporting, misappropriation of assets, and theft of intellectual property. 

  • Fictitious revenue: Employees can use GenAI to fabricate customer accounts, purchase orders, delivery notes, and invoices, generating fictitious transactions that artificially inflate revenue to hit performance targets (e.g., trigger bonuses, comply with debt covenants, or window dress financials ahead of a sale).
  • Valuations: GenAI-generated valuation reports, discounted cash flow models, and fabricated market data can be used to support inflated valuations. 
  • Top-side journal entry postings: Polished management memos and falsified board minutes can be used to justify improper top‑side journal entries in the financial records (e.g., as support for manual adjustments).
  • Manipulating bank statements and bank confirmations: GenAI can generate highly realistic bank statements and simulate plausible transaction patterns from historical data, making forged documents difficult to distinguish from authentic ones. It can also produce correspondence that appears to originate from banks or third parties and fabricate bank confirmation responses used in external audits.
  • Fraud in payment organisation: GenAI‑generated invoices that replicate legitimate vendor details and formatting can evade automated and manual controls, leading to unauthorised payments (e.g., to personal or controlled accounts).
  • Expense reimbursement fraud: An employee submits a claim for a two-day ‘client visit’ that never occurred, attaching realistic-looking hotel and taxi receipts and a polished meeting itinerary. Using GenAI, the employee fabricated receipts that mirror the company’s usual vendors and price ranges and include plausible dates and logos. The claim passes basic policy checks, and the reimbursement is approved.
  • Theft of intellectual property: A short‑term contractor on an R&D team uses the company’s GenAI assistant, which can retrieve internal documents and code comments, to produce high‑level summaries of a proprietary design. They then move the AI‑paraphrased summaries out via an unapproved channel, exposing trade secrets; because the content is paraphrased and sent in small pieces, basic keyword/DLP checks do not flag it.

Overall, data fabrication threatens any organisation or external stakeholder that relies on third-party documents and datasets. The ease of producing credible fake materials heightens the need for enhanced verification of source data to ensure its reliability, as well as robust procurement workflows and checks on the validity of documents and their sources. 

Deepfakes and synthetic media

GenAI can both generate entirely new images and seamlessly modify existing photos, leaving no visible signs of alteration. Recent advances in model performance even allow individuals to alter their appearance in live streams, such as video calls, with a single consumer-grade laptop. While all GenAI-generated or altered content is by definition fake, the term ‘deepfake’ — a combination of 'deep learning' and 'fake' — specifically refers to GenAI-created or modified images, videos or audio used to imitate a real person. For example, in June 2025, a deepfake video of former Prime Minister Schoof was used by fraudsters in Facebook advertisements to promote a fake investment platform, reaching over 250,000 views. Some other examples of this type of fraud include: 

  • Social engineering: GenAI-generated voice bots can mimic trusted callers like IT support to extract credentials or confidential information from employees over the phone. Moreover, GenAI can generate personalised and error-free phishing messages and websites that mimic the corporate style to collect employee credentials. 
  • CEO fraud: Deepfake videos or phone calls of senior executives, based on publicly available material, can persuade staff to conduct urgent payments. For example, in 2024 a finance worker in Hong Kong was tricked into transferring €23 million during a video conference where all other participants were deepfake recreations of senior staff, including the CFO.
  • Bypassing segregation of duties: Fraudsters can use GenAI to fabricate fake email conversations or chat messages with (higher) management that show approval to gain unauthorised access to data or systems.
  • Identity fraud: GenAI can be used to generate completely fabricated identities of remote workers, including CVs, social media profiles, and live face swapping to pass virtual interviews. For example, North Korean IT workers use deepfakes to infiltrate western high-tech companies and extract proprietary information.

Data leaks and misuse 

The use and integration of GenAI in business processes increases the risk of other types of fraud. For example, without proper data classification and governance, embedded GenAI tools can inadvertently expose information that would otherwise be inaccessible to specific users, creating new avenues for fraud.

  • Data misuse: As GenAI tools such as Microsoft Copilot are embedded in business processes, users can quickly search for and find emails and relevant documents. However, if data classification and governance are not set up properly, Copilot might surface data sources or information that a user should not have access to. This could give an employee access to sensitive or classified information, creating opportunities for insider trading, corporate espionage, or extortion.
  • Data leaks: Misuse of GenAI, especially ‘shadow AI’, can lead to unintentional data leaks. Shadow AI refers to the unmonitored use of public GenAI tools such as ChatGPT with proprietary company information. This can result in data breaches, as public GenAI providers may use user inputs as training material for their models. Moreover, it could constitute a breach of contractual agreements with clients or of regulatory requirements concerning data privacy. 

Turning the tide: What can organisations do to address the risk?

The rise of generative AI has heightened fraud risks, but it also presents many opportunities to strengthen procedures and implement GenAI-driven controls to prevent and detect fraudulent attempts. Raising awareness of the potential misuse of GenAI, how to identify manipulated content, and how to use GenAI responsibly serves as the first line of defence against such attempts. 

Conduct periodic AI-integrated fraud risk assessments

A fraud risk management framework can significantly enhance the understanding of fraud risks and contribute to their mitigation. A key component of this framework should be assessing how GenAI influences fraud risk. This requires an understanding of the GenAI technologies being adopted internally and those used by threat actors, as well as mapping the current processes vulnerable to GenAI exploitation. Updating the assessment regularly as technology and tactics evolve is key to identifying new risks early and prioritising mitigation efforts.

To apply this in practice: 

  • Consider scenarios such as deepfake executive requests, GenAI-generated invoices and receipts, spoofed login pages, and GenAI-enabled data leaks. 
  • Identify where these risks could manifest within key business processes, such as payments, vendor onboarding, revenue recognition, procurement, HR onboarding, executive communications, and customer support. 
  • Define simple but effective red flags that should trigger manual checks: urgent after‑hours requests, sudden vendor or bank detail changes, use of non-corporate communication channels like WhatsApp, documents with unclear origin, and unusual login activity (e.g. from an unfamiliar location or at an odd time). 
  • Clearly outline the steps that an employee must follow if they suspect or detect fraudulent activity, including who to report to, how to document concerns and how to respond without compromising evidence or escalating the risk.
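As an illustration, red flags like those above can be expressed as simple rule-based checks that route a request to manual review. The sketch below is a minimal Python example; the `PaymentRequest` fields, the approved-channel list, and the business-hours window are illustrative assumptions, not a reference to any specific product or control framework.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative assumptions: extend the channel list and hours to your environment.
APPROVED_CHANNELS = {"email", "erp_workflow"}
BUSINESS_HOURS = range(8, 18)  # 08:00-17:59 local time

@dataclass
class PaymentRequest:
    received_at: datetime
    urgent: bool
    channel: str                 # e.g. "email", "whatsapp"
    bank_details_changed: bool   # payee details differ from the vendor master
    document_origin_known: bool  # provenance of supporting documents verified

def red_flags(req: PaymentRequest) -> list[str]:
    """Return the red flags that should trigger a manual check."""
    flags = []
    if req.urgent and req.received_at.hour not in BUSINESS_HOURS:
        flags.append("urgent after-hours request")
    if req.bank_details_changed:
        flags.append("sudden vendor or bank detail change")
    if req.channel not in APPROVED_CHANNELS:
        flags.append(f"non-corporate communication channel: {req.channel}")
    if not req.document_origin_known:
        flags.append("document with unclear origin")
    return flags
```

An urgent request arriving at 23:00 via WhatsApp with changed bank details would raise several flags at once; any non-empty result should pause the payment pending manual verification.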

Awareness

To ensure employees keep pace with the evolving capabilities of GenAI, fraud awareness and training programmes should be enhanced to cover GenAI-enabled fraud, so that employees can recognise warning signs early. In addition, develop employees' practical skills with GenAI and verification tools in regular business workflows. This may include training them to validate digital and handwritten signatures, critically review document metadata, and use corporate-approved GenAI solutions.
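One of the training elements mentioned here, critically reviewing metadata, can be partly automated. The sketch below assumes metadata has already been extracted from a document (for example with a tool such as exiftool); the field names and the watch-list of editing tools are illustrative assumptions, not an exhaustive or authoritative check.

```python
from datetime import date

# Illustrative watch-list; extend with tools relevant to your environment.
EDITING_TOOLS = {"photoshop", "gimp", "canva"}

def metadata_warnings(meta: dict, claimed_date: date) -> list[str]:
    """Flag inconsistencies between a document's metadata and what it claims.

    `meta` is assumed to hold fields extracted beforehand, e.g.
    {"producer": str, "created": date, "modified": date}.
    """
    warnings = []
    producer = str(meta.get("producer", "")).lower()
    if any(tool in producer for tool in EDITING_TOOLS):
        warnings.append(f"produced by editing software: {meta['producer']}")
    created, modified = meta.get("created"), meta.get("modified")
    if created and created > claimed_date:
        warnings.append("file created after the date printed on the document")
    if created and modified and modified != created:
        warnings.append("file modified after creation")
    return warnings
```

Such checks are screening aids only: absent or clean metadata does not prove authenticity, since metadata itself can be fabricated.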

GenAI to combat GenAI-induced fraud

Fraudsters often exploit routine weaknesses – such as approving payments via a single channel or skipping verification checks on urgent requests. Integrating GenAI into the anti-fraud strategy can transform the way organisations detect and prevent fraud.

GenAI-enhanced tools can analyse transaction patterns, documents, and communication in real time and flag anomalies. For example, they can detect when payment details are changed within invoices or highlight suspicious vendor activity outside the 'normal' profile. Other solutions specialise in e-commerce fraud prevention, using GenAI to evaluate the legitimacy of transactions at checkout and spotting signs of stolen credentials or synthetic identities before payments are processed.
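To make the transaction-monitoring idea concrete, the sketch below flags a vendor payment when the payee details differ from the vendor master or the amount falls far outside the vendor's historical profile. The z-score threshold and field names are illustrative assumptions, a minimal stand-in for the richer models such tools actually use.

```python
import statistics

def payment_anomalies(history: list[float], amount: float,
                      iban: str, iban_on_file: str,
                      z_threshold: float = 3.0) -> list[str]:
    """Return reasons to flag a vendor payment for review."""
    reasons = []
    if iban != iban_on_file:
        reasons.append("payment details differ from the vendor master")
    if len(history) >= 2:
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        # Flag amounts more than z_threshold standard deviations from the mean.
        if stdev > 0 and abs(amount - mean) / stdev > z_threshold:
            reasons.append("amount far outside the vendor's normal profile")
    return reasons
```

In practice such hand-written rules complement rather than replace model-based detection, which can generalise beyond fixed thresholds.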

When organisations harness GenAI correctly, they not only boost operational efficiency, but also strengthen the organisation's defence. 

Conclusion

Generative AI is here to stay and will have a permanent impact on fraud risks, with the potential to undermine business integrity and stakeholder trust. The threats it introduces affect nearly every aspect of an organisation. However, a combination of robust workflows, stringent internal controls, and GenAI-literate employees trained in both the power and pitfalls of GenAI can significantly mitigate GenAI-enhanced fraud risks. By doing so, businesses can turn the tide on the GenAI fraud wave and secure their future in the AI era.

Contact

Pavel Jankech

Director, Forensic Services, PwC Netherlands

+31 (0)64 131 60 18


Micha Soentpiet

Senior Manager, PwC Netherlands

+31 (0)62 049 24 12
