Public concern is increasing. The Dutch House of Representatives is currently exploring legal measures against the misuse of deepfakes. Inspired by Denmark, several political parties have proposed granting individuals copyright over their own appearance and voice, which could help prevent fraud and identity theft.
According to our Global Economic Crime Survey 2024, cybercrime—including impersonation via deepfakes—is now the most reported form of fraud in Europe. The Global Digital Trust Insights 2025 also shows that 67 percent of European security experts believe generative AI and cloud technologies have expanded the digital attack surface.
Examples of impersonation fraud
Impersonation fraud often follows a layered approach, combining various techniques to deceive employees and bypass security measures. Here are some common examples:
- Phishing: Emails that appear to come from trusted colleagues or organizations, prompting recipients to share confidential information.
- Business Email Compromise (BEC): Targeted email attacks where executives or employees are impersonated to mislead internal stakeholders.
- Deepfakes: AI-generated audio or video convincingly mimicking an executive’s voice or appearance, often used to persuade employees—especially those unfamiliar with the person.
These methods are often combined. For example, a fraudster might first make contact via WhatsApp or a phishing email, followed by a deepfake video call. This layered strategy creates a highly convincing scenario, making it increasingly difficult to distinguish real from fake.
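The email leg of such a layered attack often leaves detectable traces. As a rough illustration (not a PwC tool; the trusted domain, headers, and thresholds below are invented), the sketch flags two common Business Email Compromise signals using only Python's standard library: a Reply-To domain that differs from the From domain, and a lookalike domain that closely resembles a trusted one.

```python
from email import message_from_string
from difflib import SequenceMatcher

# Hypothetical allow-list of domains the organization trusts.
TRUSTED_DOMAINS = {"example-corp.com"}

def domain_of(addr: str) -> str:
    """Extract the lowercase domain part of an email address."""
    return addr.rsplit("@", 1)[-1].strip(">").lower()

def bec_signals(raw_email: str) -> list[str]:
    """Return heuristic warning signals for a raw email message."""
    msg = message_from_string(raw_email)
    from_dom = domain_of(msg.get("From", ""))
    reply_dom = domain_of(msg.get("Reply-To", "") or msg.get("From", ""))
    signals = []
    # Signal 1: replies are silently routed to a different domain.
    if reply_dom != from_dom:
        signals.append(f"Reply-To domain ({reply_dom}) differs from From domain ({from_dom})")
    # Signal 2: the sender domain is a near-copy of a trusted domain.
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, from_dom, trusted).ratio()
        if from_dom != trusted and similarity > 0.8:
            signals.append(f"Lookalike domain: {from_dom} resembles {trusted}")
    return signals

# Invented sample message: note the "1" replacing "l" in the sender domain.
raw = (
    "From: CEO <ceo@examp1e-corp.com>\n"
    "Reply-To: ceo@freemail.example\n"
    "Subject: Urgent wire transfer\n\n"
    "Please process this payment today.\n"
)
for warning in bec_signals(raw):
    print("WARNING:", warning)
```

Heuristics like these only raise flags for human review; they do not replace the out-of-band verification (for example, calling the executive back on a known number) that ultimately defeats the deepfake video call.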
What to do in case of an incident
Time is critical in impersonation fraud. A swift, structured response can significantly reduce the impact and support recovery. Organizations should act immediately along four lines:
- Contain: Isolate the incident and prevent further damage.
- Investigate and analyze: Assess the impact and initiate a forensic investigation.
- Recover and pursue legal action: Trace the money trail and explore legal and insurance options.
- Remediate: Strengthen processes, restore systems, and train employees.
Building resilience before an attack
True cyber resilience starts well before an incident. By taking proactive steps, organizations can reduce risks, strengthen defenses, and better prepare for potential fraud:
- Risk assessment: Identify vulnerabilities and sector-specific threats.
- Cyber readiness: Establish clear incident procedures and conduct crisis simulations.
- Awareness training: Educate employees about deepfakes and warning signs.
PwC’s vision: Navigating the future of AI and deepfakes
While deepfakes currently pose serious risks, PwC believes the same technology can also be part of the solution. Our vision for AI is based on four core principles:
- From risk to strategic value: AI is not just a threat—it’s a driver of innovation. Organizations that use AI responsibly can increase efficiency, improve decision-making, and enhance customer experience.
- Detection through AI: New technologies are being developed to detect deepfakes in real time. PwC helps evaluate and implement AI tools that verify authenticity through biometrics, metadata, and behavioral analysis.
- Governance and ethics: As AI’s impact grows, so does the need for regulation. We expect more legal frameworks around digital identity and AI use. Organizations must prepare for transparency, accountability, and ethical deployment.
- People remain central: Technology alone is not enough. Cyber resilience requires a strong focus on employee awareness, behavior, and culture.
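Tool-assisted verification can start with very simple controls before any AI detection is involved. The sketch below is a minimal illustration (the workflow and byte strings are assumptions, not a PwC product): a cryptographic fingerprint of a genuine recording is registered through a trusted channel, and any later copy whose bytes have been altered fails verification. Note that this detects tampering with a known original, not deepfakes generated from scratch.

```python
import hashlib
import hmac

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_media(received: bytes, registered_digest: str) -> bool:
    """Check a received file against a digest registered via a trusted channel."""
    # compare_digest avoids timing side channels when comparing digests.
    return hmac.compare_digest(fingerprint(received), registered_digest)

# Hypothetical workflow: the genuine recording's digest was shared out of band.
original = b"...genuine video bytes..."
registered = fingerprint(original)

tampered = b"...manipulated video bytes..."
print(verify_media(original, registered))   # the genuine file verifies
print(verify_media(tampered, registered))   # altered bytes are rejected
```

In practice this kind of integrity check would sit alongside the richer signals the text mentions, such as biometric liveness checks, metadata analysis, and behavioral patterns.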
Want to learn more?
Contact us to discover how PwC can help you prevent and respond to digital fraud—and prepare your organization for the future of AI.