Why operating models, not tools, are now the constraint

Claude Mythos and the acceleration of cyber risks

17 Apr 2026

Since Anthropic unveiled insights on Claude Mythos Preview on 7 April, it has become a focal point in AI and cybersecurity discussions. Mythos Preview marks a significant advance in cyber capabilities, particularly in coding, reasoning, and identifying vulnerabilities, arriving as many organisations still grapple with the risks posed by generative AI tools. Anthropic has limited access to the model and launched “Project Glasswing”, a cross-industry defensive initiative that enables trusted partners to secure critical software while delaying broader release until defences are strengthened.

Models like Claude Mythos underscore that the primary challenge for many organisations isn't detection capability but execution speed. This doesn't imply AI is introducing something entirely new in cybersecurity. Frontier models were already progressing in this direction. What's evolving is performance: the ability to execute familiar tasks faster, at a larger scale, and with greater autonomy. Anthropic asserts that Mythos Preview significantly surpasses earlier Claude models on various benchmarks and has uncovered serious vulnerabilities in widely used software, including long-standing bugs overlooked by humans and automated testing.

The scale of improvement is noteworthy. Mythos Preview is reportedly nearly 100 times more successful than Claude Opus 4.6, Anthropic’s current frontier model, at creating working exploits for discovered vulnerabilities.

For business and security leaders, this distinction is crucial. The issue isn't a new category of cyber threat; it's the acceleration of existing ones. Frontier AI is reducing the time between discovering a weakness and producing a working exploit, and the advantage will go to whichever side, whether offence or defence, adopts and integrates it faster.

Performance level is changing

Claude Mythos signals a compression of timelines. Vulnerability discovery becomes cheaper and easier to run at scale, and triage and validation speed up through autonomous proof-of-concept generation. Patching, disclosure, and deployment all face greater pressure as a result. The Linux Foundation has already noted that AI-generated bug reports and pull requests are straining open-source maintainers’ capacity, even with today’s models.

Both sides benefit, but not equally. Threat actors can use AI to scan for and exploit known weaknesses, such as unpatched systems, misconfigurations, and weak credentials, faster and more cheaply than before. Defenders can use the same tools to find and fix vulnerabilities first. In practice, however, most organisations already struggle to remediate known issues quickly enough, not because discovery falls short, but because remediation ownership, decision rights, and change processes slow execution. The near-term reality is that offence scales more easily than defence.

Even if Mythos itself remains restricted for now, it should be seen as a leading indicator. Other actors will pursue similar capabilities, and less capable models already enable practical exploitation of known issues today. Most enterprise exposures are not zero-days; they are known vulnerabilities compounded by misconfigurations and identity weaknesses that agentic AI can exploit more efficiently.

Key implications

  • Existing weaknesses become more exposed. The greatest risk in most organisations isn't the absence of advanced tools, but inconsistent hygiene, stale credentials, fragmented telemetry, and slow change processes.
  • The response window is shrinking. If AI materially accelerates vulnerability discovery and exploitation, slow remediation becomes a direct business risk.
  • The skill barrier continues to fall. Anthropic reports that engineers without formal security training generated working exploits overnight using Mythos, implying a meaningfully lowered skill threshold.
  • Governed AI adoption is now essential. Stopping or delaying internal AI adoption isn't a viable response. Organisations that do so will fall further behind as threat actors and peers accelerate. What's needed is a clear plan for deploying AI with appropriate governance, oversight, and monitoring built in from the start.

What you should do now

The most important response isn't panic; it's operating discipline. AI-accelerated cyber risk raises the bar on how fast organisations can decide, act and remediate, not just on what they can detect. These are immediate priorities.

Reassess identification and remediation speed. Review how quickly your organisation can identify, validate, prioritise, and patch critical vulnerabilities. Accelerating discovery at scale puts patch velocity and triage automation at the centre of security competitiveness. Many organisations will find that their current timelines, approval thresholds, and change processes are still built for a slower threat environment than the one now emerging.
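
To make that assessment concrete, it helps to measure the gap. The sketch below is a minimal illustration in Python, assuming a simple in-house record format (the VulnRecord fields are hypothetical, not any specific scanner's schema), that computes median time-to-remediate per severity from closed findings.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class VulnRecord:
    # Hypothetical export format; map your scanner's fields onto these names.
    severity: str                 # e.g. "critical", "high"
    discovered: datetime
    remediated: datetime | None   # None = still open (backlog)

def median_days_to_remediate(records: list[VulnRecord]) -> dict[str, float]:
    """Median days from discovery to fix, per severity, over closed findings."""
    closed: dict[str, list[float]] = {}
    for r in records:
        if r.remediated is None:
            continue  # open findings belong in a separate backlog metric
        days = (r.remediated - r.discovered).total_seconds() / 86400
        closed.setdefault(r.severity, []).append(days)
    return {sev: median(vals) for sev, vals in closed.items()}

# Example: two closed criticals, fixed after 8 and 26 days -> median 17.0
records = [
    VulnRecord("critical", datetime(2026, 3, 1), datetime(2026, 3, 9)),
    VulnRecord("critical", datetime(2026, 3, 5), datetime(2026, 3, 31)),
    VulnRecord("high", datetime(2026, 3, 2), None),
]
print(median_days_to_remediate(records))  # {'critical': 17.0}
```

Tracked per severity over time, this single number makes it obvious whether remediation speed is keeping pace with the threat environment described above.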

Harden identity and configuration hygiene. Treat this as a primary line of defence against AI-accelerated exploitation of known weaknesses. Eliminate stale credentials, enforce multi-factor authentication universally, and audit configurations. These are the weaknesses agentic models exploit most efficiently.
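
As one concrete starting point, stale-credential checks can be automated against whatever identity export you already have. The sketch below assumes a hypothetical CSV export with account, last_used, and mfa_enabled columns; the 90-day threshold is illustrative, not a standard.

```python
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # illustrative threshold; set per your policy

def flag_risky_accounts(path: str, now: datetime) -> list[dict]:
    """Flag accounts that are stale (unused > STALE_AFTER) or lack MFA."""
    risky = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            stale = now - datetime.fromisoformat(row["last_used"]) > STALE_AFTER
            no_mfa = row["mfa_enabled"].strip().lower() != "true"
            if stale or no_mfa:
                risky.append({"account": row["account"],
                              "stale": stale, "no_mfa": no_mfa})
    return risky

# Usage: feed the result into your access-review workflow.
# for item in flag_risky_accounts("accounts.csv", datetime.now()):
#     print(item)
```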

Improve visibility into assets and critical dependencies. You can't secure what you don't know you have. Asset inventory, external attack-surface visibility, and dependency mapping become more valuable as vulnerability discovery accelerates. Organisations with large, complex legacy environments and heavy reliance on open-source components face disproportionate exposure.
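
For dependency mapping, a software bill of materials is a practical anchor. The sketch below assumes a CycloneDX-style JSON SBOM and an illustrative watchlist; it simply counts components and checks which watched ones appear.

```python
import json
from collections import Counter

def component_counts(path: str) -> Counter:
    """Count components by name from a CycloneDX-style SBOM (JSON)."""
    with open(path) as f:
        bom = json.load(f)
    return Counter(c["name"] for c in bom.get("components", []))

# Illustrative watchlist of components you consider critical to track.
WATCHLIST = {"openssl", "log4j-core"}

def watched_components(path: str) -> set[str]:
    """Which watchlisted components actually appear in this SBOM?"""
    return {name for name in component_counts(path) if name in WATCHLIST}
```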

Consolidate security telemetry. Reduce tooling fragmentation and unify data lakes and detection pipelines. Signals buried across disparate tools are far harder to act on at AI-accelerated speeds.
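
Consolidation starts with a shared event shape. The sketch below illustrates the idea with two hypothetical sources ("edr" and "firewall") whose field names are placeholders; the point is that everything downstream consumes one schema.

```python
from typing import Any

def normalise(source: str, event: dict[str, Any]) -> dict[str, Any]:
    """Map tool-specific event fields onto one shared schema.

    Field names per source are illustrative; use your tools' real exports.
    """
    if source == "edr":
        return {"ts": event["timestamp"], "host": event["device_name"],
                "kind": event["alert_type"], "raw": event}
    if source == "firewall":
        return {"ts": event["time"], "host": event["src_ip"],
                "kind": event["action"], "raw": event}
    raise ValueError(f"unknown source: {source}")

# Once everything shares one shape, a single detection pipeline can consume it,
# e.g. pipeline.ingest(normalise("edr", edr_event)) for a hypothetical pipeline.
```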

Use AI defensively now. The same capabilities that may help attackers can also help defenders move faster. Security teams should test AI-enabled approaches in core processes such as vulnerability triage, source-code analysis, patch generation, and response workflows.
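
A low-risk way to start is AI-assisted triage with an explicit human decision. The sketch below uses a stand-in ask_model() function, a placeholder for whichever model client your organisation has approved, and treats the model's output strictly as a draft.

```python
def ask_model(prompt: str) -> str:
    """Stand-in for an approved model client; replace with your integration."""
    raise NotImplementedError

def draft_triage(finding: dict) -> str:
    """Ask the model for a draft severity assessment; a human makes the call."""
    prompt = (
        "You are assisting with vulnerability triage. Suggest a severity "
        "(low/medium/high/critical) and a one-line rationale for this finding.\n"
        f"Component: {finding['component']}\n"
        f"Description: {finding['description']}\n"
        f"Internet-exposed: {finding['exposed']}"
    )
    return ask_model(prompt)  # output is a draft for review, never a decision

# Human-in-the-loop usage:
# suggestion = draft_triage({"component": "auth-service",
#                            "description": "JWT signature not verified",
#                            "exposed": True})
# A reviewer accepts, edits, or rejects the suggestion.
```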

Strengthen crisis response and recovery. Prevention alone won't close the gap. Pressure-test detection, containment, backup and recovery capabilities against scenarios where exploitation timelines are measured in minutes rather than days. Incident-response plans, crisis management playbooks, and escalation paths need to reflect that reality.

Update governance for internal AI tools and agents. As teams adopt coding assistants and agentic tooling, revisit approval paths, access boundaries, monitoring, sandboxing, and human review. Secure AI deployment is increasingly part of operational resilience and compliance, including auditability and incident-response preparedness, not just innovation.
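
Concretely, "access boundaries and human review" for agents can take the form of a policy gate between the agent and its tools. The sketch below is illustrative only; the action names and policy table are placeholders for your own approval model.

```python
# Hypothetical policy table: which agent actions run automatically,
# which require a named human approver, and which are always refused.
POLICY = {
    "read_repo":      "allow",
    "open_pr":        "require_approval",
    "merge_pr":       "deny",
    "run_in_sandbox": "allow",
    "run_on_prod":    "deny",
}

def gate(action: str, approved_by: str | None = None) -> bool:
    """Return True if the agent may act; default-deny and log every decision."""
    decision = POLICY.get(action, "deny")
    allowed = decision == "allow" or (
        decision == "require_approval" and approved_by is not None
    )
    print(f"audit: action={action} decision={decision} "
          f"approved_by={approved_by} allowed={allowed}")
    return allowed

# gate("open_pr")                         -> False (no approver recorded)
# gate("open_pr", approved_by="reviewer") -> True, with an audit line
```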

Summary

Claude Mythos may or may not live up to the strongest claims being made about it, but the signal is already clear. AI is reaching, and in some areas matching, human-expert performance on key security tasks, including vulnerability scanning, triage, and exploit generation.

AI is no longer a future consideration for cybersecurity: it is reshaping the threat landscape now. The central question for every organisation is whether you can patch, respond and govern at AI speed. The ones that build that discipline today will lead. The ones that invest heavily but leave their decision-making, remediation and escalation processes unchanged risk discovering too late that they were solving the wrong problem.

Questions? Feel free to reach out to us:

Gerwin Naber

Partner, PwC Netherlands

Bram van Tiel

Partner Cybersecurity, resilience & privacy, PwC Netherlands

Peter Avamale

Director, PwC Netherlands
