Mythos Changes Less Than You Think, and More Than You’re Ready For

By Justin “Hutch” Hutchens | Trace3 Innovation Principal

 

Anthropic’s announcement of Project Glasswing and its underlying Mythos model sparked a wave of attention across the cybersecurity world. Much of the conversation focuses on one thing: AI that can hack. And to be clear, the capabilities described are serious. But the bigger story is not that AI can now do this. It is what happens next for enterprise security.

Background: What is Project Glasswing and Mythos?

Project Glasswing is Anthropic’s new initiative focused on applying advanced AI to cybersecurity. At the center of it is Mythos, a model designed to identify and exploit vulnerabilities in software systems at a very high level.

Unlike general-purpose LLMs, Mythos is purpose-built for cyber. According to its system card, it can discover vulnerabilities across complex systems and chain exploits together like a sophisticated human adversary. Glasswing brings this capability into a controlled environment, partnering with major technology vendors to test and improve the security of real-world systems. That partner list is notable: AWS, Apple, Cisco, Microsoft, Google, NVIDIA, Palo Alto Networks, and others. These are not edge cases. This is the backbone of enterprise infrastructure.

Overhyped, but Not for the Reason You Think...

There is a growing narrative that Mythos represents a major leap forward in offensive cyber capability. That may be true. But it is also missing an important point. We have already been living in a world where frontier models can support autonomous hacking workflows.

Research from Google, Check Point, Anthropic, and others has already shown that LLMs can assist in zero-day discovery, automate exploit development, and chain together multi-step attack paths within agentic workflows.

So yes, Mythos may be more specialized and more capable.

But the threshold between “AI can assist in hacking” and “AI can autonomously hack” had already been crossed. What Glasswing really represents is not the beginning of this trend, but its acceleration and industrialization.

The Good News: Secure-by-Design Might Finally Happen

For years, cybersecurity leaders like Jen Easterly (former Director of CISA) have pushed a simple idea: we cannot keep bolting security on after the fact. Software needs to be secure by design. The problem has always been execution. Developers move fast. Complexity grows. Vulnerabilities slip through.

Glasswing changes that dynamic.

For the first time, vendors can use AI to continuously test their own systems at scale, with a level of depth that was previously impractical. Instead of waiting for attackers or external researchers to find issues, companies can proactively uncover and fix them.

If this works as intended, it could mark a real shift in the long term. Fewer unknown critical vulnerabilities, faster identification of systemic weaknesses, and more resilient core infrastructure. In that sense, the optimism is justified. This could genuinely improve the baseline security of the software that runs the world.

The Bad News: A Patch Tsunami is Coming

While the long-term outlook for a program like Glasswing is largely positive, it also presents significant short-term challenges for organizations. New software created in the post-Glasswing era may be secure by design, but the software currently running inside your business is not.

If Glasswing is as effective as advertised, it will likely surface a large number of previously unknown vulnerabilities across widely deployed platforms. And that means one thing: patches. A lot of them. Every major vendor in the program runs software that enterprises depend on every day. As new issues are discovered, those vendors will have to release fixes quickly.

At the same time, another trend is already underway. The gap between when a patch is released and when an exploit appears is shrinking. Attackers routinely reverse engineer patches to understand what changed and turn that into working exploits. And now, AI can accelerate patch analysis, automate reverse engineering workflows, and reduce the time needed to weaponize a vulnerability.
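To see why a released patch leaks so much information, consider a toy example: even a plain text diff of pre- and post-patch source points an analyst straight at the changed, and therefore previously vulnerable, code path. This is a minimal sketch using Python's standard difflib; the file contents are invented for illustration.

```python
import difflib

# Invented pre- and post-patch snippets for illustration only.
pre_patch = """\
def read_record(buf, length):
    data = buf[:length]
    return parse(data)
"""

post_patch = """\
def read_record(buf, length):
    if length > len(buf):  # bounds check added by the patch
        raise ValueError("length exceeds buffer")
    data = buf[:length]
    return parse(data)
"""

# A unified diff immediately highlights the added bounds check,
# telling an analyst exactly where the original flaw lived.
diff = list(difflib.unified_diff(
    pre_patch.splitlines(),
    post_patch.splitlines(),
    fromfile="pre_patch", tofile="post_patch", lineterm=""))

for line in diff:
    print(line)
```

Real-world patch diffing happens at the binary level with far more tooling, but the principle is the same: the fix itself is a map to the flaw, which is why shrinking patch-to-exploit timelines are so dangerous.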

So, in the short term, you end up with a dangerous dynamic:

  1. Glasswing uncovers more critical vulnerabilities

  2. Vendors release more patches

  3. Attackers use AI to rapidly turn those patches into exploits

  4. Organizations race to keep up

For enterprises with strong, automated patch management, this is manageable. For those without it, the risk increases significantly.
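To make the “race to keep up” concrete, the core of automated patch management is a continuous gap check: compare the installed version of each product against the vendor’s fixed release and flag anything behind. This is a minimal sketch; the product names, versions, and advisory data are all invented for illustration.

```python
def parse_version(v):
    """Turn a dotted version string like '2.4.10' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical installed-software inventory: product -> installed version.
inventory = {"acme-gateway": "2.4.9", "acme-agent": "1.7.2"}

# Hypothetical vendor advisories: product -> minimum fixed version.
advisories = {"acme-gateway": "2.4.10", "acme-agent": "1.7.0"}

def unpatched(inventory, advisories):
    """Return products still running a version below the fixed release."""
    return [
        product
        for product, fixed in advisories.items()
        if product in inventory
        and parse_version(inventory[product]) < parse_version(fixed)
    ]

print(unpatched(inventory, advisories))  # → ['acme-gateway']
```

In practice this logic lives inside vulnerability management platforms fed by real advisory feeds, but the speed of this loop, detect the gap, deploy the fix, is exactly what determines which side of the widening divide an organization lands on.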

A New Security Stack for the Emerging Threat

Even if Mythos itself remains gated, it is a signal of where things are going. Comparable capabilities will not stay exclusive forever. That means enterprise security teams need to adapt now.

A few priorities stand out.

  1. Automated patch management is no longer optional - Manual or slow patching processes will not keep up. Organizations need the ability to deploy critical updates quickly and consistently across their environments.

  2. Test like the attacker - Autonomous penetration testing and attack simulation will become essential. If attackers are using AI to find and exploit weaknesses, defenders need similar capabilities to stay ahead.

  3. Focus on real exposure, not theoretical risk - Continuous Threat Exposure Management (CTEM) helps prioritize what actually matters based on how systems can be attacked in practice. This becomes more important as the volume of vulnerabilities increases.

  4. Bring AI into DevSecOps - While Glasswing may address many of the flaws in your software infrastructure stack, it will not solve the vulnerabilities in your own custom code. Next-generation application security needs to go beyond traditional SAST. AI-driven analysis can help identify deeper issues, including business logic flaws often missed by legacy tools.

Final Thoughts

Glasswing and Mythos are important developments. But not because they introduce the idea of AI-powered hacking. That shift has already happened. What Glasswing signals is scale. AI is moving from assisting individual researchers to systematically testing the infrastructure enterprises rely on. That will make software more secure over time.

But in the short term, it will also increase pressure on organizations to move faster, patch faster, and think differently about risk.

The gap between those who adapt and those who do not is about to widen.

 Justin “Hutch” Hutchens is an Innovation Principal at Trace3 and a leading voice in cybersecurity, risk management, and artificial intelligence. He is the author of “The Language of Deception: Weaponizing Next Generation AI,” a book focused on the adversarial risks of emerging AI technology. He is also a co-host of The Cyber Cognition Podcast, a show that explores the frontier of technological advancement and seeks to understand how cutting-edge technologies will transform our world. Hutch is a veteran of the United States Air Force, holds a Master’s degree in information systems, and routinely speaks at seminars, universities, and major global technology conferences.