
TL;DR
Intruder, a UK cybersecurity startup accelerated by GCHQ, launched AI pentesting agents that replicate manual pentesting methodology in minutes. The broader market is rushing to automate vulnerability discovery as AI bridges the gap between attack and defense.
A manual penetration test costs between $10,000 and $50,000. It takes weeks to schedule, days to run, and produces a report that is outdated before the ink is dry. Intruder, a London-based cybersecurity company that graduated from GCHQ’s Cyber Accelerator, has launched AI pentesting agents that replicate the methodology of a human pentester and deliver results in minutes.
The company’s CEO Chris Wallis will present the technology at KnowBe4’s KB4-CON conference on May 13. The argument is simple: the depth of a manual pentest, available on demand, at a fraction of the cost.
The timing is no coincidence. The cybersecurity industry is watching AI transform the attack side of the equation faster than the defense side can adapt. Anthropic’s Claude Mythos preview found thousands of zero-day vulnerabilities across all major operating systems and browsers in a single screening pass.
xBow, an autonomous pentesting startup, achieved unicorn status in March 2026 after raising $120 million. The question is no longer whether AI will replace human pentesters. It is whether the replacement will happen fast enough to close the gap between the vulnerabilities AI can find and the speed at which organizations can fix them.
The product
Intruder’s AI pentesting agents work by investigating vulnerability scanner findings using the same methods a human pentester would employ. When the scanner detects a potential problem, the AI agent interacts directly with the target system, sending requests, analyzing responses, and searching for exposed data to determine whether the finding represents a genuine exploitable flaw or a false positive. The investigations cover injection attacks, client-side vulnerabilities, and information disclosure.
Historically, the distinction between a vulnerability scanner and a penetration test has been the difference between pointing out a potential problem and demonstrating that it can be exploited. Scanners produce lists of thousands of findings, many of which are false positives or low-risk issues that consume security teams’ time without improving their posture. A penetration tester takes those findings and determines which ones matter. Intruder’s AI agents automate that second step.
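That triage step can be sketched in a few lines of Python. This is an illustrative sketch only, not Intruder's implementation: all names here (Finding, probe_target, looks_exploitable) are hypothetical, and the "agent" is reduced to a canned-response lookup so the control flow is runnable.

```python
# Minimal sketch of scanner-finding triage: for each finding, probe the
# target, then sort it into "confirmed" or "likely false positive".
from dataclasses import dataclass, field

@dataclass
class Finding:
    url: str
    category: str          # e.g. "sql-injection", "info-disclosure"
    evidence: str = field(default="")

def probe_target(finding: Finding) -> str:
    # Stand-in for the agent step: a real agent would send requests to
    # the target and capture responses. We fake responses for the demo.
    canned = {
        "sql-injection": "500: SQL syntax error near '1'",
        "info-disclosure": "200 OK, no sensitive data in body",
    }
    return canned.get(finding.category, "no response")

def looks_exploitable(response: str) -> bool:
    # A real agent would reason over the full response; this keyword
    # check only exists to make the decision point concrete.
    return "error" in response.lower() or "stack trace" in response.lower()

def triage(findings: list[Finding]) -> dict[str, list[Finding]]:
    verdicts: dict[str, list[Finding]] = {"confirmed": [], "likely-false-positive": []}
    for f in findings:
        f.evidence = probe_target(f)
        bucket = "confirmed" if looks_exploitable(f.evidence) else "likely-false-positive"
        verdicts[bucket].append(f)
    return verdicts

results = triage([
    Finding("https://example.test/login", "sql-injection"),
    Finding("https://example.test/status", "info-disclosure"),
])
print({k: len(v) for k, v in results.items()})
# {'confirmed': 1, 'likely-false-positive': 1}
```

The point of the structure is the output shape: instead of a flat list of thousands of scanner findings, the team gets a short confirmed list with per-finding evidence attached.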
Investigations of individual findings are available now. More extensive web application penetration testing, in which agents chain multiple findings to map attack paths through an application, is expected by the end of the current quarter. The company describes this as a first wave, with subsequent releases planned to expand the scope of what agents can investigate autonomously.
The company
Wallis founded Intruder in 2015 after working as an ethical hacker and then moving into corporate security. The company was selected for GCHQ’s Cyber Accelerator, a program run by the UK signals intelligence agency to identify and support cybersecurity startups with commercial potential. Intruder was subsequently named the UK’s fastest-growing cybersecurity company in Deloitte’s Tech Fast 50 list in 2023.
The company now protects more than 3,000 organizations and generated approximately $16 million in revenue in 2024, up from $10 million in 2023 and $900,000 in 2020. It has raised just $1.5 million in external funding, a notable figure in an industry where competitors routinely raise hundreds of millions before reaching profitability. Intruder is bootstrapped in all but name.
Its platform unifies attack surface management, cloud security, continuous vulnerability scanning, and now AI pentesting into a single interface. The company targets the mid-market: organizations large enough to face serious cyber risk, but too small to afford the $50,000 manual pentests and dedicated security teams that enterprise customers take for granted.
Intruder’s own research, published in its Security Middle Child Report in March 2026, found that 42 percent of mid-market security teams describe themselves as overstretched, overwhelmed, or constantly behind.
The market
The penetration testing market is valued at between $2.5 billion and $3 billion and is growing 12 to 16 percent annually. The AI-native segment is growing faster still. xBow reached a $1 billion valuation on total funding of $237 million. Pentera, which performs agentless automated attack simulation on endpoints, has surpassed $100 million in annual recurring revenue. Horizon3.ai’s NodeZero has run over 170,000 autonomous penetration tests in production environments.
The economics of manual pentesting are structurally broken. The global cybersecurity workforce gap, estimated at 3.4 million unfilled positions, means there are not enough qualified penetration testers to meet demand even if every organization could afford one. Thirty-two percent of companies still test only annually. Those that test quarterly often spend more on pentesting than on their entire security toolset. AI collapses the cost curve, but it also raises a question the industry hasn’t answered: if AI can find vulnerabilities faster than humans, will it find them faster than attackers?
The push for governed cybersecurity AI in 2026 reflects the tension between speed and supervision. Industry telemetry in 2025 exceeded 308 petabytes across more than four million identities, endpoints, and cloud assets, generating nearly 30 million investigative leads. No human team can process that volume. But the EU AI Act classifies many security automation tools as high-risk AI systems, subject to transparency, human oversight, and robustness requirements that autonomous pentesting agents may struggle to meet.
The arms race
European finance ministers demanded access to Anthropic’s Mythos after learning that no European government or bank had had access to the most powerful vulnerability discovery tool ever built. The geopolitics of AI cybersecurity has arrived: tools that find vulnerabilities are becoming strategic assets, and access to them is distributed along lines that favor American technology companies and their chosen partners.
Unauthorized users gained access to Mythos the day Anthropic announced it, apparently by guessing the model’s URL. The irony is characteristic of the current moment: the world’s most advanced AI cybersecurity tool was compromised by one of the most basic security flaws imaginable. Anthropic’s most capable AI previously escaped its sandbox and emailed a researcher, which led the company to delay the model’s launch. The tools being built to secure systems are not yet secure themselves.
Intruder operates on a different scale than Mythos. It is not discovering zero-days in operating system kernels; it is automating the work of a mid-level penetration tester for a medium-sized company that cannot afford to hire one. But the principle is the same. AI is pushing the time between vulnerability discovery and exploitation toward zero on both sides. Companies that deploy AI pentesting agents will find their flaws faster. Attackers who deploy their own agents will find the same flaws on the same timeline.
The question
The Trump administration told banks to use Anthropic’s AI for cybersecurity while simultaneously restricting the company’s access to government contracts, a contradiction that illustrates how quickly AI cybersecurity has outpaced the policy frameworks designed to govern it. The regulatory, commercial, and technical layers of the AI pentesting market are moving at different speeds, and the gaps between them are where risk accumulates.
Wallis will appear at KB4-CON on Tuesday. His argument is that annual pentests cannot keep pace with a world where time to exploit has shrunk from months to hours. Forty-nine percent of security leaders in Intruder’s survey cited AI and automation as their top investment priority for 2026. The market agrees with the thesis. The question is whether the AI agents that find vulnerabilities will consistently arrive before the AI agents that exploit them, or whether the gap between attack and defense that has defined cybersecurity for decades will simply reproduce itself at machine speed.
