AI is writing more of the world’s code, and with it, more of the world’s mistakes. A new study says nearly one in five companies have suffered a serious security breach that began with code written by AI tools. The same report shows that almost seven in ten companies have already found vulnerabilities traced to machine-written software.

The research, published[1] by Aikido Security as its State of AI in Security & Development 2026 report, paints a picture of an industry trying to move faster than its safety net. The survey covered 450 developers, security leaders, and application engineers across the U.S. and Europe, capturing how the use of AI in programming has outpaced the rules meant to keep it safe.

Speed Comes with Loose Ends

The study found that 24% of production code worldwide now comes from AI systems. In the U.S., the figure climbs to 29%. In Europe, it’s closer to 21%. This shift has lifted productivity, but it has also created new security headaches.

About one in five companies reported a serious security breach linked to AI code. Another 49% said they’d seen smaller issues or didn’t realize where the problem came from until much later. Most agreed that the lack of clear oversight made it difficult to assign responsibility when AI-generated work introduced bugs or security gaps.

When asked who would take the blame, respondents often pointed in several directions at once: 53% said the security team, 45% the developer who used the AI, and 42% whoever merged the code. The report says that uncertainty slows down fixes, stretches investigations, and leaves gaps open for longer than anyone is comfortable admitting.

Two Worlds, Two Attitudes

Companies in Europe are more cautious than those in the U.S., which helps explain their lower incident rates. Only one in five European firms reported a major breach caused by AI-generated code, compared with 43% in the U.S.

Aikido’s analysts say the gap reflects how each region approaches compliance. European firms are bound by tighter data and software regulations, while American developers lean harder on automation and are more likely to bypass safety checks when deadlines tighten.

The report also shows that U.S. teams are more proactive in tracking AI-generated content. Nearly six in ten said they log and review every line of AI code, compared with just over a third in Europe. That logging gives U.S. firms more visibility into what their AI tools produce, even if their heavier reliance on automation still leaves them more exposed.

Too Many Tools, Too Little Focus

Another finding centers on tool sprawl. Teams juggling multiple security products resolve incidents more slowly, not faster: companies using one or two security tools fixed critical flaws in about three days, while those using five or more took nearly eight.

False alerts made matters worse. Almost every engineer surveyed said they lost hours each week sorting through warnings that turned out to be harmless. The report estimated that wasted time costs big firms millions of dollars in lost productivity each year. Some engineers admitted to turning off scanners or bypassing checks just to get code shipped, a move that often adds hidden risks later.

One respondent described the situation as “too many alarms and not enough clarity,” a sentiment echoed across both continents.

Humans Still Hold the Line

Even as AI takes on more work, nearly everyone agrees that human review still matters. Ninety-six percent of respondents believe AI will eventually write secure code, but most expect it will take at least five more years. Only one in five think it will happen without people checking the results.

Companies also depend heavily on experienced security engineers. A quarter of CISOs said losing one skilled team member could lead directly to a breach. Many are now trying to make security tools easier to use and less noisy, giving developers room to focus on the real problems instead of chasing false positives.

Despite the growing pains, optimism remains strong. Nine in ten firms expect AI will soon handle most penetration testing, and nearly eight in ten already use AI to help repair vulnerabilities. The difference between optimism and reality, researchers said, lies in how companies combine automation with human oversight.

Balancing Speed and Safety

The report ends with a familiar warning. The faster AI writes code, the faster mistakes can spread. Security still depends on developers who understand what the AI is doing and who take ownership of the results.

In plain terms, Aikido’s findings suggest that the tools are racing ahead, but the guardrails have yet to catch up. For now, the smartest move might be slowing down long enough to double-check what the machines have built.

Notes: This post was edited/created using GenAI tools.
