The use of autonomous AI tools in software engineering is accelerating rapidly, and new data suggests that companies are no longer limiting artificial intelligence to passive assistance. Instead, more engineering teams are deploying agentic AI systems: tools capable of carrying out tasks on their own, without human confirmation at every step.

Between December 2024 and May 2025, data from a sample of more than 400 companies on Jellyfish’s engineering management platform revealed a major shift in how these tools are used. At the start of the year, agentic AI tools had been adopted by just over half of organizations. By May, that figure had grown sharply, with 82% of firms using the tools in day-to-day engineering work.

These tools go beyond offering suggestions or generating small code snippets. They take direct action in the development workflow: writing code, opening code reviews, submitting commits, and leaving review feedback without prompting. The change marks a key transition from interactive systems that rely on constant human oversight to more autonomous systems that operate with minimal supervision.

Among the many entry points for AI adoption, automated code reviews have emerged as the most common. That’s partly because they present fewer risks and allow teams to experiment without committing to full workflow automation. In this area, the numbers tell a clear story. Between January and May, the share of companies using AI-powered code reviews grew from 39% to 76%. For some early adopters, these tools now handle as much as 80% of all code reviews.

This shift has been accompanied by small but measurable efficiency gains. Average cycle times for reviews completed by AI were modestly shorter in the second quarter of 2025, suggesting these tools may already be contributing to higher throughput on some teams. Usage of agentic code review tools also rose by 11% among early adopters over the same period.

Several tools have become favorites among engineering teams, especially for reviewing code. GitHub Copilot Reviewer, Cursor BugBot, and CodeRabbit remain widely used, while platforms like Graphite and Greptile are becoming more popular. Bito.ai has also emerged as a new player in this space.

Still, while AI has firmly established itself in the review phase of software development, a smaller but growing group of companies is now exploring fully agentic coding workflows, in which agents not only check code but also write it and submit it to production pipelines. Although the overall share of companies testing these workflows remains low, it has increased significantly: in January, fewer than 2% of companies had any such pilot in place, but by May, nearly 8% had begun testing autonomous code writing and submission.

The expansion of this category is being helped along by tools like Claude Code, Devin, and Codex, which some teams are already using in internal workflows. Adoption of this kind of fully autonomous tooling rose 4.5-fold in just five months, reflecting a growing readiness among some firms to delegate entire programming tasks to AI systems.

This steady move toward greater autonomy shows how quickly engineering organizations are adapting their development processes to integrate more capable AI. With most teams now past the experimentation phase, and more pushing into deeper automation, the shift toward AI-native workflows appears to be underway.

