Across the enterprise AI sector, preferences are shifting in ways that are beginning to consolidate the market around a small group of high-performing providers. Anthropic has now become the most widely used large language model vendor among enterprise customers, according to updated data from Menlo Ventures. Its share of enterprise usage has reached 32 percent. OpenAI, which once held twice that figure, now stands at 25 percent. Google’s models have seen a more gradual rise, holding 20 percent of the market. Meta, with its Llama models, has kept a foothold at 9 percent. Other providers like DeepSeek remain niche, contributing only marginally to real-world workloads.

The last eighteen months have marked a transition. Usage, once more evenly distributed, is now clustering around systems that have demonstrated consistently high performance across both general tasks and more specialized workloads like code generation. Anthropic’s rise has tracked this trend. It accelerated following the release of Claude 3.5 Sonnet in mid-2024, continued through Claude 3.7 Sonnet early this year, and became more pronounced after the introduction of Claude 4 and Claude Code in the second quarter of 2025.

Within the software development segment, the shift is even starker. Anthropic’s models account for 42 percent of usage tied to enterprise code tasks. OpenAI’s models, once dominant, now represent 21 percent of the same segment. The shift has followed a wave of adoption for newer tools and platforms built around Claude’s models, including AI-powered coding environments, internal dev agents, and low-code toolchains. Over time, Claude has displaced earlier-generation solutions like GitHub Copilot, with the broader category now valued at close to $2 billion.

While model cost remains a factor for some organizations, most enterprise users continue to prioritize performance. Model switching has become less common, with only 11 percent of teams reporting a change in vendor over the past twelve months. Most organizations upgrade within their existing ecosystem when newer versions become available. Of those surveyed, 66 percent adopted a more recent model from the same provider, while 23 percent made no changes at all. This behavior has led to a concentration of usage around the highest-performing models, regardless of pricing trends. Even as some models have dropped in price by a factor of ten, usage has shifted toward newer releases rather than older, more affordable alternatives.

The broader financial picture also reflects changing priorities. Enterprise spending on model APIs has increased substantially, growing from $3.5 billion at the end of 2023 to $8.4 billion by mid-2025. Most of this spend now goes toward inference, not training. Among startups, 74 percent of workloads are inference-driven. Among larger firms, that number has risen from 29 percent to 49 percent in the last twelve months. These changes point to an industry that is now focused less on experimentation and more on production-level use.

The movement toward agent-based architecture has also played a role. Rather than returning a single-shot answer, newer models are structured to reason through problems in multiple steps, often calling external tools like search engines, calculators, or code interpreters along the way. Anthropic has built much of its recent product line around this framework, using protocols like the Model Context Protocol (MCP) to coordinate tool use inside agent environments. That structure has expanded the role of models in day-to-day enterprise tasks, especially where reliability and repeatable logic are needed.
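The loop described above — a model reasoning in steps and calling tools until it can answer — can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: the model is a hard-coded stub, and the tool and message names are hypothetical.

```python
# Minimal sketch of an agent-style tool-use loop. Illustrative only:
# "stub_model" stands in for a real LLM call, and the message schema
# is a simplified assumption, not the MCP wire format.

def calculator(expression: str) -> str:
    """A simple tool the agent can invoke (demo-only eval)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_model(messages: list[dict]) -> dict:
    """Stand-in for a model: requests a tool once, then answers."""
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"tool": "calculator", "input": "6 * 7"}
    return {"answer": f"The result is {tool_msgs[-1]['content']}."}

def run_agent(question: str) -> str:
    """Drive the model/tool loop until a final answer appears."""
    messages = [{"role": "user", "content": question}]
    for _ in range(5):  # step cap keeps the loop from running forever
        reply = stub_model(messages)
        if "tool" in reply:
            # Execute the requested tool and feed the result back.
            output = TOOLS[reply["tool"]](reply["input"])
            messages.append({"role": "tool", "content": output})
        else:
            return reply["answer"]
    return "step limit reached"
```

The key design point is the feedback edge: tool output is appended to the conversation and the model is called again, which is what makes the behavior "agentic" rather than a single request-response exchange.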

The outlook for open-source models appears to have cooled in comparison. Six months ago, 19 percent of enterprise workloads relied on open systems. That figure now stands at 13 percent. Meta’s Llama models still account for the largest share within that category, though real-world results from the latest version (Llama 4) fell short of expectations. Several new entrants, including offerings from Alibaba, Zhipu AI, and ByteDance, have pushed forward technically. But because many of the top-performing open-source models come from Chinese firms, adoption has been slower in markets with procurement or regulatory constraints.

Even among startups, enthusiasm for open systems has started to fade. While initial pilots were often run using models like Llama or DeepSeek, most production systems have since moved to closed providers. Developers continue to cite performance gaps and deployment complexity as the main reasons behind that change.

As the current cycle unfolds, enterprise buyers appear to be aligning around a small number of platforms that consistently ship performant models, support long-term compatibility, and offer integration across toolchains. Anthropic, now leading by usage share, has reached this position through incremental improvement and timely releases rather than dramatic shifts. The market, which once saw rapid swings between providers, is starting to settle into a more stable structure where switching is infrequent and performance is the core differentiator.

There’s no single signal that defines this shift, but the pattern is clear. AI in the enterprise is no longer defined by research benchmarks or marketing milestones. It’s shaped instead by consistency, support, and operational outcomes. In that environment, the providers who lead tend to be the ones that quietly meet expectations without needing to overstate them.

Notes: This post was edited/created using GenAI tools.


By admin