
DeepSeek, the Hangzhou-based startup that shook stock markets earlier this year, is back with a new model. Although released quietly, DeepSeek appears to be done with its R-series models and is presenting DeepSeek V3.1 as their successor.
What Is DeepSeek V3.1?
At its core, DeepSeek V3.1 is a colossal model, boasting 685 billion parameters and a 128,000-token context window, a staggering leap that allows it to process entire books in a single input.
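To put that 128,000-token window in perspective, a quick back-of-the-envelope calculation helps. The words-per-token and words-per-page figures below are common heuristics for English prose, not exact properties of DeepSeek's tokenizer:

```python
# Rough estimate of how much text a 128K-token context window can hold.
# 0.75 words/token and 300 words/page are heuristics, not exact values.
CONTEXT_TOKENS = 128_000
WORDS_PER_TOKEN = 0.75   # typical ratio for English prose
WORDS_PER_PAGE = 300     # typical paperback page

words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)   # about 96,000 words
pages = words // WORDS_PER_PAGE                  # about 320 pages

print(f"~{words:,} words, roughly a {pages}-page book")
```

That is comfortably the length of a full novel, which is what makes the "entire books in a single input" claim plausible.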
The system is built on a Mixture-of-Experts (MoE) architecture, which activates around 37 billion parameters per token. This ensures that the model achieves exceptional efficiency without the usual ballooning costs of inference, making it a serious contender against AI giants who rely on closed, resource-heavy systems.
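The efficiency gain of an MoE architecture comes from routing each token through only a handful of experts rather than the whole network. The NumPy sketch below illustrates top-k gating in miniature; the expert count, value of k, and router design here are illustrative assumptions, not DeepSeek's actual internals:

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_forward(x, experts, gate_w, k=2):
    """Route a token vector x to its top-k experts and mix their outputs.

    Only k experts run per token, so per-token compute scales with k,
    not with the total number of experts -- the core MoE efficiency idea.
    """
    logits = x @ gate_w                       # one routing score per expert
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the chosen k
    return sum(w * experts[i](x) for w, i in zip(weights, top))

d, n_experts = 8, 16
# Each "expert" is just a linear map in this toy version.
expert_mats = [rng.standard_normal((d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: x @ W for W in expert_mats]
gate_w = rng.standard_normal((d, n_experts))

y = moe_forward(rng.standard_normal(d), experts, gate_w, k=2)
print(y.shape)
```

In DeepSeek V3.1 the same principle applies at scale: of the 685 billion total parameters, only around 37 billion participate in processing any given token.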
What’s New in V3.1?
What makes V3.1 even more revolutionary is its hybrid reasoning structure, a blend of “thinking” and “non-thinking” modes that enables it to process complex logic while maintaining coherent, conversational responses. Unlike earlier generations, which often relied on separate models for coding, chat, and reasoning, DeepSeek V3.1 integrates these functions seamlessly. This versatility marks a significant evolution in open-source AI capabilities.
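In practice, hybrid-mode models expose the mode switch through the chat template rather than through separate checkpoints. The toy function below sketches that idea; the tag names are hypothetical stand-ins, as the real template ships with the model's files on Hugging Face:

```python
def build_prompt(user_msg: str, thinking: bool) -> str:
    """Toy illustration of a hybrid-mode chat template.

    Tag names here are hypothetical. In 'thinking' mode the prompt invites
    the model to emit a reasoning trace before answering; in 'non-thinking'
    mode the trace is pre-closed so the model answers directly.
    """
    header = "<|user|>" + user_msg + "<|assistant|>"
    # Pre-filling an open or closed think tag steers the model's mode.
    return header + ("<think>" if thinking else "</think>")

print(build_prompt("What is 2+2?", thinking=True))
print(build_prompt("What is 2+2?", thinking=False))
```

The appeal of this design is operational: one deployment serves both quick chat turns and slow, deliberate reasoning, chosen per request.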
On the Aider coding test, DeepSeek V3.1 scored an impressive 71.6%, narrowly outperforming Anthropic’s Claude Opus 4. Even more striking is its efficiency, with reports suggesting it can complete coding tasks up to 68 times cheaper than its rivals.
It is Free!
In keeping with its mission of accessibility, DeepSeek has released the model under the permissive MIT license, allowing developers worldwide to freely download, modify, and build upon it. The full weights are already available on Hugging Face, underscoring the startup’s commitment to transparent and democratic AI innovation. DeepSeek has also confirmed that API pricing will be adjusted beginning September 6, 2025, making integrations even more affordable and further boosting its appeal for businesses and app developers.
In some sense, DeepSeek V3.1 still challenges the dominance of U.S. tech firms and reshapes the balance of global AI power. For smaller research teams, startups, and independent developers, the model’s affordability and accessibility are an opportunity to compete at the highest level.
But with such openness also come risks. While the MIT license and transparent release foster community-led improvements and safety checks, the availability of such a powerful system could also raise concerns about misuse and lack of oversight.