DeepSeek, a Chinese AI startup, has rolled out a new model called V3.1, a release that analysts say signals a shift in the global contest over advanced artificial intelligence. It ships in open-source form, a choice that separates it from its U.S. rivals.
The launch follows the company’s R1 model, introduced only months earlier, which drew attention for reaching U.S.-level results at a fraction of the cost. V3.1 pushes the numbers higher: 685 billion parameters and a 128,000-token context window, enough to take in roughly a full book in a single input.
Benchmark results add weight. On the Aider coding benchmark, V3.1 scored 71.6 percent, in line with leading models from OpenAI and Anthropic. Price is where the contrast sharpens: a coding run costs about one dollar with V3.1, while competitors charge close to seventy dollars for a similar task.
DeepSeek described the release as an improved build of its earlier V3 design. The WeChat statement gave few details, but the model was posted on Hugging Face, where developers worldwide quickly began downloading it, drawn by the benchmark numbers and the low cost.
Technical Features
V3.1 supports both BF16 and FP8 formats, letting developers adjust for different hardware. It also combines reasoning, dialogue, and coding in a single system, where earlier releases kept these in separate tracks. Early developer reviews suggest hidden special tokens may be tied to web search and internal reasoning, two areas that had been weak in older hybrid designs.
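For teams wanting to experiment with the open weights locally, loading follows the standard Hugging Face pattern. The sketch below is illustrative only: it assumes the published repository ID and a multi-GPU node with enough memory for a checkpoint of this size, and the FP8 variant would load the same way with different weight files.

```python
# Minimal sketch: loading the open V3.1 weights in BF16 via Hugging Face.
# The repo ID and hardware assumptions are ours, not confirmed by DeepSeek's post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-V3.1"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # BF16 format; FP8 weights are also published
    device_map="auto",           # shard the ~685B parameters across available GPUs
    trust_remote_code=True,
)

prompt = "Write a function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```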
The model’s speed and efficiency make it more practical for real-time settings, where systems built purely for step-by-step reasoning often lag. For enterprises deploying at scale, the cost gap could run into the millions.
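A rough back-of-envelope calculation shows how that gap compounds. The usage volume below is hypothetical, chosen only to illustrate the scale implied by the per-run prices reported above.

```python
# Hypothetical back-of-envelope: annual savings at the reported per-run prices.
COST_V31 = 1.0           # ~$1 per coding run with V3.1 (reported)
COST_RIVAL = 70.0        # ~$70 per comparable run with rivals (reported)
RUNS_PER_MONTH = 50_000  # assumed enterprise volume, purely illustrative

annual_savings = (COST_RIVAL - COST_V31) * RUNS_PER_MONTH * 12
print(f"${annual_savings:,.0f} saved per year")  # -> $41,400,000 saved per year
```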
Positioning Against U.S. Rivals
The timing was deliberate. V3.1 arrived only weeks after OpenAI announced GPT-5 and Anthropic brought out Claude 4, both promoted as state of the art. DeepSeek’s move sharpened the contrast: U.S. firms keep their top systems behind paywalls and access rules, while V3.1 is free to download and adapt.
The difference points to two strategies. American firms guard their most advanced designs; some Chinese developers take another path, releasing high-end models as public goods. DeepSeek has also consolidated multiple product lines under the single V3.1 release to reduce clutter.
Global Response
Within hours, V3.1 climbed Hugging Face’s trending chart. Developers across Asia, Europe, and North America began running tests and examining the build, and notes on its structure and performance spread quickly. The activity shows how open-source AI advances through wide collaboration.
The situation recalls earlier periods in software when open platforms began to edge out closed ones. Broader access makes it harder for one company or one country to hold a lasting lead.
Future Outlook
Attention now turns to the R2 model, which has been delayed. Local reporting links the pause to unresolved technical issues. For now, V3.1 shows that advanced performance does not always require billion-dollar budgets or strict limits.
Cost savings are only part of the story. Licensing fees disappear, but the model itself is heavy, with weights approaching 700 gigabytes, so many companies may rely on hosted versions rather than running it in-house. For developers in the U.S., another question grows sharper: can proprietary systems continue to justify premium prices if rivals match them at lower cost?
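For teams that take the hosted route, access looks like any OpenAI-compatible endpoint. The sketch below assumes DeepSeek’s public API and its "deepseek-chat" model name; any third-party provider hosting the open weights would work the same way with a different base URL.

```python
# Minimal sketch: calling a hosted DeepSeek endpoint instead of self-hosting 700 GB
# of weights. Assumes DeepSeek's OpenAI-compatible API; swap in any provider
# that hosts the open V3.1 weights.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",               # placeholder credential
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Summarize this changelog in 3 bullets."}],
)
print(response.choices[0].message.content)
```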
Shifting Competition
The release of V3.1 points to a change in the contest: strength no longer rests on raw capability alone, and access is now part of the calculation. Smaller research teams appear capable of shaping the race, not just the largest U.S. labs. Reports already suggest DeepSeek has begun work on V4.
