Chinese AI developer DeepSeek will cut its API prices by more than 50% following the launch of its new experimental language model, DeepSeek-V3.2-Exp, which the company says is cheaper to train and better at handling long sequences of text than its predecessors.

In a post on developer platform Hugging Face, the Hangzhou-based company described the model as an “intermediate step toward our next-generation architecture,” hinting at a more significant release on the horizon. That architecture is expected to be DeepSeek’s most important product launch since its V3 and R1 models rattled Silicon Valley and global tech investors earlier this year.

The experimental model incorporates a mechanism called DeepSeek Sparse Attention, designed to reduce the computing cost of processing long sequences while maintaining performance on certain tasks.
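To give a sense of the general idea behind sparse attention (this is a generic toy sketch, not DeepSeek's actual implementation, whose details the article does not cover): instead of letting every token attend to every other token, each query attends only to a small subset of keys, which is what makes long sequences cheaper to process at scale.

```python
import numpy as np

def sparse_attention(q, k, v, top_k=4):
    """Toy top-k sparse attention: each query attends only to its
    top_k highest-scoring keys rather than all n keys.
    Illustrative only -- not DeepSeek Sparse Attention itself."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    # NOTE: this naive version still computes the full n x n score
    # matrix for clarity; real sparse-attention kernels avoid that,
    # which is where the compute savings actually come from.
    scores = q @ k.T * scale
    # Keep only the top_k scores per query row; mask the rest out.
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    # Softmax over the surviving entries.
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Small demo: 16 tokens, 8-dimensional heads.
n, d = 16, 8
rng = np.random.default_rng(0)
q = rng.standard_normal((n, d))
k = rng.standard_normal((n, d))
v = rng.standard_normal((n, d))
out = sparse_attention(q, k, v, top_k=4)
```

Each output row here is a mixture of at most four value vectors instead of all sixteen; the same principle, applied with efficient kernels at much larger scale, is what reduces the cost of long-context inference.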

While DeepSeek’s next-generation architecture is unlikely to spark the same immediate market upheaval as its January breakthroughs, analysts say it could still put pressure on both domestic competitors such as Alibaba’s Qwen and international players like OpenAI, provided the company can once again demonstrate strong performance at a fraction of the cost.

DeepSeek’s strategy hinges on delivering high capability without the massive expenses usually associated with model training, a formula that made its earlier releases some of the most closely watched developments in the global AI race.
