DeepSeek Unveils V4: New Open-Weights LLMs Challenge Giants on Price and Efficiency
Viqus Verdict: 7
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
High media buzz fueled by sheer size and competitive pricing, translating into genuinely high impact on the cost-effectiveness of operating large-scale LLMs.
Article Summary
DeepSeek has released two preview models, DeepSeek-V4-Pro and DeepSeek-V4-Flash, marking a significant new entry into the open-weights LLM market. The Pro model is notable for its scale, with 1.6T total parameters, placing it among the largest open models available. Crucially, pricing is the biggest selling point: DeepSeek V4 Flash costs an extremely competitive $0.14 per million input tokens, undercutting major competitors such as GPT-5.4 Nano. The lab also highlighted remarkable efficiency gains: even the massive Pro model shows only a minimal increase in FLOPs and KV cache size over its predecessor, suggesting architectural optimization for its longer context window (1 million tokens). While benchmarked performance is competitive with industry leaders, the analysis suggests it trails the absolute state-of-the-art models by roughly 3-6 months.
Key Points
- DeepSeek V4 establishes itself as a top-tier open-weights option with its Pro and Flash models, offering massive scale and superior efficiency.
- The most disruptive element is the pricing: the Flash model offers industry-leading low costs ($0.14/M input tokens) compared with major proprietary models.
- Technological advances focus on efficiency: high performance with only minimal growth in FLOPs and KV cache requirements, even while supporting 1 million tokens of context (see the back-of-envelope sketch after this list).
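
The scale of both claims is easy to sanity-check with a short calculation. The sketch below uses the quoted $0.14 per million input tokens from the article; the per-token KV-cache sizes are purely hypothetical illustrations (no such figures are published in the article), chosen only to show why per-token cache footprint dominates memory at a 1-million-token context.

```python
# Back-of-envelope sketch: serving cost at the quoted $0.14 per million input
# tokens, and why KV-cache size is the bottleneck at a 1M-token context window.
# The bytes-per-token values below are hypothetical, for illustration only.

def input_cost_usd(num_tokens: int, price_per_million: float = 0.14) -> float:
    """Cost of sending `num_tokens` input tokens at a flat per-million price."""
    return num_tokens / 1_000_000 * price_per_million

def kv_cache_gib(context_tokens: int, bytes_per_token_kv: int) -> float:
    """Total KV-cache size for one sequence, given bytes of K+V stored per token."""
    return context_tokens * bytes_per_token_kv / 1024**3

if __name__ == "__main__":
    # Filling the full 1M-token window once costs about 14 cents at the quoted rate.
    print(f"1M input tokens: ${input_cost_usd(1_000_000):.2f}")

    # Hypothetical comparison: 100 KiB of K/V per token needs ~95 GiB of cache at
    # 1M tokens, versus ~9.5 GiB at 10 KiB per token, which is why shrinking the
    # per-token KV footprint is what makes very long contexts practical to serve.
    for bytes_per_token in (100 * 1024, 10 * 1024):
        print(f"{bytes_per_token // 1024} KiB/token -> "
              f"{kv_cache_gib(1_000_000, bytes_per_token):.1f} GiB at 1M context")
```

In other words, at the quoted rate the token price of a full 1-million-token prompt is modest; the memory footprint of the KV cache is where architectural efficiency work pays off.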

