
DeepSeek Unveils V4 Series: Open-Source Contender Aims to Challenge Proprietary Giants on Cost and Scale

Large Language Model · DeepSeek V4 · Mixture-of-Experts · Open-weight model · Artificial Intelligence · AI Benchmarks
April 24, 2026
Source: TechCrunch AI
Viqus Verdict: 7
Aggressive Open-Source Pricing Pressure
Media Hype 7/10
Real Impact 7/10

Article Summary

DeepSeek has released two preview versions of its new large language model, V4 Flash and V4 Pro, along with an accompanying R1 reasoning model. Both use a Mixture-of-Experts (MoE) architecture and offer 1-million-token context windows, enabling analysis of large documents and codebases. V4 Pro is touted as the industry's largest open-weight model at 1.6 trillion parameters, significantly outpacing competitors such as Moonshot AI and MiniMax. While DeepSeek claims performance comparable to, or surpassing, models like GPT-5.4 and Gemini 3.0 Pro, the company itself acknowledges a 3-6 month gap behind current state-of-the-art frontier models on pure knowledge tests. Crucially, DeepSeek positions itself as a major economic disruptor by offering models substantially more affordable than any commercial frontier offering.

Key Points

  • The V4 series uses a massive MoE architecture with 1M context windows, supporting large-scale enterprise tasks.
  • V4 Pro is marketed as the largest open-weight model (1.6T parameters), challenging industry leaders on scale.
  • DeepSeek leverages dramatic cost advantages, pricing its services below comparable commercial models like GPT-5.4 and Claude Haiku.

Why It Matters

This launch is highly significant for the open-source ecosystem. DeepSeek is not just releasing a powerful model; it is releasing a *product strategy* centered on massive scale and unparalleled affordability. By undercutting the cost of established proprietary leaders, DeepSeek lowers the barrier to entry for enterprise adoption, making advanced AI accessible to a broader range of companies. While the stated benchmark gap to the absolute SOTA models is a caveat, the combination of size, efficiency, and cost makes this a critical competitive move that forces proprietary players to re-evaluate their pricing and model release cycles.
