
Chinese Startup MiniMax Shocks AI with Ultra-Cheap Language Model

Artificial Intelligence, Language Model, Open Source, AI Agents, MiniMax, Tech Industry, Cost Efficiency
February 12, 2026
Source: VentureBeat AI
Viqus Verdict: 9
Price is Right
Media Hype 7/10
Real Impact 9/10

Article Summary

MiniMax has unveiled M2.5, a language model it claims performs competitively with industry leaders such as Claude Opus 4.6 at a fraction of the cost. The company's key innovation is its Mixture of Experts (MoE) architecture, which preserves reasoning depth while using only 10 billion parameters, yielding significant efficiency gains. Combined with the "Forge" RL framework and the CISPO optimization technique, this allows M2.5 to deliver speeds of up to 100 tokens per second while dramatically reducing operational costs. MiniMax is betting that accessibility will drive adoption, shifting the focus from merely "smart" models to affordable, practical AI "agents".

Across benchmarks including SWE-Bench, BrowseComp, and Multi-SWE-Bench, M2.5 matches or exceeds existing models, particularly in agentic tool use and financial modeling. MiniMax is offering two versions, M2.5-Lightning and the standard M2.5, tuned for speed and cost-effectiveness respectively. The company is targeting enterprise users as an alternative to expensive proprietary models, and it is already deploying the model in its own operations, where it handles 30% of tasks and 80% of new code generation. The release signals a broader shift in the AI landscape, where affordability is becoming a critical factor in market success.
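
For readers unfamiliar with why MoE cuts costs: only a few "experts" run for each token, so per-token compute scales with the active parameters rather than the full model size. The sketch below shows minimal top-k expert routing in PyTorch; the layer sizes, expert count, and routing scheme are illustrative assumptions, not MiniMax's actual M2.5 architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Illustrative top-k Mixture-of-Experts feed-forward layer.

    Only k experts run per token, so the active parameter count is a small
    fraction of the total -- the mechanism behind "10 billion active
    parameters" style claims. Sizes and k are arbitrary here.
    """
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        gate_logits = self.router(x)                        # (tokens, num_experts)
        weights, indices = gate_logits.topk(self.k, dim=-1) # pick k experts per token
        weights = F.softmax(weights, dim=-1)                # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = TopKMoELayer()
tokens = torch.randn(4, 512)
print(layer(tokens).shape)  # torch.Size([4, 512])
```

With 8 experts and k=2 in this toy configuration, only a quarter of the expert parameters participate in any single forward pass; that sparsity, applied at far larger scale, is the lever behind the efficiency claims above.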

Key Points

  • MiniMax's M2.5 model achieves performance rivaling leading models like Claude Opus 4.6 while significantly reducing the cost of AI usage.
  • The model utilizes a Mixture of Experts (MoE) architecture and a proprietary RL framework (Forge) to achieve high efficiency and performance.
  • MiniMax is offering M2.5 at a price point that could make high-volume AI tasks accessible to businesses, potentially disrupting the current AI market.

Why It Matters

The release of MiniMax's M2.5 model is a pivotal moment for the artificial intelligence industry. It challenges the prevailing notion that cutting-edge AI is solely the domain of large corporations with immense computational resources. By dramatically lowering the cost barrier, MiniMax is broadening access to advanced AI capabilities, potentially accelerating innovation across industries. This shift could reshape how businesses use AI, moving from specialized applications to broader, more automated workflows. For professional observers, it signals a move beyond measuring raw "intelligence" toward measuring the cost-effectiveness of AI solutions, with direct consequences for the strategies and investments of tech leaders globally.
