European Startup Multiverse Computing Optimizes LLMs with Compressed Models
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
While Multiverse’s advancements represent a noteworthy competitive step within the European LLM market, the core technology, model compression, is not fundamentally changing the trajectory of AI. The hype around this news is driven by the company’s rapid growth and the increasing attention on European AI innovation; sustained impact depends on continued advances and broader adoption.
Article Summary
Multiverse Computing is addressing the cost and complexity of deploying large language models with its CompactifAI compression technology. Inspired by quantum computing, the method delivers models with performance comparable to leading models such as OpenAI's gpt-oss-120B, but at a significantly reduced size and memory footprint. The company’s HyperNova 60B model, now available for free on Hugging Face, is roughly half the size of gpt-oss-120B and offers lower latency, with improved support for tool calling and agentic coding. The startup is attracting significant attention: a rumored €500 million funding round at a $1.5 billion valuation, collaborations with regional governments, and enterprise clients including Iberdrola, Bosch, and the Bank of Canada. Competitors like Mistral AI are also making waves in the European AI landscape.

Key Points
- Multiverse Computing is releasing compressed LLMs to reduce deployment costs.
- Their HyperNova 60B model offers comparable performance to gpt-oss-120B with a smaller footprint.
- The company has secured significant funding and enterprise clients, alongside partnerships with regional governments.
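The article does not disclose how CompactifAI works internally beyond describing it as quantum-inspired (the family of techniques Multiverse is known for involves tensor networks). As a purely illustrative analogue of the general idea of shrinking a model's weights, the sketch below uses truncated SVD, the simplest low-rank factorization, to compress a single hypothetical dense layer; the matrix sizes and rank are made up for the example and have no connection to HyperNova 60B.

```python
import numpy as np

# Illustrative sketch only: replace one dense weight matrix W with a
# low-rank factorization A @ B, which stores far fewer parameters.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # hypothetical layer weights

# Truncated SVD: keep only the top-`rank` singular components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 128
A = U[:, :rank] * s[:rank]   # shape (1024, 128)
B = Vt[:rank, :]             # shape (128, 1024)

original_params = W.size
compressed_params = A.size + B.size
print(f"compression ratio: {original_params / compressed_params:.1f}x")
```

At inference time the layer would compute `x @ A @ B` instead of `x @ W`, trading a small amount of accuracy (the discarded singular components) for a 4x reduction in stored parameters in this toy configuration. Production systems like CompactifAI use considerably more sophisticated decompositions and retraining to keep accuracy loss small.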

