Multiverse Computing Secures $215M to Revolutionize LLM Compression Technology

By futureTEKnow | Editorial Team

The world of large language models (LLMs) is undergoing a major transformation, thanks to a breakthrough from Multiverse Computing, a Spanish deep tech startup. The company has just closed a massive $215 million funding round, propelling its mission to make LLMs dramatically more efficient and accessible.

The Series B round was led by Bullhound Capital, joined by prominent investors including HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Quantonation, Toshiba, and Capital Riesgo de Euskadi – Grupo SPRI.

Quantum-Inspired Compression: The CompactifAI Edge

At the heart of Multiverse Computing's innovation is CompactifAI, a technology that leverages quantum-inspired tensor networks (TNs) to compress LLMs by up to 95% while maintaining high levels of accuracy. This approach goes far beyond traditional methods like quantization and pruning, which often sacrifice performance. Instead, CompactifAI uses advanced algorithms to identify and retain only the most relevant correlations in the neural network, replacing the original weight matrices with compact matrix product operators (MPOs).
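
While the company has not published CompactifAI's internals, the core idea of an MPO factorization can be sketched in a few lines of NumPy. The snippet below is a minimal illustration, not Multiverse Computing's actual code: it reshapes a dense weight matrix into a higher-order tensor and splits it site by site with truncated SVDs. The function name, index splits, and truncation scheme are all our own assumptions.

```python
import numpy as np

def mpo_factorize(W, in_dims, out_dims, bond_dim):
    """Factor a weight matrix W into an MPO (tensor train) via repeated
    truncated SVDs. in_dims/out_dims split the row/column indices, e.g.
    a 64x64 matrix with in_dims=(8, 8) and out_dims=(8, 8)."""
    n = len(in_dims)
    T = W.reshape(*in_dims, *out_dims)
    # Interleave the indices as (in_1, out_1, in_2, out_2, ...), so each
    # MPO site owns one (input, output) pair.
    perm = [i for pair in zip(range(n), range(n, 2 * n)) for i in pair]
    T = T.transpose(perm)

    cores, bond = [], 1
    for k in range(n - 1):
        rows = bond * in_dims[k] * out_dims[k]
        U, S, Vt = np.linalg.svd(T.reshape(rows, -1), full_matrices=False)
        chi = min(bond_dim, len(S))            # truncate to the bond dimension
        cores.append(U[:, :chi].reshape(bond, in_dims[k], out_dims[k], chi))
        T = np.diag(S[:chi]) @ Vt[:chi]        # pass the remainder to the next site
        bond = chi
    cores.append(T.reshape(bond, in_dims[-1], out_dims[-1], 1))
    return cores

# A toy 64x64 layer compressed into a two-site MPO:
W = np.random.default_rng(1).normal(size=(64, 64))
cores = mpo_factorize(W, in_dims=(8, 8), out_dims=(8, 8), bond_dim=16)
print([c.shape for c in cores])  # [(1, 8, 8, 16), (16, 8, 8, 1)]
```

Contracting the cores back together approximates the original matrix; the smaller the bond dimension, the fewer parameters survive and the coarser the approximation.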

The result? LLMs that are not only smaller but also faster and more energy-efficient. These compressed models can run on everything from cloud infrastructure to edge devices like smartphones, laptops, and even single-board computers such as the Raspberry Pi.

How Does It Work?

The process starts with layer sensitivity profiling to pinpoint which parts of the model can be compressed without losing critical information. By substituting the original weight matrices in self-attention and multi-layer perceptron layers with MPOs, the model’s size is drastically reduced. The bond dimension of the tensor network controls the degree of compression, allowing for a fine balance between model size and accuracy.
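
As a rough picture of what sensitivity profiling might involve, the sketch below scores each layer by how much of its weight matrix's spectral weight a trial bond dimension would discard, then ranks the layers. The layer names, sizes, and error proxy are invented for illustration; this is not Multiverse Computing's published method.

```python
import numpy as np

def truncation_error(W, bond_dim):
    """Relative error from keeping only the top `bond_dim` singular values,
    a cheap proxy for how much a layer would suffer under compression."""
    S = np.linalg.svd(W, compute_uv=False)
    tail = np.sqrt(np.sum(S[bond_dim:] ** 2))
    return tail / np.linalg.norm(S)

# Hypothetical profiling pass over a handful of made-up layers.
rng = np.random.default_rng(0)
layers = {f"block{i}.mlp.up_proj": rng.normal(size=(512, 512)) for i in range(4)}
scores = {name: truncation_error(W, bond_dim=64) for name, W in layers.items()}
for name, err in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: relative truncation error {err:.3f}")
```

Layers with a small truncation error are safe to compress aggressively; the most sensitive ones can be kept at a higher bond dimension or skipped entirely.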

After compression, a brief retraining phase helps restore any lost accuracy, ensuring the models remain robust and reliable. In benchmark tests, CompactifAI-compressed models retained up to 90% of the original accuracy while shrinking to just 15-30% of their initial size.
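
That retraining phase can be pictured as a short fine-tuning loop over a small calibration set. The PyTorch sketch below is a generic illustration under that assumption; `heal`, `calib_loader`, and the model's call signature are hypothetical, not part of any published CompactifAI API.

```python
import torch
from torch import nn

def heal(compressed_model: nn.Module, calib_loader, steps=500, lr=1e-5):
    """Briefly fine-tune a compressed model so the factorized layers
    recover accuracy lost during truncation."""
    opt = torch.optim.AdamW(compressed_model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    compressed_model.train()
    for step, (input_ids, labels) in enumerate(calib_loader):
        if step >= steps:
            break
        logits = compressed_model(input_ids)   # assumes the model returns logits
        loss = loss_fn(logits.view(-1, logits.size(-1)), labels.view(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return compressed_model
```

The point of keeping this phase brief is that only a small accuracy gap needs closing, so it costs far less than training from scratch.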

Why Does This Matter?

LLMs are notorious for their massive computational and energy demands. Running these models typically requires specialized cloud infrastructure, driving up costs and limiting accessibility. With CompactifAI, Multiverse Computing is making it possible to deploy powerful AI models on a much wider range of hardware, slashing inference costs by 50-80% and reducing energy consumption.
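
The scale of the savings is easy to sanity-check with back-of-the-envelope arithmetic. Applying the article's 15-30% size figures to a hypothetical 70-billion-parameter model stored in 16-bit precision:

```python
# Illustrative numbers only: a hypothetical 70B-parameter model in fp16/bf16.
params = 70e9
bytes_per_param = 2
orig_gb = params * bytes_per_param / 1e9
for keep in (0.15, 0.30):  # "15-30% of their initial size" per the benchmarks above
    print(f"keep {keep:.0%}: {orig_gb:.0f} GB of weights -> {orig_gb * keep:.0f} GB")
# keep 15%: 140 GB of weights -> 21 GB
# keep 30%: 140 GB of weights -> 42 GB
```

Weights that once demanded a multi-GPU cluster start to fit on a single accelerator, which is where the lower inference bills and energy use come from.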

This breakthrough is a game-changer for industries looking to integrate AI without the heavy price tag or environmental impact. The new funding will help Multiverse Computing scale its technology and accelerate the adoption of affordable, high-performance AI across sectors.

The Future of Efficient AI

With support from major investors and a rapidly growing customer base, Multiverse Computing is poised to set a new standard for AI efficiency. Its technology is not just about shrinking models; it's about making advanced AI accessible, sustainable, and ready for real-world deployment.

futureTEKnow covers technology, startups, and business news, highlighting trends and updates across AI, Immersive Tech, Space, and robotics.
