BitcoinWorld
AI Costs Slashed: Multiverse Computing Raises $215M for Revolutionary LLM Compression
Efficiency is a constant pursuit in technology, especially in the resource-hungry field of artificial intelligence. Reducing AI costs is a major goal for companies deploying large language models (LLMs), which makes a recent funding announcement from Spanish startup Multiverse Computing particularly interesting for anyone following tech innovation, including those in the cryptocurrency space who understand the value of computational efficiency.
Multiverse Computing announced on Thursday a significant Series B funding round, securing €189 million (approximately $215 million). This substantial investment was raised on the back of their promising technology called “CompactifAI.”
What is CompactifAI and Why Does it Matter for LLM Compression?
CompactifAI is described as a quantum-computing-inspired compression technology. Its primary function is to drastically reduce the size of large language models. According to Multiverse Computing, the technology can shrink LLMs by up to 95% while maintaining model performance. This level of LLM compression is groundbreaking, potentially unlocking new possibilities for AI deployment.
Reducing AI Costs and Boosting Performance
The benefits of CompactifAI extend beyond just size reduction. The company states that their ‘slim’ models are significantly faster than their uncompressed counterparts, boasting speed increases of 4 to 12 times. This speed improvement directly translates into lower operational expenses.
- Cost Reduction: Multiverse Computing reports a 50% to 80% reduction in inference costs (the cost of running the model to generate responses).
- Example Savings: They cite an example where their Llama 4 Scout Slim model costs just 10 cents per million tokens on AWS, compared to 14 cents for the standard Llama 4 Scout.
- Speed Increase: Models run 4x to 12x faster, improving responsiveness and throughput.
- Size Reduction: Models can be compressed by up to 95%, requiring less storage and memory.
These factors combined mean that deploying and running LLMs becomes far more economically viable for a wider range of applications and businesses, significantly impacting overall AI costs.
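The cited figures can be sanity-checked with a few lines of arithmetic. The sketch below is purely illustrative: the per-token prices come from the article, while the function name and the monthly token volume are our own assumptions.

```python
# Hypothetical helper for the pricing figures cited above (AWS,
# dollars per million tokens). The numbers are from the article;
# the function and volume are illustrative assumptions.

def inference_savings(standard_cost, slim_cost, monthly_tokens_m):
    """Return (absolute monthly saving, relative saving) for a
    given monthly volume in millions of tokens."""
    per_million_saving = standard_cost - slim_cost
    relative = per_million_saving / standard_cost
    return per_million_saving * monthly_tokens_m, relative

# Llama 4 Scout: $0.14/M tokens standard vs $0.10/M tokens slim,
# at an assumed volume of 1,000M (1 billion) tokens per month.
saving, pct = inference_savings(0.14, 0.10, monthly_tokens_m=1_000)
print(f"Saving on 1B tokens/month: ${saving:.2f} ({pct:.0%})")
# → Saving on 1B tokens/month: $40.00 (29%)
```

Note that this particular example works out to roughly a 29% reduction; the company's 50% to 80% figure presumably reflects other models, workloads, or deployment settings.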
Multiverse Computing’s Offerings and Accessibility
Multiverse Computing currently offers compressed versions of popular open-source LLMs, including Llama 4 Scout, Llama 3.3 70B, and Mistral Small 3.1. They have also announced plans to release a compressed version of DeepSeek R1 soon, with more open-source and reasoning models on the horizon.
It’s important to note that this technology is focused on open-source models; proprietary models from companies like OpenAI are not currently supported.
The ‘slim’ models developed by Multiverse Computing are available for use on Amazon Web Services (AWS) or can be licensed for on-premises deployment, offering flexibility for different business needs.
Enabling AI on Smaller Devices
Perhaps one of the most exciting implications of CompactifAI’s extreme compression is the ability to run sophisticated AI models on hardware that was previously incapable of handling them. Multiverse Computing suggests that some of their models can be made small and energy-efficient enough to run on:
- Personal Computers (PCs)
- Smartphones
- Vehicles (Cars, Drones)
- Even small, low-power devices like the Raspberry Pi
Imagine the possibilities: AI assistants running locally on your phone, smart devices in your home with advanced language capabilities, or even AI-powered features in cars without needing constant cloud connectivity. This shift towards on-device AI processing is a direct result of effective LLM compression.
The Technology Behind CompactifAI: Quantum-Inspired AI
The core technology, CompactifAI, draws inspiration from quantum computing. Co-founder and CTO Román Orús is a professor known for his work on tensor networks, computational techniques that can mimic aspects of quantum computing and are particularly effective for compressing deep learning models. While not true quantum computing, this quantum-inspired AI approach leverages advanced mathematical frameworks to achieve its impressive compression rates.
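To give a flavor of how factorization-based compression works, the sketch below uses a truncated SVD to replace one large weight matrix with two small factors. This is a deliberately simplified stand-in: CompactifAI's actual tensor-network method is proprietary and far more sophisticated, and real LLM weights (unlike the random matrix used here) have structure that makes such truncation far less lossy.

```python
# Illustrative sketch only: low-rank factorization as a toy analogue of
# tensor-network compression. Assumes numpy; not Multiverse Computing's
# actual algorithm.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))   # stand-in for a dense layer weight

# Factor W and keep only the top-`rank` singular components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 64
W_approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Storage: instead of the full matrix, keep two thin factors (+ scales).
original_params = W.size
factored_params = U[:, :rank].size + rank + Vt[:rank].size
print(f"Parameters kept: {factored_params / original_params:.1%}")
# → Parameters kept: 12.5%
```

The design point is that a rank-64 factorization of a 1024x1024 layer stores about 12.5% of the original parameters, illustrating how factored representations can cut model size by large margins when the weights are sufficiently compressible.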
Meet the Founders of Multiverse Computing
The company is built on a strong technical and business foundation.
- Román Orús (CTO): A professor at the Donostia International Physics Center in Spain, bringing deep expertise in tensor networks and quantum-inspired computation.
- Enrique Lizaso Olmos (CEO): Holds multiple mathematical degrees, was a college professor, and has significant experience in the banking sector, including being the former deputy CEO of Unnim Bank.
Who Invested in Multiverse Computing?
The substantial Series B round was led by Bullhound Capital, an investment firm with a history of backing successful tech companies like Spotify and Revolut. They were joined by a diverse group of investors, including:
- HP Tech Ventures
- SETT
- Forgepoint Capital International
- CDP Venture Capital
- Santander Climate VC
- Toshiba
- Capital Riesgo de Euskadi – Grupo SPR
This impressive list of investors underscores the market’s confidence in Multiverse Computing’s technology and its potential impact on the AI landscape. With this latest round, the company has now raised approximately $250 million in total funding.
Conclusion: A New Era for AI Accessibility?
Multiverse Computing’s successful funding round and their CompactifAI technology represent a significant step towards making advanced AI more accessible and affordable. By drastically reducing model size and inference costs, they are enabling the deployment of powerful LLMs in environments previously considered impossible. This could democratize AI, leading to widespread innovation and new applications across various industries, ultimately lowering the barrier to entry by tackling high AI costs head-on.
This post AI Costs Slashed: Multiverse Computing Raises $215M for Revolutionary LLM Compression first appeared on BitcoinWorld and is written by Editorial Team