With over 1GW of power capacity secured through grid connection agreements and site reservations across the US, Europe, and the GCC, Antimatter will deploy a global network of 1,000 distributed micro data centers to serve the growing AI inference market, deploying 5 times faster and at 50% lower cost than hyperscalers.
CANNES, France, April 21, 2026 /PRNewswire/ — Antimatter, a new category of neocloud purpose-built for the distributed AI economy, today announced its launch through the strategic combination of three companies: Datafactory (US-based energy and power infrastructure), Policloud (modular micro data center network), and Hivenet (distributed cloud provider).

The combined entity creates the industry’s first fully integrated AI infrastructure platform spanning energy sourcing, physical hardware, and cloud software — designed to serve the explosive global demand for AI inference at a fraction of hyperscale cost and dramatically faster time to market.
Antimatter is deploying capital at an unprecedented pace to build out the first global neocloud network optimized for AI inference. The company is securing €300 million to fund the deployment of its first 100 Policloud units by 2027, representing 40,000 GPUs and over 3.6 exaFLOPS of active compute capacity.
By the end of 2030, the planned network of 1,000 Policlouds will provide more than 400,000 GPUs and over 36 exaFLOPS of distributed AI inference capacity — the equivalent of five traditional hyperscale data centers, deployed across dozens of countries with 50% lower capital spending and significantly faster time to market.
Antimatter is led by David Gurlé, the serial high-tech entrepreneur who founded Microsoft’s Real-Time Communications business (today’s Microsoft Teams), led Skype’s enterprise division and its sale to Microsoft, and founded Symphony Communication Services.
“In the age of AI, intelligence is not the bottleneck — energy is,” said David Gurlé, Cofounder, Executive Chairman, and CEO of Antimatter. “The infrastructure built for the first era of cloud and AI was designed around centralized scale. But the inference era requires a different model: more distributed, faster to deploy, and sovereign by design. That is the infrastructure Antimatter is building.”
Why AI Inference Is Breaking the Cloud Model
The first wave of AI was about training massive models in centralized data centers. But the next phase — inference — is about running those models billions of times per day, across applications like copilots, agents, and real-time decision systems.
That shift changes everything. Inference requires infrastructure that is closer to users, faster to deploy, more energy-efficient, and geographically distributed. Traditional hyperscalers were not built for this. Their model relies on massive, centralized campuses that can take years to build and require enormous upfront capital.
Antimatter’s answer: bring the data center to the energy, not the energy to the data center.
The global data center capacity market is projected to grow from 55GW in 2023 to 220GW by 2030 — a 22% CAGR — yet grid connection queues and infrastructure delays are emerging as the primary bottleneck. In Europe alone, more than 12 TWh of renewable electricity was curtailed in 2023, representing over €4.2 billion in lost value. At the same time, more than 1,000GW of additional renewable capacity remains stuck in permitting and grid-connection queues across Europe and the GCC.
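The ~22% compound annual growth rate quoted above follows from the two endpoint figures in the text (55GW in 2023, 220GW in 2030, i.e. seven years). A quick sketch of the check:

```python
# Verify the CAGR implied by growth from 55 GW (2023) to 220 GW (2030).
start_gw, end_gw, years = 55, 220, 7
cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~21.9%, consistent with the quoted 22% CAGR
```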
A Full-Stack Neocloud Built for the AI Inference Era
Antimatter is uniquely positioned as the only neocloud that controls the complete value chain:
Energy-first model
More than 1GW of power capacity secured through formal grid connection agreements and site reservations, including over 160MW already operational across Texas and Oregon, USA. Antimatter deploys Policloud units directly at or near existing power assets — including wind, solar, hydro, or biogas sites — converting stranded generation into productive AI infrastructure in a matter of months, rather than waiting years for new transmission capacity.
Decentralized infrastructure layer
A fleet of modular, containerized micro data centers, each housing up to 400 GPUs and deployable in as little as five months, compared with 24+ months for traditional hyperscale builds. Antimatter currently operates 10 units across 8 sites and has a commercial pipeline of more than 500 additional units.
Distributed software layer
A proprietary distributed computing and storage platform providing the orchestration intelligence that connects distributed hardware into a single, sovereign cloud fabric with Tier 3 capability as the global default, supporting billions of inference requests each day with sub-10ms latency for edge workloads and full data sovereignty for regulated industries.
Key Competitive Advantages

| Metric | Antimatter | Traditional Hyperscale |
| --- | --- | --- |
| Capex per fully loaded MW | ~$7M | ~$35M |
| Deployment timeline | 5 months | 24+ months |
| Customer pricing | ~50% below hyperscalers | Market rate |
| Edge latency | Sub-10ms | Variable |
| Carbon reduction | ~70% lower; zero water cooling | Standard |
| Data sovereignty | Sovereign-by-design; local jurisdiction | Bolt-on solutions |
Strong Commercial Traction
Antimatter enters the market with demonstrated commercial momentum:
- $20M in forward-looking revenue
- 3,344 GPUs deployed with demand for 10,000+
- 100 Policlouds being deployed in 2027, representing 40,000+ GPUs
- 1,000 Policlouds planned by end of 2030, representing 400,000+ GPUs
- Diversified customer base: Energy sector (35%), Public sector (30%), Agriculture (15%), Corporates (20%)
The company is targeting $250M+ in revenue within the next 18 months and $3.0B+ by the end of 2030.
Investor Perspectives
“AI infrastructure is now a strategic asset class, and the winners will be those who can combine hard assets with software at scale. Antimatter’s vertically integrated model — from megawatts to APIs — is exactly the kind of infrastructure we believe can define the next decade of digital growth.” — Alex Manson, CEO of SC Ventures, Standard Chartered Bank
“France and Europe need sovereign, energy-efficient infrastructure to compete in AI. What convinced us about Antimatter is not just the technology, but the ability to deploy micro data centers in months, on existing power assets, while meeting the most demanding regulatory constraints.” — Stéphanie Hospital, Founder and CEO of OneRagtime
“We are witnessing first-hand how emerging markets are leapfrogging legacy infrastructure and going straight to AI-native architectures. Antimatter’s model — distributed, capital-efficient and deeply integrated with energy — is built for these environments and for an economy increasingly shaped by AI.” — Noor Sweid, Founder and Managing Partner, Global Ventures
“At Inria, we work every day at the frontier of AI and high-performance computing. Antimatter’s approach is compelling because it reconciles cutting-edge AI workloads with more frugal, sustainable infrastructure — distributed, software-defined, and close to available energy. It is a strong illustration of the deeptech industrial story we want to see emerge in Europe.” — Bruno Sportisse, Chairman and CEO of Inria
About the Founder
David Gurlé is a French entrepreneur, engineer, and Chevalier of the Légion d’Honneur. He has founded seven companies, including Symphony Communication Services ($1.4B valuation), and held senior leadership roles at Microsoft (where he founded the Real-Time Communications business), Thomson Reuters, and Skype (VP & General Manager, Enterprise). He holds an MSc in Computer Science and Telecommunications from EFREI Paris.
About Antimatter
Antimatter is the distributed neocloud for AI inference. By vertically integrating energy, modular infrastructure, and orchestration software, Antimatter deploys enterprise-grade AI compute infrastructure faster, cheaper, and more sustainably than traditional hyperscale providers. Headquartered in Cannes, France, with major operations in the United States, Antimatter serves enterprises, governments, and AI companies worldwide.
Note on exaFLOPS calculation: RTX 5090 = ~90 TFLOPS FP32. 40,000 GPUs x 90 TFLOPS = 3,600 petaFLOPS = 3.6 exaFLOPS. For 400,000 GPUs = 36 exaFLOPS.
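The arithmetic in the note above can be reproduced directly. This is a sketch under the release's own assumption of ~90 TFLOPS FP32 per GPU (RTX 5090); the helper name is illustrative, not from the source:

```python
# Sanity check of the exaFLOPS figures quoted in the release.
# Assumption (from the note above): ~90 TFLOPS FP32 per GPU (RTX 5090).
TFLOPS_PER_GPU = 90

def fleet_exaflops(num_gpus: int) -> float:
    """Aggregate FP32 throughput of a GPU fleet, in exaFLOPS."""
    total_tflops = num_gpus * TFLOPS_PER_GPU
    return total_tflops / 1_000_000  # 1 exaFLOPS = 1,000,000 TFLOPS

print(fleet_exaflops(40_000))   # 3.6  (100 Policlouds)
print(fleet_exaflops(400_000))  # 36.0 (1,000 Policlouds)
```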
CONTACT: Ariane Forgues, aforgues@mantu.com
Logo: https://mma.prnewswire.com/media/2960572/5925624/Antimatter_Logo.jpg
View original content: https://www.prnewswire.co.uk/news-releases/antimatter-launches-as-the-worlds-first-vertically-integrated-neocloud-for-ai-inference-302747479.html
