Cerebras Raised $5.5 Billion and Its Stock Nearly Doubled on Day One
Cerebras priced its IPO at $185 and closed up 68%, raising $5.5 billion in the year's largest tech debut. Its AI chip is built from a single silicon wafer, with OpenAI holding an 11% stake.
Thursday morning, Cerebras Systems priced its IPO at $185 per share. By the time the Nasdaq opened, the stock was at $385. That's a 108% pop on opening. By end of day, shares closed at $311, still 68% above the IPO price, for a fully diluted market cap of roughly $66 billion.
Cerebras raised $5.55 billion on debut day, the largest U.S. tech IPO since Arm Holdings went public in 2023. The stock performance made instant billionaires of both co-founders and sent a clear message to the rest of the AI chip market: investors aren't done writing big checks for companies that challenge NVIDIA.
This wasn't a straightforward path. The company first filed to go public in late 2024, got caught in a federal regulatory review, spent 18 months diversifying its customer base, and finally landed with one of the most impressive customer rosters any IPO-stage hardware company has assembled. The road to Thursday's bell included OpenAI, Amazon Web Services, and a lot of quiet work that didn't make headlines.
The Chip Behind Cerebras' $66 Billion Debut
Most semiconductor manufacturing follows the same process it has for decades: a silicon wafer is diced into hundreds of individual chips, each typically a few hundred square millimeters. Dicing contains the damage from defects; a flawed chip is simply discarded while its neighbors ship. The approach keeps yields manageable and costs predictable.
Cerebras doesn't do that. The company's Wafer Scale Engine 3 uses the full silicon wafer as a single unified chip. The WSE-3 measures 46,225 square millimeters, bigger than an iPad screen, and packs 4 trillion transistors onto that surface. It includes 900,000 AI-optimized compute cores and, by the company's measure, 2,625 times the memory bandwidth of NVIDIA's flagship B200 chip. Total throughput: 125 petaflops of AI compute from a single device.
The physics argument for this design is straightforward. Communication within a chip is thousands of times faster than communication between chips. When a large language model runs across hundreds of separate GPUs, those GPUs spend significant time passing data back and forth over high-speed interconnects. Every interconnect hop adds latency. The WSE architecture eliminates most of those hops by keeping compute on one substrate.
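To make the hop-count argument concrete, here is a toy latency model. The per-hop costs below are illustrative assumptions, not Cerebras or NVIDIA measurements:

```python
# Toy pipeline-latency model: communication cost of splitting a model
# across devices. Per-hop costs are illustrative assumptions only.

ON_CHIP_HOP_NS = 5          # assumed on-die handoff cost between layers
CROSS_CHIP_HOP_NS = 5_000   # assumed chip-to-chip interconnect hop

def comm_latency_ns(layers: int, chips: int) -> int:
    """Communication-only latency for a model split evenly across chips."""
    handoffs = layers - 1
    cross_chip = chips - 1               # handoffs that cross a chip boundary
    on_chip = handoffs - cross_chip      # handoffs that stay on one die
    return on_chip * ON_CHIP_HOP_NS + cross_chip * CROSS_CHIP_HOP_NS

wafer = comm_latency_ns(layers=96, chips=1)    # 475 ns
cluster = comm_latency_ns(layers=96, chips=8)  # 35,440 ns
print(f"single device: {wafer} ns, 8-chip split: {cluster} ns")
```

Even in this crude model the cross-chip hops dominate: seven interconnect crossings add roughly 75 times the communication latency of keeping every handoff on one die.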
Making a chip that large work reliably required solving a different problem: defects are unavoidable at 46,000-square-millimeter scale. Cerebras addressed this with redundant compute cores, redundant routing paths, and a fail-in-place architecture that detects flaws, shuts them down, and routes around them. The system keeps running even with imperfect silicon.
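A simple Poisson yield model shows why redundancy is the enabling trick; the defect density below is a hypothetical figure, not a foundry number:

```python
# Sketch of why zero-defect yield fails at wafer scale, using a
# standard Poisson yield model with a hypothetical defect density.
import math

DEFECTS_PER_MM2 = 0.001  # hypothetical average defect density

def zero_defect_yield(area_mm2: float) -> float:
    """Probability a die of the given area has no defects (Poisson model)."""
    return math.exp(-DEFECTS_PER_MM2 * area_mm2)

print(f"800 mm^2 GPU-class die:  {zero_defect_yield(800):.1%}")     # ~44.9%
print(f"46,225 mm^2 full wafer:  {zero_defect_yield(46_225):.2e}")  # effectively zero

# With fail-in-place, a defect disables one core instead of the wafer:
expected_defects = DEFECTS_PER_MM2 * 46_225
print(f"expected defects: ~{expected_defects:.0f} of 900,000 cores")
```

At any plausible defect density, zero-defect yield on a full wafer is essentially nil, yet the expected defect count is tiny relative to 900,000 cores. That is exactly the regime where detect, disable, and route around works.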
The result is a machine that Cerebras claims can answer large language model queries more than twice as fast as comparable NVIDIA hardware in certain workloads. For inference specifically, the company claims performance up to 15 times faster than GPU-based alternatives. That gap matters more now than it did three years ago. As AI applications shifted from training to deployment, inference latency became a direct cost driver: every millisecond saved on a query response is compute capacity freed up, and for companies running billions of daily queries, the savings compound quickly.
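The latency-to-cost claim is easy to put numbers on. Treating per-query latency as device occupancy (a simplification) and using hypothetical volumes:

```python
# Back-of-envelope capacity math. Inputs are hypothetical, not reported
# figures from Cerebras or any customer.

queries_per_day = 2_000_000_000  # assumed daily query volume
ms_saved = 50                    # assumed latency reduction per query

seconds_freed = queries_per_day * ms_saved / 1_000   # 100M device-seconds/day
device_days = seconds_freed / 86_400                 # ~1,157 device-days/day

print(f"{seconds_freed:,.0f} device-seconds freed per day")
print(f"≈ {device_days:,.0f} device-days of capacity per day")
```

Shaving 50 milliseconds off two billion daily queries frees the equivalent of more than a thousand device-days of capacity every day under these assumptions, which is why inference latency reads as a line item rather than a benchmark bragging right.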
On IPO day, investors who received shares at $185 watched the stock open at $385 and settle at $311. The company's fully diluted valuation at that close was roughly $66 billion, more than double its February 2026 private valuation. Co-founders Andrew Feldman and Sean Lie both became billionaires: Feldman's stake at the IPO price was worth approximately $1.9 billion, Lie's around $1 billion. Cerebras trades on the Nasdaq under the ticker CBRS.
From CFIUS Review to OpenAI Deal
Cerebras was ready to go public in 2024. The filing was in order, the numbers were compelling enough, and the timing felt right. Then came the regulatory complication that shelved everything for more than a year.
The Committee on Foreign Investment in the United States opened a national security review focused on Group 42, an Abu Dhabi-based AI firm with close ties to the UAE government. At the time, Group 42 was both a major investor in Cerebras and the source of almost all of the company's revenue. CFIUS was concerned about the implications of advanced AI chip technology effectively being controlled by, or substantially dependent on, a foreign government-linked entity.
That review didn't produce a formal block, but it killed the IPO timeline. The company spent the next 18 months doing two things: diversifying its revenue base away from Group 42 and building relationships with U.S.-domiciled customers. By early 2026, those customers included some of the most recognizable names in AI.
The anchor customer relationship is with OpenAI. In early 2026, Cerebras signed a deal with OpenAI worth over $20 billion for 750 megawatts of Cerebras compute capacity. That's a commitment to Cerebras as a core compute provider for one of the most resource-intensive AI operations in the world. The deal also included an equity stake: OpenAI negotiated an ownership position of between 10% and 11% in Cerebras. One of the largest AI model developers in the world now has a financial interest in the company making chips that compete with NVIDIA.
From OpenAI's perspective, the logic is straightforward. NVIDIA GPU supply is constrained. Pricing is high. Any credible inference alternative that delivers better latency at competitive cost is worth owning. Cerebras delivers that for certain large-batch workloads, and an equity stake is a hedge against the scenario where it delivers that for a lot more.
Amazon Web Services followed with a $270 million direct investment in Cerebras as part of an agreement to offer WSE-3 access to cloud customers. Through the AWS Marketplace, any enterprise already on AWS can now evaluate Cerebras compute for inference tasks alongside standard GPU instances. No dedicated hardware to buy. No new vendor contract to negotiate. Just a different instance type on a platform they already use. That distribution model is what turns Cerebras from a specialty hardware vendor into something that competes in the general enterprise market.
The CFIUS story is worth tracking as a pattern. U.S. regulators are increasingly willing to block or delay AI hardware companies that have significant capital or revenue exposure to certain foreign entities. Cerebras cleared the review by making its business look more American. Other chip startups with Gulf or Chinese capital structures are watching this outcome carefully.
Can Cerebras Challenge NVIDIA in the AI Chip Market?
Cerebras came to its IPO with actual profit, not just a growth story. In 2025, the company reported $510 million in revenue, up 76% year over year from $290 million in 2024. Net income swung from a loss of nearly $500 million in 2024 to a profit of $237.8 million in 2025. That turnaround reflects the shift from expensive R&D-heavy years to volume production, plus the revenue visibility that came with the OpenAI deal signing.
The customer concentration risk is real. A significant share of current revenue flows from a small number of large relationships. That's normal for a hardware company at this stage, and the trend is moving toward diversification. But investors pricing the stock at $311 are pricing in continued rapid revenue growth, new customer additions through the AWS channel, and margin expansion as WSE-3 production scales.
NVIDIA's dominance in AI compute is structural in ways that performance benchmarks don't fully capture. The CUDA software ecosystem took over 15 years to build. The switching costs for developers are substantial. AI training in particular remains almost entirely GPU territory: the distributed training frameworks, the checkpointing systems, the collective institutional knowledge of how to train frontier models at scale all assume NVIDIA hardware. Cerebras isn't competing seriously in that market yet.
Where it has an opening is inference. The WSE-3's throughput and memory bandwidth advantages are most pronounced for large-batch inference workloads. High-speed, low-latency inference is also where most of the per-dollar AI deployment economics are concentrated today. When a company is running billions of daily model queries, shaving latency at scale directly reduces compute spend.
The AI chip market beyond NVIDIA is also not a two-player race. AMD's Instinct accelerators are gaining data center share. Google's TPUs, Amazon's Trainium and Inferentia chips, and Microsoft's Maia silicon handle meaningful portions of each company's internal workloads. Groq and SambaNova compete in specific inference niches. Cerebras' position in that field is unusual: it's not trying to be a cheaper NVIDIA. It's betting on a speed advantage derived from an unusual physical architecture, at the cost of deployment flexibility.
The bigger competitive question is whether NVIDIA's roadmap closes the gap. The B200 is fast. The next-generation Vera Rubin platform will be faster. Cerebras needs its performance lead to hold, or widen, as NVIDIA's roadmap advances, or it needs to land enough customers through channels like AWS before the gap narrows. The $20 billion OpenAI relationship buys time and revenue certainty. Whether it buys enough is what the next several years will show.
Cerebras' debut also signals something broader about the AI infrastructure investment cycle. After a stretch of headlines about data center overinvestment and GPU supply gluts, a hardware company raises $5.5 billion, nearly doubles at the open, and ends with a $66 billion valuation. That's not the market calling the AI compute buildout over. The same week, Cisco reported that its AI networking orders doubled its own forecast, hitting $9 billion. Two infrastructure companies, one week, two dramatic upside surprises. Capital markets are still pricing in significant runway for the companies building the physical foundation of the AI economy.
For a full picture of the competitive landscape Cerebras is entering, including where it fits among the other companies racing to build AI compute alternatives, see our AI Infrastructure Companies to Know in 2026 guide.
The long-term outcome depends on execution, customer expansion, and the pace of NVIDIA's competition. But the IPO itself answered one question clearly: investors believe the alternative AI chip market is worth $66 billion in bets. Cerebras just collected the first one. For the primary sourcing on the IPO details, TechCrunch's full IPO coverage has the complete breakdown.