Google and Intel Are Betting CPUs Still Matter in AI Data Centers
Google Cloud and Intel have expanded their infrastructure partnership around Xeon 6 processors and custom IPUs. The deal is a reminder that AI data centers still depend on much more than GPUs.
The hottest chips in AI are still GPUs, but Google and Intel just made a public point of talking about CPUs and infrastructure processors instead. That is not an accident. On April 9, Intel said Google Cloud is continuing to deploy Intel Xeon processors across its workload-optimized instances, including Xeon 6 in C4 and N4 instances, while the two companies also deepen their work on custom ASIC-based infrastructure processing units. The simplest reading of that announcement is also the most useful one: AI data centers are still being built as whole systems, not as giant piles of accelerators.
That matters because a lot of AI coverage has trained buyers to think in one dimension. More GPUs, more wins. More access to the newest accelerator family, more scale. That is a useful way to understand the training race, but it is an incomplete way to understand how cloud infrastructure actually works. Models need CPUs for coordination, memory handling, storage paths, control-plane work, and a long list of surrounding tasks that do not disappear just because accelerators do the math-heavy parts. Network and offload processors matter too, because someone has to move traffic and housekeeping work away from the central compute path.
In Intel’s announcement, the company says Google Cloud will keep using Intel Xeon processors for AI, cloud, and inference tasks and that the two companies are expanding their co-development of custom ASIC-based IPUs. TechCrunch’s report on the move adds a useful plain-language summary: the renewed partnership keeps Intel inside Google Cloud’s AI infrastructure stack at a moment when buyers are again paying attention to CPU demand as AI usage broadens. That is the context that makes this worth covering. Google and Intel are telling the market that “AI infrastructure” is not shorthand for one chip class.
There is also timing behind the message. The industry is now under pressure to support more inference, more model-serving concurrency, and more enterprise AI workloads that run continuously rather than only in high-profile training clusters. Those workloads still need accelerators, but they also need balanced systems around them. Intel chief executive Lip-Bu Tan used exactly that language, arguing that scaling AI requires more than accelerators and that CPUs and IPUs stay central to performance, efficiency, and flexibility. That is the piece enterprise buyers should pay attention to.
The important question is not whether CPUs beat GPUs. They do not solve the same problems. The real question is why Google Cloud wants to highlight CPUs and IPUs now, in public, during an AI buildout cycle that often treats everything except accelerators as background noise. The answer is that background infrastructure is becoming a competitive surface again.
Why Google and Intel Are Talking About CPUs Again
The first reason is practical. Inference is now a much bigger business story than it was a year ago. Training still gets the headlines, but once companies begin serving real AI products at scale, efficiency per request, latency under load, orchestration overhead, and traffic management all become more visible. A cloud vendor cannot build that world using only the most glamorous hardware in the rack. It needs well-matched systems that handle different parts of the workload cleanly.
Google Cloud’s mention of C4 and N4 matters here because those are not exotic research systems. They are commercial instance families that customers can actually plan around. If Intel Xeon 6 is the basis for those instances, Google is signaling that conventional compute is still part of the value proposition even in an AI-heavy era. That may sound obvious to infrastructure specialists. It is not obvious to buyers who have spent the last year hearing that every serious AI conversation begins and ends with GPU access.
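For infrastructure teams, the practical upshot is that these are ordinary instance families you can script against, not a slide from a keynote. As a minimal sketch, assuming the google-cloud-compute Python client and a project where C4 machine types are available, provisioning a Xeon-based VM for the CPU side of an AI stack might look like the following; the project, zone, machine size, and boot image are illustrative placeholders, not details from the announcement.

```python
# Minimal sketch: create one C4 VM with the google-cloud-compute client.
# Project, zone, machine size, and image are illustrative placeholders.
from google.cloud import compute_v1


def create_c4_instance(project: str, zone: str, name: str) -> None:
    """Provision a single C4 VM for CPU-side work around an AI serving stack."""
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/c4-standard-8",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                    disk_size_gb=100,
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # block until the create operation completes


# Example call with placeholder values:
# create_c4_instance("my-project", "us-central1-a", "inference-frontend-1")
```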
The second reason is cost discipline. AI infrastructure is getting more expensive, not less. If a cloud provider can improve overall system balance, move the right tasks off the expensive path, and avoid overusing scarce accelerators for work that does not need them, that matters commercially. It affects margins for the provider and pricing pressure for the customer. This is one reason our earlier look at Anthropic’s new Google TPU capacity deal mattered so much. The market is fighting over access to accelerator capacity, but the winning architectures still depend on everything around the accelerator too.
The third reason is vendor strategy. Google has its own Tensor Processing Units. Intel obviously wants to remain important in the hyperscale stack even as Nvidia keeps dominating the accelerator conversation. A public partnership message lets both companies make a point. Google can show that its infrastructure is heterogeneous and practical. Intel can show that it is still relevant in AI systems, not only in generic cloud compute. Expanding custom IPU co-development adds another layer to that story. It suggests that the companies are still optimizing the connective tissue of the data center, not only the headline chips.
This matters more than it may first appear because offload and coordination silicon can quietly shape how much useful work a cluster delivers. If IPUs take over certain networking, storage, or data-movement responsibilities, CPUs and accelerators spend less time on chores. That does not sound dramatic in a keynote. It matters a lot in production.
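One way to make that concrete is a back-of-envelope model: assume some fraction of host CPU time is housekeeping, hand part of it to an IPU, and watch how modeled accelerator utilization moves. Every number in the sketch below is an invented illustration, not a figure from Google or Intel.

```python
# Back-of-envelope model: how much accelerator time is recovered when
# host-side housekeeping (networking, storage, data movement) is offloaded
# to an IPU. All inputs are illustrative assumptions, not vendor data.

def effective_accel_utilization(host_busy_fraction: float,
                                overhead_share: float,
                                offload_fraction: float,
                                stall_sensitivity: float = 0.5) -> float:
    """Crude estimate of accelerator utilization on one node.

    host_busy_fraction: share of wall-clock time the host CPUs are saturated
    overhead_share:     share of host work that is pure housekeeping
    offload_fraction:   share of that housekeeping an IPU takes over
    stall_sensitivity:  how strongly host saturation stalls the accelerators
    """
    remaining_overhead = overhead_share * (1.0 - offload_fraction)
    host_load = host_busy_fraction * (1.0 - overhead_share + remaining_overhead)
    # Accelerators lose time roughly in proportion to how often the host is saturated.
    return max(0.0, 1.0 - stall_sensitivity * host_load)


for offload in (0.0, 0.5, 0.9):
    util = effective_accel_utilization(host_busy_fraction=0.8,
                                       overhead_share=0.4,
                                       offload_fraction=offload)
    print(f"offload={offload:.0%} -> modeled accelerator utilization ~{util:.0%}")
```

With these made-up inputs, moving most housekeeping off the host lifts modeled accelerator utilization from roughly 60 percent to roughly 74 percent, which is exactly the kind of quiet gain that never headlines a keynote.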
What Cloud Buyers Should Watch
The first mistake to avoid is treating this announcement as a pure Intel comeback story. It is not. Google is not walking away from accelerators. The market is not suddenly moving backward from GPU-heavy design. What this partnership does show is that buyers should ask better infrastructure questions than “which GPU do you have?” That question still matters. It just is not enough on its own anymore.
Buyers should ask how the cloud vendor handles coordination and traffic around AI workloads. They should ask what part of latency comes from the accelerator path and what part comes from everything else. They should ask whether the provider has a plan for inference-heavy demand, not only training bursts. They should also ask how much of the stack is commodity, how much is custom, and which parts are likely to change over the next year as newer chips arrive.
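The latency question in particular becomes easier to reason about once it is written down as a budget. The sketch below splits a hypothetical served request into the accelerator step and everything around it; all of the millisecond values are placeholders, and real numbers would have to come from the provider’s own tracing.

```python
# Hypothetical per-request latency budget for a served model.
# All millisecond values are illustrative placeholders, not measurements.
request_budget_ms = {
    "load_balancing_and_routing": 3.0,      # front-end and traffic management
    "auth_and_control_plane": 2.0,          # request validation, quota checks
    "tokenization_and_preprocessing": 4.0,  # CPU-side work before the model
    "queueing_for_accelerator": 6.0,        # waiting for a free GPU/TPU slot
    "accelerator_compute": 35.0,            # the actual forward pass
    "postprocessing_and_streaming": 5.0,    # CPU-side formatting and response
}

total = sum(request_budget_ms.values())
accel = request_budget_ms["accelerator_compute"]
print(f"total request latency: {total:.0f} ms")
print(f"accelerator path: {accel:.0f} ms ({accel / total:.0%})")
print(f"everything else:  {total - accel:.0f} ms ({(total - accel) / total:.0%})")
```

Even in this invented example, a third of the request never touches the accelerator, which is why the “everything else” questions above are worth asking before signing a capacity commitment.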
The second thing to watch is how Google describes these systems at Cloud Next and in later infrastructure announcements. If CPUs, IPUs, and system balance keep showing up in the messaging, then this was not a one-day partner courtesy. It was a signal about how Google wants enterprise buyers to think about AI operations. That would be meaningful. Many customers are just now reaching the point where they care less about model demos and more about what the underlying system will cost, how it behaves under load, and whether it can stay predictable as usage grows.
The third thing to watch is whether other cloud providers start saying the quiet part out loud. They all know AI clusters are mixed systems. Not all of them talk about it directly because accelerators draw more attention. If Google and Intel keep pushing the idea of balanced AI infrastructure, rivals may have to explain their own supporting stack more clearly too.
That would be good for buyers. Enterprise teams make better decisions when the infrastructure conversation becomes less mystical. GPUs will keep dominating the marketing language around AI. They deserve a lot of that attention. But this Google and Intel update is a useful reminder that serious AI operations still rely on CPUs, offload silicon, and the less glamorous engineering around the main event. In 2026, the companies that understand that full stack will make smarter cloud decisions than the ones still shopping by headline chip alone.