
Anthropic and AWS Just Made Enterprise AI Deployment a Lot Simpler

AIntelligenceHub
9 min read

AWS became the first cloud provider to offer Anthropic's native Claude Platform, with IAM auth, CloudTrail logging, and unified billing. Here's what's included and how it compares to Bedrock.

Here's a problem that's been quietly frustrating AI teams for the past year. You build your Claude integration through Amazon Bedrock, your security team signs off, billing is consolidated, and everything runs. Then Anthropic ships something new: Managed Agents, the MCP connector, the Skills API. None of it is available where you're working. Those features live in Anthropic's native platform, which requires a separate contract, separate API keys, and a separate billing relationship.

On May 11, 2026, AWS and Anthropic closed that gap. Claude Platform is now generally available on AWS, making AWS the first cloud provider to offer Anthropic's native platform experience. Enterprise teams can now access Anthropic's complete Claude toolset using the same IAM credentials, billing account, and audit infrastructure they already use for everything else on AWS.

The Full Claude Platform Is Now Available Through AWS

Claude Platform on AWS isn't a new product. It's Anthropic's existing Claude Platform, the same one developers access directly through Anthropic, now reachable through your AWS account. The value here is direct: you get Anthropic's full feature set without managing a separate vendor relationship. Your IAM policies control access. Your AWS invoice covers usage. CloudTrail captures every API call for your security team.

This matters because Anthropic ships features faster than Bedrock's integration cycle can always match. Claude Managed Agents, the agent Skills system, the MCP connector for remote tool servers, the Files API for document handling: these debuted in Anthropic's native platform first, and all of them are available now through Claude Platform on AWS.

**The Messages API** is the foundation: access to Claude Opus 4.7, Sonnet 4.6, and Haiku 4.5 through the same interface developers already know, with no changes to existing code.
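As a sketch of what "no changes to existing code" means in practice, the request shape below follows Anthropic's current Messages API. The model identifier string is an assumption based on the model names in this article; check the console for the exact value.

```python
def build_message_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build the JSON body the Messages API expects for a single user turn."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

# "claude-opus-4-7" is an assumed identifier derived from this article.
request = build_message_request("claude-opus-4-7", "Summarize our Q3 support tickets.")
# With the official anthropic SDK, this payload maps directly to:
#   client.messages.create(**request)
```

Because the payload is identical to what the direct Anthropic API accepts, code written against one endpoint should port to the other with only an authentication change.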

**Claude Managed Agents** (currently in beta) handles the orchestration layer for building production agents. Instead of writing session management, execution flow control, and retry logic yourself, Anthropic's runtime takes ownership of those concerns. Sessions persist even if individual containers fail, which is a meaningful reliability improvement for long-running workflows. Pricing sits at $0.08 per active session-hour, on top of standard inference costs.

Most teams building agents today manage their own runtime: writing session persistence code, handling execution flow, dealing with failures and retries. This is undifferentiated engineering work. Claude Managed Agents inverts that model. The developer defines what the agent does and what tools it has. The platform handles whether and how it keeps running. A customer support agent handling 30-minute sessions costs four cents per resolved interaction before inference ($0.08 × 0.5 session-hours), making the math tractable for most production workloads.
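The session-hour math above is simple enough to sanity-check in a few lines. The $0.08 rate is the Managed Agents pricing stated in this article; inference tokens are billed separately and excluded here.

```python
# Published active-session-hour rate for Claude Managed Agents (per this article).
SESSION_RATE_PER_HOUR = 0.08

def session_runtime_cost(active_minutes: float) -> float:
    """Runtime cost of one managed-agent session, excluding inference tokens."""
    return round(active_minutes / 60 * SESSION_RATE_PER_HOUR, 4)

# A 30-minute support session incurs $0.04 in runtime charges.
cost = session_runtime_cost(30)
```

At that rate, even an agent fleet handling thousands of half-hour sessions per day spends far more on inference than on the managed runtime itself.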

**Agent Skills** (also in beta) lets you encode consistent best practices, behaviors your agents should apply reliably across sessions. The **MCP connector** connects agents to remote MCP servers without building custom integrations for each external system. **Web search and web fetch** let Claude access current information during inference. **Code execution** lets Claude run Python code and generate visualizations directly during a session. The **Files API** allows uploading documents and referencing them across multiple API calls, useful for multi-turn workflows involving the same source materials.
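To make the MCP connector concrete, here is a sketch of a request that points Claude at one remote MCP server. The field shapes follow Anthropic's MCP connector beta; the server URL, server name, and model identifier are placeholders, and the exact shapes on Claude Platform on AWS may differ.

```python
def build_mcp_request(prompt: str, server_url: str, server_name: str) -> dict:
    """Build a Messages API body that grants access to one remote MCP server."""
    return {
        "model": "claude-sonnet-4-6",  # assumed identifier from this article
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
        "mcp_servers": [
            # The API connects to the remote server and exposes its tools
            # to the model; no per-system integration code is needed.
            {"type": "url", "url": server_url, "name": server_name},
        ],
    }

request = build_mcp_request(
    "List my open tickets", "https://mcp.example.com/sse", "ticketing"
)
```

The point of the connector is that adding a second external system is one more entry in `mcp_servers`, not a new integration layer.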

**Prompt caching** reduces repeated processing costs on long shared context. **Citations** provides structured attribution for retrieval and document-grounded responses. **Batch processing** handles high-volume, non-time-sensitive workloads at lower per-request cost. The **Claude Console** rounds out the offering: a prompt development and evaluation workspace tied directly to your AWS-authenticated session.
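For prompt caching, the mechanism is a `cache_control` marker on a large shared prefix: repeated calls reuse the cached processing of that prefix instead of paying for it each time. The field shapes below follow Anthropic's prompt-caching API; the document text and model identifier are placeholders, and availability on Claude Platform on AWS is assumed from this article.

```python
LONG_CONTEXT = "...full product manual text..."  # placeholder shared document

def build_cached_request(question: str) -> dict:
    """Build a request whose large system prompt is marked cacheable."""
    return {
        "model": "claude-sonnet-4-6",  # assumed identifier
        "max_tokens": 512,
        "system": [
            {
                "type": "text",
                "text": LONG_CONTEXT,
                # Subsequent requests with the same prefix hit the cache
                # at a reduced per-token rate.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": question}],
    }
```

Only the user question changes between calls, so a multi-turn workflow over the same source material pays full processing cost for the shared context once.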

Setup requires three steps: create a workspace, authenticate with IAM, then make API calls. Workspaces function like namespaces, each with an ARN, so IAM policies can be scoped to specific workspaces and different teams can have different permission levels. AWS recommends temporary IAM credentials over long-lived API keys, which aligns with standard enterprise security posture. Credentials rotate automatically, access can be revoked at the role level, and the blast radius of a compromised credential is smaller than a static API key.
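Workspace-scoped access could look something like the policy document below. To be clear about assumptions: the service prefix, action name, and ARN format here are illustrative guesses, since the real identifiers for a newly launched service should be taken from the AWS documentation, not from this sketch.

```python
import json

# Hypothetical workspace ARN; the real format may differ.
WORKSPACE_ARN = "arn:aws:claude-platform:us-east-1:123456789012:workspace/support-team"

# IAM policy granting one team access to one workspace only. The
# "claude-platform:InvokeModel" action name is illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["claude-platform:InvokeModel"],
            "Resource": WORKSPACE_ARN,
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
```

The structural point holds regardless of the exact action names: because each workspace has its own ARN, the same resource-scoping pattern teams already use for S3 buckets or DynamoDB tables applies unchanged.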

The service launched across 19 AWS regions from day one: US East in both N. Virginia and Ohio, US West (Oregon), Canada, South America (São Paulo), and a broad set of European and Asia Pacific locations including Tokyo, Seoul, Melbourne, Jakarta, Sydney, Dublin, London, Frankfurt, Milan, Zurich, Paris, and Stockholm. Nineteen regions at general availability signals that AWS and Anthropic built the infrastructure to support global enterprise deployments, not just a US-centric proof of concept. A firm with operations in Frankfurt, Tokyo, and New York doesn't need to maintain separate AI integrations in different regions.

For context on where Anthropic is taking managed agent capabilities, Anthropic has been working on ways for Claude agents to learn from experience through simulated recall, suggesting the managed runtime will gain more sophisticated learning and adaptation features over time.

How Claude Platform on AWS Differs from Amazon Bedrock

Claude on Amazon Bedrock and Claude Platform on AWS serve different purposes, and the distinction matters for both technical and compliance reasons.

**Data processing** is the clearest architectural difference. With Bedrock, AWS is the data processor. Your data stays within AWS infrastructure, and Anthropic doesn't have access to it. With Claude Platform on AWS, Anthropic is the data processor. The service runs on Anthropic's infrastructure, accessed through AWS's identity and billing systems, but data processing occurs outside the AWS boundary.

For organizations with strict regional data residency requirements, including financial services firms in certain jurisdictions, healthcare organizations under specific compliance frameworks, or government contractors with data sovereignty requirements, Bedrock's fully within-AWS model may be required. For organizations without those constraints, Claude Platform on AWS offers access to features that haven't been replicated in Bedrock's architecture.

**Feature availability** is the second major difference. Bedrock provides Claude model access combined with AWS-native capabilities: Guardrails for content filtering, Knowledge Bases for managed RAG pipelines, and PrivateLink for network isolation within VPCs. These are genuinely useful features for specific architectures, and they integrate naturally with other AWS services.

Claude Platform on AWS provides Anthropic's complete native feature set. Teams that want the latest features without waiting for Bedrock integration cycles get them here first. The MCP connector, Claude Managed Agents, and the Skills system are available now, and future Anthropic platform capabilities will appear here before or alongside Bedrock availability.

**Procurement and billing** is the third dimension. Before this launch, accessing Anthropic's native platform meant managing two vendor relationships: AWS for cloud infrastructure and Anthropic for the AI platform. Claude Platform on AWS reduces that to one consolidated relationship, one invoice, one set of commercial discussions.

Usage is billed through AWS Marketplace on a consumption basis. Claude Platform spending flows into AWS Cost Explorer alongside all other AWS services, appears on the same invoice, and uses the same tagging and cost allocation infrastructure teams have built for tracking AWS spending by team, project, or environment. For organizations with AWS Enterprise Discount Programs or custom pricing agreements, Claude Platform spend may retire against existing commitments: AI spending that reduces your AWS commitment balance is operationally different from spending that lives on a separate invoice.
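The tagging point can be illustrated with a toy aggregation. The line items below are made-up numbers, not a real billing export; the shape simply mirrors how tag-based cost allocation works once Claude Platform spend flows through the same pipeline as other AWS services.

```python
from collections import defaultdict

# Hypothetical billing line items tagged by team.
line_items = [
    {"service": "Claude Platform", "tag_team": "support", "cost": 412.50},
    {"service": "Claude Platform", "tag_team": "research", "cost": 1180.00},
    {"service": "AmazonEC2", "tag_team": "support", "cost": 2200.00},
]

def cost_by_team(items: list, service: str) -> dict:
    """Sum costs per team tag for a single service."""
    totals = defaultdict(float)
    for item in items:
        if item["service"] == service:
            totals[item["tag_team"]] += item["cost"]
    return dict(totals)

claude_spend = cost_by_team(line_items, "Claude Platform")
```

Because AI spend appears as just another tagged service line, no separate chargeback process is needed for it.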

Jonathan Echavarria, a principal research scientist at ReliableQuest, put the practical impact plainly. The offering "simplified how we access Claude" and improved the experience for Claude Code engineers while keeping the team within their existing cloud operating model.

The Claude Platform on AWS launch is one signal in a larger pattern: the operational layer of enterprise AI is maturing. A year ago, the central question was which model to use. That question hasn't gone away, but it's now joined by harder operational questions. How do you audit what the AI is doing? How do you control access across teams? How do you maintain cost visibility as usage scales? How do you build agents that don't fail when containers restart?

Claude Platform on AWS addresses each of those questions using infrastructure enterprise teams already trust. CloudTrail handles auditability. IAM handles access control. Cost Explorer handles cost visibility. Managed Agents handles runtime reliability. These aren't glamorous capabilities, but they're the ones that determine whether an AI deployment can survive a security review or get budget approval to expand. Understanding the infrastructure economics of serving AI at scale is important context for any team making these architecture decisions, and the AI Inference Infrastructure resource covers the choices that most affect cost and latency in production deployments.

The deeper context here is the Anthropic-Amazon partnership, which reached up to $4 billion in Amazon investment commitments. Claude Platform on AWS is part of what that partnership is producing for enterprise customers. AWS gets a differentiated AI offering that goes meaningfully beyond model access. Anthropic gets go-to-market reach and enterprise credibility that comes with deep AWS integration. Teams building on Claude get the operational simplicity of a single cloud relationship without sacrificing access to Anthropic's full platform.

Security, Compliance, and Choosing the Right Option

For security teams evaluating the platform, several aspects of the design deserve attention.

The IAM-based authentication model means there's no separate secrets management problem. Claude Platform credentials aren't long-lived API keys that need to be rotated manually or stored in secrets managers. They're derived from existing IAM roles and credentials, which already go through your organization's credential lifecycle management.

The CloudTrail integration means AI API calls are logged alongside all other AWS API activity. Security operations teams that monitor CloudTrail for anomalous activity get visibility into Claude Platform usage without additional configuration. This is meaningful for organizations that have invested in SIEM infrastructure built around AWS logs.
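A SIEM rule over those logs reduces to filtering on the event source. The `eventSource` value below is a guess at what the new service's endpoint name might be; real CloudTrail records for Claude Platform may use a different source string, and the sample events are fabricated for illustration.

```python
# Fabricated CloudTrail-style records for illustration.
SAMPLE_EVENTS = [
    {"eventSource": "claude-platform.amazonaws.com", "eventName": "InvokeModel",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:role/support-agent"}},
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:role/etl"}},
]

def claude_platform_events(events: list) -> list:
    """Keep only events emitted by the (assumed) Claude Platform source."""
    return [e for e in events if e["eventSource"] == "claude-platform.amazonaws.com"]

matches = claude_platform_events(SAMPLE_EVENTS)
```

The practical consequence is that existing detection rules keyed on `eventSource` and `userIdentity` extend to AI usage with one new source value, not a new log pipeline.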

The workspace ARN model enables attribute-based access control patterns familiar to AWS-native teams. IAM policies can grant specific teams or services access to specific workspaces, and those policies live in the same place as all other IAM policies. There's no separate permission system to learn or maintain.

The data processing distinction deserves honest attention. When you use Claude Platform on AWS, Anthropic processes your data on their infrastructure, not AWS's. This is the right choice for organizations that prioritize feature access over strict data residency. But compliance teams should understand it clearly: the AWS branding and billing integration don't change where data is processed. Teams with hard requirements around data never leaving a specific cloud boundary should evaluate Bedrock instead.

The practical decision tree is fairly direct. Choose **Claude on Amazon Bedrock** when your organization has strict data residency requirements mandating data stays within AWS infrastructure, when you want AWS-managed capabilities like Guardrails or PrivateLink, or when you're building multi-model architectures and want multiple models in one service with consistent AWS-native interfaces.

Choose **Claude Platform on AWS** when you want access to Anthropic's complete and current feature set including capabilities not yet in Bedrock, when you want IAM-authenticated access with unified AWS billing without a separate Anthropic account, or when you're building with Claude Managed Agents, the MCP connector, or the Skills system.

Choose **Anthropic's direct API** when you're a smaller team where AWS integration overhead isn't justified, when you're doing early-stage development and want the simplest possible setup, or when you need access to beta features before AWS Marketplace availability.

These options aren't mutually exclusive. Teams running Bedrock for core production workloads and Claude Platform on AWS for access to newer capabilities can operate both in parallel, with billing for both showing up on the same invoice.

Claude Platform on AWS is available now, with no waitlist or special access program required. The official AWS announcement includes documentation for the three-step setup: workspace creation, IAM authentication, and API access.

For teams currently running Claude through Bedrock, the most useful near-term evaluation is whether Claude Managed Agents fits your agent architecture better than your current self-managed sessions, and whether the data processing model is compatible with your compliance requirements. Both questions have clear answers for most organizations. The ones where both answers are yes have a direct path to Anthropic's full platform with less operational overhead than the alternative.

The longer-term picture is that the integration between AWS and Anthropic is likely to deepen. The $4 billion commitment represents a multi-year partnership, and Claude Platform on AWS is the first major product deliverable that enterprise customers can use today. Teams that build familiarity with the platform now will be better positioned as the feature set continues to expand.
