Anthropic Launches Managed Agents to Run Enterprise AI Workloads

Anthropic has released Claude Managed Agents, a suite of composable APIs that lets businesses build and deploy cloud-hosted AI agents without managing their own infrastructure. The service, now in public beta on the Claude Platform, handles sandboxing, state management, credential handling, and tool execution so engineering teams can focus on agent logic rather than operational overhead.

The launch marks Anthropic’s most direct move into the enterprise platform market. Rather than selling model access alone, the company is now offering to run the full agent stack – positioning itself as a managed runtime provider for autonomous AI work.

What Managed Agents Does

Building a production-grade AI agent typically requires months of infrastructure work: sandboxed code execution, checkpointing, credential management, scoped permissions, and end-to-end tracing. Managed Agents abstracts all of that into a hosted service.

Developers define an agent’s tasks, tools, and guardrails. Anthropic’s built-in orchestration harness then decides when to call tools, how to manage context across long sessions, and how to recover from errors. Sessions can run autonomously for hours, with progress persisting even through disconnections.
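In practice, defining an agent this way might look something like the following sketch. Anthropic has not published this exact interface, so every name here (AgentDefinition, Tool, the guardrail keys) is illustrative, not the actual SDK:

```python
from dataclasses import dataclass, field

# Hypothetical shapes for illustration only -- not Anthropic's real API.
@dataclass
class Tool:
    name: str
    description: str

@dataclass
class AgentDefinition:
    task: str                                   # what the agent should accomplish
    tools: list = field(default_factory=list)   # capabilities the harness may invoke
    guardrails: dict = field(default_factory=dict)  # limits the harness enforces

agent = AgentDefinition(
    task="Triage open bug reports and draft fixes",
    tools=[Tool("git", "Clone and commit to repositories"),
           Tool("shell", "Run commands in a sandbox")],
    guardrails={"max_session_hours": 4, "allowed_domains": ["github.com"]},
)
```

The key point is the division of labor: the developer supplies the declarative pieces above, while decisions about when to call `git` versus `shell`, and how to retry after a failure, belong to the hosted harness.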



The engineering blog post accompanying the launch describes the core architecture as a separation of three components: the “brain” (Claude and its harness), the “hands” (sandboxes and tools), and the “session” (a durable event log). Each component can fail or be replaced independently. If a container dies, the harness catches it as a tool-call error and spins up a new one. If the harness itself crashes, a new instance can resume from the last recorded event.
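The recovery property described above — a new harness instance resuming from the last recorded event — is the classic event-sourcing pattern. The sketch below shows the idea in miniature; the event shapes are assumptions for illustration, not Anthropic's actual log format:

```python
# Illustrative event-log replay: a fresh harness instance rebuilds its state
# from the durable log rather than from the crashed process's memory.
def resume_session(event_log):
    """Replay a durable event log to reconstruct harness state."""
    state = {"messages": [], "pending_tool_call": None}
    for event in event_log:
        if event["type"] == "message":
            state["messages"].append(event["content"])
        elif event["type"] == "tool_call":
            # A tool call with no matching result was in flight at crash time.
            state["pending_tool_call"] = event["name"]
        elif event["type"] == "tool_result":
            state["pending_tool_call"] = None
    return state

log = [
    {"type": "message", "content": "Clone the repo"},
    {"type": "tool_call", "name": "git_clone"},   # crashed before the result arrived
]
state = resume_session(log)
# state["pending_tool_call"] is "git_clone": the new instance knows to retry it
```

Because the log, not the process, is the source of truth, the "brain" and "hands" can each die and be replaced without losing the session.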

Security is handled through structural separation. Credentials never live in the sandbox where Claude’s generated code runs. For Git operations, access tokens are injected during sandbox initialization but stay outside the agent’s reach. For external services connected via the Model Context Protocol (MCP), OAuth tokens sit in a secure vault and are accessed through a proxy.
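The vault-plus-proxy arrangement can be sketched as follows. This is a toy model of the pattern, not Anthropic's implementation; the vault contents and function names are invented for illustration:

```python
# Credential-separation sketch: the token lives outside the sandbox, and
# agent-generated code can only reach the service through a trusted proxy.
VAULT = {"mcp_service": "oauth-token-abc123"}  # hypothetical secure vault

def proxy_request(service, payload):
    """Runs on trusted infrastructure, never inside the sandbox."""
    token = VAULT[service]
    # The proxy attaches the token when forwarding the request; only the
    # response (never the token itself) flows back to the sandbox.
    return {"status": 200, "echo": payload, "authed": token is not None}

def sandbox_code():
    """Stands in for Claude-generated code: it can use the service,
    but has no way to read or exfiltrate the credential."""
    return proxy_request("mcp_service", {"action": "list_files"})

response = sandbox_code()
```

Even if the generated code is buggy or adversarial, the blast radius is limited to what the proxy is willing to forward.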

The system also includes multi-agent coordination (currently in research preview), where agents can spawn and direct other agents to parallelize complex work.
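Conceptually, the coordinator pattern looks like a fan-out/fan-in over subtasks. The sketch below uses a thread pool as a stand-in for spawning real managed agents; `run_subagent` and the coordinator shape are assumptions, since the feature is still in research preview:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for dispatching a subtask to a spawned managed agent.
def run_subagent(subtask):
    return f"done: {subtask}"

def coordinator(task, subtasks):
    """Fan subtasks out to sub-agents in parallel, then gather results."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_subagent, subtasks))  # order preserved
    return {"task": task, "results": results}

report = coordinator("migrate the test suite",
                     ["convert unit tests", "convert integration tests"])
```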

Early Adopters and Pricing


Several companies have already integrated Managed Agents into production workflows. Notion embedded Claude directly into workspaces through Custom Agents, letting teams delegate coding, presentations, and spreadsheets without leaving the app. Rakuten deployed specialist agents across product, sales, marketing, and finance, connecting them to Slack and Teams, with each deployment taking about a week. Asana built AI Teammates that work alongside humans inside projects, with CTO Amritansh Raghav stating that Managed Agents helped them ship advanced capabilities faster. Sentry connected its debugging tool Seer to a Claude-powered agent that writes patches and opens pull requests, going from flagged bug to reviewable fix in a single flow. Atlassian is building agents that let developers assign tasks directly from Jira.

Pricing is consumption-based: standard Claude API token rates apply, plus $0.08 per session-hour for active runtime. According to the official documentation, the service requires a specific beta header (managed-agents-2026-04-01) that the SDK sets automatically.
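A back-of-the-envelope cost model under that pricing: token charges plus $0.08 per active session-hour. The per-million-token rates below are placeholders for illustration, not quoted Managed Agents prices:

```python
# Rough session-cost estimate: token cost + $0.08 per active session-hour.
# in_rate/out_rate are placeholder per-million-token prices, not quoted rates.
def session_cost(input_tokens, output_tokens, hours,
                 in_rate_per_mtok=3.00, out_rate_per_mtok=15.00):
    token_cost = (input_tokens / 1e6) * in_rate_per_mtok \
               + (output_tokens / 1e6) * out_rate_per_mtok
    return round(token_cost + hours * 0.08, 4)

# e.g. 2M input tokens, 500k output tokens, a 3-hour autonomous run:
cost = session_cost(2_000_000, 500_000, 3)  # -> 13.74
```

As the example suggests, the $0.24 of runtime charge is dwarfed by the token spend, which is why the session-hour fee is unlikely to be the deciding cost factor for most workloads.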

The launch extends Anthropic’s pattern of building developer infrastructure around Claude. Over the past year, the company has released Claude Cowork, brought Claude Code to Slack for in-chat development, added desktop control capabilities, and established a skills framework that has become widely adopted. Managed Agents consolidates that trajectory into a single hosted offering that competes directly with custom AI agent infrastructure.

The open question is whether enterprises will hand full agent runtime to a single AI vendor. The $0.08-per-hour pricing is low enough to undercut most in-house infrastructure costs, but organizations running sensitive workloads may hesitate to route all agent activity through Anthropic’s servers. Managed Agents’ support for connecting to customer VPCs addresses part of that concern, but the “brain” – Claude and its harness – still runs on Anthropic’s infrastructure.

