AI coding agents like Claude Code and Gemini CLI are revolutionizing how developers work, but there's been one nagging concern: giving these powerful tools access to your local machine feels risky. Today, Docker addresses this challenge with the launch of Docker Sandboxes – an experimental preview that changes how we think about AI agent security.
![Docker Sandbox]()
The Big Announcement
Docker has officially introduced Sandboxes, a new approach that lets you run coding agents in isolated, container-based environments. Instead of giving AI agents direct access to your system, Docker Sandboxes wrap them in secure containers that mirror your workspace while keeping your actual machine protected.
Think of it as giving your AI agent a perfect replica of your workspace to experiment in, but behind reinforced walls. If something goes wrong, your actual system stays completely untouched.
Why This Matters
Until now, developers faced an uncomfortable choice: either give AI agents full access and cross your fingers, or deal with constant permission prompts that interrupt your workflow. Docker Sandboxes eliminates this dilemma by providing container-based isolation that's purpose-built for the dynamic, iterative nature of AI coding workflows.
As coding agents become more autonomous – deleting repos, modifying files, accessing secrets – this sandbox approach offers the security developers need without sacrificing productivity.
What's Available Now
This is an experimental preview, and Docker is releasing it with native support for:
- Claude Code
- Gemini CLI

More agents will be supported soon.
Getting started is refreshingly simple. With Docker Desktop 4.50 or later installed, you just run:
docker sandbox run <agent>
That's it. The sandbox creates an isolated environment with your current directory mounted, and the AI agent can work freely there without touching the rest of your system.
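For example, assuming Claude Code is the agent you want to use, launching it against the project in your current directory looks something like this (the agent identifier and project path here are illustrative; check the Docker Sandboxes documentation for the exact names your version accepts):

# illustrative project path; run this from whatever directory holds your code
cd ~/projects/my-app
# launch Claude Code in a sandbox with this directory mounted
docker sandbox run claude

The agent then sees your project files through the sandbox's mount and can read, edit, and run code there, while everything outside that workspace stays off-limits.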
The Technical Edge
Docker chose containers over traditional OS-level sandboxing for good reasons. Container-based isolation provides the right balance of security and flexibility, works consistently across platforms, and doesn't interrupt your workflow with permission prompts. It's designed for exactly the kind of dynamic workflows that AI coding agents need.
Currently, agents run as containers inside Docker Desktop's VM. Docker plans to switch to dedicated microVMs soon for even deeper security and better performance.
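If you want a rough mental model of what the container-based approach buys you, it is conceptually similar to bind-mounting your project into a disposable container and confining the agent to it. This sketch is only an approximation, not the actual `docker sandbox` implementation, and the image name is hypothetical:

# rough analogue only; "coding-agent-image" is a placeholder image name
docker run --rm -it \
  -v "$(pwd)":/workspace \
  -w /workspace \
  coding-agent-image

The difference is that Docker Sandboxes handles the agent-specific setup for you and, as noted above, runs these containers inside Docker Desktop's VM.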
What's Coming Next
Docker's roadmap includes:
- MicroVM-based isolation for enhanced security
- Better multi-agent support for running multiple AI assistants simultaneously
- Granular network access controls
- Advanced token and secret management
- Centralized policy management for enterprise use
- Support for additional coding agents
Why Docker Is Building This
Docker believes sandboxing should be how every coding agent runs, everywhere. This experimental release is the first step toward that vision, and the company is actively seeking developer feedback to shape the future of the product.
The team emphasizes they're building this alongside developers, not just for them. Real-world use cases and feedback will drive the development of features that matter most.
Try It and Share Feedback
If you're already using AI coding agents or curious about experimenting with them safely, Docker Sandboxes is worth trying. The experimental preview is available now, and Docker is actively collecting feedback from early users.
This launch represents a significant shift in how we think about AI agent safety. Rather than treating security as an afterthought, Docker is making it the foundation of how coding agents operate.
As AI agents become more powerful and more integrated into our development workflows, approaches like Docker Sandboxes will become essential. The question isn't whether we'll use AI coding assistants – it's how safely we can deploy them.
Docker just answered that question.
Learn More: