Usage Policy Update: Clearer Rules for Responsible AI Use

We’re rolling out some important updates to our Usage Policy, designed to reflect the rapid growth of our products and the evolving ways people use them. Think of the policy as a roadmap: clear guidance on how Claude should (and shouldn’t) be used, so that everyone in our community feels confident and supported.

These updates, shaped by user feedback, product evolution, and new regulations, go live on September 15, 2025. Here’s a snapshot of what’s new.

Strengthening Cybersecurity & Agentic Use

Claude’s agentic capabilities have grown rapidly, powering advanced tools like Claude Code and Computer Use. With that power comes responsibility, so we’ve added more detail on what’s prohibited in cybersecurity contexts, such as creating malware or compromising networks.

At the same time, we’re continuing to support positive, security-boosting uses like finding vulnerabilities (with permission!) and strengthening defenses. For extra clarity, we’ve published new Help Center guidance with real-world examples of what’s okay and what’s not.

Refining Political Content Guidelines

Previously, our policy broadly restricted political content to avoid risks to democratic processes. However, we’ve heard that this broad restriction also got in the way of people using Claude for legitimate policy research, civic education, and thoughtful political writing.

Now, we’re taking a more nuanced approach: legitimate political discourse and research are welcome, while we’ll continue to prohibit activities that are deceptive, misleading, or targeted at political campaigns.

Clarifying Law Enforcement Use

We’ve simplified our language around law enforcement applications. The rules themselves haven’t changed: we still restrict areas like surveillance, profiling, and biometric tracking. The new wording simply makes what’s allowed (like back-office tools and analytics) much clearer.

Clearer Safeguards for High-Risk Use Cases

Some Claude use cases, like those in legal, financial, or employment contexts, carry higher stakes. That’s why we’ve always required safeguards such as human-in-the-loop oversight and AI disclosure.

What’s new? We’re clarifying that these safeguards apply when outputs are consumer-facing; they aren’t required for business-to-business interactions.

Looking Ahead

Our Usage Policy is a living document. As AI evolves, so will our policies. We’re committed to working with policymakers, experts, and our community to make sure our approach stays responsible, clear, and forward-looking.