Anthropic Expands Claude Memory to Pro and Max Users

Anthropic has announced that Memory — one of Claude’s most anticipated features — is now rolling out to Pro and Max plan users, bringing the same advanced functionality previously available to Team and Enterprise customers.

With Memory, Claude can now pick up right where you left off, making it easier for users to manage ongoing projects, multi-step workflows, and long-term collaborations without losing context between sessions.

Personalized Context for Every Project

Memory allows Claude to remember details, preferences, and progress across conversations, providing continuity for users who rely on the assistant for daily tasks like:

  • Iterating on strategy documents

  • Debugging technical issues

  • Managing multiple projects or research threads

Each project has its own scoped memory, meaning Claude maintains a separate context for each workspace — helping users stay organized and ensuring information from one project doesn’t spill into another.
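
To make the idea of scoping concrete, here is a minimal, hypothetical sketch of a per-project memory store. This is not Anthropic's implementation, and the class and method names are illustrative only; it simply shows how entries recorded for one workspace would never surface when another workspace is queried:

```python
# Conceptual sketch only: per-project ("scoped") memory, assuming a simple
# in-process store. Names here are hypothetical, not Anthropic's internals.
from collections import defaultdict


class ScopedMemoryStore:
    """Keeps a separate list of remembered notes for each project ID."""

    def __init__(self):
        # Each project gets its own, isolated list of memory entries.
        self._memories = defaultdict(list)

    def remember(self, project_id: str, note: str) -> None:
        """Store a note under a single project's scope."""
        self._memories[project_id].append(note)

    def recall(self, project_id: str) -> list[str]:
        """Return only this project's notes; other projects stay invisible."""
        return list(self._memories[project_id])

    def forget(self, project_id: str, note: str) -> None:
        """Delete a specific note, mirroring user-facing edit/delete controls."""
        if note in self._memories[project_id]:
            self._memories[project_id].remove(note)


store = ScopedMemoryStore()
store.remember("marketing-plan", "Prefers concise bullet-point summaries")
store.remember("api-refactor", "Codebase uses Python 3.12 and pytest")
# Recalling one project never pulls in the other's context.
print(store.recall("marketing-plan"))  # ['Prefers concise bullet-point summaries']
```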

Users also retain full control over what Claude remembers, with options to view, edit, or delete stored information at any time. For sensitive discussions or one-off queries, incognito chat remains available so conversations aren't saved to memory.

“Whether you’re planning, coding, or creating, Memory helps Claude pick up exactly where you left off,” Anthropic said. “You stay in control at every step, with clear visibility and privacy by design.”

Safety-Driven Rollout

Before extending Memory to individual Pro and Max users, Anthropic conducted extensive safety testing across a wide range of edge cases — including wellbeing-related topics and potential misuse scenarios.

The company evaluated how Memory handled situations involving:

  • Sensitive personal disclosures

  • Emotional or wellbeing-related conversations

  • Attempts to bypass safeguards through remembered context

These tests led to targeted adjustments to how Memory functions, ensuring it supports users helpfully and safely without reinforcing harmful patterns or over-accommodating risky behavior.

“Through iterative testing and refinement, we’ve improved Memory to help Claude deliver safe, consistent, and contextually aware responses,” the company said in a statement.

How to Get Started

Users on Claude Pro or Max plans can enable Memory by visiting Settings > Memory in the Claude app or web interface. Once activated, Claude begins retaining relevant context automatically, adapting to user goals while respecting all privacy and safety controls.

A Step Toward More Persistent AI Assistants

Anthropic’s Memory feature represents a major step forward in contextual AI — allowing assistants like Claude to build understanding over time while maintaining strong boundaries and transparency.

The rollout underscores Anthropic’s approach to developing responsible, human-centered AI, balancing personalization with user safety and control.

Source: Anthropic Official Blog