🤖 Protocol‑Agnostic Architecture
CrewAI implements open standards (Model Context Protocol – MCP and Agent2Agent – A2A), so you can:
- Swap models without rewriting orchestration logic
- Bridge between agents built on different LLMs
- Extend to custom or fine‑tuned checkpoints
No matter where your model lives—AWS, OpenAI, private data center—CrewAI simply passes inputs/outputs over MCP or A2A.
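The core of this pattern can be sketched in plain Python: orchestration code depends only on a shared generate() interface, so any backend that speaks the protocol can be dropped in. The client classes below are stand-ins for illustration, not CrewAI's actual API.

```python
# Minimal sketch of protocol-agnostic model access. Orchestration code
# depends only on the ModelClient interface, never on a concrete provider.
# OpenAIStub and AnthropicStub are illustrative stand-ins, not real clients.
from typing import Protocol


class ModelClient(Protocol):
    def generate(self, prompt: str) -> str: ...


class OpenAIStub:
    def generate(self, prompt: str) -> str:
        return f"[gpt-4] {prompt}"


class AnthropicStub:
    def generate(self, prompt: str) -> str:
        return f"[claude] {prompt}"


def run_pipeline(client: ModelClient, prompt: str) -> str:
    # The orchestration layer never mentions a specific provider,
    # so swapping models requires no changes here.
    return client.generate(prompt)


print(run_pipeline(OpenAIStub(), "hello"))     # [gpt-4] hello
print(run_pipeline(AnthropicStub(), "hello"))  # [claude] hello
```

Because both clients satisfy the same interface, the pipeline function works unchanged with either one.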
🧩 Supported Foundation Models
You can target any model by specifying its endpoint or ID:
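One common way to organize this is a small registry mapping friendly names to endpoints or provider model IDs. The entries below are illustrative examples of the pattern, not an exhaustive or authoritative list.

```python
# Illustrative model registry: each entry records how to reach a model,
# either by HTTP endpoint or by a provider-specific model ID.
# Names and IDs here are examples, not a complete catalog.
MODEL_REGISTRY = {
    "gpt-4": {
        "endpoint": "https://api.openai.com/v1",
        "model_id": "gpt-4",
    },
    "claude": {
        "endpoint": "https://api.anthropic.com/v1",
        "model_id": "claude-3-opus-20240229",
    },
    "titan": {
        "endpoint": "bedrock",  # resolved by the AWS SDK, not a URL
        "model_id": "amazon.titan-text-express-v1",
    },
}


def resolve(name: str) -> dict:
    """Look up the endpoint and model ID for a friendly model name."""
    return MODEL_REGISTRY[name]


print(resolve("gpt-4")["endpoint"])  # https://api.openai.com/v1
```

Centralizing endpoints and IDs this way keeps model choice a configuration detail rather than something baked into agent code.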
🛠️ Example: Switching Between Models
import os

from crewai.protocols import MCPClient
from crewai import Agent, Crew

# Instantiate two MCP clients for different providers.
# Read API keys from the environment rather than hard-coding them.
openai_client = MCPClient(
    endpoint="https://api.openai.com/v1",
    api_key=os.environ["OPENAI_API_KEY"],
)
anthropic_client = MCPClient(
    endpoint="https://api.anthropic.com/v1",
    api_key=os.environ["ANTHROPIC_API_KEY"],
)

class DynamicAgent(Agent):
    def __init__(self, client):
        super().__init__()
        self.client = client

    def run(self, prompt):
        # Send the prompt to whichever model this agent's client targets.
        return self.client.generate(prompt)

# Create a Crew that uses both OpenAI GPT-4 and Anthropic Claude.
crew = Crew([
    DynamicAgent(openai_client),
    DynamicAgent(anthropic_client),
])

responses = crew.run("Summarize the benefits of CrewAI.")
print(responses)
In this example, you’re orchestrating a hybrid Crew that sends the same prompt to both GPT‑4 and Claude, so you can compare or combine their outputs without changing any orchestration code.
🚀 Getting Started
- Choose your model(s): Pick any foundation model’s API endpoint or Bedrock ID.
- Configure MCPClient or A2A transport: Supply credentials and endpoint.
- Attach to agents: Pass the client into your Agent class.
- Run your Crew or Flow: CrewAI handles the rest—routing inputs, aggregating outputs, and managing sessions.
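The four steps above can be sketched end to end with stub classes standing in for CrewAI's real transport and agent types; every name here is illustrative, not the actual API.

```python
# Stand-in sketch of the getting-started steps. StubClient, StubAgent,
# and run_crew are illustrative names, not CrewAI's real classes.

class StubClient:
    """Step 2: a configured transport (stand-in for MCPClient/A2A)."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def generate(self, prompt: str) -> str:
        # A real client would call the model API here.
        return f"{self.endpoint} -> {prompt}"


class StubAgent:
    """Step 3: an agent holding the client it should call."""
    def __init__(self, client: StubClient):
        self.client = client

    def run(self, prompt: str) -> str:
        return self.client.generate(prompt)


def run_crew(agents, prompt):
    """Step 4: fan the prompt out to every agent and collect the outputs."""
    return [agent.run(prompt) for agent in agents]


# Step 1: choose the model endpoints you want to target.
agents = [StubAgent(StubClient("openai")), StubAgent(StubClient("bedrock"))]
print(run_crew(agents, "hi"))  # one response per agent
```

The shape is the same regardless of provider: configure a transport, hand it to an agent, and let the crew-level runner route inputs and aggregate outputs.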
With CrewAI’s model‑agnostic design, you’re free to innovate, compare, and evolve your AI stack without lock‑in.