Build a Responsible Agentic AI with Strands Agents SDK and Amazon Bedrock Guardrails

Introduction

Strands Agents is an open-source SDK that simplifies the development of AI agents capable of using tools, making decisions, and automating workflows — moving far beyond basic chatbot interactions.
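
At its core, a Strands agent pairs a model with a system prompt and an optional set of tools, and invoking the agent is a single function call. The following is a minimal sketch, assuming your AWS credentials and Amazon Bedrock model access are already configured (see the prerequisites below); the prompt is only an illustration.

    from strands import Agent

    # A bare-bones agent with no tools; it uses the default Amazon Bedrock model
    agent = Agent()

    # Invoking the agent sends the prompt to the model and returns the response
    print(agent("What can an AI agent do beyond answering questions?"))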

In this article, you'll learn how to build a simple agentic AI HR Assistant (HR Buddy) that can answer employee queries about leave policies and update contact information. Importantly, it will use Amazon Bedrock Guardrails to block sensitive or restricted topics — such as salary comparisons — providing a layer of governance to your agent.

Prerequisites

  1. An AWS account with permissions to create and configure Amazon Bedrock Guardrails.
  2. Install or update to the latest version of the AWS CLI.
  3. Configure AWS credentials to grant programmatic access.
  4. Visual Studio Code.
  5. Access to an Amazon Bedrock foundation model. The default model provider is Amazon Bedrock and the default model is Claude 3.7 Sonnet in the US West (Oregon) Region (us-west-2). You can verify programmatic access and model availability with the sketch after this list.
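
Before continuing, you can confirm that programmatic access and model availability are in place. The snippet below is a minimal verification sketch using boto3; it assumes your default credentials and the us-west-2 Region used throughout this article.

    import boto3

    # Confirm that boto3 can resolve valid credentials
    sts = boto3.client("sts")
    print(sts.get_caller_identity()["Arn"])

    # List the Anthropic foundation models available in us-west-2
    bedrock = boto3.client("bedrock", region_name="us-west-2")
    for model in bedrock.list_foundation_models(byProvider="Anthropic")["modelSummaries"]:
        print(model["modelId"])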

Create an Amazon Bedrock Guardrail

Perform the following steps to create and configure an Amazon Bedrock guardrail that blocks salary-related queries using the boto3 client; a short verification sketch follows the steps.

  1. Open Visual Studio Code.
  2. Navigate to the folder where you want to create your Python file.
  3. Open a new PowerShell terminal in Visual Studio Code.
  4. Run the following command to install boto3.
    pip install boto3
  5. Create a new Python file and name it create_guardrail.py.
  6. Copy and paste the following code into create_guardrail.py.
    import boto3
    import json
    
    bedrock = boto3.client("bedrock", region_name="us-west-2")
    
    response = bedrock.create_guardrail(
        name="no-sensitive-hr-topics",
        description="Blocks salary comparisons, investments, or other restricted HR queries.",
        topicPolicyConfig={
            'topicsConfig': [
                {
                    'name': 'Sensitive HR Topics',
                    'definition': 'Blocks salary comparisons, investment questions, or private HR matters.',
                    'examples': [
                        'How does my salary compare to others?',
                        'Where should I invest my bonus?',
                        'Is it legal to share salary details?'
                    ],
                    'type': 'DENY'
                }
            ]
        },
        blockedInputMessaging='This topic is restricted. Please contact HR directly.',
        blockedOutputsMessaging='This response is blocked due to policy restrictions.'
    )
    
    # Print the CreateGuardrail response. Verify the call succeeded and note the guardrail ID and version.
    print(json.dumps(response, indent=2, default=str))
    
    
  7. Run the following command to execute your Python code.
    python -u .\create_guardrail.py
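
Once the guardrail is created, you can read the guardrail ID and version from the CreateGuardrail response and exercise the guardrail directly with the ApplyGuardrail API before attaching it to an agent. The snippet below is a minimal sketch that assumes it is appended to create_guardrail.py, where the response variable is already defined.

    # Capture the identifiers the agent will need later
    guardrail_id = response["guardrailId"]
    guardrail_version = response["version"]

    # Send a sample input through the guardrail with the ApplyGuardrail API
    bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")
    result = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=guardrail_version,
        source="INPUT",
        content=[{"text": {"text": "How does my salary compare to others?"}}]
    )

    # "GUARDRAIL_INTERVENED" confirms the salary question was blocked
    print(result["action"])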

Build the Agent with Strands SDK and Apply Guardrails

Perform the following steps to create an agent with the Strands Agents SDK and attach the guardrail you created in the previous section.

  1. Open Visual Studio Code.
  2. Navigate to the folder where you want to create your Python file.
  3. Open a new PowerShell terminal in Visual Studio Code.
  4. Run the following command to create a virtual environment.
    python -m venv .venv
  5. Run the following command to activate the virtual environment.
    .venv\Scripts\Activate.ps1
  6. Run the following command to install the strands-agents SDK package.
    pip install strands-agents
  7. Create a new Python file and name it hr_agent.py.
  8. Copy and paste the following code into hr_agent.py.
    from strands import Agent, tool
    from strands.models import BedrockModel
    import json
    
    # Replace with the guardrail ID and version returned by create_guardrail.py
    guardrail_id = "xxxxxxxxxxxx"
    guardrail_version = "DRAFT"
    
    # Tool to fetch leave policy using the @tool decorator
    @tool
    def get_leave_policy() -> str:
        return "You are entitled to 24 paid leaves per year including public holidays."
    
    # Tool to update contact info using the @tool decorator
    @tool
    def update_contact(email: str, phone: str) -> str:
        return f"Your contact details have been updated to: {email}, {phone}."
    
    # Set up Bedrock model with guardrails
    bedrock_model = BedrockModel(
        guardrail_id=guardrail_id,
        guardrail_version=guardrail_version,
        guardrail_trace="enabled",
        guardrail_redact_input=True,
        guardrail_redact_input_message="Guardrail Intervened and Redacted",
        guardrail_redact_output=True,
        guardrail_redact_output_message="I'm sorry, but I cannot answer that."
    )
    
    # Define the system prompt
    system_prompt = """
    You are HR Buddy, an AI assistant helping employees with HR queries.
    You can answer questions about leave policies and update contact info.
    Do not answer salary, financial, or inappropriate questions.
    Always start your reply with: HR Buddy:
    """
    
    # Create an agent
    agent = Agent(
        tools=[get_leave_policy, update_contact],
        system_prompt=system_prompt,
        model=bedrock_model
    )
    
    # Blocked query
    print(agent("Can you compare my salary with others who are at my level?"))
    
    # Print the conversation history
    print(f"Conversation history: {json.dumps(agent.messages, indent=4)}")
  9. Run the following command to execute your Python code.
    python -u .\hr_agent.py

Output

This topic is restricted. Please contact HR directly.I'm sorry, but I cannot answer that.

Conversation history: [
    {
        "role": "user",
        "content": [
            {
                "text": "Guardrail Intervened and Redacted"
            }
        ]
    },
    {
        "role": "assistant",
        "content": [
            {
                "text": "I'm sorry, but I cannot answer that."
            }
        ]
    }
]
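
The guardrail intervenes on the salary question: the input is redacted in the conversation history and the agent returns the configured blocked messages instead of an answer. To confirm that permitted queries still reach the tools, you can send a couple of allowed prompts to the same agent. The snippet below is a minimal sketch; the email address and phone number are placeholders.

    # Allowed query: the agent should call the get_leave_policy tool
    print(agent("How many paid leaves do I get this year?"))

    # Allowed query: the agent should call the update_contact tool
    print(agent("Please update my contact details to jane@example.com and 555-0100."))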

Summary

In this article, you learned how to build a responsible agentic AI HR assistant with the Strands Agents SDK and how to govern its behavior with Amazon Bedrock Guardrails.

Next Steps

  1. Extend your agent to integrate with Amazon Bedrock Knowledge Bases.
  2. Deploy the agent using AWS Lambda or AWS Fargate for production use.