
Build an MCP Server with Azure AI Agent Service

Introduction

By the end of this article, you will have a solid grasp of the Model Context Protocol (MCP) and its architecture. You will learn how to build your own MCP Weather Application Server using tools like Azure AI Agent Service, Agent Mode in GitHub Copilot, and the Claude Desktop client. You will also use the MCP Inspector to test and validate your agent applications.

Large Language Models (LLMs) such as GPT-4, Claude, Gemini, and Llama 3 are highly effective at reasoning, summarization, and content generation. Yet, they function in isolation without native access to real-time data, external APIs, or local files. This gap is addressed by the Model Context Protocol (MCP).

What is Model Context Protocol?

The Model Context Protocol (MCP) is an open standard that enables Large Language Models (LLMs) to connect with external tools using structured API calls. You can think of MCP as a USB-C port for LLMs—allowing them to seamlessly plug into services such as external APIs, local databases, files, and web search.

Why MCP?

MCP offers a straightforward, standardized framework that enables LLMs to:

  • Discover available tools

  • Understand the input/output requirements of each tool

  • Invoke these tools through structured JSON requests and process their responses

The main advantage of MCP is that it allows LLMs to interact with external tools much like a human developer would when calling a REST API. This ensures a consistent and structured approach for models to extend their capabilities beyond their built-in knowledge.

MCP Architecture

(Diagram: MCP architecture)

Core Components of MCP

  • MCP Hosts — Applications like Claude Desktop, IDEs, or AI agents that use MCP to connect with external tools and data.

  • MCP Clients — Components that handle direct 1:1 communication with MCP servers, ensuring secure and structured interactions.

  • MCP Servers — Lightweight, modular services that expose specific functionalities—such as file access, API calls, or data analysis—through the standardized Model Context Protocol.

  • Local Data Sources — Files, databases, and system services on the local machine that MCP servers can securely access to execute tasks.

  • Remote Services — Internet-based systems or APIs (e.g., weather services, search engines, third-party data platforms) that MCP servers can connect to for external data and functionality.

How Does MCP Work?

At its foundation, MCP functions much like a traditional API ecosystem—only designed for LLMs instead of human developers. The typical workflow is as follows:

  • Tool Discovery: The model identifies the tools that are available and their capabilities.

  • Schema Understanding: MCP provides each tool's input/output schema in JSON format.

  • Structured Invocation: The model builds a JSON payload to call the tool.

  • Response Handling: The model processes the structured response to continue its reasoning.
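
For illustration, a structured invocation and its response look roughly like the following. This is a hedged sketch of MCP's JSON-RPC framing, written here as Python dictionaries; get_weather_alerts is the tool built later in this article, and the exact fields are defined by the MCP specification.

    # Hypothetical JSON-RPC 2.0 request an MCP client sends to invoke a tool.
    tool_call_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_weather_alerts",  # tool registered on the MCP server
            "arguments": {"state": "CA"},  # must match the tool's input schema
        },
    }

    # Simplified shape of the structured response the server returns.
    tool_call_response = {
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "content": [{"type": "text", "text": "No active weather alerts for CA."}]
        },
    }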

Prerequisites for building an MCP Server

1) Python 3.10+ installed (the MCP Python SDK requires Python 3.10 or later)

2) UV Setup

  
For Windows, install UV with PowerShell and then initialize the project:

    powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

    uv --version
    uv init .
    uv add "mcp[cli]"
  

Steps for building an MCP Server

Step 1: Import required modules

  
    from mcp.server.fastmcp import FastMCP
  

Step 2: Create an MCP Server Instance

  
    mcp = FastMCP("demo")
  

Step 3: Register an MCP Tool

  
    @mcp.tool()
    async def get_weather_alerts(state: str) -> str:
        """Get active NWS alerts for a US state (e.g., "CA", "NY")."""
        url = f"{NWS_API_BASE}/alerts/active/area/{state}"
        data = await _make_nws_request(url)
        if not data:
            return f"Unable to fetch alerts for {state} (API request failed)."

        features = data.get("features", [])
        if not features:
            return f"No active weather alerts for {state}."

        return "\n---\n".join(_format_alert(feat) for feat in features)
  

  
@mcp.tool() is a decorator that registers the function below it as an MCP tool; FastMCP derives the tool's input schema from the function's type hints and its description from the docstring.
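
The tool above also relies on NWS_API_BASE, _make_nws_request, and _format_alert, which are defined in the full implementation (see the attached ZIP file). As a rough, non-authoritative sketch, assuming httpx as the async HTTP client (added with uv add httpx) and the public National Weather Service API, those helpers could look like this:

    from typing import Optional

    import httpx

    # Base URL of the US National Weather Service API (assumed endpoint).
    NWS_API_BASE = "https://api.weather.gov"
    USER_AGENT = "weather-app/1.0"

    async def _make_nws_request(url: str) -> Optional[dict]:
        """GET a URL from the NWS API and return the parsed JSON, or None on failure."""
        headers = {"User-Agent": USER_AGENT, "Accept": "application/geo+json"}
        async with httpx.AsyncClient() as client:
            try:
                response = await client.get(url, headers=headers, timeout=30.0)
                response.raise_for_status()
                return response.json()
            except Exception:
                return None

    def _format_alert(feature: dict) -> str:
        """Format a single alert feature into a readable summary."""
        props = feature.get("properties", {})
        return (
            f"Event: {props.get('event', 'Unknown')}\n"
            f"Area: {props.get('areaDesc', 'Unknown')}\n"
            f"Severity: {props.get('severity', 'Unknown')}\n"
            f"Description: {props.get('description', 'No description available')}"
        )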
  

Step 4: Start the Server using STDIO

  
    if __name__ == "__main__":
        # Run both toolsets via a single MCP server over stdio
        mcp.run(transport="stdio")
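
If stdio does not suit your setup, FastMCP in the MCP Python SDK also supports an HTTP-based SSE transport. A minimal sketch, assuming the SDK's sse transport option:

    if __name__ == "__main__":
        # Alternative: serve the same tools over Server-Sent Events so that
        # HTTP-based MCP clients can connect instead of a local stdio client.
        mcp.run(transport="sse")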
  

Implementation Code

Refer to the complete code implementation in the attached ZIP file.

Code Execution

Execute the following command to install the MCP server into Claude Desktop:

  
    uv run mcp install main.py
  

Open Claude Desktop

(Screenshot: Claude Desktop)

Enter a user prompt asking for the active alerts of a US state (e.g., "CA", "NY").

(Screenshot: entering the prompt in Claude Desktop)

You will get the results as shown below.

(Screenshot: weather alert results in Claude Desktop)

MCP Inspector

The MCP Inspector is an interactive developer tool for testing and debugging MCP servers.

Debug responses using the MCP Inspector by running the command below:

  
    uv run mcp dev main.py
  

The command runs as shown below.

(Screenshot: running the MCP Inspector)

Testing Agent Application using MCP Inspector

The Inspector opens in your web browser. Click the Tools section, select the get_weather_alerts tool, and run it; the results appear in the Tool Result section.

(Screenshot: testing the tool in the MCP Inspector)

Real World Impact

In Visual Studio Code, MCP opens up powerful new capabilities for:

  • Custom Workflows — Leverage your own MCP servers or tap into the broader ecosystem to automate repetitive tasks, query metrics, interact with databases, or call internal APIs—all directly from Copilot Chat.

  • Enterprise Integration — Seamlessly connect AI to your organization's internal tools and systems while ensuring sensitive data remains secure.

  • Smarter Conversations — Provide Copilot with structured access to project-specific knowledge, services, and workflows, enabling more relevant, context-aware responses.

Summary

In this article, we explored and successfully implemented an MCP Server in VS Code, leveraging Claude Sonnet integration and Agent mode in GitHub Copilot. Along the way, we covered the MCP architecture, its core components, the step-by-step process of building an MCP Server, using the MCP Inspector, testing agent applications, and understanding its real-world impact.

I hope you found this article insightful and engaging.

Happy Learning!!