
MCP in .NET: How to Connect Your Applications to AI Using the Model Context Protocol

Introduction

If you're a .NET developer in 2026, you've probably already heard the term "MCP" thrown around in conversations about AI integration. But beyond the buzz, the Model Context Protocol represents a genuine architectural shift in how we build software that interacts with large language models.

For years, connecting an LLM to your existing systems meant writing fragile, custom glue code. You'd build a plugin for your database, another wrapper for your file system, another adapter for your internal API, each with its own discovery mechanism, authentication flow, and invocation pattern. Every time a new AI provider came along, you'd rewrite everything.

MCP changes that equation entirely. It is an open protocol, originally created by Anthropic and now hosted under the Linux Foundation, that standardizes how AI applications discover and interact with external tools and data sources. Think of it as what REST did for web APIs, but for AI-to-tool communication.

And as of March 2026, the official MCP C# SDK has reached v1.0, developed in collaboration between Microsoft and Anthropic. This means .NET developers now have a production-ready, first-class toolkit to build both MCP servers and clients in C#.

In this article, we'll go deep into MCP's architecture, explore how it maps to .NET patterns you already know, walk through real implementation examples, and discuss where MCP fits inside Clean Architecture. Whether you're building internal tools, exposing your APIs to AI agents, or designing multi-agent systems, this guide will give you the architectural foundation to do it right.

What Is MCP and Why Should .NET Developers Care?

At its core, MCP follows a client-server architecture built on JSON-RPC 2.0. The protocol defines three main roles.

Host

The application that the end user interacts with. This could be an AI-powered IDE like Visual Studio 2026 with Copilot, Claude Desktop, or your own custom application. The host creates and manages one or more MCP clients.

Client

Embedded inside the host, the client implements the MCP protocol. It sends requests to MCP servers (like "list your tools" or "call this tool"), processes responses, and feeds them back to the host. Each client maintains a 1:1 relationship with a specific server.

Server

The external service that provides capabilities. Servers expose three types of primitives: Tools, Resources, and Prompts. A server could be a local process running on the same machine, or a remote service accessed over HTTP.

The key insight is this: your .NET application becomes an MCP server. You expose your existing business logic, database queries, file operations, and API endpoints as MCP primitives. Any MCP-compatible AI client, whether it's GitHub Copilot, Claude, a Semantic Kernel agent, or a custom application, can then discover and use those capabilities automatically without any custom integration code.

The Three Primitives: Tools, Resources, and Prompts

Understanding MCP's three primitives is essential before writing any code. Each serves a distinct architectural purpose.

Tools - The Verbs

Tools are executable functions that AI agents can invoke to perform actions. They are the "verbs" of MCP. When an AI model decides it needs to take an action, such as querying a database, sending an email, or calculating a value, it calls a tool.

Each tool has a name, a description (which helps the AI understand when to use it), and a JSON Schema defining its input parameters. The server executes the tool's logic and returns structured results.
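On the wire, that metadata is exactly what a server returns from a tools/list request. Here is an illustrative descriptor for a stock-checking tool (field names follow the MCP specification; the tool itself is this article's example, not a built-in):

```json
{
  "name": "check_stock",
  "description": "Check the current stock level for a product by its SKU",
  "inputSchema": {
    "type": "object",
    "properties": {
      "sku": {
        "type": "string",
        "description": "The product SKU to check"
      }
    },
    "required": ["sku"]
  }
}
```

The AI model never sees your C# code; it sees this JSON. That's why descriptions and parameter names carry so much weight.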

In .NET terms, think of tools as your service layer methods, but exposed through a standardized protocol rather than through REST controllers.

Resources - The Nouns

Resources represent read-only data sources that provide context to the AI. Each resource has a URI (using a custom scheme like myapp://), a human-readable name, and a MIME type. Resources can be static (fixed URI, like a configuration file) or dynamic (using URI templates with placeholders).

Resources are ideal for exposing database schemas, documentation, configuration data, or any contextual information the AI needs to understand your domain before acting.
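In protocol terms, a static resource and a templated one are advertised differently: fixed-URI resources appear in resources/list, while parameterized ones appear in resources/templates/list. An illustrative pair (names are examples, field names per the MCP specification):

```json
{
  "resources": [
    {
      "uri": "myapp://schema/orders",
      "name": "Order Schema",
      "mimeType": "application/json"
    }
  ],
  "resourceTemplates": [
    {
      "uriTemplate": "myapp://orders/{orderId}",
      "name": "Order Details",
      "mimeType": "application/json"
    }
  ]
}
```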

Prompts - The Templates

Prompts are reusable message templates that structure how the AI interacts with your system. Unlike tools (which execute logic) and resources (which provide data), prompts return a predefined list of messages designed to initiate consistent model behavior.

For example, a code_review_prompt might accept a programming language and a code snippet as parameters, then return a formatted prompt that guides the LLM to perform a thorough code review following your team's standards.
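The result of a prompts/get call for such a template is a list of ready-to-send messages. A sketch of what the server might return for code_review_prompt (the content is illustrative; the message structure follows the MCP specification):

```json
{
  "description": "Review code following team standards",
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Perform a thorough code review of the following C# code. Check naming conventions, error handling, and async usage:\n\npublic async void SaveOrder(Order o) { ... }"
      }
    }
  ]
}
```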

Setting Up Your First MCP Server in .NET

The MCP C# SDK ships as two NuGet packages.

  • ModelContextProtocol: the core library containing all protocol primitives, client/server implementations, and transport handling.

  • ModelContextProtocol.AspNetCore: adds HTTP hosting support via ASP.NET Core, enabling remote MCP servers with Streamable HTTP transport.

The core package targets netstandard2.0, while the ASP.NET Core package targets modern .NET, so both work seamlessly on .NET 8 LTS, .NET 9, and .NET 10.
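In your project file, that's two package references (the version shown assumes the v1.0 release described above; adjust to whatever is current on NuGet):

```xml
<ItemGroup>
  <PackageReference Include="ModelContextProtocol" Version="1.0.0" />
  <PackageReference Include="ModelContextProtocol.AspNetCore" Version="1.0.0" />
</ItemGroup>
```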

The Minimal MCP Server

Here's the simplest possible MCP server in ASP.NET Core:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddMcpServer()
    .WithHttpTransport()
    .WithTools<InventoryTools>();

var app = builder.Build();
app.MapMcp("/mcp");
app.Run();

That's it. A few lines of MCP-specific code. The SDK handles discovery, schema generation, JSON-RPC communication, and invocation routing automatically.

Defining Tools with Attributes

Tools are defined as methods decorated with the [McpServerTool] attribute inside a class marked with [McpServerToolType]:

using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]
public class InventoryTools
{
    private readonly IInventoryService _inventoryService;

    public InventoryTools(IInventoryService inventoryService)
    {
        _inventoryService = inventoryService;
    }

    [McpServerTool(Name = "check_stock")]
    [Description("Check the current stock level for a product by its SKU")]
    public async Task<string> CheckStock(
        [Description("The product SKU to check")] string sku)
    {
        var stock = await _inventoryService.GetStockAsync(sku);

        if (stock == null)
            return $"Product with SKU '{sku}' not found.";

        return $"Product: {stock.Name}, SKU: {sku}, " +
               $"Available: {stock.Quantity}, Warehouse: {stock.Location}";
    }

    [McpServerTool(Name = "update_stock")]
    [Description("Update the stock quantity for a product")]
    public async Task<string> UpdateStock(
        [Description("The product SKU")] string sku,
        [Description("The new quantity")] int quantity)
    {
        var result = await _inventoryService.UpdateQuantityAsync(sku, quantity);

        return result.IsSuccess
            ? $"Stock updated. {sku} now has {quantity} units."
            : $"Failed to update: {result.ErrorMessage}";
    }
}

Notice how the [Description] attribute on both the method and its parameters plays a critical role. These descriptions are what the AI model reads to understand when and how to use each tool. Clear, descriptive naming and documentation directly improve the model's ability to select the right tool.

Constructor Injection Works Naturally

Because the SDK integrates with .NET's built-in dependency injection, your tool classes can receive any registered service through constructor injection. This means your MCP tools can use the same services, repositories, and infrastructure that your REST API already uses. You're not building a parallel system. You're exposing your existing system through a new protocol.

Transport Options: Stdio vs. HTTP

MCP supports two transport mechanisms, and choosing the right one depends on your deployment scenario.

Stdio Transport

Standard input/output transport is designed for local MCP servers that run on the same machine as the host application. The host launches the server as a child process and communicates through stdin/stdout pipes.

This is the default for IDE integrations. When Visual Studio 2026 or Claude Desktop connects to a local MCP server, it typically uses stdio. It offers the lowest latency since messages never leave the machine.

For a stdio server, you don't need ASP.NET Core at all. A simple console application works:

using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddMcpServer()
    .WithStdioServerTransport()
    .WithTools<InventoryTools>();

await builder.Build().RunAsync();
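To let a host such as Claude Desktop launch that console app, you register it in the host's MCP configuration file. A minimal sketch (the server name and project path are hypothetical; use whatever matches your setup):

```json
{
  "mcpServers": {
    "inventory": {
      "command": "dotnet",
      "args": ["run", "--project", "./src/McpServer"]
    }
  }
}
```

The host then spawns the process and speaks JSON-RPC over its stdin/stdout.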

HTTP with Streamable HTTP / SSE

For remote MCP servers, meaning services running in the cloud, on Azure Container Apps, or behind a load balancer, you use HTTP transport with Server-Sent Events (SSE) for server-to-client streaming and HTTP POST for client-to-server messages.

This is where ModelContextProtocol.AspNetCore comes in. The MapMcp() extension method handles all the HTTP plumbing, including connection management, message framing, and (with v1.0) authorization.

Most teams start with stdio during development and move to HTTP for production deployments.

Where MCP Fits in Clean Architecture

This is where it gets interesting for .NET architects. If you follow Clean Architecture (or Onion Architecture), you already have well-defined layers: Domain, Application, Infrastructure, and Presentation (WebApi). The question is: where does MCP live?

MCP as a Presentation Concern

The answer is straightforward: MCP is a presentation layer concern, just like your REST API controllers.

Your tool classes sit at the same level as your API controllers. They accept requests, delegate to application services (or MediatR handlers, if you use CQRS), and return results. They should not contain business logic.

Here's how the project structure looks:

MyApp/
├── src/
│   ├── Domain/
│   ├── Application/
│   ├── Infrastructure/
│   ├── WebApi/
│   └── McpServer/
│       ├── Tools/
│       │   ├── InventoryTools.cs
│       │   ├── OrderTools.cs
│       │   └── ReportingTools.cs
│       ├── Resources/
│       │   └── SchemaResources.cs
│       ├── Prompts/
│       │   └── AnalysisPrompts.cs
│       └── Program.cs
└── tests/
    ├── Domain.UnitTests/
    ├── Application.UnitTests/
    └── McpServer.IntegrationTests/

Using CQRS with MCP

If you're already using MediatR for CQRS, your MCP tools become thin orchestrators that dispatch commands and queries:

using System.ComponentModel;
using System.Text.Json;
using MediatR;
using ModelContextProtocol.Server;

[McpServerToolType]
public class OrderTools
{
    private readonly IMediator _mediator;

    public OrderTools(IMediator mediator)
    {
        _mediator = mediator;
    }

    [McpServerTool(Name = "get_order_status")]
    [Description("Get the current status of an order by its ID")]
    public async Task<string> GetOrderStatus(
        [Description("The order ID")] string orderId)
    {
        var query = new GetOrderStatusQuery(orderId);
        var result = await _mediator.Send(query);

        return result.IsSuccess
            ? JsonSerializer.Serialize(result.Value)
            : result.ErrorMessage;
    }

    [McpServerTool(Name = "cancel_order")]
    [Description("Cancel an order that has not been shipped")]
    public async Task<string> CancelOrder(
        [Description("The order ID to cancel")] string orderId,
        [Description("Reason for cancellation")] string reason)
    {
        var command = new CancelOrderCommand(orderId, reason);
        var result = await _mediator.Send(command);

        return result.IsSuccess
            ? $"Order {orderId} cancelled."
            : result.ErrorMessage;
    }
}

This pattern keeps your MCP layer paper-thin. All business logic, validation, and domain rules stay in the Application and Domain layers where they belong.
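For completeness, here is a sketch of the Application-layer side that such a tool dispatches to. The query, DTO, repository, and Result<T> wrapper are illustrative names from this article's example domain, not SDK types:

```csharp
// Application layer: the query and its handler. The MCP tool is only a
// dispatcher; the actual rules and data access live here.
using MediatR;

public record GetOrderStatusQuery(string OrderId) : IRequest<Result<OrderStatusDto>>;

public record OrderStatusDto(string OrderId, string Status, DateTime? ShippedAt);

public class GetOrderStatusQueryHandler
    : IRequestHandler<GetOrderStatusQuery, Result<OrderStatusDto>>
{
    private readonly IOrderRepository _orders;

    public GetOrderStatusQueryHandler(IOrderRepository orders) => _orders = orders;

    public async Task<Result<OrderStatusDto>> Handle(
        GetOrderStatusQuery request, CancellationToken ct)
    {
        var order = await _orders.GetByIdAsync(request.OrderId, ct);
        if (order is null)
            return Result<OrderStatusDto>.Failure(
                $"Order '{request.OrderId}' not found.");

        return Result<OrderStatusDto>.Success(
            new OrderStatusDto(order.Id, order.Status.ToString(), order.ShippedAt));
    }
}
```

The same handler serves your REST controller and your MCP tool, which is exactly the point: one Application layer, two presentation surfaces.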

Exposing Resources for Contextual AI

Resources are often overlooked, but they're incredibly powerful for helping the AI understand your domain before it starts calling tools.

using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerResourceType]
public class SchemaResources
{
    private readonly ISchemaService _schemaService;

    public SchemaResources(ISchemaService schemaService)
    {
        _schemaService = schemaService;
    }

    [McpServerResource(UriTemplate = "app://schema/orders",
        Name = "Order Schema", MimeType = "application/json")]
    [Description("The database schema for the orders table")]
    public async Task<string> GetOrderSchema()
        => await _schemaService.GetTableSchemaAsync("Orders");

    [McpServerResource(UriTemplate = "app://docs/api-guide",
        Name = "API Guide", MimeType = "text/markdown")]
    [Description("Internal API documentation and conventions")]
    public async Task<string> GetApiGuide()
        => await File.ReadAllTextAsync("docs/api-guide.md");
}

When an AI agent connects to your server, it can first read these resources to understand your data model and conventions, then use that context to make smarter tool calls. This is analogous to how a new developer would read documentation before writing code.

v1.0 Features: Authorization, Elicitation, and Structured Output

The March 2026 v1.0 release brought several production-critical features.

Authorization

Servers can now expose Protected Resource Metadata, enabling proper OAuth flows. The SDK handles the full discovery process on the client side automatically. This is essential for enterprise deployments where your MCP server sits behind an identity provider.

Elicitation

This feature allows servers to request additional information from the user mid-execution. If a tool needs clarification, say a deployment tool that wants to confirm the target environment, it can pause and ask the user through the client, then continue with the response.

Structured Output

Previously, tool results were unstructured text that the LLM had to parse on its own. Now, tools can return explicitly defined structured content using the UseStructuredContent property on the [McpServerTool] attribute. This allows models to process outputs more reliably and reduces hallucination around result interpretation.
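A hedged sketch of what that looks like in practice, reusing this article's inventory example (the record type and service are illustrative; the SDK generates an output schema from the return type):

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]
public class StockQueryTools
{
    private readonly IInventoryService _inventoryService;

    public StockQueryTools(IInventoryService inventoryService)
    {
        _inventoryService = inventoryService;
    }

    // The return type defines the structured output schema the client sees.
    public record StockLevel(string Sku, string Name, int Quantity, string Location);

    [McpServerTool(Name = "get_stock_level", UseStructuredContent = true)]
    [Description("Get the stock level for a product as structured JSON")]
    public async Task<StockLevel> GetStockLevel(
        [Description("The product SKU")] string sku)
    {
        var stock = await _inventoryService.GetStockAsync(sku);
        return new StockLevel(sku, stock.Name, stock.Quantity, stock.Location);
    }
}
```

Instead of a free-text blob, the client receives a JSON object it can validate against the declared schema.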

Integration with the .NET AI Ecosystem

MCP doesn't exist in isolation. It's designed to work alongside the broader .NET AI stack.

Semantic Kernel. Microsoft's AI orchestration framework can consume MCP servers as tool providers. Your MCP tools become Semantic Kernel plugins automatically.

Microsoft Agent Framework. Introduced in .NET 10, this framework uses MCP as its primary mechanism for tool integration. Agents discover and invoke tools through MCP, making your server compatible with multi-agent orchestration.

Microsoft.Extensions.AI. The new abstraction layer for AI services in .NET. The MCP SDK integrates through IChatClient, allowing tools to be exposed as AIFunction instances that work with any compatible chat client.
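To make that concrete, here is a client-side sketch using the SDK's client types together with Microsoft.Extensions.AI. The server name and project path are hypothetical, and CreateChatClient() stands in for however you construct your IChatClient:

```csharp
using Microsoft.Extensions.AI;
using ModelContextProtocol.Client;

// Launch the local stdio MCP server as a child process.
var transport = new StdioClientTransport(new StdioClientTransportOptions
{
    Name = "inventory",
    Command = "dotnet",
    Arguments = ["run", "--project", "./src/McpServer"],
});

await using var mcpClient = await McpClientFactory.CreateAsync(transport);

// Each discovered tool is an AIFunction, so the list plugs straight
// into any function-calling-capable IChatClient.
IList<McpClientTool> tools = await mcpClient.ListToolsAsync();

IChatClient chatClient = CreateChatClient(); // placeholder: OpenAI, Azure, etc.
var response = await chatClient.GetResponseAsync(
    "How many units of SKU ABC-123 are in stock?",
    new ChatOptions { Tools = [.. tools] });
```

In practice you would also wrap the chat client with UseFunctionInvocation() so tool calls round-trip to the MCP server automatically.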

This means that by building one MCP server, you're automatically compatible with Copilot in Visual Studio, Claude Desktop, Semantic Kernel agents, the Microsoft Agent Framework, and any other MCP-compatible client, present or future.

Best Practices for Production MCP Servers

Based on real-world production deployments, here are the patterns that matter most.

  • Design tools with clear, single responsibilities. Each tool should do one thing well.

  • Write descriptions as if you're explaining to a new team member.

  • Return actionable error messages.

  • Use resources to provide domain context.

  • Validate all inputs rigorously.

  • Implement proper logging and observability.

Conclusion

MCP is not just another protocol. It's the architectural pattern that finally standardizes how .NET applications communicate with AI. By building MCP servers today, you're future-proofing your systems against the rapidly shifting AI landscape. Your tools, resources, and prompts work with any MCP-compatible client, whether it exists now or ships next year.

The C# SDK is production-ready at v1.0. It integrates naturally with .NET's dependency injection, ASP.NET Core hosting, and the broader AI ecosystem including Semantic Kernel, the Microsoft Agent Framework, and Microsoft.Extensions.AI.

If you're a .NET developer looking to make your applications AI-ready, MCP is where you start. Not by rewriting your codebase, but by exposing what you've already built through a protocol that AI can understand.