In my last post, we decoded the "alphabet soup" of modern AI, breaking down terms like GenAI, RAG, and AI Agents. Now that we all speak the same language, it’s time to get our hands dirty.
If you are building enterprise applications, you can't just rely on copying and pasting code from ChatGPT. You need to integrate these AI models directly into your existing C# architecture. But if you look at the .NET ecosystem today, there are a few different ways to do this.
Should you use Semantic Kernel? The new Microsoft.Extensions.AI? The raw OpenAI SDK? Let’s break down the toolbelt and look at a quick "Hello World" so you know exactly which tool to reach for.
1. The Standard Abstraction: Microsoft.Extensions.AI
Think of this as the ILogger or HttpClient of the AI world. Released recently by Microsoft, this library provides standard interfaces like IChatClient and IEmbeddingGenerator.
Why it’s great: It prevents vendor lock-in. You can write your application against the IChatClient interface. If you decide to switch from OpenAI to an open-source model running locally (like Ollama), you just change the dependency injection setup. Your core business logic doesn't change at all!
The "Hello World": Here is how incredibly simple it is to get an LLM talking in a modern .NET app:
using Microsoft.Extensions.AI;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
var builder = Host.CreateApplicationBuilder();
// Inject your chosen AI provider -- here a local Ollama model
// (requires the Microsoft.Extensions.AI.Ollama NuGet package)
builder.Services.AddChatClient(new OllamaChatClient(new Uri("http://localhost:11434"), "llama3"));
var app = builder.Build();
var chatClient = app.Services.GetRequiredService<IChatClient>();
// Ask the AI a question
var response = await chatClient.CompleteAsync("Explain microservices in one sentence.");
Console.WriteLine(response.Message.Text);
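To see that lock-in protection in practice, here is a sketch of what the provider swap might look like. This assumes the Microsoft.Extensions.AI.OpenAI companion package and an OPENAI_API_KEY environment variable; the exact extension-method name has shifted between preview versions, so treat the details as illustrative:

```csharp
using Microsoft.Extensions.AI;
using OpenAI;

// Before: local Ollama model
// builder.Services.AddChatClient(
//     new OllamaChatClient(new Uri("http://localhost:11434"), "llama3"));

// After: a hosted OpenAI model. Only this registration changes --
// everything consuming IChatClient stays exactly the same.
builder.Services.AddChatClient(
    new OpenAIClient(Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
        .AsChatClient("gpt-4o-mini"));
```

Because your business logic only ever sees IChatClient, this one-line change is the entire migration.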
Use this when: You just need to add basic GenAI text generation or chat features to an existing API, and you want clean, testable code.
2. The Heavyweight Orchestrator: Semantic Kernel
If Microsoft.Extensions.AI is the engine, Semantic Kernel (SK) is the entire vehicle. It is Microsoft’s flagship open-source framework for building complex AI Agents and RAG pipelines.
Why it’s great: It bridges the gap between the AI’s brain and your C# code. With SK, you can create “Plugins.” You describe your existing C# methods to the AI, and the LLM can autonomously decide to call your code to accomplish a task.
The “Hello World” (Agentic Style): Imagine you want the AI to check inventory levels. You just write a standard C# class and decorate it with attributes:
using Microsoft.SemanticKernel;
using System.ComponentModel;
public class InventoryPlugin
{
    [KernelFunction, Description("Gets the current stock level for a product")]
    public int GetStockLevel([Description("The product ID")] int productId)
    {
        // In a real app, this queries your EF Core database or an external API
        Console.WriteLine($"[System: AI called GetStockLevel for Product {productId}]");
        return 42;
    }
}
When you load this plugin into Semantic Kernel, you can ask the AI: “Do we have enough of product 101 in stock?” The AI will realize it doesn’t know, automatically trigger your GetStockLevel(101) C# method, read the result, and then reply to the user: “Yes, we have 42 units available.”
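Wiring that plugin into the kernel looks roughly like this. This is a sketch, not a definitive recipe: it assumes the Microsoft.SemanticKernel OpenAI connector and an OPENAI_API_KEY environment variable, and uses the newer FunctionChoiceBehavior API for automatic function calling:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    "gpt-4o-mini",
    Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
builder.Plugins.AddFromType<InventoryPlugin>();
var kernel = builder.Build();

// Allow the model to decide when to invoke our C# functions
var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var answer = await kernel.InvokePromptAsync(
    "Do we have enough of product 101 in stock?",
    new KernelArguments(settings));
Console.WriteLine(answer);
```

The key design choice is FunctionChoiceBehavior.Auto(): it hands the LLM the descriptions of your registered functions and lets it call them on its own, which is exactly the agentic loop described above.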
Use this when: You are building autonomous AI Agents, complex multi-step workflows, or need the AI to take action within your enterprise systems.
3. The Raw SDKs (e.g., the official OpenAI .NET Library)
Sometimes, you don’t want any framework magic in the middle. The official provider SDKs give you raw, unfiltered access to the API.
Use this when: You need a highly specific, cutting-edge feature of a specific model that hasn’t made its way into the abstractions yet. However, for most enterprise architecture, you’ll want to wrap this behind one of the tools above.
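For reference, a raw call with the official OpenAI .NET library looks something like the following. This is a sketch against the 2.x SDK; the model name and API-key handling are assumptions:

```csharp
using OpenAI.Chat;

ChatClient client = new(
    model: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

ChatCompletion completion = await client.CompleteChatAsync(
    "Explain microservices in one sentence.");

Console.WriteLine(completion.Content[0].Text);
```

Notice there is no abstraction here: you are coupled directly to OpenAI's types, which is exactly why wrapping this behind IChatClient is usually the better enterprise move.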
The Decision Matrix
When designing your next feature, keep this simple rule of thumb in mind:
Need simple text generation/chat? Start with Microsoft.Extensions.AI.
Need the AI to use tools, call your APIs, or run autonomous workflows? Reach for Semantic Kernel.
Now that we know the tools, we are ready to build something real. In the next post, we will look at how to hook these tools up to an actual database!
Happy coding! 🙂