Utilizing Generative AI with Semantic Kernel in .NET

Introduction

If you are a fan of Generative Artificial Intelligence and work in the .NET or Microsoft ecosystem, this article will help you get started with Generative AI integration in your project quickly and use it to its full potential. We will do this by utilizing the capabilities of Semantic Kernel, the Generative AI framework built by Microsoft and its community.

We need this framework because the world of software development is changing at a fast pace, and Generative AI technology is evolving rapidly as we find more use cases for it. We want to ensure that our business logic is not affected when we change our Large Language Model (LLM), and that it can be reused across different platforms if needed. The idea of Semantic Kernel is to work as a bridge between Generative AI and your platform, giving you the flexibility to swap Gen AI platforms easily. It is available in Python, Java, and .NET, and more languages may be supported in the future. The popular alternative, LangChain, is not available in .NET.

To summarize, it offers the following benefits:

  1. Rapid Generative AI application development.
  2. Loose coupling between Generative AI platform and business logic.
  3. A standard design for Generative AI solutions, improving extensibility and maintainability.
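To illustrate the loose coupling: your prompt and invocation code do not need to change when you switch providers; only the registration on the kernel builder changes. A minimal sketch (the model name, deployment name, endpoint, key, and the toggle are placeholders):

```csharp
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// Swap the provider by changing only this registration;
// everything invoked on the kernel afterwards stays the same.
bool useAzure = false; // placeholder toggle
if (useAzure)
    builder.AddAzureOpenAIChatCompletion("<deployment>", "https://<resource>.openai.azure.com", "<key>");
else
    builder.AddOpenAIChatCompletion("gpt-3.5-turbo", "<key>");

var kernel = builder.Build();
```

All downstream code (prompts, functions, plugins) runs against the kernel abstraction, not a specific LLM SDK.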

Getting Started

We need to install a NuGet package to work with the framework. From NuGet, download the Microsoft.SemanticKernel package. This is the package reference I will be using throughout.

<PackageReference Include="Microsoft.SemanticKernel" Version="1.0.1" />

I will demonstrate a simple use case: generate a story from a given situation and rate the content's sentiment as positive, negative, or neutral.

var builder = Kernel.CreateBuilder(); // Create builder instance

// Below is the list of direct integrations available
// builder.AddAzureOpenAIChatCompletion()
// builder.AddAzureOpenAITextEmbeddingGeneration();
// builder.AddAzureOpenAITextGeneration();
// builder.AddAzureOpenAITextToImage();
// builder.AddOpenAIChatCompletion();
// builder.AddOpenAITextEmbeddingGeneration();
// builder.AddOpenAITextGeneration();
// builder.AddOpenAITextToImage();

var kernel = builder.Build(); // Build the Kernel

Let us also download the Hugging Face connector package:

<PackageReference Include="Microsoft.SemanticKernel.Connectors.HuggingFace" Version="1.0.1-preview" />

This gives us two more options:

// builder.AddHuggingFaceTextGeneration();
// builder.AddHuggingFaceTextEmbeddingGeneration();

Writing first integration

Our first integration with OpenAI looks like this. In this approach, we create plain prompts as strings.

var builder = Kernel.CreateBuilder();

builder.Services.AddOpenAIChatCompletion("gpt-3.5-turbo", "<key>");

var kernel = builder.Build();

string situation = "I lost my mobile phone and need to go home";
string storyPrompt = $"Generate a story from a given situation. The situation is: {situation}";

var response = await kernel.InvokePromptAsync(storyPrompt);

string sentimentPrompt = $"Tell the sentiment as positive/negative/neutral from {response}";

var sentimentResponse = await kernel.InvokePromptAsync(sentimentPrompt);

Console.WriteLine($"{response}{Environment.NewLine}{sentimentResponse}");
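Note that InvokePromptAsync returns a FunctionResult, not a plain string. If you need the raw text, you can extract it as a typed value; a minimal sketch, assuming the response variable from the snippet above:

```csharp
// FunctionResult can be converted to a typed value.
string? storyText = response.GetValue<string>();
Console.WriteLine(storyText);
```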

Improving Integration Part 1

In this, we will convert our prompts to functions.

var builder = Kernel.CreateBuilder();

builder.Services.AddOpenAIChatCompletion("gpt-3.5-turbo", "<key>");

var kernel = builder.Build();

string situation = "I lost my mobile phone and need to go home";

var storyPrompt = kernel.CreateFunctionFromPrompt("Generate a story from the given situation. The situation is: {{$situation}}");

var response = await kernel.InvokeAsync(storyPrompt, new KernelArguments
{
    { "situation", situation }
});

var sentimentPrompt = kernel.CreateFunctionFromPrompt("Tell the sentiment as positive/negative/neutral from {{$response}}");

var sentimentResponse = await kernel.InvokeAsync(sentimentPrompt, new KernelArguments
{
    { "response", response }
});

Console.WriteLine($"{response}{Environment.NewLine}{sentimentResponse}");
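One advantage of prompt functions is reuse: the same function can be invoked repeatedly with different arguments. A small sketch, assuming the storyPrompt function and kernel from the snippet above (the sample situations are made up):

```csharp
// Reuse the same prompt function for several inputs.
foreach (var s in new[] { "I missed my flight", "I found a stray puppy" })
{
    var story = await kernel.InvokeAsync(storyPrompt, new KernelArguments { { "situation", s } });
    Console.WriteLine(story);
}
```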

Improving Integration Part 2

Move prompts to a file

Create a folder structure like this inside the console app:

Console App
└── LLMPrompts
    ├── sentiment
    │   ├── config.json
    │   └── skprompt.txt
    └── story
        ├── config.json
        └── skprompt.txt

Below are the contents of each file in LLMPrompts.

LLMPrompts\sentiment\config.json

{
  "schema": 1,
  "type": "completion",
  "description": "Rates the sentiment of the given text",
  "execution_settings": {
    "default": {
      "max_tokens": 1000,
      "temperature": 0
    }
  },
  "input_variables": [
    {
      "name": "response",
      "description": "The input from the previous response whose sentiment will be analyzed",
      "required": true
    }
  ]
}

LLMPrompts\sentiment\skprompt.txt

Tell the sentiment as positive/negative/neutral from {{$response}}.

LLMPrompts\story\config.json

{
  "schema": 1,
  "type": "completion",
  "description": "Creates a story from the user's situation",
  "execution_settings": {
    "default": {
      "max_tokens": 1000,
      "temperature": 0
    }
  },
  "input_variables": [
    {
      "name": "situation",
      "description": "The user's request.",
      "required": true
    }
  ]
}

LLMPrompts\story\skprompt.txt

Generate a story from the given situation. The situation is : {{$situation}}
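Note: CreatePluginFromPromptDirectory loads these files from a path relative to the working directory, so the LLMPrompts folder must be copied to the build output. One way to do this (an assumption about an SDK-style project layout) is adding this to the .csproj:

```xml
<ItemGroup>
  <None Update="LLMPrompts\**\*" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>
```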

Write the code below.

var builder = Kernel.CreateBuilder();
builder.Services.AddOpenAIChatCompletion("gpt-3.5-turbo", "<key>");
var kernel = builder.Build();

var prompts = kernel.CreatePluginFromPromptDirectory("LLMPrompts");

string situation = "I lost my mobile phone and need to go home";

var response = await kernel.InvokeAsync(prompts["story"], new()
{
    { "situation", situation }
});

var sentimentResponse = await kernel.InvokeAsync(prompts["sentiment"], new()
{
    { "response", response }
});

Console.WriteLine($"{response}{Environment.NewLine}{sentimentResponse}");

Now let’s add a skill (a native plugin function) to create JSON from the response.

using System.ComponentModel;
using System.Text.Json;
using Microsoft.SemanticKernel;

public class LLMSKillsPlugin
{
    // Serializes the input, response, and sentiment into a JSON string
    [KernelFunction, Description("Convert all params to JSON string that can be used by application")]
    public static string CreateJsonString(string input, string response, string sentiment)
    {
        var jsonObj = new
        {
            input,
            response,
            sentiment
        };
        return JsonSerializer.Serialize(jsonObj);
    }
}

Now, our final code looks like this.

var builder = Kernel.CreateBuilder();
builder.Services.AddOpenAIChatCompletion("gpt-3.5-turbo", "<key>");
builder.Plugins.AddFromType<LLMSKillsPlugin>();
var kernel = builder.Build();
var prompts = kernel.CreatePluginFromPromptDirectory("LLMPrompts");
string situation = "I lost my mobile phone and need to go home";

// Commented out to test the skill
/*
var response = await kernel.InvokeAsync(prompts["story"], new() {
    { "situation", situation }
});

var sentimentResponse = await kernel.InvokeAsync(prompts["sentiment"], new() {
    { "response", response }
});
*/

var jsonObj = await kernel.InvokeAsync<string>("LLMSKillsPlugin", "CreateJsonString", new() {
    { "input", situation },
    { "response", "This is mock response" },
    { "sentiment", "Neutral" }
});

Console.WriteLine(jsonObj);

Output

{"input":"I lost my mobile phone and need to go home","response":"This is mock response","sentiment":"Neutral"}

That’s it! I hope you enjoyed reading it. You can do a lot more with Semantic Kernel. To learn more, please visit - https://learn.microsoft.com/en-us/semantic-kernel/overview/

Please let me know how you feel about this framework in the comments. The source code is uploaded if you want to explore more.
