Integrating OpenAI Chat Completion in a .NET 8 Web API

In today's article, we will learn how to integrate the OpenAI ChatGPT API in .NET Core.

To consume this API, we first need to obtain an API key from OpenAI using the URL below: https://platform.openai.com/api-keys.

API Keys

There are two ways to consume the OpenAI API.

  1. Using API Endpoint (HttpClient Call)
  2. Using the OpenAI NuGet package

In this article, we will discuss the first approach, i.e., the HttpClient call.

First of all, we will go through the API documentation at the URL mentioned below: https://platform.openai.com/docs/api-reference/chat/create

Create Chat Completion

The above endpoint is a POST request that requires two mandatory fields: messages and model.

  • Messages: This is a list of messages, each with a role such as "user", "system", or "assistant", and a content field that holds our conversation text.
  • Model: The OpenAI API is powered by a diverse set of models with different capabilities. We can choose a model based on our requirements; for more information, please visit https://platform.openai.com/docs/models/overview. In our example, we are going to use GPT-3.5 (gpt-3.5-turbo).
  • MaxTokens: The max_tokens property specifies the maximum number of tokens (roughly, word fragments) to generate in the completion. A sample request body using these fields is shown below.
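For reference, here is the shape of the JSON request body we will be sending to the endpoint (the question text is only an illustrative placeholder):

{
  "model": "gpt-3.5-turbo",
  "max_tokens": 1000,
  "messages": [
    {
      "role": "user",
      "content": "What is dependency injection in .NET?"
    }
  ]
}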

Now let's jump into the code.

I have created a .NET 8 Web API project.

1. Add the OpenAIKey in the appsettings.json.

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "OpenAIKey": "your-api-key"
}

2. Create the ChatCompletionService and IChatCompletionService to consume the OpenAI API.

using System.Text;
using System.Text.Json;

namespace OpenAI_ChatGPT.Services
{
    public class ChatCompletionService(IConfiguration configuration, IHttpClientFactory httpClientFactory) 
        : IChatCompletionService
    {

        public async Task<string> GetChatCompletionAsync(string question)
        {
            var httpClient = httpClientFactory.CreateClient("ChtpGPT");

            ChatCompletionRequest completionRequest = new()
            {
                Model = "gpt-3.5-turbo",
                MaxTokens = 1000,
                Messages = [
                                new Message()
                                {
                                    Role = "user",
                                    Content = question,

                                }
                            ]
            };


            using var httpReq = new HttpRequestMessage(HttpMethod.Post, "https://api.openai.com/v1/chat/completions");
            httpReq.Headers.Add("Authorization", $"Bearer {configuration["OpenAIKey"]}");

            string requestString = JsonSerializer.Serialize(completionRequest);
            httpReq.Content = new StringContent(requestString, Encoding.UTF8, "application/json");

            using HttpResponseMessage httpResponse = await httpClient.SendAsync(httpReq);
            httpResponse.EnsureSuccessStatusCode(); // Throws if the API returns a non-success status code

            var responseString = await httpResponse.Content.ReadAsStringAsync();
            var completionResponse = JsonSerializer.Deserialize<ChatCompletionResponse>(responseString);

            // Return the assistant's reply from the first choice (empty string if nothing came back)
            return completionResponse?.Choices?.FirstOrDefault()?.Message?.Content ?? string.Empty;

        }
    }
}

IChatCompletionService.cs

namespace OpenAI_ChatGPT
{
    public interface IChatCompletionService
    {
        Task<string> GetChatCompletionAsync(string question);
    }
}

3. Create the models for ChatCompletionRequest and ChatCompletionResponse.

using System.Text.Json.Serialization;

namespace OpenAI_ChatGPT
{
    public class ChatCompletionRequest
    {
        [JsonPropertyName("model")]
        public string Model { get; set; }
        [JsonPropertyName("messages")]
        public List<Message> Messages { get; set; }
        [JsonPropertyName("max_tokens")]
        public int MaxTokens { get; set; }

    }

    public class Choice
    {
        [JsonPropertyName("index")]
        public int Index { get; set; }

        [JsonPropertyName("message")]
        public Message Message { get; set; }

        [JsonPropertyName("logprobs")]
        public object Logprobs { get; set; }

        [JsonPropertyName("finish_reason")]
        public string FinishReason { get; set; }
    }

    public class Message
    {
        [JsonPropertyName("role")]
        public string Role { get; set; }

        [JsonPropertyName("content")]
        public string Content { get; set; }
    }

    public class ChatCompletionResponse
    {
        [JsonPropertyName("id")]
        public string Id { get; set; }

        [JsonPropertyName("object")]
        public string Object { get; set; }

        [JsonPropertyName("created")]
        public int Created { get; set; }

        [JsonPropertyName("model")]
        public string Model { get; set; }

        [JsonPropertyName("choices")]
        public List<Choice> Choices { get; set; }

        [JsonPropertyName("usage")]
        public Usage Usage { get; set; }

        [JsonPropertyName("system_fingerprint")]
        public object SystemFingerprint { get; set; }
    }

    public class Usage
    {
        [JsonPropertyName("prompt_tokens")]
        public int PromptTokens { get; set; }

        [JsonPropertyName("completion_tokens")]
        public int CompletionTokens { get; set; }

        [JsonPropertyName("total_tokens")]
        public int TotalTokens { get; set; }
    }
}
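For clarity, here is an abridged example of the kind of JSON the chat completion endpoint returns, which the response models above map to (all values are illustrative):

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Dependency injection is a technique..."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 25,
    "total_tokens": 37
  },
  "system_fingerprint": null
}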

4. Register the dependencies in Program.cs.

builder.Services.AddHttpClient(); // Adds IHttpClientFactory and related services to the service collection

builder.Services.AddScoped<IChatCompletionService, ChatCompletionService>();
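Optionally, since ChatCompletionService requests a client named "ChatGPT", you can also configure a named client so HTTP-related settings live in one place. This is just a sketch, not something the code above requires:

// Optional: configure the named client used by ChatCompletionService.
// The service already sends an absolute URL, so BaseAddress is not required;
// it only becomes useful if you later switch to relative paths such as "v1/chat/completions".
builder.Services.AddHttpClient("ChatGPT", client =>
{
    client.BaseAddress = new Uri("https://api.openai.com/");
});

Because CreateClient("ChatGPT") returns a perfectly usable client even when no named client is registered, the plain AddHttpClient() registration shown above is enough for this article's example.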

5. Now create the controller and inject IChatCompletionService to consume the API.

using Microsoft.AspNetCore.Mvc;

namespace OpenAI_ChatGPT.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class ChatCompletionController(IChatCompletionService chatCompletionService) : ControllerBase
    {

        [HttpGet("answer")]
        public async Task<IActionResult> Get(string question)
        {
            var response = await chatCompletionService.GetChatCompletionAsync(question);
            return Ok(response);
        }
    }
}

We are done with all our code changes. Now let's run the application and ask some interesting questions.
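For example, a GET request like the one below (the port and question are just placeholders from a local run) returns the model's answer as plain text:

GET https://localhost:<port>/ChatCompletion/answer?question=What is dependency injection?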

ChatGPT

Awesome! Here we can see that we are getting a response based on our question.

All the code used in this article is available in the GitHub repository: https://github.com/rahulsdotnet/OpenAI_ChatGPT

I hope you enjoyed this article. Happy coding!