
Build a Product Info Chatbot Using OpenAI and RAG Model

Introduction

In today's fast-paced digital commerce environment, consumers demand fast, accurate, and contextually relevant information to make purchasing decisions. Static FAQs and traditional rule-based chatbots no longer meet these expectations. As businesses strive to enhance user engagement and streamline customer support, AI-powered chatbots have emerged as a game changer.

One powerful approach that stands out is the Retrieval-Augmented Generation (RAG) model. By combining the strengths of information retrieval and natural language generation, RAG enables chatbots to deliver precise, real-time answers based on dynamic data sources. In this post, we will explore how to build a chatbot that uses OpenAI's GPT models and the RAG framework to provide detailed product information. From understanding the fundamentals of RAG to building a working solution with UI integration, we'll walk you through the entire process.

What is RAG?

Retrieval-augmented generation (RAG) is an AI architecture that augments the capabilities of language models with an external knowledge source. Instead of generating answers based solely on pre-trained data, RAG first retrieves relevant information from a corpus or external API and then uses a generative model to craft a natural language response.

How does RAG work?

  1. Query Input: The user submits a question or prompt.
  2. Retriever Component: This component searches a structured or unstructured data source (like a product database or API) to find relevant chunks of information.
  3. Generator Component: The retrieved data, along with the original user query, is passed to a generative model (e.g., OpenAI GPT), which composes a human-like response.
  4. Output: The final response is shown to the user.
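The four steps above can be sketched end to end. The following is a minimal illustration, not the article's implementation: a toy in-memory retriever using keyword overlap stands in for a real product database, and the generator step is shown only as prompt construction, where a real system would send the resulting messages to a model such as gpt-3.5-turbo.

```javascript
// Toy corpus: product records standing in for a real database.
const corpus = [
  { id: 1, text: "ABC Laptop: 16GB RAM, 512GB SSD, 14-inch display." },
  { id: 2, text: "XYZ Backpack: water-resistant fabric, 20L capacity." },
];

// Steps 1-2. Retriever: rank documents by naive keyword overlap with the query.
function retrieve(query, k = 1) {
  const words = query.toLowerCase().split(/\W+/).filter(Boolean);
  return corpus
    .map((doc) => ({
      doc,
      score: words.filter((w) => doc.text.toLowerCase().includes(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((r) => r.doc);
}

// Steps 3-4. Generator input: combine retrieved context with the user query.
// A real system would pass these messages to an OpenAI chat completion call.
function buildPrompt(query) {
  const context = retrieve(query).map((d) => d.text).join("\n");
  return [
    { role: "system", content: `Answer using this product data:\n${context}` },
    { role: "user", content: query },
  ];
}

const messages = buildPrompt("Does the laptop have SSD storage?");
console.log(messages[0].content); // includes the ABC Laptop record
```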

Benefits of RAG

  • Provides context-aware, dynamic responses.
  • Reduces reliance on the model’s training data alone.
  • Improves accuracy, especially for domain-specific questions.
  • Easily integrates with real-time systems or proprietary datasets.

Leveraging RAG with OpenAI

OpenAI’s GPT models are ideal for implementing the generative part of a RAG pipeline. By supplying contextual data as part of the input prompt, we can guide the model toward more relevant answers.

Ways to Integrate External Data

  • Prompt Injection: Embed retrieved data directly into the prompt.
  • Function Calling: Let the model invoke functions that fetch real-time information.
  • Tools or Plugins: Use OpenAI tools like browsing, code interpreter, or custom functions.
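To make the second option concrete, here is a sketch of a function-calling tool definition; the name get_product_info and its parameter schema are illustrative assumptions, not part of the article's code. The model can request this tool when it needs live data, and your backend then runs the real lookup.

```javascript
// Sketch of an OpenAI function-calling tool definition. The schema tells
// the model it may request live product data instead of relying on the prompt.
const tools = [
  {
    type: "function",
    function: {
      name: "get_product_info",
      description: "Fetch current details for a product by its ID",
      parameters: {
        type: "object",
        properties: {
          productId: { type: "string", description: "The product identifier" },
        },
        required: ["productId"],
      },
    },
  },
];

// Passed alongside the messages in a chat completion request, e.g.:
// client.chat.completions.create({ model: "gpt-3.5-turbo", messages, tools })
```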

Prompt Example with Product Context

{
  "messages": [
    {
      "role": "system",
      "content": "You are a product expert. Use the following product data to answer questions."
    },
    {
      "role": "user",
      "content": "Product: ABC Laptop, RAM: 16GB, Storage: 512GB SSD.\nDoes this laptop have SSD storage?"
    }
  ]
}

This example shows how we include specific product data with the user’s question to ensure an accurate response.

How to Get an OpenAI API Key

To use OpenAI’s models programmatically, you need an API key. Here’s how to get one.

  • Go to https://platform.openai.com.
  • Sign in or create an account.
  • Navigate to the API Keys tab in your profile.
  • Click Create new secret key.
  • Copy the generated key and store it securely in a config or environment file; it is shown only once.
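For example, in a Next.js project the key typically lives in a .env.local file (kept out of version control) and is read server-side via process.env.OPENAI_API_KEY:

```shell
# .env.local (keep this file out of version control)
OPENAI_API_KEY=your-secret-key-here
```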

Choosing the Right OpenAI Model

Selecting the right AI model is crucial for balancing performance, cost, and response quality.

| Model | Pros | Use Case |
| --- | --- | --- |
| gpt-3.5-turbo | Fast, affordable, and widely supported | General Q&A, e-commerce chatbots |
| gpt-4 | More accurate, better with complex logic | Detailed queries, custom flows |

For most e-commerce applications, gpt-3.5-turbo provides a great balance. However, if your chatbot needs to handle intricate queries or perform reasoning tasks, GPT-4 is worth the investment.

Building the Product Chatbot

Let’s now dive into building the chatbot step by step. We'll use a fake product API for simulation and OpenAI’s GPT for generating responses.

Step 1. Get a Fake Product Details API

We’ll use the DummyJSON API (https://dummyjson.com), a public REST API that serves dummy product data for prototyping.

Example
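For instance, a single product can be fetched from DummyJSON like this (a sketch; the field names in the comment reflect that API's typical response):

```javascript
// Fetch one dummy product for prototyping.
async function getProduct(id) {
  const res = await fetch(`https://dummyjson.com/products/${id}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json(); // e.g. { id, title, description, price, ... }
}

// Usage (inside an async context):
// const product = await getProduct(1);
// console.log(product.title);
```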

Step 2. Create a Product List & Product Details Page

Using API calls, create a dummy product list and details page. The sample code is included in the uploaded Zip file.

Step 3. Create the ChatBot component and integrate OpenAI

Using Next.js or any other framework, create a ChatBot interface and connect it to OpenAI using your API key. The individual product's JSON will be used as the data source at the start of each chat.

Build logic to:

  • Suggest predefined questions (e.g., "What are the key features?" or "Is there a discount available?").
  • Accept the user's question.
  • Combine it with the relevant product data.
  • Send it to OpenAI for a response.
  • Return the generated answer.

Predefined questions to display:

const predefinedQuestions = [
  "What are the key features?",
  "Is there a discount available?",
  "Can you tell me about the product warranty?",
  "What's the shipping information?",
];

Initialize the conversation for every new chat session:

// Initialize the default messages.
// Set the product information as the data source: title, description,
// and the product JSON as "Data".
setMessages([
  {
    role: "system",
    content: `You are a product assistant. Here is the product information:\n
**Title:** ${productTitle}\n
**Description:** ${productDescription}\n
**Data:** ${productMetaData}`,
  },
  {
    role: "assistant",
    content:
      "Hi! I'm your product assistant. Ask me anything about this product, or choose from the questions below!",
  },
]);

Call the backend API route (/api/chat) to get the answer to the user's question:

try {
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ messages: newMessages }),
  });

  const data = await response.json();

  // success response from OpenAI API
  if (data.success) {
    setMessages([...newMessages, { role: "assistant", content: "" }]);
    typeAssistantMessage(data.message, newMessages);
  }

  // error response from OpenAI API 
  else {
    setMessages([
      ...newMessages,
      { role: "assistant", content: "Error fetching response!" },
    ]);
    setIsLoading(false);
  }
} catch (error) {
  // for any other errors (network, server, etc.)
  console.error("Error calling OpenAI API:", error);
  setMessages([
    ...newMessages,
    { role: "assistant", content: "Failed to get response." },
  ]);
  setIsLoading(false);
}

This front end calls a backend /api/chat endpoint, which handles the RAG logic.
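The uploaded Zip contains the actual backend. As a rough sketch only, a Next.js API route implementing it could look like the following, calling OpenAI's chat completions endpoint directly and returning the success/message shape that the front-end code above expects:

```javascript
// pages/api/chat.js: a minimal sketch of the backend handler.
// Assumes OPENAI_API_KEY is set in the environment; in the real file this
// function would be the default export (export default handler).
async function handler(req, res) {
  if (req.method !== "POST") {
    return res.status(405).json({ success: false, message: "Method not allowed" });
  }
  try {
    const { messages } = req.body;
    // Forward the product-context messages to OpenAI's chat completions endpoint.
    const apiRes = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({ model: "gpt-3.5-turbo", messages }),
    });
    const data = await apiRes.json();
    if (!apiRes.ok) throw new Error(data.error?.message || "OpenAI request failed");
    // Response shape matches what the front end expects: { success, message }.
    res.status(200).json({ success: true, message: data.choices[0].message.content });
  } catch (err) {
    res.status(500).json({ success: false, message: err.message });
  }
}
```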

The full working code is available in the uploaded Zip file, which includes the following:

  • Product list page with an autocomplete search box.
  • Product details page with chat head.
  • ChatBot component.
    • Initiate chat with product data as context.
    • Show reference questions to ask.
    • Save chat history.
    • Clear chat history.

Code Repository

Use the GitHub repository below to download the full working code.

Example Use Case

Imagine a user visiting your site and asking:

Is this backpack waterproof?

Your backend fetches the product data from the API, combines it with the question, and sends it to OpenAI. The GPT model responds with:

Based on the product description, it’s ideal for daily use and nature walks, but it does not mention being waterproof.

This enhances user trust and improves the decision-making process.

Conclusion

By integrating OpenAI's GPT models with the RAG architecture, you can build a powerful product information chatbot capable of delivering contextual, real-time, and user-friendly responses. Whether you’re running a small store or a large-scale e-commerce platform, this solution can improve customer satisfaction, reduce support tickets, and boost conversions.

Happy building!