Applying Few-Shot Prompting with Azure OpenAI Prompt Engineering in Python

In the previous article, we discussed the concepts of prompts and zero-shot prompting. In this article, our focus shifts to few-shot prompting.

Few-Shot Prompting

Few-shot prompting enables us to include exemplars within prompts, guiding the model to achieve improved performance.

Example

Sentiment analysis is a classic example of few-shot prompting.

Positive Sentiment: Text: "I love this new phone. It's amazing!"

Negative Sentiment: Text: "This restaurant had terrible food and awful service."
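The exemplars above can be assembled into a single few-shot prompt programmatically. The sketch below is a minimal, hypothetical helper (the function name and prompt layout are illustrative, not part of any SDK):

```python
# Labeled exemplars, mirroring the article's examples.
EXAMPLES = [
    ("I love this new phone. It's amazing!", "positive"),
    ("This restaurant had terrible food and awful service.", "negative"),
]

def build_few_shot_prompt(text_to_classify: str) -> str:
    """Concatenate the labeled exemplars, then append the new text to classify."""
    lines = ["Find the sentiment of the given text:"]
    for example_text, label in EXAMPLES:
        lines.append(f'Text: "{example_text}"')
        lines.append(f"Sentiment: {label}")
    # The trailing "Sentiment:" invites the model to complete the label.
    lines.append(f'Text: "{text_to_classify}"')
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("Celebrating a well-deserved victory!!!")
print(prompt)
```

The resulting string can then be sent to the model as a single prompt; the exact layout (labels, quoting, separators) is a design choice, and other conventions work equally well.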

Advantages of Few-Shot Prompting

  1. Data Efficiency: Few-shot prompting allows models to make accurate predictions or generate content with a minimal amount of labeled training data, reducing the need for vast datasets.
  2. Flexibility: It enables models to generalize across various tasks or domains, making them adaptable to a wide range of applications.
  3. Customization: Few-shot prompts can be tailored to specific tasks, providing guidance and context to generate desired outputs.
  4. Ease of Use: It simplifies the interaction with AI models, making them accessible to users who may not have technical expertise.

Disadvantages of Few-Shot Prompting

  1. Limited Context: Few-shot prompts may not provide enough context for complex tasks, leading to potential inaccuracies in responses.
  2. Fine-tuning Challenges: Training models for few-shot learning can be complex and time-consuming, especially for custom or domain-specific tasks.
  3. Dependency on Prompting: The model's performance heavily relies on the quality and relevance of the provided prompts, which may require human input.
  4. Scalability: Implementing few-shot learning for multiple tasks or languages can be resource-intensive and may not always scale effectively.

Overall, while few-shot prompting offers data-efficient and flexible solutions, it requires careful design and consideration of its limitations when applying it to various AI tasks.

Difference between Zero-Shot and Few-Shot prompting

Zero-Shot Prompting

Zero Information: In zero-shot prompting, the model is given no task-specific examples; it receives only the task instruction itself.

Generalization: The model is expected to generalize from its pre-existing knowledge and generate responses without any task-specific training data.

Few-Shot Prompting

Limited Information: In few-shot prompting, the model is given a small amount of task-specific information or examples as part of the prompt.

Guidance: This information provides guidance and context for the model to understand and perform a particular task.
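As a sketch, the difference can be expressed as two Chat Completions message lists for the same task. The few-shot version adds the exemplars as explicit user/assistant turns, which is one common convention (the article's own code instead packs them into the system message; both are valid):

```python
text = "Celebrating a well-deserved victory!!!"
instruction = ("Classify the sentiment of the user's text "
               "as positive, negative, or neutral.")

# Zero-shot: only the task instruction and the text to classify.
zero_shot_messages = [
    {"role": "system", "content": instruction},
    {"role": "user", "content": text},
]

# Few-shot: the same instruction plus two labeled exemplars
# presented as prior conversation turns.
few_shot_messages = [
    {"role": "system", "content": instruction},
    {"role": "user", "content": "I love this new phone. It's amazing!"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "This restaurant had terrible food and awful service."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": text},
]
```

Either list can be passed as the `messages` argument of a chat completion call; the few-shot version simply gives the model worked examples to imitate.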

Implementing Few-Shot prompting in Python

Objective: Perform sentiment analysis on the provided text using the following brief example as a reference.

Example:

Positive Sentiment: Text: "I love this new phone. It's amazing!"

Negative Sentiment: Text: "This restaurant had terrible food and awful service."

Prompt: Celebrating a well-deserved victory!!!

Expected Response: Positive

Let's start writing the program.

system_role = """Find text sentiment in given text:
: I love this new phone. It's amazing!!!
positive
: I hate this new phone. It's the worst.
negative"""

user_message = """I love this new phone. It's amazing!!!"""

System_Role: The system_role string contains the few-shot portion of the prompt. It comprises two sample texts, one conveying a positive sentiment and one a negative sentiment, each followed by its label. These exemplars guide the model in classifying the sentiment of the text that follows.

User_Message: The user_message variable holds the text to be classified. In this instance, the text is "I love this new phone. It's amazing!!!" and carries a positive sentiment. The model determines whether this text is positive, negative, or neutral, drawing on the exemplars defined in system_role.

Output: Upon executing this sample, you should see output such as "positive", matching the lowercase labels used in the exemplars.
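Because the model's reply is free-form text, it may come back as "Positive", "positive", or even a full sentence. A small, hypothetical helper (not part of the OpenAI SDK) can normalize the reply onto the expected labels:

```python
def parse_sentiment(reply: str) -> str:
    """Map a free-form model reply onto one of the expected labels."""
    lowered = reply.strip().lower()
    # Check each known label; fall back to "unknown" if none is found.
    for label in ("positive", "negative", "neutral"):
        if label in lowered:
            return label
    return "unknown"

print(parse_sentiment("Positive"))  # -> positive
```

This keeps downstream logic independent of the model's exact phrasing; for stricter control you could also instruct the model in the prompt to answer with exactly one word.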

Full Source Code

import os
import openai

from dotenv import load_dotenv

# Load the .env file
load_dotenv()

openai.api_key = os.getenv("AZURE_OPEN_KEY")
openai.api_base = os.getenv("AZURE_END_POINT") 
openai.api_type = 'azure'
openai.api_version = '2023-07-01-preview' 

deployment_name = os.getenv("DEPLOYMENT_NAME")

system_role = """Find text sentiment in given text:
: I love this new phone. It's amazing!!!
positive
: I hate this new phone. It's the worst.
negative"""

user_message = """I love this new phone. It's amazing!!!"""


# Send a chat completion call to classify the sentiment
response = openai.ChatCompletion.create(
    engine=deployment_name,
    messages=[
        {"role": "system", "content": system_role},
        {"role": "user", "content": user_message},
    ],
)

print(response['choices'][0]['message']['content'])

In the next article, we will explore additional prompt engineering techniques.

