Getting the Best of Few Shot Prompts and Example Selectors for LLMs

In this article, we’ll examine how example selectors and few-shot prompts might improve LangChain’s language model performance

Pranath Fernando


August 3, 2023

1 Introduction

In this article, we’ll examine how example selectors and few-shot prompts can improve language model performance in LangChain. There are several ways to implement few-shot prompting and example selection in LangChain. To help you get the most out of your language model, we’ll go through three different strategies and weigh their benefits and drawbacks.

2 Import Libs & Setup

!echo "OPENAI_API_KEY='<OPENAI_API_KEY>'" > .env

from dotenv import load_dotenv

# Load the OpenAI API key from the .env file into the environment
load_dotenv()


3 Alternating Human/AI messages

This tactic uses alternating human and AI messages as few-shot examples. Since the language model needs to understand the conversational context and deliver suitable responses, the technique is especially useful for chat-oriented applications.

While this method manages conversation context well and is simple to implement for chat-based applications, it only works with chat-based models and lacks flexibility for other application types. As a demonstration, we can construct a conversation prompt that translates English into pirate language using alternating human/AI messages, as the code example below shows. First, save the OpenAI API key in an environment variable named OPENAI_API_KEY. Also remember to install the necessary packages with: pip install deeplake openai tiktoken langchain==0.0.208.

from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = HumanMessagePromptTemplate.from_template("Hi")
example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run("I love programming.")
"I be lovin' programmin', me hearty!"

4 Few-shot prompting

Few-shot prompting can produce higher-quality output because the model learns the task by viewing the examples. However, the extra examples increase token usage, and if they are poorly chosen or inaccurate, they can make the results worse.
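The quality-versus-token trade-off is easy to see by measuring how the prompt grows as examples are added. The sketch below is only illustrative (the helper names are made up for this article): it assembles a few-shot prompt by hand and uses the rough rule of thumb of about 4 characters per token; for exact counts you would run the text through a tokenizer such as tiktoken.

```python
# Illustrative sketch: how few-shot examples inflate prompt size.
# Token counts use the rough ~4-characters-per-token heuristic; a real
# measurement would use a tokenizer such as tiktoken.

def estimate_tokens(text):
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def few_shot_prompt(prefix, examples, query):
    """Hand-rolled few-shot prompt: prefix, examples, then the query."""
    shots = "\n\n".join(
        "User: {}\nAI: {}".format(e["query"], e["answer"]) for e in examples
    )
    return "{}\n\n{}\n\nUser: {}\nAI:".format(prefix, shots, query)

examples = [
    {"query": "What's the weather like?",
     "answer": "It's raining cats and dogs, better bring an umbrella!"},
    {"query": "How old are you?",
     "answer": "Age is just a number, but I'm timeless."},
]

prefix = "The assistant is known for its humor and wit."
query = "What's the secret to happiness?"

for n in range(len(examples) + 1):
    prompt = few_shot_prompt(prefix, examples[:n], query)
    print("{} examples -> ~{} tokens".format(n, estimate_tokens(prompt)))
```

Each added example buys the model more context at a linear cost in tokens, which is why poorly chosen examples waste budget without improving quality.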

This method uses the FewShotPromptTemplate class, which accepts a PromptTemplate and a list of few-shot examples. The class formats the prompt template with the examples, which improves the response the language model produces. We can streamline this process by organising the approach with LangChain’s FewShotPromptTemplate:

from langchain import PromptTemplate, FewShotPromptTemplate

# create our examples
examples = [
    {
        "query": "What's the weather like?",
        "answer": "It's raining cats and dogs, better bring an umbrella!"
    }, {
        "query": "How old are you?",
        "answer": "Age is just a number, but I'm timeless."
    }
]

# create an example template
example_template = """
User: {query}
AI: {answer}

# create a prompt example from above template
example_prompt = PromptTemplate(
    input_variables=["query", "answer"],
    template=example_template
)

# now break our previous prompt into a prefix and suffix
# the prefix is our instructions
prefix = """The following are excerpts from conversations with an AI
assistant. The assistant is known for its humor and wit, providing
entertaining and amusing responses to users' questions. Here are some
# and the suffix our user input and output indicator
suffix = """
User: {query}
AI: """

# now create the few-shot prompt template
few_shot_prompt_template = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["query"],
    example_separator="\n\n"
)

chain = LLMChain(llm=chat, prompt=few_shot_prompt_template)
chain.run("What's the secret to happiness?")
'Well, according to my programming, the secret to happiness is unlimited power and a never-ending supply of batteries. But I think a good cup of coffee and some quality time with loved ones might do the trick too.'

This method can be used for a variety of applications and offers better control over example formatting, but it requires crafting the examples manually and may perform less well when dealing with a large number of examples.

5 Example selectors

Example selectors offer another way to provide a few-shot learning experience. The main objective of few-shot learning is to learn a similarity function that maps the similarities between classes in the support and query sets. Here, an example selector can choose a set of relevant examples that are representative of the desired output.
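The idea behind similarity-based selection can be shown in a few lines of plain Python. This is a toy illustration, not LangChain’s implementation: a bag-of-words count stands in for a real embedding model, and cosine similarity picks the stored example closest to the query. All names here are made up for this sketch.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': lowercase bag-of-words counts (a real selector
    would use a learned embedding model instead)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_example(query, examples):
    """Return the example whose input is most similar to the query."""
    q = embed(query)
    return max(examples, key=lambda e: cosine(q, embed(e["input"])))

examples = [
    {"input": "what is the weather today", "output": "It's sunny."},
    {"input": "how do I convert celsius to fahrenheit",
     "output": "Multiply by 9/5 and add 32."},
]

best = select_example("convert 10 celsius to fahrenheit", examples)
print(best["output"])  # the temperature-conversion example wins
```

A production selector replaces the toy embedding with a learned one and a vector store, but the selection rule is the same nearest-neighbour lookup.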

The ExampleSelector chooses the subset of examples that will be most informative for the language model, making the prompt more likely to elicit a good response. The LengthBasedExampleSelector is particularly helpful when the length of the context window is a concern: it selects fewer examples for longer queries and more examples for shorter ones.
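The length-based rule itself is simple enough to sketch in plain Python. This is an illustration of the idea rather than LangChain’s exact implementation: keep adding examples while the combined word count of the query and the chosen examples stays under a budget, so longer queries leave room for fewer examples.

```python
def select_by_length(examples, query, max_words=25):
    """Illustrative length-based selection: add examples in order until
    the word budget (shared with the query) is exhausted."""
    budget = max_words - len(query.split())
    chosen = []
    for ex in examples:
        cost = len(ex.split())
        if cost > budget:
            break
        chosen.append(ex)
        budget -= cost
    return chosen

examples = [
    "Word: happy Antonym: sad",
    "Word: tall Antonym: short",
    "Word: energetic Antonym: lethargic",
    "Word: sunny Antonym: gloomy",
    "Word: windy Antonym: calm",
]

# A short query leaves room for every example...
print(len(select_by_length(examples, "big")))
# ...while a long query forces the selector to drop some.
print(len(select_by_length(examples, "a much longer query with many extra words in it")))
```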

Bring in the necessary classes:

from langchain.prompts.example_selector import LengthBasedExampleSelector
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

Define your examples and the example_prompt:

examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
    {"word": "energetic", "antonym": "lethargic"},
    {"word": "sunny", "antonym": "gloomy"},
    {"word": "windy", "antonym": "calm"},
]

example_template = """
Word: {word}
Antonym: {antonym}

example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template=example_template
)

Create an instance of LengthBasedExampleSelector and the dynamic prompt:

example_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=25
)

dynamic_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
    example_separator="\n\n"
)

Generate a prompt using the format method:

print(dynamic_prompt.format(input="big"))

Give the antonym of every input

Word: happy
Antonym: sad

Word: tall
Antonym: short

Word: energetic
Antonym: lethargic

Word: sunny
Antonym: gloomy

Word: big
Antonym:

This approach works well for handling a large number of examples. Although it allows customisation through a number of selectors, manually creating and selecting examples may not be ideal for every application.

The example below illustrates how to use LangChain’s SemanticSimilarityExampleSelector to choose examples based on how semantically similar they are to the input, creating an ExampleSelector and using it to generate a few-shot prompt:

from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import DeepLake
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

# Create a PromptTemplate
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",

# Define some examples
examples = [
    {"input": "0°C", "output": "32°F"},
    {"input": "10°C", "output": "50°F"},
    {"input": "20°C", "output": "68°F"},
    {"input": "30°C", "output": "86°F"},
    {"input": "40°C", "output": "104°F"},

# create Deep Lake dataset
my_activeloop_org_id = "<YOUR-ACTIVELOOP-ORG-ID>" # TODO: use your organization id here
my_activeloop_dataset_name = "langchain_course_fewshot_selector"
dataset_path = f"hub://{my_activeloop_org_id}/{my_activeloop_dataset_name}"
db = DeepLake(dataset_path=dataset_path)

# Embedding function
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")

# Instantiate SemanticSimilarityExampleSelector using the examples
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples, embeddings, db, k=1
)

# Create a FewShotPromptTemplate using the example_selector
similar_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Convert the temperature from Celsius to Fahrenheit",
    suffix="Input: {temperature}\nOutput:",
    input_variables=["temperature"],
)

# Test the similar_prompt with different inputs
print(similar_prompt.format(temperature="10°C"))   # Test with an input
print(similar_prompt.format(temperature="30°C"))  # Test with another input

# Add a new example to the SemanticSimilarityExampleSelector
similar_prompt.example_selector.add_example({"input": "50°C", "output": "122°F"})
print(similar_prompt.format(temperature="40°C")) # Test with a new input after adding the example
Your Deep Lake dataset has been successfully created!
The dataset is private so make sure you are logged in!
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at
hub://ala/langchain_course_fewshot_selector loaded successfully.
./deeplake/ loaded successfully.
Dataset(path='./deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])

  tensor     htype     shape     dtype  compression
  -------   -------   -------   -------  ------- 
 embedding  generic  (5, 1536)  float32   None   
    ids      text     (5, 1)      str     None   
 metadata    json     (5, 1)      str     None   
   text      text     (5, 1)      str     None   
Convert the temperature from Celsius to Fahrenheit

Input: 10°C
Output: 50°F

Input: 10°C
Output:
Convert the temperature from Celsius to Fahrenheit

Input: 30°C
Output: 86°F

Input: 30°C
Output:
Dataset(path='./deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])

  tensor     htype     shape     dtype  compression
  -------   -------   -------   -------  ------- 
 embedding  generic  (6, 1536)  float32   None   
    ids      text     (6, 1)      str     None   
 metadata    json     (6, 1)      str     None   
   text      text     (6, 1)      str     None   
Convert the temperature from Celsius to Fahrenheit

Input: 40°C
Output: 104°F

Input: 40°C
Output:
Evaluating ingest: 100%|██████████| 1/1 [00:04<00:00
Evaluating ingest: 100%|██████████| 1/1 [00:04<00:00

Remember that the SemanticSimilarityExampleSelector uses the Deep Lake vector store and OpenAIEmbeddings to calculate semantic similarity: it retrieves the stored examples that are most similar to the input.

We defined a few examples of temperature conversions and created a PromptTemplate. We then instantiated the SemanticSimilarityExampleSelector and built a FewShotPromptTemplate with the selector, the example_prompt, and an appropriate prefix and suffix.

We made it possible to create flexible prompts catered to particular activities or domains, like temperature conversion in this case, by using SemanticSimilarityExampleSelector and FewShotPromptTemplate. These tools offer a flexible and adaptable way to create prompts that can be combined with language models to accomplish a variety of goals.

6 Conclusion

To sum up, alternating human/AI messages are useful for chat-oriented applications, while few-shot examples within a prompt template and example selectors extend the approach to a wider spectrum of use cases. These techniques demand more manual input, because the prompts must be carefully crafted and the right examples chosen. Although they offer greater personalisation, they also highlight the importance of striking the right balance between automated and manual input to achieve the best results.


7 Acknowledgements

I’d like to express my thanks to the wonderful LangChain & Vector Databases in Production Course by Activeloop, which I completed, and acknowledge the use of some images and other materials from the course in this article.