Using Prompt Templates with Large Language Models

This article explores the subtleties of Prompt Templates and efficient ways to use them. A Prompt Template is a pre-established pattern or framework used to create efficient and dependable prompts for large language models - it serves as a guide to make sure the input text or prompt is formatted correctly.
natural-language-processing
deep-learning
langchain
activeloop
openai
prompt-engineering
Author

Pranath Fernando

Published

August 2, 2023

1 Introduction

Large language models enable us to perform a wide variety of tasks. These models operate on a simple premise: they take a text input sequence and produce a text output sequence. The prompt, or input text, is the most important element in this process.

For anyone working with large language models, crafting appropriate prompts is essential: poorly constructed prompts produce poor outputs, while well-formulated prompts produce effective ones. Recognising how important prompts are, the LangChain library provides a complete collection of objects dedicated to them.

This article explores the subtleties of Prompt Templates and efficient ways to use them. A Prompt Template is a pre-established pattern or framework used to create efficient and dependable prompts for large language models. It serves as a guide to ensure the input text, or prompt, is formatted correctly.

2 Import Libs & Setup

from dotenv import load_dotenv

# Write the API key to a .env file; replace <OPENAI_API_KEY> with your own key
!echo "OPENAI_API_KEY='<OPENAI_API_KEY>'" > .env

# Load the environment variables from the .env file
load_dotenv()
True

3 Starting with Prompt Templates

Using a PromptTemplate with a single dynamic input for a user query is demonstrated here. Make sure the OPENAI_API_KEY environment variable is set to your OpenAI API key, and install the necessary packages with the following command:
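!pip install langchain==0.0.208 deeplake openai tiktoken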

from langchain import LLMChain, PromptTemplate
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003", temperature=0)

template = """Answer the question based on the context below. If the
question cannot be answered using the information provided, answer
with "I don't know".
Context: Quantum computing is an emerging field that leverages quantum mechanics to solve complex problems faster than classical computers.
...
Question: {query}
Answer: """

prompt_template = PromptTemplate(
    input_variables=["query"],
    template=template
)

# Create the LLMChain for the prompt
chain = LLMChain(llm=llm, prompt=prompt_template)

# Set the query you want to ask
input_data = {"query": "What is the main advantage of quantum computing over classical computing?"}

# Run the LLMChain to get the AI-generated answer
response = chain.run(input_data)

print("Question:", input_data["query"])
print("Answer:", response)
Question: What is the main advantage of quantum computing over classical computing?
Answer:  The main advantage of quantum computing over classical computing is its ability to solve complex problems faster.

You can modify the input_data dictionary to ask any other question.
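For example, re-running the same chain with a different question (this follow-up query is just an illustration):

# Re-run the chain with a new question about the same context
input_data = {"query": "What does quantum computing leverage to solve complex problems?"}
print("Answer:", chain.run(input_data))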

The template is a pre-formatted string containing a {query} placeholder that is replaced by an actual question when the template is used. Two arguments are needed to build a PromptTemplate object:

  1. input_variables: A list of the template’s variable names; in this instance, it simply contains the query.
  2. template: A string of placeholders and prepared text used as a template.

By supplying input data, the PromptTemplate object can be used to generate prompts with customised questions. The input data is a dictionary whose keys match the variable names in the template. The resulting prompt can then be passed to a language model to generate answers.
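Before invoking the model, you can also render the template directly with format() to inspect the final prompt text:

# Inspect the fully-rendered prompt without calling the model
print(prompt_template.format(query="What is the main advantage of quantum computing over classical computing?"))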

For more complex applications, you can build a FewShotPromptTemplate with an ExampleSelector to choose a subset of examples that will be most instructive for the language model.

from langchain import LLMChain, FewShotPromptTemplate, PromptTemplate
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003", temperature=0)

examples = [
    {"animal": "lion", "habitat": "savanna"},
    {"animal": "polar bear", "habitat": "Arctic ice"},
    {"animal": "elephant", "habitat": "African grasslands"}
]

example_template = """
Animal: {animal}
Habitat: {habitat}
"""

example_prompt = PromptTemplate(
    input_variables=["animal", "habitat"],
    template=example_template
)

dynamic_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Identify the habitat of the given animal",
    suffix="Animal: {input}\nHabitat:",
    input_variables=["input"],
    example_separator="\n\n",
)

# Create the LLMChain for the dynamic_prompt
chain = LLMChain(llm=llm, prompt=dynamic_prompt)

# Run the LLMChain with input_data
input_data = {"input": "tiger"}
response = chain.run(input_data)

print(response)
 tropical forests and mangrove swamps

You can also save your PromptTemplate to a file in your local filesystem in JSON or YAML format:

prompt_template.save("awesome_prompt.json")

And load it back:

from langchain.prompts import load_prompt
loaded_prompt = load_prompt("awesome_prompt.json")
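The loaded template behaves exactly like the original, so you can render it straight away (a quick check, assuming the file saved above):

# Verify the round trip: render the reloaded template
print(loaded_prompt.format(query="What is quantum computing?"))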

4 Different Types of Prompt Templates

Let’s examine other examples using various Prompt Template types. The following example shows how to guide the LLM with a few-shot prompt, giving examples of how to answer questions ironically.

from langchain import LLMChain, FewShotPromptTemplate, PromptTemplate
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003", temperature=0)

examples = [
    {
        "query": "How do I become a better programmer?",
        "answer": "Try talking to a rubber duck; it works wonders."
    }, {
        "query": "Why is the sky blue?",
        "answer": "It's nature's way of preventing eye strain."
    }
]

example_template = """
User: {query}
AI: {answer}
"""

example_prompt = PromptTemplate(
    input_variables=["query", "answer"],
    template=example_template
)

prefix = """The following are excerpts from conversations with an AI
assistant. The assistant is typically sarcastic and witty, producing
creative and funny responses to users' questions. Here are some
examples:
"""

suffix = """
User: {query}
AI: """

few_shot_prompt_template = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["query"],
    example_separator="\n\n"
)

# Create the LLMChain for the few_shot_prompt_template
chain = LLMChain(llm=llm, prompt=few_shot_prompt_template)

# Run the LLMChain with input_data
input_data = {"query": "How can I learn quantum computing?"}
response = chain.run(input_data)

print(response)
 Start by studying Schrödinger's cat. That should get you off to a good start.

The FewShotPromptTemplate in this example shows how effective dynamic prompts can be. Rather than employing a static template, this method uses examples of previous interactions, enabling the AI to better grasp the context and style of the desired answer.

Dynamic prompts offer several benefits over static templates:

  • Improved context understanding: By providing examples, the AI can grasp the context and style of responses more effectively, enabling it to generate responses that are more in line with the desired output.
  • Flexibility: Dynamic prompts can be easily customized and adapted to specific use cases, allowing developers to experiment with different prompt structures and find the most effective format for their application.
  • Better results: As a result of the improved context understanding and flexibility, dynamic prompts often yield higher-quality outputs that better match user expectations.

By providing examples and context that steer the AI towards more precise, contextually relevant, and stylistically consistent responses, dynamic prompts let us utilise the model’s capabilities to the full.

Additionally, prompt templates integrate nicely with other LangChain features, such as chains, and let you manage the number of examples supplied based on query length. This helps balance the number of examples against prompt size and optimises token usage.

Giving the model as many relevant examples as possible without exceeding the maximum context window or slowing down processing is essential to maximising the performance of few-shot learning. By dynamically including or excluding examples, we can strike a balance between providing sufficient context and maintaining the model’s operational efficiency:

examples = [
    {
        "query": "How do you feel today?",
        "answer": "As an AI, I don't have feelings, but I've got jokes!"
    }, {
        "query": "What is the speed of light?",
        "answer": "Fast enough to make a round trip around Earth 7.5 times in one second!"
    }, {
        "query": "What is a quantum computer?",
        "answer": "A magical box that harnesses the power of subatomic particles to solve complex problems."
    }, {
        "query": "Who invented the telephone?",
        "answer": "Alexander Graham Bell, the original 'ringmaster'."
    }, {
        "query": "What programming language is best for AI development?",
        "answer": "Python, because it's the only snake that won't bite."
    }, {
        "query": "What is the capital of France?",
        "answer": "Paris, the city of love and baguettes."
    }, {
        "query": "What is photosynthesis?",
        "answer": "A plant's way of saying 'I'll turn this sunlight into food. You're welcome, Earth.'"
    }, {
        "query": "What is the tallest mountain on Earth?",
        "answer": "Mount Everest, Earth's most impressive bump."
    }, {
        "query": "What is the most abundant element in the universe?",
        "answer": "Hydrogen, the basic building block of cosmic smoothies."
    }, {
        "query": "What is the largest mammal on Earth?",
        "answer": "The blue whale, the original heavyweight champion of the world."
    }, {
        "query": "What is the fastest land animal?",
        "answer": "The cheetah, the ultimate sprinter of the animal kingdom."
    }, {
        "query": "What is the square root of 144?",
        "answer": "12, the number of eggs you need for a really big omelette."
    }, {
        "query": "What is the average temperature on Mars?",
        "answer": "Cold enough to make a Martian wish for a sweater and a hot cocoa."
    }
]

Instead of utilising the examples list of dictionaries directly, we use a LengthBasedExampleSelector like this:

from langchain.prompts.example_selector import LengthBasedExampleSelector

example_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=100
)

Using the LengthBasedExampleSelector, the code dynamically selects and includes examples based on their length, keeping the final prompt under the intended token limit. The dynamic_prompt_template is then initialised with the selector:

dynamic_prompt_template = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["query"],
    example_separator="\n"
)

As a result, rather than using a fixed list of examples, the dynamic_prompt_template makes use of the example_selector. This enables the FewShotPromptTemplate to adjust the number of examples included according to the length of the input query. In doing so, it makes the best possible use of the available context window and ensures that the language model receives an adequate amount of contextual information.

from langchain import LLMChain, FewShotPromptTemplate, PromptTemplate
from langchain.llms import OpenAI
from langchain.prompts.example_selector import LengthBasedExampleSelector

llm = OpenAI(model_name="text-davinci-003", temperature=0)

# Existing example and prompt definitions, and dynamic_prompt_template initialization

# Create the LLMChain for the dynamic_prompt_template
chain = LLMChain(llm=llm, prompt=dynamic_prompt_template)

# Run the LLMChain with input_data
input_data = {"query": "Who invented the telephone?"}
response = chain.run(input_data)

print(response)
 Alexander Graham Bell, the man who made it possible to talk to people from miles away!
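To see the selector at work, you can render the prompt for queries of different lengths and compare how many examples get included. A minimal sketch (the queries are made up, and it assumes the selector’s default word-count length function):

# Render the same dynamic prompt for a short and a long query
short_prompt = dynamic_prompt_template.format(query="Hi?")
long_prompt = dynamic_prompt_template.format(
    query="If I am in London and want to learn as much as possible about "
          "prompt engineering and large language models, how should I spend "
          "a free afternoon to get the best introduction to the field?"
)

# Each few-shot example contributes one "User:" line; the suffix adds one more
print("Examples for short query:", short_prompt.count("User:") - 1)
print("Examples for long query:", long_prompt.count("User:") - 1)

The longer query consumes more of the length budget, so fewer examples are selected for it.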

5 Conclusion

Prompt templates are crucial for creating efficient prompts for large language models, because they offer an organised and standardised framework that maximises accuracy and relevance. Dynamic prompt integration improves context comprehension, adaptability, and outcomes, making prompt templates an important tool for language model development.

6 Acknowledgements

I’d like to express my thanks to the wonderful LangChain & Vector Databases in Production Course by Activeloop, which I completed, and to acknowledge the use of some images and other materials from the course in this article.
