Prompt Engineering Tips and Tricks for Large Language Models

The aim of this post is to provide a strong basis in the knowledge and techniques required to develop effective prompts that empower LLMs to provide precise, contextually relevant, and insightful responses.
natural-language-processing
deep-learning
langchain
activeloop
openai
prompt-engineering
Author

Pranath Fernando

Published

August 1, 2023

1 Introduction

Prompt engineering is a relatively young field in which prompts are designed and refined to make effective use of language models across a variety of applications and research areas. It is essential for many NLP tasks and helps build a better understanding of the strengths and weaknesses of LLMs. To help you grasp the subtleties of prompt engineering, we will use real-world examples to contrast good and bad prompts.

The goal of this post is to provide a strong basis in the knowledge and techniques required to develop effective prompts that empower LLMs to provide precise, contextually relevant, and insightful responses.

2 Import Libs & Setup

from dotenv import load_dotenv

# Write your OpenAI API key to a local .env file (replace the placeholder with your key)
!echo "OPENAI_API_KEY='<OPENAI_API_KEY>'" > .env

# Load the environment variables from .env
load_dotenv()
True

3 Role Prompting

Role prompting involves asking the LLM to adopt a particular character or identity before carrying out a task, such as acting as a copywriter. By giving the task a context or perspective, this can help direct the model’s response. You could iterate as follows:

  1. Clearly state the role in your prompt, such as “As a copywriter, develop some attention-grabbing taglines for AWS services.”
  2. Use the prompt to create an output from an LLM.
  3. Examine the generated response and, if necessary, change the prompt to get better outcomes.

Example:

In this example, the LLM is asked, in the role of a futuristic robot band conductor, to suggest a song title related to a given theme and year. (Remember to set your OpenAI API key in your environment variables as OPENAI_API_KEY, and to install the required packages with pip install deeplake openai tiktoken langchain==0.0.208.)

from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI

# Initialize LLM
llm = OpenAI(model_name="text-davinci-003", temperature=0)

template = """
As a futuristic robot band conductor, I need you to help me come up with a song title.
What's a cool song title for a song about {theme} in the year {year}?
"""
prompt = PromptTemplate(
    input_variables=["theme", "year"],
    template=template,
)

# Create the LLMChain for the prompt
chain = LLMChain(llm=llm, prompt=prompt)

# Input data for the prompt
input_data = {"theme": "interstellar travel", "year": "3030"}

# Run the LLMChain to get the AI-generated song title
response = chain.run(input_data)

print("Theme: interstellar travel")
print("Year: 3030")
print("AI-generated song title:", response)
Theme: interstellar travel
Year: 3030
AI-generated song title: 
"Journey to the Stars: 3030"

Several factors make this a suitable prompt:

  • Clear instructions: The prompt is written as a direct request for help in coming up with a song title, and it clearly states the context: “As a futuristic robot band conductor.” This helps the LLM understand that the desired output is a song title suited to a futuristic setting.

  • Specificity: The prompt asks for a song title relating to a specific theme and year, “{theme} in the year {year}.” This gives the LLM enough context to produce a relevant and original result. By leveraging input variables, the prompt is flexible and reusable, and can be customised for other themes and years.

  • Open-ended creativity: Because the LLM is not constrained to a particular format or style for the song title, the prompt encourages unrestricted creativity. The LLM can generate a wide variety of song titles based on the given theme and year.

  • Focus on the task: The prompt concentrates solely on coming up with a song title, making it easier for the LLM to produce a suitable output without being distracted by irrelevant topics.

These components help the LLM understand the user’s intent and produce an appropriate answer.

4 Few Shot Prompting

Few-shot prompting supplies the model with a handful of worked examples before posing the actual question. In the next example, the LLM is asked to provide the emotion associated with a given color, based on a few examples of color-emotion pairs:

from langchain import FewShotPromptTemplate

examples = [
    {"color": "red", "emotion": "passion"},
    {"color": "blue", "emotion": "serenity"},
    {"color": "green", "emotion": "tranquility"},
]

example_formatter_template = """
Color: {color}
Emotion: {emotion}\n
"""
example_prompt = PromptTemplate(
    input_variables=["color", "emotion"],
    template=example_formatter_template,
)

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Here are some examples of colors and the emotions associated with them:\n\n",
    suffix="\n\nNow, given a new color, identify the emotion associated with it:\n\nColor: {input}\nEmotion:",
    input_variables=["input"],
    example_separator="\n",
)

formatted_prompt = few_shot_prompt.format(input="purple")

# Create the LLMChain for the prompt
chain = LLMChain(llm=llm, prompt=PromptTemplate(template=formatted_prompt, input_variables=[]))

# Run the LLMChain to get the AI-generated emotion associated with the input color
response = chain.run({})

print("Color: purple")
print("Emotion:", response)
Color: purple
Emotion:  creativity

This prompt provides clear instructions and few-shot examples to help the model understand the task.

5 Bad Prompt Practices

Let’s now look at a few examples of prompting practices that are generally considered poor.

Here is an example of a prompt that is too vague, giving the model too little information or direction to produce a meaningful response:

template = "Tell me something about {topic}."
prompt = PromptTemplate(
    input_variables=["topic"],
    template=template,
)
prompt.format(topic="dogs")
'Tell me something about dogs.'
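
For contrast, a more specific version (an illustrative sketch, not from the original post) adds a role, an audience, and a format constraint to give the model clear direction:

# Illustrative sketch: the same topic, but with a role, an audience, and a format constraint
template = "As a veterinarian, give three one-sentence health tips for first-time {topic} owners."
prompt = PromptTemplate(
    input_variables=["topic"],
    template=template,
)
prompt.format(topic="dog")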

6 Chain Prompting

Chain prompting is the process of chaining together two or more prompts, where the output of one prompt serves as the input for the next.

Chain prompting with LangChain can be used to:

  • Extract relevant data from a generated response.
  • Build a new prompt from the extracted information so that it builds on the previous response.
  • Repeat these steps until the desired result is obtained.

The PromptTemplate class makes it simpler to build prompts with dynamic inputs, which comes in handy when building a prompt chain that depends on earlier responses.

# Prompt 1
template_question = """What is the name of the famous scientist who developed the theory of general relativity?
Answer: """
prompt_question = PromptTemplate(template=template_question, input_variables=[])

# Prompt 2
template_fact = """Provide a brief description of {scientist}'s theory of general relativity.
Answer: """
prompt_fact = PromptTemplate(input_variables=["scientist"], template=template_fact)

# Create the LLMChain for the first prompt
chain_question = LLMChain(llm=llm, prompt=prompt_question)

# Run the LLMChain for the first prompt with an empty dictionary
response_question = chain_question.run({})

# Extract the scientist's name from the response
scientist = response_question.strip()

# Create the LLMChain for the second prompt
chain_fact = LLMChain(llm=llm, prompt=prompt_fact)

# Input data for the second prompt
input_data = {"scientist": scientist}

# Run the LLMChain for the second prompt
response_fact = chain_fact.run(input_data)

print("Scientist:", scientist)
print("Fact:", response_fact)
Scientist: Albert Einstein
Fact: 
Albert Einstein's theory of general relativity is a theory of gravitation that states that the gravitational force between two objects is a result of the curvature of spacetime caused by the presence of mass and energy. It explains the phenomenon of gravity as a result of the warping of space and time by matter and energy.

Here, the answer to the first prompt (the scientist’s name) is extracted and fed into a second, more specific prompt, producing a focused and informative response.

Bad Prompt Example:

# Prompt 1
template_question = """What is the name of the famous scientist who developed the theory of general relativity?
Answer: """
prompt_question = PromptTemplate(template=template_question, input_variables=[])

# Prompt 2
template_fact = """Tell me something interesting about {scientist}.
Answer: """
prompt_fact = PromptTemplate(input_variables=["scientist"], template=template_fact)

# Create the LLMChain for the first prompt
chain_question = LLMChain(llm=llm, prompt=prompt_question)

# Run the LLMChain for the first prompt with an empty dictionary
response_question = chain_question.run({})

# Extract the scientist's name from the response
scientist = response_question.strip()

# Create the LLMChain for the second prompt
chain_fact = LLMChain(llm=llm, prompt=prompt_fact)

# Input data for the second prompt
input_data = {"scientist": scientist}

# Run the LLMChain for the second prompt
response_fact = chain_fact.run(input_data)

print("Scientist:", scientist)
print("Fact:", response_fact)
Scientist: Albert Einstein
Fact:  Albert Einstein was a vegetarian and an advocate for animal rights. He was also a pacifist and a socialist, and he was a strong supporter of the civil rights movement. He was also a passionate violinist and a lover of sailing.

Due to its more open-ended character, this prompt can elicit a less instructive or targeted response than the prior example.

Here is an example of an unclear, self-contradictory prompt:

# Prompt 1
template_question = """What are some musical genres?
Answer: """
prompt_question = PromptTemplate(template=template_question, input_variables=[])

# Prompt 2
template_fact = """Tell me something about {genre1}, {genre2}, and {genre3} without giving any specific details.
Answer: """
prompt_fact = PromptTemplate(input_variables=["genre1", "genre2", "genre3"], template=template_fact)

# Create the LLMChain for the first prompt
chain_question = LLMChain(llm=llm, prompt=prompt_question)

# Run the LLMChain for the first prompt with an empty dictionary
response_question = chain_question.run({})

# Assign three hardcoded genres (note: the first prompt's response is not actually used here)
genre1, genre2, genre3 = "jazz", "pop", "rock"

# Create the LLMChain for the second prompt
chain_fact = LLMChain(llm=llm, prompt=prompt_fact)

# Input data for the second prompt
input_data = {"genre1": genre1, "genre2": genre2, "genre3": genre3}

# Run the LLMChain for the second prompt
response_fact = chain_fact.run(input_data)

print("Genres:", genre1, genre2, genre3)
print("Fact:", response_fact)
Genres: jazz pop rock
Fact: 
Jazz, pop, and rock are all genres of popular music that have been around for decades. They all have distinct sounds and styles, and have influenced each other in various ways. Jazz is often characterized by improvisation, complex harmonies, and syncopated rhythms. Pop music is typically more accessible and often features catchy melodies and hooks. Rock music is often characterized by distorted guitars, heavy drums, and powerful vocals.

The second prompt in this example is poorly written: it asks the model to “tell me something about {genre1}, {genre2}, and {genre3} without giving any specific details.” The request is self-contradictory, asking for information about the genres while simultaneously forbidding specifics. This makes it hard for the LLM to formulate an insightful and coherent response, so it tends to give a vague or generic answer.

The first prompt also asks for “some musical genres” without providing any context or criteria, and the second gives no indication of which aspects to emphasise, such as their historical roots, stylistic characteristics, or cultural significance.
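
A tighter version of this chain (a minimal sketch, not from the original post) constrains the first prompt and actually reuses its output, asking for exactly three genres and then for one defining characteristic of each:

# Illustrative sketch: constrain the first prompt and reuse its output
template_question = """Name exactly three musical genres, separated by commas.
Answer: """
prompt_question = PromptTemplate(template=template_question, input_variables=[])

# The second prompt asks for specific, comparable details about each genre
template_fact = """For each of these genres: {genres}, name one defining stylistic characteristic.
Answer: """
prompt_fact = PromptTemplate(input_variables=["genres"], template=template_fact)

genres = LLMChain(llm=llm, prompt=prompt_question).run({}).strip()
response_fact = LLMChain(llm=llm, prompt=prompt_fact).run({"genres": genres})

print("Genres:", genres)
print("Fact:", response_fact)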

7 Chain of Thought Prompting

Chain of Thought (CoT) prompting is a method designed to get large language models to spell out their reasoning, which tends to produce more accurate results. Few-shot exemplars that illustrate the reasoning process instruct the LLM to explain its thinking as it responds to the prompt. This strategy has been found to improve results on tasks such as arithmetic, common-sense, and symbolic reasoning.

CoT can be advantageous in the context of LangChain for a number of reasons. First, it can help the LLM break a hard task down into simpler steps, making the problem easier to understand and solve. This is especially helpful for tasks requiring calculation, logic, or multi-step reasoning. Second, CoT can guide the model through relevant intermediate prompts, helping it produce outputs that are more coherent and relevant to the context. For tasks that call for a thorough understanding of the problem or topic, this can result in more accurate and helpful responses.

There are various limitations to consider when using CoT. One is that it has been found to yield performance gains only for models of roughly 100 billion parameters or more; smaller models tend to generate illogical chains of thought, which can result in worse accuracy than standard prompting. Another is that CoT may not be equally effective for all tasks: it has been shown to work best for arithmetic, common-sense, and symbolic reasoning problems, while for other tasks its benefits may be smaller or absent.
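
As a concrete illustration, here is a minimal few-shot CoT sketch (the exemplar and question are illustrative assumptions, reusing the llm and classes defined earlier); the exemplar demonstrates step-by-step reasoning before the final answer:

# Minimal CoT sketch: the exemplar shows the model how to reason step by step
cot_template = """Q: A juggler has 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?
A: There are 16 / 2 = 8 golf balls. Half of the golf balls are blue, so there are 8 / 2 = 4 blue golf balls. The answer is 4.

Q: {question}
A: Let's think step by step."""

cot_prompt = PromptTemplate(input_variables=["question"], template=cot_template)
cot_chain = LLMChain(llm=llm, prompt=cot_prompt)

# Run the chain on a new arithmetic word problem
print(cot_chain.run({"question": "A cafeteria had 23 apples. It used 20 to make lunch and bought 6 more. How many apples does it have now?"}))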

8 Tips for Effective Prompt Engineering

  • Be specific in your prompt: give the LLM enough background information and detail to steer it towards the intended result.
  • When necessary, force conciseness.
  • Encourage the model to explain its reasoning: this can lead to more accurate results, particularly on challenging tasks.

Remember that prompt engineering is an iterative process, and reaching the optimal result may require multiple rounds of adjustment. The ability to craft effective prompts will be a crucial skill as LLMs become more deeply integrated into products and services.
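
For instance (a minimal sketch; the template and claim are illustrative assumptions), conciseness and justification can both be requested directly in the prompt:

# Illustrative sketch: force conciseness and ask the model to justify its answer
concise_template = """In at most two sentences, state whether the following claim is true, and briefly explain your reasoning.
Claim: {claim}
Answer: """

concise_prompt = PromptTemplate(input_variables=["claim"], template=concise_template)
concise_chain = LLMChain(llm=llm, prompt=concise_prompt)

print(concise_chain.run({"claim": "The Moon is larger than Mercury."}))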

A well-structured prompt example:

examples = [
    {
        "query": "What's the secret to happiness?",
        "answer": "Finding balance in life and learning to enjoy the small moments."
    }, {
        "query": "How can I become more productive?",
        "answer": "Try prioritizing tasks, setting goals, and maintaining a healthy work-life balance."
    }
]

example_template = """
User: {query}
AI: {answer}
"""

example_prompt = PromptTemplate(
    input_variables=["query", "answer"],
    template=example_template
)

prefix = """The following are excerpts from conversations with an AI
life coach. The assistant provides insightful and practical advice to the users' questions. Here are some
examples:
"""

suffix = """
User: {query}
AI: """

few_shot_prompt_template = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["query"],
    example_separator="\n\n"
)

# Create the LLMChain for the few-shot prompt template
chain = LLMChain(llm=llm, prompt=few_shot_prompt_template)

# Define the user query
user_query = "What are some tips for improving communication skills?"

# Run the LLMChain for the user query
response = chain.run({"query": user_query})

print("User Query:", user_query)
print("AI Response:", response)
User Query: What are some tips for improving communication skills?
AI Response:  Practice active listening, be mindful of your body language, and be open to constructive feedback.

This prompt works well for several reasons:

  • The prefix clearly defines the context: the prompt states that the AI is a life coach offering insightful and practical advice. This context guides the AI to ensure its responses serve the intended purpose.
  • It provides examples that illustrate the AI’s role and the kind of responses it produces: meaningful examples help the AI understand the appropriate style and tone, and serve as a reference for producing comparable, situation-appropriate responses.
  • It clearly separates the sample dialogues from the user’s input: this lets the AI understand the format it should follow and concentrate on the current query without distraction.
  • It ends with a distinct suffix marking where the user’s input goes and where the AI should respond: the suffix acts as a cue showing where the user’s request ends and the AI’s response should begin, keeping the generated responses in a clear and consistent format.

By applying this well-structured prompt, the AI understands its role, the context, and the expected response format, and as a result produces more accurate and helpful outputs.

9 Conclusion

This article examined various methods for developing more effective prompts for large language models. By understanding and applying these strategies, you will be better able to craft prompts that enable LLMs to deliver precise, contextually relevant, and insightful responses. Keep in mind that prompt engineering is an iterative process that sometimes calls for revision to produce the best results.

To sum up, prompt engineering is a powerful technique for optimising language models across a range of applications and research areas. By designing effective prompts, we can steer the model towards accurate, contextually appropriate, and insightful responses. We have shown how to build good prompts using role prompting and chain prompting, and, on the other hand, we have illustrated poor prompts that don’t give the model enough context or direction to produce a meaningful response. By following the tips and strategies offered in this post, you can build a strong foundation in prompt engineering and use language models more effectively for a variety of tasks.

Further reading:

https://dev.to/mmz001/a-hands-on-guide-to-prompt-engineering-with-chatgpt-and-gpt-3-4127

https://blog.andrewcantino.com/blog/2021/04/21/prompt-engineering-tips-and-tricks/

https://wandb.ai/a-sh0ts/langchain_callback_demo/reports/Prompt-Engineering-LLMs-with-LangChain-and-W-B–VmlldzozNjk1NTUw

10 Acknowledgements

I’d like to express my thanks to the wonderful LangChain & Vector Databases in Production Course by Activeloop, which I completed, and to acknowledge the use of some images and other materials from the course in this article.
