Using Chains with LangChain

Here we will look at the Chains component of LangChain and see how it can help us combine different sequences of events using LLMs.

Pranath Fernando


June 3, 2023

1 Introduction

Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using LLMs in isolation is often not enough in practice to create a truly powerful or useful business application - the real power comes when you are able to combine them with other sources of computation, services or knowledge. LangChain is an intuitive open-source Python framework created to simplify the development of useful applications using large language models (LLMs), such as those from OpenAI or Hugging Face.

In earlier articles we introduced the LangChain library and key components.

In this article, we will look at the Chains component of LangChain and see how it can help us combine different sequences of events using LLMs.

2 Setup

We will use OpenAI’s ChatGPT LLM for our examples, so let’s load in the required libraries.

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file

We are going to load some example product review data to use. One of the many advantages of using chains is that they enable you to run LLMs over many inputs at a time.

import pandas as pd
df = pd.read_csv('Data.csv')
df.head()

                  Product                                             Review
0    Queen Size Sheet Set  I ordered a king size set. My only criticism w...
1  Waterproof Phone Pouch  I loved the waterproof sac, although the openi...
2     Luxury Air Mattress  This mattress had a small hole in the top of i...
3          Pillows Insert  This is the best throw pillow fillers on Amazo...
4  Milk Frother Handheld\n I loved this product. But they only seem to l...

3 LLMChain

This is one of the most basic chains we can use. Let’s initialise an LLM with a high temperature so we get more variability and creativity in the model’s responses.

We will set up a template and a product, to create the best name for that product - and let’s test that out.

llm = ChatOpenAI(temperature=0.9)
prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)
chain = LLMChain(llm=llm, prompt=prompt)
product = "Queen Size Sheet Set"
chain.run(product)

"Queen's Choice Linens."

So in this case the simple chain is just the LLM and the prompt in a sequential manner - and not a bad product name!

Sequential chains, on the other hand, enable us to combine multiple chains so that the output of one chain becomes the input to the next.

There are two types of sequential chain:

  • SimpleSequentialChain: Single input/output
  • SequentialChain: Multiple inputs/outputs
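The difference between the two can be sketched in plain Python (a toy analogy with stand-in functions, not LangChain code - the function names here are made up for illustration):

```python
def name_company(product: str) -> str:
    # stand-in for chain 1: product -> company name
    return f"{product} Co."

def describe_company(name: str) -> str:
    # stand-in for chain 2: company name -> description
    return f"{name} sells quality goods."

# SimpleSequentialChain style: a single value flows straight through.
def simple_sequential(product: str) -> str:
    return describe_company(name_company(product))

# SequentialChain style: steps read from and write to a shared dict of
# named variables, so any step can use any earlier output.
def sequential(inputs: dict) -> dict:
    state = dict(inputs)
    state["company_name"] = name_company(state["product"])
    state["description"] = describe_company(state["company_name"])
    return state

print(simple_sequential("Sheets"))
print(sequential({"product": "Sheets"}))
```

The dict-based style is what lets SequentialChain support multiple named inputs and outputs, as we will see below.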

4 SimpleSequentialChain

So let’s create two chains: a first chain that, as before, takes a product and creates a name as its output, and a second chain that takes in the company name and outputs a 20-word description of that company.

from langchain.chains import SimpleSequentialChain
llm = ChatOpenAI(temperature=0.9)

# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)
# Chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following \
    company: {company_name}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)

overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two],
                                             verbose=True)
overall_simple_chain.run(product)

> Entering new SimpleSequentialChain chain...
Royal Linens
Royal Linens is a leading manufacturer of high-quality bedding, towels, and linens for residential and commercial customers worldwide.

> Finished chain.
'Royal Linens is a leading manufacturer of high-quality bedding, towels, and linens for residential and commercial customers worldwide.'

So you could repeat and run this sequential chain for multiple products. This works well when you need a single input and a single output.
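Repeating the chain over many products is just a loop. The sketch below uses a toy stand-in function in place of the real chain (with the real chain you would call overall_simple_chain.run(p) inside the loop, which requires an API key):

```python
def toy_chain(product: str) -> str:
    # stand-in for overall_simple_chain.run(product)
    return f"A catchy name and description for: {product}"

products = ["Queen Size Sheet Set",
            "Waterproof Phone Pouch",
            "Luxury Air Mattress"]

# run the (stand-in) chain once per product
results = [toy_chain(p) for p in products]
for r in results:
    print(r)
```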

5 SequentialChain

When you have multiple inputs or outputs, SequentialChain can be used.

So let’s say we want to perform the following sequence of tasks:

  1. Translate a review into English
  2. Summarise the English review in one sentence
  3. Identify the language of the original review
  4. Write a follow-up response using the summary and language created previously

We can specify a sequence of chains to do this as follows:

from langchain.chains import SequentialChain
llm = ChatOpenAI(temperature=0.9)

# prompt template 1: translate to english
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to english:"
    "\n\n{Review}"
)
# chain 1: input= Review and output= English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt,
                     output_key="English_Review")

# prompt template 2: summarise the review
second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}"
)
# chain 2: input= English_Review and output= summary
chain_two = LLMChain(llm=llm, prompt=second_prompt,
                     output_key="summary")

# prompt template 3: identify the original language
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}"
)
# chain 3: input= Review and output= language
chain_three = LLMChain(llm=llm, prompt=third_prompt,
                       output_key="language")

# prompt template 4: follow up message
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}"
)
# chain 4: input= summary, language and output= followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt,
                      output_key="followup_message")

# overall_chain: input= Review
# and output= English_Review, summary, followup_message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "followup_message"],
    verbose=True
)

review = df.Review[5]
overall_chain(review)

> Entering new SequentialChain chain...

> Finished chain.
{'Review': "Je trouve le goût médiocre. La mousse ne tient pas, c'est bizarre. J'achète les mêmes dans le commerce et le goût est bien meilleur...\nVieux lot ou contrefaçon !?",
 'English_Review': "I find the taste mediocre. The foam doesn't hold, it's weird. I buy the same ones in stores and the taste is much better... Old batch or counterfeit!?",
 'summary': 'The reviewer finds the taste of the product mediocre and suspects that it may be an old batch or counterfeit as the foam does not hold.',
 'followup_message': "Le critique trouve que le goût du produit est médiocre et soupçonne qu'il pourrait s'agir d'un lot ancien ou contrefait car la mousse n'est pas stable."}

One thing to note is that it is important to refer to the variable names holding values correctly, as this is what enables a chain to pass values further down the sequence. Choosing unique variable names is of course a given. Within prompts, we refer to previous values using variable names in curly brackets {}, and we define each chain’s output variable name using its output_key parameter.

We can see here how, in a SequentialChain, any chain can potentially take inputs from multiple other chains.
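The variable wiring above can be sketched in plain Python (a toy analogy, not LangChain itself): each step formats its prompt from a shared state dict and writes its result back under its own output key, making it available to every later step.

```python
def run_step(state: dict, template: str, output_key: str, fake_llm) -> dict:
    # {variable} lookups against the shared state, like the prompt templates
    prompt = template.format(**state)
    # the result becomes a new named variable for later steps
    state[output_key] = fake_llm(prompt)
    return state

# stand-in for a real LLM call
fake_llm = lambda prompt: f"<answer to: {prompt}>"

state = {"Review": "Je trouve le gout mediocre."}
run_step(state, "Translate to English: {Review}", "English_Review", fake_llm)
run_step(state, "Summarize: {English_Review}", "summary", fake_llm)

print(sorted(state))
```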

6 Router Chain

What if we have a task where we need to put something through a different sub-chain depending on some condition? In this case we can use a RouterChain.

As an example, let’s answer questions on different subjects, routing through different sub-chains depending on the subject of the incoming text. We can create a different prompt template for dealing with each subject.

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise\
and easy to understand manner. \
When you don't know the answer to a question you admit\
that you don't know.

Here is a question:
{input}"""

math_template = """You are a very good mathematician. \
You are great at answering math questions. \
You are so good because you are able to break down \
hard problems into their component parts, \
answer the component parts, and then put them together\
to answer the broader question.

Here is a question:
{input}"""

history_template = """You are a very good historian. \
You have an excellent knowledge of and understanding of people,\
events and contexts from a range of historical periods. \
You have the ability to think, reflect, debate, discuss and \
evaluate the past. You have a respect for historical evidence\
and the ability to make use of it to support your explanations \
and judgements.

Here is a question:
{input}"""

computerscience_template = """ You are a successful computer scientist.\
You have a passion for creativity, collaboration,\
forward-thinking, confidence, strong problem-solving capabilities,\
understanding of theories and algorithms, and excellent communication \
skills. You are great at answering coding questions. \
You are so good because you know how to solve a problem by \
describing the solution in imperative steps \
that a machine can easily interpret and you know how to \
choose a solution that has a good balance between \
time complexity and space complexity.

Here is a question:
{input}"""

We can then define some metadata for each of these templates, giving each a name and some guidance on when it is good to use. This enables the RouterChain to know which sub-chain to use.

prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": math_template
    },
    {
        "name": "History",
        "description": "Good for answering history questions",
        "prompt_template": history_template
    },
    {
        "name": "computer science",
        "description": "Good for answering computer science questions",
        "prompt_template": computerscience_template
    }
]

We now need to import some other chain objects. The MultiPromptChain can be used when routing between different prompt templates. The LLMRouterChain uses a language model to route between different sub-chains - this is where the prompt_infos names and descriptions will be used to inform the model’s choice of where to route the next prompt. RouterOutputParser is used to convert the LLM output into a dictionary that can be used further downstream to determine which chain to use and what the input to that chain should be.
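What the output parser does can be sketched roughly in plain Python (a simplified, hypothetical stand-in, not the real RouterOutputParser): strip the markdown json fence the LLM was asked to produce, then load the JSON so downstream code can read the destination and next inputs.

```python
import json

def parse_router_output(text: str) -> dict:
    # toy stand-in for RouterOutputParser: unwrap the ```json fence,
    # then parse the JSON payload into a plain dict
    body = text.strip()
    if body.startswith("```"):
        body = body.split("```json", 1)[1].rsplit("```", 1)[0]
    return json.loads(body)

raw = '```json\n{"destination": "physics", "next_inputs": "What is black body radiation?"}\n```'
parsed = parse_router_output(raw)
print(parsed["destination"])
```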

from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain,RouterOutputParser
from langchain.prompts import PromptTemplate
llm = ChatOpenAI(temperature=0)

Let’s create the destination chains; these are the chains that will be called by the router. We also need to define a default chain, which is the chain used when the router is not sure which to choose - for example, in our case, when the question has nothing to do with physics, maths, history or computer science.

destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain  
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=llm, prompt=default_prompt)

Now we define the template used by the LLM to route between the different chains. This has descriptions of the tasks to be done as well as the formatting required for the output.

MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising\
it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
    "destination": string \ name of the prompt to use or "DEFAULT"
    "next_inputs": string \ a potentially modified version of the original input
}}}}
```

REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" if the input is not\
well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.

<< CANDIDATE PROMPTS >>
{destinations}

<< INPUT >>
{{input}}

<< OUTPUT (remember to include the ```json)>>"""

Let’s now put these elements together to build the router chain. First we create the router template using the destinations we created above; this template format is flexible for different types of destinations. Next we create the prompt template from this template, then we create the RouterChain object using the LLM and the router prompt. Note we have also added the RouterOutputParser, as it will help this chain decide which sub-chains to route between.

Finally, we put everything together to create one chain to rule them all! This includes the router chain, the destination chains, and the default chain.

So if we now use this chain to ask a question about physics, and set verbose to true, we can see the resulting prompt sequence and output from this chain - and this should show the prompt routing through the physics sub-chain.

router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)

router_chain = LLMRouterChain.from_llm(llm, router_prompt)
chain = MultiPromptChain(router_chain=router_chain,
                         destination_chains=destination_chains,
                         default_chain=default_chain, verbose=True
                        )

chain.run("What is black body radiation?")

> Entering new MultiPromptChain chain...
physics: {'input': 'What is black body radiation?'}
> Finished chain.
"Black body radiation refers to the electromagnetic radiation emitted by a perfect black body, which is an object that absorbs all radiation that falls on it and emits radiation at all wavelengths. The radiation emitted by a black body depends only on its temperature and follows a specific distribution known as Planck's law. This type of radiation is important in understanding the behavior of stars, as well as in the development of technologies such as incandescent light bulbs and infrared cameras."

If we ask a maths question, we should see this routed through the maths sub-chain.

chain.run("what is 2 + 2")

> Entering new MultiPromptChain chain...
math: {'input': 'what is 2 + 2'}
> Finished chain.
'As an AI language model, I can answer this question easily. The answer to 2 + 2 is 4.'

So if we pass in a question that does not relate to any of the router sub-chains, this should activate the default sub-chain to answer.

chain.run("Why does every cell in our body contain DNA?")

> Entering new MultiPromptChain chain...
None: {'input': 'Why does every cell in our body contain DNA?'}
> Finished chain.
'Every cell in our body contains DNA because DNA carries the genetic information that determines the characteristics and functions of each cell. DNA contains the instructions for the synthesis of proteins, which are essential for the structure and function of cells. Additionally, DNA is responsible for the transmission of genetic information from one generation to the next. Therefore, every cell in our body needs DNA to carry out its specific functions and to maintain the integrity of the organism as a whole.'
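The routing behaviour we just saw can be sketched in plain Python (a toy analogy - a keyword match stands in for the LLM router): pick a destination handler by name, and fall back to a default when nothing matches.

```python
# stand-ins for the destination chains
handlers = {
    "physics": lambda q: f"[physics] {q}",
    "math":    lambda q: f"[math] {q}",
}

def default_handler(q: str) -> str:
    # stand-in for the default chain
    return f"[default] {q}"

def route(question: str) -> str:
    # keyword match standing in for the LLM's routing decision
    destination = None
    if "radiation" in question:
        destination = "physics"
    elif "2 + 2" in question:
        destination = "math"
    # unknown destination -> fall back to the default handler
    handler = handlers.get(destination, default_handler)
    return handler(question)

print(route("What is black body radiation?"))
print(route("Why does every cell in our body contain DNA?"))
```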

Now that we understand the basic building blocks of chains, we can start to put these together to create really interesting combinations - for example a chain that can do question answering over documents.

7 Acknowledgements

I’d like to express my thanks for the wonderful LangChain for LLM Application Development course, which I completed, and acknowledge the use of some images and other materials from the course in this article.