I finished my previous blog post with the aim of looking at Langchain's template options. I was already heading down a similar path in that post, defining an initial prompt to be passed in with every request, but Langchain's templates allow for easier and more thorough control.
Langchain keeps the prompts and the model together in a chain. The chain takes a template for the system prompt and another for the human prompt. A take-away for me was that the output of one component can be piped (passed) into another (there's a short sketch of this after the main example below).
The templates can be imported using:
from langchain.prompts import (
    PromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    ChatPromptTemplate,
)
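As a quick illustration of what a template actually is (a minimal sketch of my own, not from the tutorial), a PromptTemplate is just a string with named placeholders that get filled in later:

from langchain.prompts import PromptTemplate

# A template is a string with named placeholders.
city_template = PromptTemplate(
    input_variables=["city"],
    template="What is the weather like in {city}?",
)

# format() substitutes the variables into the string:
# "What is the weather like in London?"
print(city_template.format(city="London"))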
Note: If you skipped the previous blog posts, I'm following along with Real Python's "Build an LLM RAG Chatbot…" tutorial. My blog posts are to help with my understanding, keep track of any side paths I head down, and note any adjustments I make (e.g. using Ollama instead of OpenAI, different Python libraries, etc.).
Langchain uses the "LangChain Expression Language" (LCEL) to compose its templates, models and other components into chains. Details of the language are available at https://python.langchain.com/docs/introduction/, with a handy cheatsheet available at https://python.langchain.com/docs/how_to/lcel_cheatsheet/. The Langchain website also has a tutorial on building a chatbot, which details how Langchain (via LangGraph) can store chat state so that the bot has a form of memory / message persistence.
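To give a flavour of LCEL (a minimal sketch along the lines of the cheatsheet), every LCEL component is a "runnable" that shares the same interface, e.g. invoke for a single input and batch for a list of inputs:

from langchain_core.runnables import RunnableLambda

# Wrap an ordinary function so it becomes an LCEL runnable.
add_one = RunnableLambda(lambda x: x + 1)

print(add_one.invoke(1))          # 2
print(add_one.batch([1, 2, 3]))   # [2, 3, 4]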
Sticking with the Langchain templates, the example below uses a base template (called "review_template_str" in the code) and then supplies additional context at run time (called "prompt_context" in the code).
import os

import dotenv
from langchain_ollama import OllamaLLM
from langchain.prompts import (
    PromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    ChatPromptTemplate,
)

# Load the model name and server URL from a .env file.
dotenv.load_dotenv()
chat_model = OllamaLLM(
    model=os.getenv('LLM_MODEL'),
    base_url=os.getenv('LLM_URL'),
)
review_template_str = """Your job is to answer questions about the weather.
Use the following context to answer questions.
Be as detailed as possible, but don't make up any information
that's not from the context.
If you don't know an answer, say you don't know.
If the question is not about the weather, say you don't know.
{context}
"""
# System prompt: sets the ground rules and receives the weather context.
review_system_prompt = SystemMessagePromptTemplate(
    prompt=PromptTemplate(
        input_variables=["context"],
        template=review_template_str,
    )
)

# Human prompt: just the user's question, passed through unchanged.
review_human_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        input_variables=["question"],
        template="{question}",
    )
)
messages = [review_system_prompt, review_human_prompt]

# The chat prompt combines both message templates and expects
# "context" and "question" to be supplied when the chain is invoked.
review_prompt_template = ChatPromptTemplate(
    input_variables=["context", "question"],
    messages=messages,
)
# LCEL: pipe the formatted prompt straight into the model.
review_chain = (
    review_prompt_template
    | chat_model
)
human_question = input("Enter a question about the weather: ")
prompt_context = (
    "It is currently raining. The temperature is 2 degrees Celsius. "
    "The rain is not due to stop for several hours."
)
print(review_chain.invoke({"context": prompt_context, "question": human_question}))
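The earlier point about piping outputs onwards applies here too: the chain doesn't have to stop at the model. As a minimal sketch (my own extension, not part of the tutorial), the model's output can be piped into an output parser such as StrOutputParser:

from langchain_core.output_parsers import StrOutputParser

# Extend the chain: prompt -> model -> parser.
review_chain = review_prompt_template | chat_model | StrOutputParser()

With OllamaLLM the output is already a plain string, so the parser changes little here, but it becomes useful when swapping in a chat model whose raw output is a message object.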
The prompt_context allows additional context to be provided, so in my weather-based chatbot I've given it some details about the current weather. When the bot is asked a question it uses the base template (telling it to only discuss weather-related subjects), then frames the question with the additional context of what the weather is currently like.
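If you want to check exactly what the model will be sent, ChatPromptTemplate can render the final messages without invoking the LLM (a quick sketch reusing the objects defined above; the question is just a made-up example):

# Render the final messages without calling the model.
rendered = review_prompt_template.format_messages(
    context=prompt_context,
    question="Will the rain stop soon?",
)
for message in rendered:
    print(f"{message.type}: {message.content}")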