Prompting Techniques Using LangChain and Gemini

Jan 15, 2025

Prompting is a crucial aspect of building AI applications using pre-existing models. It defines how the model should respond, including factors like the desired tone and the length of the response.

Different prompting techniques can be employed based on the specific use case. These techniques guide AI models to generate accurate and relevant responses. In this blog, we will explore how to implement these techniques using Python, LangChain, and Gemini.

Setting Up Your Environment

  • First, create a project directory with a main.py file and set up a virtual environment inside it.
  • Create a .env file in the project root and add your Gemini API key under the variable name GOOGLE_API_KEY.
  • Activate the virtual environment and install the required packages:


pip install langchain-google-genai
pip install python-dotenv
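
The .env file referenced above is a plain key-value file. A minimal example (the value shown is a placeholder, not a real key):

```
GOOGLE_API_KEY=your-api-key-here
```

load_dotenv() picks this file up from the working directory at startup, so the key never needs to be hard-coded in main.py.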

1. Zero-Shot Prompting

In zero-shot prompting, the model is given a task without any prior context or examples. This approach is ideal for quick and short responses, utilizing fewer tokens.


from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI

# Load GOOGLE_API_KEY from the .env file
load_dotenv(override=True)

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

res = llm.invoke("What is the capital of India?")
print(res.content)

The model will respond with: New Delhi

2. Few-Shot Prompting

Few-shot prompting provides the model with a few examples to establish a context or pattern before the actual query.


from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", """Translate English to Hindi. Examples:
- English: How are you?
  Hindi: Tum kaise ho?
- English: Where are you going?
  Hindi: Tum kaha ja rahe ho?"""),
    ("human", "{input}"),
])

chain = prompt | llm
res = chain.invoke({"input": "Who are you?"})
print(res.content)

In this example, we provide translation examples for context. The model understands the format and responds accordingly.

3. Chain-of-Thought Prompting

This technique prompts the model to provide intermediate reasoning steps, leading to a final answer. It’s particularly useful for solving complex problems requiring logical reasoning.


prompt = ChatPromptTemplate.from_messages([
    ("system", "Provide a step-by-step answer for the question asked by the user."),
    ("human", "{input}"),
])

chain = prompt | llm
res = chain.invoke({"input": "If there are 3 apples and you take away 2, how many do you have?"})
print(res.content)

The model will provide a detailed, step-by-step explanation of how it arrived at the answer.

Conclusion

By using these prompting techniques, you can fine-tune AI responses to better fit your specific needs. Whether you need quick answers, context-driven responses, or step-by-step reasoning, LangChain and Gemini provide the tools to achieve these goals effectively.