Learn LangChain in Simple Words

Artificial Intelligence has made huge progress with Large Language Models (LLMs) like GPT. These models can write text, answer questions, and summarize information. However, when developers try to build real applications using these models, they quickly realize that using an LLM alone is not enough.

This is where LangChain becomes useful.

LangChain is a framework that helps developers build structured, reliable, and scalable AI applications using large language models. It provides ready-to-use components that make it easy to connect AI models with prompts, memory, documents, APIs, and external tools.

This article explains LangChain from scratch, using plain English and simple Python examples. No advanced AI background is required.


What Exactly Is LangChain?

LangChain is an open-source framework designed to simplify application development using large language models. Instead of writing everything manually, LangChain gives you building blocks that handle common tasks like:

  • Prompt management
  • Conversation memory
  • Document retrieval
  • Tool usage
  • Multi-step reasoning

In short, LangChain helps you turn an LLM into a real application, not just a chatbot.


Why LangChain Is Needed

Large language models have several limitations:

  • They forget everything after one response
  • They cannot access databases on their own
  • They cannot call APIs automatically
  • They struggle with multi-step tasks

LangChain solves these problems by adding structure around the model.

Without LangChain:

  • You manually manage prompts
  • You write custom memory logic
  • You handle document search yourself

With LangChain:

  • You reuse tested components
  • You build faster
  • Your AI behaves more predictably

Installing LangChain

Before using LangChain, you need Python installed (Python 3.9+ recommended).

Install LangChain using pip:

pip install langchain langchain-openai

You also need an API key from your LLM provider, exposed as an environment variable. For OpenAI:

export OPENAI_API_KEY="your_api_key_here"

Core Idea Behind LangChain

LangChain treats AI applications as pipelines, not single prompts.

A typical pipeline looks like this:

  1. Receive user input
  2. Add context (memory or documents)
  3. Build a structured prompt
  4. Call the language model
  5. Return the response
  6. Save the interaction

LangChain provides components for every step.
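The six steps above can be sketched in plain Python. In the sketch below, fake_llm is a stand-in for a real model call; everything here is illustrative, not LangChain's actual code:

```python
# Minimal sketch of the pipeline idea, with a fake model call.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM; just echoes the prompt back.
    return f"(model answer to: {prompt})"

history = []  # step 6 saves interactions here

def run_pipeline(user_input: str) -> str:
    context = "\n".join(history)                     # 2. add context
    prompt = f"{context}\nHuman: {user_input}\nAI:"  # 3. build a structured prompt
    response = fake_llm(prompt)                      # 4. call the language model
    history.append(f"Human: {user_input}")           # 6. save the interaction
    history.append(f"AI: {response}")
    return response                                  # 5. return the response

print(run_pipeline("What is LangChain?"))
```

Each LangChain component below (prompt templates, memory, retrievers) formalizes one of these hand-written steps.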


Prompt Templates: Asking AI the Right Way

Prompts are the instructions you send to an AI model. LangChain uses prompt templates to make prompts reusable and dynamic.

Example:

from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in simple words."
)

print(prompt.format(topic="LangChain"))

Why this is useful:

  • No hard-coded text
  • Easy to reuse
  • Cleaner code

Prompt templates are essential for scalable AI applications.


Language Models in LangChain

LangChain works with many language models. Here’s a simple example using OpenAI:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0.3)

response = llm.invoke("What is LangChain?")
print(response.content)

The temperature controls creativity:

  • Low value (e.g. 0.0–0.3) → more focused and factual
  • High value (e.g. 0.8–1.0) → more creative and varied

Chains: Connecting Steps Together

A chain links multiple steps into a single workflow.

Simple example using LLMChain (newer LangChain versions deprecate it in favor of the prompt | llm pipe syntax, but it still works and is the easiest way to learn the concept):

from langchain.chains import LLMChain

chain = LLMChain(
    llm=llm,
    prompt=prompt
)

result = chain.run(topic="LangChain framework")
print(result)

Chains make your AI logic:

  • Modular
  • Reusable
  • Easy to debug

Sequential Chains: Multi-Step Reasoning

LangChain also supports multi-step workflows.

Example:

from langchain.chains import SequentialChain

chain1 = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["topic"],
        template="Write a short explanation of {topic}."
    ),
    output_key="explanation"
)

chain2 = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["explanation"],
        template="Summarize this text: {explanation}"
    ),
    output_key="summary"
)

overall_chain = SequentialChain(
    chains=[chain1, chain2],
    input_variables=["topic"],
    output_variables=["summary"]
)

result = overall_chain.run(topic="LangChain")
print(result)

Memory: Helping AI Remember Conversations

By default, AI has no memory. LangChain introduces memory components.

Example using conversation memory:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["history", "question"],
        template="{history}\nHuman: {question}\nAI:"
    ),
    memory=memory
)

print(chain.run(question="What is LangChain?"))
print(chain.run(question="Why is it useful?"))

Memory makes chatbots feel continuous and human-like.
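Under the hood, conversation buffer memory is little more than an accumulating transcript that gets injected into the next prompt. A hand-rolled sketch of the idea (not LangChain's actual implementation):

```python
class SimpleBufferMemory:
    """Toy version of conversation buffer memory: keep the full transcript."""

    def __init__(self):
        self.turns = []

    def save(self, question: str, answer: str):
        # Record both sides of the exchange.
        self.turns.append(f"Human: {question}")
        self.turns.append(f"AI: {answer}")

    def load(self) -> str:
        # This string is what fills the {history} slot in the prompt.
        return "\n".join(self.turns)

memory = SimpleBufferMemory()
memory.save("What is LangChain?", "A framework for building LLM apps.")
print(memory.load())
```

Because the whole transcript is re-sent every turn, buffer memory grows with the conversation; LangChain also offers summarizing and windowed memory variants for long chats.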


Document Loaders: Using Your Own Data

Real applications often need AI to read documents.

Example loading a text file:

from langchain.document_loaders import TextLoader

loader = TextLoader("data.txt")
documents = loader.load()

LangChain supports PDFs, Word files, web pages, and more.
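Whatever the loader, the result is a list of document objects, each pairing text with metadata. A rough picture of that shape in plain Python (LangChain uses a Document class with these same two fields; the dict version here is just illustrative):

```python
# What loaded "documents" roughly look like: text content plus metadata.
documents = [
    {
        "page_content": "LangChain is a framework for building LLM applications.",
        "metadata": {"source": "data.txt"},
    },
]

for doc in documents:
    # Metadata (source file, page number, URL, ...) survives all later steps,
    # so answers can be traced back to where the text came from.
    print(doc["metadata"]["source"], "->", doc["page_content"][:40])
```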


Text Splitters: Handling Large Documents

Large documents must be split into smaller chunks, because language models can only process a limited amount of text (the context window) at a time.

from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=50
)

chunks = splitter.split_documents(documents)

This improves:

  • Search accuracy
  • Model performance
  • Cost efficiency
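To see what chunk_size and chunk_overlap actually mean, here is a simplified character-based splitter (LangChain's recursive splitter is smarter, preferring to cut at paragraph and sentence boundaries, but the sliding-window idea is the same):

```python
def split_text(text: str, chunk_size: int, chunk_overlap: int):
    """Naive fixed-size splitter: each chunk overlaps the previous one."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

text = "abcdefghij" * 5  # 50 characters
chunks = split_text(text, chunk_size=20, chunk_overlap=5)

print(len(chunks), "chunks")            # step of 15 chars over 50 → 4 chunks
print(chunks[0][-5:] == chunks[1][:5])  # overlapping region matches → True
```

The overlap means a sentence falling on a chunk boundary still appears whole in at least one chunk, which is why overlap improves search accuracy.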

Embeddings: Turning Text Into Numbers

Embeddings convert text into numerical vectors.

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vector = embeddings.embed_query("What is LangChain?")

Embeddings allow semantic search, not just keyword matching.
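Semantic search works because similar texts end up as nearby vectors, typically compared with cosine similarity. A tiny example with hand-made vectors (real embeddings have hundreds or thousands of dimensions; these three-dimensional ones are just for illustration):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embeddings: "cat" and "kitten" point in similar directions.
cat    = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
plane  = [0.0, 0.1, 0.9]

print(cosine_similarity(cat, kitten))  # high → semantically close
print(cosine_similarity(cat, plane))   # low  → unrelated
```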


Vector Stores: Storing Embeddings

Vector stores help search embeddings efficiently.

Example using FAISS (install the backend first with pip install faiss-cpu):

from langchain.vectorstores import FAISS

vectorstore = FAISS.from_documents(chunks, embeddings)
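Conceptually, a vector store does two things: keep (vector, text) pairs, and find the pairs closest to a query vector. A brute-force sketch of that idea (FAISS performs the same search with optimized indexes; the vectors and texts here are made up):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# (embedding, text) pairs; in practice the vectors come from an embedding model.
store = [
    ([0.9, 0.1], "LangChain is a framework for LLM apps."),
    ([0.1, 0.9], "Bananas are rich in potassium."),
]

def search(query_vector, k=1):
    # Rank every stored vector by similarity to the query, keep the top k.
    ranked = sorted(store, key=lambda item: cosine(query_vector, item[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

print(search([0.8, 0.2]))  # the LangChain sentence is the nearest match
```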

Retrievers: Finding Relevant Information

Retrievers fetch the most relevant data before sending it to the model.

retriever = vectorstore.as_retriever()

docs = retriever.get_relevant_documents("Explain LangChain")

This is the core of Retrieval-Augmented Generation (RAG).


Building a Simple RAG Application

from langchain.chains import RetrievalQA

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever
)

answer = qa_chain.run("What is LangChain used for?")
print(answer)

This approach:

  • Reduces hallucinations
  • Improves factual accuracy
  • Uses your own data

Agents: Letting AI Decide What to Do

Agents can choose tools dynamically.

Example:

from langchain.agents import initialize_agent, Tool

tools = [
    Tool(
        name="Calculator",
        func=lambda x: eval(x),  # fine for a quick demo; never eval untrusted input
        description="Useful for math"
    )
]

agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description"
)

agent.run("What is 25 multiplied by 4?")

Agents make AI applications more autonomous: the model itself decides which tool to use, and when.
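The core of the agent loop is: look at the question, pick a tool, run it, and use the result. A drastically simplified sketch, where regex and keyword matching stand in for the LLM's ReAct-style reasoning (everything here is illustrative):

```python
import re

def calculator(expression: str) -> str:
    return str(eval(expression))  # demo only; never eval untrusted input

def echo(text: str) -> str:
    return f"You said: {text}"

def toy_agent(question: str) -> str:
    numbers = re.findall(r"\d+", question)
    if "multiplied" in question and len(numbers) >= 2:
        # "Decision": the question looks like math, so pick the calculator.
        # (A real agent lets the LLM make this choice and extract the input.)
        return calculator(f"{numbers[0]} * {numbers[1]}")
    # Otherwise fall back to the echo tool.
    return echo(question)

print(toy_agent("What is 25 multiplied by 4?"))  # → 100
```

A real LangChain agent replaces the hard-coded routing with a prompted model that reasons about which tool's description fits the task.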


Common Use Cases of LangChain

LangChain is widely used for:

  • Chatbots
  • Document Q&A systems
  • AI copilots
  • Customer support automation
  • Internal knowledge bases

LangChain bridges the gap between raw language models and real-world applications. By providing reusable components for prompts, memory, tools, and retrieval, it allows developers to focus on solving real problems instead of boilerplate code.

For beginners, LangChain is one of the best ways to enter the world of AI application development.
