LangChain Expression Language (LCEL) Basics
LangChain Expression Language (LCEL) is a declarative way to compose chains. It provides a powerful and flexible way to build complex LLM applications by combining different components like LLMs, prompt templates, output parsers, and more.
What is LCEL?
LCEL allows you to define sequences of operations in a clear and intuitive manner. Instead of writing imperative code that manually handles intermediate steps, you can define a "chain" of operations that are executed sequentially. This makes your code more readable, maintainable, and easier to debug.
Key Concepts
LCEL is built around a few core concepts:
- Chains: A chain represents a sequence of operations. You can think of it as a pipeline where data flows from one component to the next.
- Components: These are the building blocks of your chains. Common components include:
- LLMs: Large Language Models (e.g., OpenAI's GPT-3.5, GPT-4).
- Prompt Templates: Structures for generating prompts by filling in variables.
- Output Parsers: Tools for structuring the output of LLMs into more usable formats (e.g., JSON, lists).
- Retrievers: Components that fetch relevant documents or data.
- Tools: Functions that your LLM can call to perform actions.
- Piping Operator (|): This operator connects components in a chain. It signifies the flow of data from one component to the next.
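The pipe operator is plain Python: components overload the `|` operator so that `a | b` produces a new runnable that feeds `a`'s output into `b`. The following is a minimal pure-Python sketch of that idea, not LangChain's actual implementation; the class and function names here are illustrative only.

```python
class Runnable:
    """Toy pipeable component (illustrative sketch, not LangChain's code)."""

    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `a | b` returns a new Runnable that runs a, then feeds its output to b
        return Runnable(lambda value: other.invoke(self.invoke(value)))


# Two toy components composed with the pipe operator
to_prompt = Runnable(lambda q: f"Q: {q}")
shout = Runnable(lambda s: s.upper())

chain = to_prompt | shout
print(chain.invoke("what is LCEL?"))  # Q: WHAT IS LCEL?
```

Because each `|` simply returns another runnable, chains of any length compose the same way, which is what makes `prompt | llm | output_parser` work.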
Getting Started with LCEL
Let's illustrate the basics of LCEL with a simple example.
1. Setting up your Environment
Ensure you have LangChain installed:
pip install langchain langchain-openai
You'll also need to set up your OpenAI API key as an environment variable.
export OPENAI_API_KEY="your-api-key"
2. Building a Simple Chain
This example demonstrates a basic chain that takes an input string, formats it using a prompt template, and then sends it to an LLM.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
# 1. Initialize the LLM
llm = ChatOpenAI(model="gpt-3.5-turbo")
# 2. Create a Prompt Template
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant."),
("user", "{input}"),
])
# 3. Create an Output Parser
output_parser = StrOutputParser()
# 4. Build the Chain using LCEL
chain = prompt | llm | output_parser
# 5. Invoke the Chain
response = chain.invoke({"input": "What is the capital of France?"})
print(response)
Explanation:
- We initialize ChatOpenAI to use the gpt-3.5-turbo model.
- ChatPromptTemplate.from_messages defines the structure of the conversation. The "{input}" placeholder will be filled with the user's query.
- StrOutputParser is used to ensure the final output is a string.
- The pipe operator (|) connects these components: the input goes to the prompt, the output of the prompt goes to the llm, and the output of the llm goes to the output_parser.
- chain.invoke({"input": "What is the capital of France?"}) executes the chain with the provided input.
3. Streaming Output
LCEL also supports streaming output, which is crucial for interactive applications.
# ... (previous imports and setup)
# Invoke the chain and stream the output
for chunk in chain.stream({"input": "Explain the concept of recursion in simple terms."}):
print(chunk, end="", flush=True)
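Rather than waiting for the complete response, stream yields output incrementally as the model produces it. A toy generator (purely illustrative, no LLM involved) shows the same consumption pattern:

```python
def fake_stream(text, chunk_size=4):
    """Yield `text` a few characters at a time, mimicking token-by-token output."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]


collected = []
for chunk in fake_stream("Recursion is a function calling itself."):
    collected.append(chunk)  # in a real app: print(chunk, end="", flush=True)

# Joining the chunks reproduces the full response
assert "".join(collected) == "Recursion is a function calling itself."
```

The loop over `chain.stream(...)` above works the same way: each chunk arrives as soon as it is available, so the user sees output immediately instead of after the full generation finishes.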
Advanced LCEL Concepts
As you build more complex applications, LCEL offers features for:
- Sequential Chains: Chaining multiple operations together.
- Parallel Chains: Executing multiple operations concurrently.
- Conditional Chains: Executing different branches of logic based on conditions.
- RunnableSequence, RunnableParallel, RunnableBranch: Specific classes for composing chains.
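The semantics of these three composition patterns can be sketched in plain Python. This is a conceptual illustration under simplifying assumptions, not the actual RunnableSequence, RunnableParallel, or RunnableBranch classes; in particular, the real RunnableParallel can execute branches concurrently, while this sketch runs them one after another.

```python
def sequence(*steps):
    """Sequential: the output of each step feeds the next."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run


def parallel(**branches):
    """Parallel: every branch receives the same input; results are keyed by name."""
    def run(value):
        return {name: branch(value) for name, branch in branches.items()}
    return run


def branch(*conditions, default):
    """Conditional: the first matching (predicate, step) pair handles the input."""
    def run(value):
        for predicate, step in conditions:
            if predicate(value):
                return step(value)
        return default(value)
    return run


# Usage of each pattern
pipeline = sequence(str.strip, str.lower)
fanout = parallel(length=len, upper=str.upper)
route = branch((str.isdigit, int), default=str.upper)

print(pipeline("  Hello "))      # hello
print(fanout("hi"))              # {'length': 2, 'upper': 'HI'}
print(route("42"), route("hi"))  # 42 HI
```

The LangChain classes follow the same shapes: a sequence threads one value through every step, a parallel fans the same input out to named branches and returns a dict, and a branch picks the first runnable whose condition matches.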
LCEL provides a robust framework for building sophisticated LLM applications with clarity and efficiency.