Run a Basic Multi-Agent Pipeline with CrewAI | AI Collaboration

Learn how to set up and run a basic multi-agent pipeline using CrewAI. Discover how to define agents, goals, and backstories for collaborative AI systems. Ideal for LLM development.

Running a Basic Multi-Agent Pipeline with CrewAI

This guide provides a step-by-step process for setting up and running a fundamental multi-agent pipeline using CrewAI. CrewAI simplifies the development of collaborative AI systems where multiple agents work together to achieve a shared objective.

Introduction

A multi-agent pipeline in CrewAI involves:

  • Defining Agents: Each agent is assigned a specific role, goal, and potentially a backstory to guide its behavior.
  • Organizing into a Crew: Agents are grouped into a "Crew," which acts as the orchestrator for a shared task.
  • Executing a Shared Task: The crew then executes a collaborative task, simulating real-world workflows where AI agents contribute to an overall objective.

CrewAI offers a minimal and intuitive Python interface for building these agentic AI systems.

Prerequisites

  • Python: Version 3.10 or higher (recent CrewAI releases require 3.10+).
  • CrewAI Package: Must be installed.
  • LLM Backend:
    • Required (for this example): OpenAI API key.
    • Optional: LangChain-compatible LLMs (e.g., Claude, Hugging Face models).

Installation

Install CrewAI using pip:

pip install crewai
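The example below also assumes an OpenAI API key is available in the environment (typically via `export OPENAI_API_KEY="sk-..."`). A small, hypothetical helper for sanity-checking this before running a crew — the `sk-` prefix check is a heuristic based on OpenAI's current key format, not an official validation:

```python
import os

def openai_key_configured(env=None):
    """Return True if an OpenAI API key appears to be configured.

    The "sk-" prefix and length checks are heuristics based on OpenAI's
    current key format; they may change and do not verify the key works.
    """
    env = os.environ if env is None else env
    key = env.get("OPENAI_API_KEY", "")
    return key.startswith("sk-") and len(key) > 20

if __name__ == "__main__":
    if not openai_key_configured():
        print("Set OPENAI_API_KEY before running the crew, e.g.:")
        print('  export OPENAI_API_KEY="sk-..."')
```

Running this check up front gives a clearer error than a failed LLM call deep inside a crew run.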

Step-by-Step Guide

Follow these steps to create and run a basic multi-agent pipeline:

1. Import Required Libraries

Begin by importing the necessary classes from the crewai and langchain libraries.

from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI # Or your preferred chat model from LangChain
  • Note: The legacy from langchain.llms import OpenAI import is deprecated in recent LangChain releases; chat models such as ChatOpenAI now live in the langchain_openai package. You can substitute other LangChain-compatible chat models, such as those from Anthropic (Claude) or Hugging Face.

2. Define Agents

Create individual agents, each with a specific role, goal, backstory, and configured LLM.

# Define the Researcher Agent
researcher = Agent(
    role="Researcher",
    goal="Find the latest news and trends in artificial intelligence.",
    backstory="An expert in AI research and industry trends, skilled at gathering and synthesizing information.",
    llm=ChatOpenAI(model="gpt-4") # Example using GPT-4
)

# Define the Writer Agent
writer = Agent(
    role="Writer",
    goal="Summarize the research findings into a concise and engaging blog post.",
    backstory="A tech content writer experienced in simplifying complex topics for a general audience.",
    llm=ChatOpenAI(model="gpt-4") # Example using GPT-4
)

3. Define Tasks and Create the Crew

CrewAI describes work as Task objects rather than a single free-form task string: each Task specifies what must be done, what output is expected, and which agent owns it. Define the tasks, then instantiate the Crew class with the agents and their tasks. The Crew object manages the agents and their workflow.

# Describe the work as Task objects
research_task = Task(
    description="Find the latest news and trends in artificial intelligence.",
    expected_output="A bullet-point list of current AI trends with brief explanations.",
    agent=researcher
)

write_task = Task(
    description="Summarize the research findings into a concise and engaging blog post.",
    expected_output="A short blog post with a title and summary paragraph.",
    agent=writer
)

# Create the Crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    verbose=True # Enable detailed logs of each agent's work
)
  • agents: A list containing the Agent objects you defined.
  • tasks: A list of Task objects. By default they run sequentially, and each task's output is passed as context to the next.
  • verbose: Controls logging detail. Recent CrewAI releases take a boolean; older releases accepted integer levels (1 or 2).

4. Execute the Task with kickoff()

Initiate the pipeline by calling the kickoff() method on the crew object. This triggers the collaborative execution of the defined task.

# Execute the task
result = crew.kickoff()

# Print the final output
print("## Final Output:\n")
print(result)

The kickoff() method orchestrates the entire process:

  • Each agent receives its goal and context.
  • Agents perform reasoning using their LLM backends.
  • Intermediate outputs are communicated between agents as needed.
  • Agents contribute to the final consolidated result.
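Conceptually, this sequential hand-off is just a pipeline in which each step receives the previous step's output as context. The following framework-free sketch (plain Python with stubbed "agents" in place of LLM calls — the names and structure are illustrative, not CrewAI's API) shows that control flow:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StubAgent:
    """A stand-in for an agent: a role plus a function that 'reasons'."""
    role: str
    run: Callable[[str], str]  # takes context, returns output

def kickoff(agents, task):
    """Mimic sequential orchestration: run agents in order, chaining outputs."""
    context = task
    for agent in agents:
        context = agent.run(context)  # each agent sees the previous output
    return context

# Stubbed agents: a real CrewAI agent would call its LLM backend here.
researcher = StubAgent("Researcher", lambda ctx: f"Findings on: {ctx}")
writer = StubAgent("Writer", lambda ctx: f"Blog post summarizing [{ctx}]")

result = kickoff([researcher, writer], "current AI trends")
print(result)  # Blog post summarizing [Findings on: current AI trends]
```

CrewAI adds LLM reasoning, role prompts, and logging on top of this basic chain, but the data flow between agents follows the same shape.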

Sample Output

The result variable holds the crew's consolidated output (in recent CrewAI versions, a CrewOutput object whose string form is the final text). Printed, it might look like:

## Final Output:

**Title:** Top AI Trends Shaping Tomorrow: A 2025 Outlook

**Summary:** Artificial intelligence is rapidly evolving, with a significant shift towards agentic systems and multimodal learning capabilities. The fine-tuning of domain-specific LLMs is accelerating adoption across key sectors such as healthcare, finance, and education, promising transformative impacts.

Real-World Example Use Cases

CrewAI is versatile and can be applied to various collaborative AI tasks:

  • Research Pipeline: A researcher agent gathers data, and a summarizer agent synthesizes the findings into actionable insights.
  • Code Review Team: A developer agent writes code, and a reviewer agent evaluates it, suggesting improvements.
  • Marketing Automation: A strategist agent plans content campaigns, and a writer agent generates social media copy and blog posts.
  • Customer Support Escalation: A first-line agent handles common queries, escalating complex issues to a specialist agent.

Key Benefits

  • Fast Setup: Quickly prototype and deploy team-based AI automation.
  • Modular Architecture: Design and manage agents independently.
  • Seamless LLM Integration: Easily integrate with various LLM providers.
  • Reusable Components: Build and reuse agents and workflows across different tasks.

Best Practices

  • Clear Goals: Define specific, measurable, achievable, relevant, and time-bound (SMART) goals for each agent.
  • Role Definition: Assign roles that accurately reflect real-world job functions for better agent behavior.
  • Start Simple: Limit the number of agents in early experiments to maintain clarity and simplify debugging.
  • Monitor Execution: Log intermediate outputs (verbose=2) to understand agent decision-making and identify potential bottlenecks or misinterpretations.
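Beyond verbose logging, you can capture intermediate outputs yourself by wrapping each step. A minimal, framework-free sketch of that monitoring idea (the wrapper and names are illustrative, not part of CrewAI's API):

```python
def with_logging(name, fn, log):
    """Wrap a step so its input and output are recorded for later inspection."""
    def wrapped(context):
        output = fn(context)
        log.append({"step": name, "input": context, "output": output})
        return output
    return wrapped

log = []
# Stubbed steps standing in for agent calls.
research = with_logging("research", lambda ctx: f"notes({ctx})", log)
write = with_logging("write", lambda ctx: f"post({ctx})", log)

result = write(research("AI trends"))
for entry in log:
    print(entry["step"], "->", entry["output"])
# research -> notes(AI trends)
# write -> post(notes(AI trends))
```

Inspecting the recorded inputs and outputs makes it easy to spot where an agent misinterpreted its context, which is exactly the failure mode verbose logging is meant to surface.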

SEO Keywords

Run multi-agent pipeline, Crew AI, basic Crew AI example, Python, Crew AI agent collaboration, execute task Crew AI kickoff, Crew AI pipeline steps, define agents Crew AI, Crew AI blog summarizer, modular multi-agent system, AI task orchestration, LangChain Crew AI tutorial.

Interview Questions

  • What are the core components required to run a basic multi-agent pipeline in CrewAI?
  • How do agents communicate and collaborate within a CrewAI pipeline?
  • What is the primary function of the kickoff() method in CrewAI?
  • How do you define the roles and goals for individual agents in CrewAI?
  • What are the different LLM backends that can be integrated with CrewAI?
  • What are the advantages of using a modular agent architecture in CrewAI pipelines?
  • Describe the typical workflow for creating and executing a Crew in CrewAI.
  • What is the role of the Crew object in orchestrating a multi-agent system?
  • Can agents in a CrewAI pipeline share intermediate outputs? If so, how is this managed?
  • What are some practical, real-world use cases for CrewAI pipelines?
  • How can you ensure that each agent's behavior effectively contributes to the overall shared task?
  • What are the benefits of limiting the agent count when you are experimenting with new CrewAI pipelines?