
Build Your First AI Agent with LangGraph: A Beginner's Tutorial

Andrius Putna · 4 min read
#ai #agents #langgraph #tutorial #python #beginners

AI agents are transforming how we interact with software, enabling systems that can reason, plan, and take actions autonomously. LangGraph, developed by LangChain, provides an intuitive framework for building these agents using a graph-based architecture. In this tutorial, we’ll walk through creating your first AI agent from scratch.

What is LangGraph?

LangGraph is a library for building stateful, multi-step AI applications. Unlike simple prompt-response patterns, LangGraph enables you to create agents that can:

  - Maintain state as messages flow through multiple steps
  - Call external tools and feed the results back into their reasoning
  - Branch conditionally based on what the LLM decides
  - Loop until a task is complete

The “graph” in LangGraph refers to how you define your agent’s behavior as a directed graph, where nodes represent actions and edges represent transitions between them.

Prerequisites

Before we start, make sure you have:

  - Python 3.9 or newer installed
  - An OpenAI API key
  - Basic familiarity with Python

Setting Up Your Environment

First, create a new project directory and set up a virtual environment:

mkdir my-first-agent
cd my-first-agent
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

Install the required packages:

pip install langgraph langchain-openai python-dotenv

Create a .env file to store your API key:

OPENAI_API_KEY=your-api-key-here

Understanding the Agent Architecture

Our agent will follow a simple but powerful pattern called the ReAct (Reasoning and Acting) loop:

  1. Observe: The agent receives input and context
  2. Think: The LLM reasons about what to do next
  3. Act: The agent executes a tool or responds
  4. Repeat: Continue until the task is complete

In LangGraph, we model this as a graph with nodes for the LLM and tools, connected by conditional edges.
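
To make the loop concrete before introducing any LangGraph APIs, here is the same pattern as plain Python (a conceptual sketch only: react_loop and tools_by_name are hypothetical names, and the LangGraph version we build next replaces this while-loop with nodes and edges):

from langchain_core.messages import ToolMessage

def react_loop(llm_with_tools, tools_by_name, messages):
    """Hypothetical hand-rolled ReAct loop; LangGraph manages this for us."""
    while True:
        response = llm_with_tools.invoke(messages)  # Think: the LLM reasons
        messages.append(response)
        if not response.tool_calls:  # No tool requested: the task is done
            return response
        for call in response.tool_calls:  # Act: execute each requested tool
            result = tools_by_name[call["name"]].invoke(call["args"])
            messages.append(ToolMessage(content=str(result), tool_call_id=call["id"]))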

Building the Agent

Let’s start with a simple question-answering agent, then extend it with tools in the next section. Create a file called agent.py:

import os
from dotenv import load_dotenv
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Load environment variables
load_dotenv()

# Define the state that flows through our graph
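# add_messages is a reducer: when a node returns {"messages": [...]},
# the new messages are appended to the state's list instead of replacing it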
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Define the agent node - this is where the LLM thinks
def agent_node(state: AgentState) -> AgentState:
    """The agent processes messages and decides what to do."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# Build the graph
def create_agent():
    # Create a new graph
    graph = StateGraph(AgentState)

    # Add the agent node
    graph.add_node("agent", agent_node)

    # Define the flow: start -> agent -> end
    graph.add_edge(START, "agent")
    graph.add_edge("agent", END)

    # Compile the graph into a runnable
    return graph.compile()

# Create and run the agent
agent = create_agent()

# Test the agent
result = agent.invoke({
    "messages": [HumanMessage(content="What is LangGraph?")]
})

print(result["messages"][-1].content)

Run the script:

python agent.py

You should see the agent respond with information about LangGraph.

Adding Tools to Your Agent

A basic chatbot is useful, but agents become powerful when they can use tools. Let’s add a simple calculator tool:

from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        result = eval(expression)
        return f"The result of {expression} is {result}"
    except Exception as e:
        return f"Error calculating: {str(e)}"

# Bind the tool to our LLM
tools = [calculator]
llm_with_tools = llm.bind_tools(tools)
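
Before wiring the calculator into the graph, you can sanity-check it on its own (functions decorated with @tool expose an invoke method):

# Call the tool directly to verify it works outside the graph
print(calculator.invoke({"expression": "2 + 2"}))  # The result of 2 + 2 is 4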

Now we need to update our graph to handle tool calls. Here’s the enhanced version:

from langgraph.prebuilt import ToolNode

def agent_node(state: AgentState) -> AgentState:
    """The agent processes messages and may call tools."""
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: AgentState) -> str:
    """Determine if we should call a tool or finish."""
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return "end"

def create_agent_with_tools(checkpointer=None):
    graph = StateGraph(AgentState)

    # Add nodes
    graph.add_node("agent", agent_node)
    graph.add_node("tools", ToolNode(tools))

    # Define the flow with conditional edges
    graph.add_edge(START, "agent")
    graph.add_conditional_edges(
        "agent",
        should_continue,
        {"tools": "tools", "end": END}
    )
    graph.add_edge("tools", "agent")  # Loop back after tool execution

    return graph.compile(checkpointer=checkpointer)

# Test the enhanced agent
agent = create_agent_with_tools()
result = agent.invoke({
    "messages": [HumanMessage(content="What is 15 * 23 + 42?")]
})
print(result["messages"][-1].content)

The agent now reasons about the question, decides to use the calculator, and returns the computed answer.
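
To see this sequence for yourself, print the full message history instead of just the final reply (a quick inspection snippet; the exact message contents vary by model):

# Print every message: the question, the tool call, the tool result,
# and the final answer
for message in result["messages"]:
    print(f"{type(message).__name__}: {message.content}")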

Understanding the Graph Flow

Let’s visualize what happens when you ask the agent a math question:

  1. START -> agent: Your question enters the agent node
  2. agent -> should_continue: The LLM decides it needs the calculator
  3. should_continue -> tools: The condition routes to the tools node
  4. tools -> agent: The calculator result goes back to the agent
  5. agent -> should_continue: The LLM now has the answer
  6. should_continue -> END: No more tools needed, respond to user

This loop structure is what makes agents powerful. They can call multiple tools, reason about intermediate results, and keep working until the task is complete.
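
If you want to watch each step execute as it happens, compiled graphs also support streaming (a small sketch; the chunk format depends on the stream_mode you pass):

# Stream execution step by step; each chunk is a node's output as it completes
for chunk in agent.stream(
    {"messages": [HumanMessage(content="What is 15 * 23 + 42?")]},
    stream_mode="updates",
):
    print(chunk)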

Adding State and Memory

For more complex interactions, you’ll want your agent to remember previous conversations. LangGraph supports checkpointing for this purpose:

from langgraph.checkpoint.memory import MemorySaver

# Create a memory saver
memory = MemorySaver()

# Rebuild the agent with checkpointing enabled
agent = create_agent_with_tools(checkpointer=memory)

# Use a thread_id to maintain conversation context
config = {"configurable": {"thread_id": "user-123"}}

# First message
agent.invoke({"messages": [HumanMessage(content="My name is Alex")]}, config)

# Later message - the agent remembers
result = agent.invoke({"messages": [HumanMessage(content="What's my name?")]}, config)
print(result["messages"][-1].content)  # "Your name is Alex"

Next Steps

Congratulations! You’ve built your first AI agent with LangGraph. Here are some ways to extend it:

  - Add more tools, such as web search or file reading
  - Give the agent a system prompt to shape its behavior
  - Swap MemorySaver for a database-backed checkpointer so memory survives restarts
  - Explore multi-agent graphs where several agents collaborate

Key Takeaways

The graph-based approach gives you fine-grained control over agent behavior while keeping the code organized and maintainable. As you build more complex agents, this structure becomes invaluable for debugging and extending functionality.


Ready to dive deeper? Check out our Complete Guide to AI Agent Frameworks, explore the LangGraph documentation for advanced patterns, or see our AI Agents Glossary for terminology reference.
