
Building Your First Agent

Building your first agent using LLMs and tools.
June 10, 2025 · Kenneth Ezekiel

Introduction

In the first part of this series, we established the foundational concepts of Agentic AI. We defined an agent as an autonomous system that perceives its environment, reasons about its state, and takes actions using tools to achieve a specific goal, differentiating it from a standard Large Language Model (LLM).

In this article, we will transition from theory to practice. This tutorial will guide you through the process of building a simple but functional AI agent. The objective is to construct a system that can leverage both real-time web search and mathematical calculations to solve a multi-step problem that the underlying LLM could not solve on its own.

Defining the Agent's Task

To effectively demonstrate an agent's capabilities, it must be assigned a clear objective that necessitates planning and tool use. A simple, single-step query would be insufficient.

Therefore, the mission for our agent will be to answer the following question:

"Who was the president of the United States when the lead actor of the movie 'The Matrix' was born, and what is that president's birth year raised to the power of 0.3?"

This task is designed to compel the agent to execute a logical sequence of actions:

  1. Utilize a search tool to identify the lead actor of 'The Matrix'.
  2. Use the search tool again to determine the actor's birthdate.
  3. Use the search tool a third time to determine which U.S. president was serving at that time.
  4. Use the search tool once more to find that president's birth year.
  5. Finally, utilize a calculation tool to perform the required mathematical operation.
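
Before automating anything, it helps to see that the plan itself is mechanical. The steps above can be hand-executed in plain Python, with web search stubbed out by a lookup table of the facts the agent will later retrieve live (the FAKE_SEARCH table and solve function are illustrative, not part of any library):

```python
# A hand-executed version of the five-step plan. The agent automates exactly
# this sequence; here, web search is stubbed with a hypothetical lookup table.
FAKE_SEARCH = {
    "lead actor of The Matrix": "Keanu Reeves",
    "Keanu Reeves birth year": 1964,
    "US president in 1964": "Lyndon B. Johnson",
    "Lyndon B. Johnson birth year": 1908,
}

def solve() -> str:
    actor = FAKE_SEARCH["lead actor of The Matrix"]        # step 1
    born = FAKE_SEARCH[f"{actor} birth year"]              # step 2
    president = FAKE_SEARCH[f"US president in {born}"]     # step 3
    pres_born = FAKE_SEARCH[f"{president} birth year"]     # step 4
    value = pres_born ** 0.3                               # step 5
    return f"{president}, born {pres_born}; {pres_born}^0.3 is about {value:.2f}"

print(solve())
```

What the agent adds over this sketch is deciding the sequence itself and fetching each fact from the live web rather than from a hard-coded table.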

Prerequisites and Environment Setup

Before development, it is essential to prepare the environment correctly. The following steps outline the required setup.

Requirements:

  • Python (version 3.8 or newer)
  • A code editor (e.g., Visual Studio Code)
  • An API key from OpenAI
  • An API key from Tavily Search

Step 1: Project Environment

First, create a dedicated folder for this project. In your terminal, navigate to this folder and create a Python virtual environment.

# For macOS/Linux
python3 -m venv venv
source venv/bin/activate
 
# For Windows
python -m venv venv
.\venv\Scripts\activate

Step 2: Library Installation

With the virtual environment activated, install the necessary Python libraries.

pip install langchain langchain_openai tavily-python python-dotenv numexpr

Step 3: API Key Management

API keys should never be hardcoded into source code. A secure practice is to use a .env file. Create a file named .env in your project's root directory and add your keys as follows:

# .env file
OPENAI_API_KEY="sk-..."
TAVILY_API_KEY="tvly-..."

Create your primary Python file, main.py, and use the dotenv library to load these keys as environment variables.

# main.py
from dotenv import load_dotenv
 
load_dotenv()
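
Since a missing key otherwise surfaces later as a confusing authentication error, a small fail-fast check after load_dotenv() is useful (require_env is a hypothetical helper, not part of python-dotenv):

```python
import os

def require_env(*names: str) -> None:
    """Raise immediately if any expected environment variable is unset."""
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")

# After load_dotenv(), verify both keys before touching either API:
# require_env("OPENAI_API_KEY", "TAVILY_API_KEY")
```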

Building the Agent: A Code Walkthrough

This section details the step-by-step construction of the agent's components.

Step 4.1: Initializing the Language Model

The core of the agent is the LLM, which provides the reasoning capabilities. We will instantiate OpenAI's gpt-4o model.

# main.py (continued)
from langchain_openai import ChatOpenAI
 
# Initialize the LLM with a temperature of 0 for deterministic outputs
llm = ChatOpenAI(model="gpt-4o", temperature=0)

Step 4.2: Defining the Agent's Tools

The agent requires tools to interact with external systems. We will provide a search tool and a calculator.

# main.py (continued)
from langchain.agents import Tool
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain.chains import LLMMathChain
 
# Initialize the Tavily search tool
search_tool = TavilySearchResults()
 
# Create the calculator tool using LLMMathChain
llm_math_chain = LLMMathChain.from_llm(llm)
calculator_tool = Tool(
    name="calculator",
    func=llm_math_chain.run,
    description="Useful for answering math questions; input should be a single mathematical expression."
)
 
# Assemble the list of tools available to the agent
tools = [search_tool, calculator_tool]

This tools list defines the set of external functions the agent is permitted to invoke.
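
As an aside, LLMMathChain routes every calculation through the LLM. If you prefer a deterministic calculator with no extra model call, a small evaluator built on Python's ast module can back the same Tool interface (a sketch; safe_eval is a hypothetical helper, not a LangChain component):

```python
import ast
import operator

# Minimal safe arithmetic evaluator: accepts only numeric literals and
# the basic operators below, so arbitrary code can never execute.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval"))
```

This could be wired in as Tool(name="calculator", func=lambda s: str(safe_eval(s)), ...), trading LLMMathChain's tolerance for natural-language input for determinism and zero extra API calls.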

Step 4.3: Designing the Prompt Template

The prompt template structures the input to the LLM, guiding its reasoning process.

# main.py (continued)
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
 
# Create the prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use your tools to answer the user's question."),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

The agent_scratchpad is a critical variable. It is a placeholder where the history of previous actions and their corresponding observations is injected. This allows the agent to maintain context and plan its subsequent steps based on what has already transpired.
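
Concretely, after the agent's first search, the scratchpad expands into tool-calling messages in the OpenAI chat format, roughly like this (values are illustrative; real call ids are generated by the model):

```python
# Roughly what the scratchpad contributes after one search: the assistant's
# tool call, then the tool's observation, linked by the tool_call_id.
scratchpad_messages = [
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "tavily_search_results_json",
                "arguments": '{"query": "lead actor of The Matrix"}',
            },
        }],
    },
    {
        "role": "tool",
        "tool_call_id": "call_1",
        "content": "The lead actor in The Matrix is Keanu Reeves.",
    },
]
```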

Step 4.4: Creating the Agent Logic

This step binds the LLM, the tools, and the prompt together to form the agent's core logic.

# main.py (continued)
from langchain.agents.format_scratchpad.openai_tools import format_to_openai_tool_messages
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
 
# Chain the components together to create the agent
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
    }
    | prompt
    | llm.bind_tools(tools)
    | OpenAIToolsAgentOutputParser()
)
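
The | operator here is LangChain's LCEL composition: each stage's output becomes the next stage's input. The mechanism can be mimicked in a few lines of plain Python (a toy sketch, not LangChain's actual Runnable implementation):

```python
class Step:
    """Toy runnable: wraps a function and composes left-to-right with |."""
    def __init__(self, fn):
        self.fn = fn
    def __or__(self, other: "Step") -> "Step":
        # (a | b) runs a first, then feeds its result into b
        return Step(lambda x: other.fn(self.fn(x)))
    def invoke(self, x):
        return self.fn(x)

pipeline = Step(lambda x: x["input"]) | Step(str.upper) | Step(lambda s: s + "!")
print(pipeline.invoke({"input": "hello"}))  # HELLO!
```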

Step 4.5: Creating the Agent Executor

The AgentExecutor is the runtime component that invokes the agent and manages the execution loop until a final answer is produced.

# main.py (continued)
from langchain.agents import AgentExecutor
 
# Create the agent executor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

Setting verbose=True is essential for observing the agent's step-by-step reasoning process during execution.
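
Conceptually, the executor's loop looks like the following simplified sketch (not LangChain's actual implementation; the decide callable and the action tuples are hypothetical stand-ins for the agent):

```python
def run_agent_loop(decide, tools, question, max_iterations=15):
    """decide(question, steps) returns either ("finish", answer) or
    ("call", tool_name, tool_input); both shapes are hypothetical."""
    steps = []
    for _ in range(max_iterations):
        action = decide(question, steps)
        if action[0] == "finish":
            return action[1]
        _, name, tool_input = action
        observation = tools[name](tool_input)      # run the chosen tool
        steps.append((name, tool_input, observation))  # feeds the scratchpad
    raise RuntimeError("Agent exceeded max iterations")
```

The max_iterations guard mirrors AgentExecutor's own safeguard against an agent that never converges on a final answer.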

Executing the Agent and Analyzing its Output

With all components assembled, we can now invoke the agent with our defined task.

# main.py (continued)
if __name__ == "__main__":
    question = "Who was the president of the United States when the lead actor of the movie 'The Matrix' was born, and what is that president's birth year raised to the power of 0.3?"
    response = agent_executor.invoke({"input": question})
    print("\nFinal Answer:")
    print(response["output"])

Upon execution, the verbose output will display the agent's internal monologue and actions, which will resemble the following logical sequence:

> Entering new AgentExecutor chain...

[agent]
**Thought:** I need to identify the lead actor of 'The Matrix', find their birth date, determine the U.S. president at that time, find the president's birth year, and finally calculate that year to the power of 0.3. I will begin by searching for the lead actor.
**Action:** `tavily_search_results_json` with input: `lead actor of The Matrix`

[observation]
[{"url": "...", "content": "The lead actor in The Matrix is Keanu Reeves."}]

[agent]
**Thought:** The actor is Keanu Reeves. The next step is to find his birth date.
**Action:** `tavily_search_results_json` with input: `Keanu Reeves birth date`

[observation]
[{"url": "...", "content": "Keanu Charles Reeves was born on September 2, 1964."}]

[agent]
**Thought:** Keanu Reeves was born in 1964. I must now find the U.S. president during that year.
**Action:** `tavily_search_results_json` with input: `US president in 1964`

[observation]
[{"url": "...", "content": "Lyndon B. Johnson was the president of the United States in 1964."}]

[agent]
**Thought:** The president was Lyndon B. Johnson. I need to find his birth year.
**Action:** `tavily_search_results_json` with input: `Lyndon B. Johnson birth year`

[observation]
[{"url": "...", "content": "Lyndon Baines Johnson was born on August 27, 1908."}]

[agent]
**Thought:** The president's birth year is 1908. The final step is to calculate 1908 raised to the power of 0.3. The calculator tool is appropriate for this task.
**Action:** `calculator` with input: `1908 ** 0.3`

[observation]
Answer: 9.642141734768494

[agent]
**Thought:** I have gathered all required information and performed the final calculation. I can now formulate the final answer.
**Final Answer:** The president of the United States when Keanu Reeves (the lead actor of 'The Matrix') was born was Lyndon B. Johnson. His birth year was 1908, and 1908 raised to the power of 0.3 is approximately 9.64.

> Finished chain.

Final Answer:
The president of the United States when Keanu Reeves (the lead actor of 'The Matrix') was born was Lyndon B. Johnson. His birth year was 1908, and 1908 raised to the power of 0.3 is approximately 9.64.
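
It is worth verifying the agent's arithmetic independently; the calculator tool's result checks out:

```python
import math

# Independent check of the agent's final calculation: 1908 ** 0.3
value = math.pow(1908, 0.3)
print(round(value, 2))  # 9.64
```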

Full Code

# main.py
from langchain_openai import ChatOpenAI
from langchain.agents import Tool
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain.chains import LLMMathChain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents.format_scratchpad.openai_tools import format_to_openai_tool_messages
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain.agents import AgentExecutor
from dotenv import load_dotenv
 
 
load_dotenv()
 
 
# Initialize the LLM with a temperature of 0 for deterministic outputs
llm = ChatOpenAI(model="gpt-4o", temperature=0)
 
# Initialize the Tavily search tool
search_tool = TavilySearchResults()
 
# Create the calculator tool using LLMMathChain
llm_math_chain = LLMMathChain.from_llm(llm)
calculator_tool = Tool(
    name="calculator",
    func=llm_math_chain.run,
    description="Useful for answering math questions; input should be a single mathematical expression."
)
 
# Assemble the list of tools available to the agent
tools = [search_tool, calculator_tool]
 
# Create the prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use your tools to answer the user's question."),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
 
# Chain the components together to create the agent
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
    }
    | prompt
    | llm.bind_tools(tools)
    | OpenAIToolsAgentOutputParser()
)
 
# Create the agent executor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
 
if __name__ == "__main__":
    question = "Who was the president of the United States when the lead actor of the movie 'The Matrix' was born, and what is that president's birth year raised to the power of 0.3?"
    response = agent_executor.invoke({"input": question})
    print("\nFinal Answer:")
    print(response["output"])

Conclusion

This tutorial has demonstrated the practical construction of a functional AI agent. By assembling an LLM, tools, a prompt, and an executor, we created a system capable of decomposing a complex problem and leveraging external information to arrive at a solution.

The agent we built is effective but limited to the pre-existing tools we provided. A significant way to expand an agent's capability is to equip it with new, custom-built tools that can interact with proprietary APIs or private data sources.

The next article in this series, Part 3, will address this limitation. We will explore the process of creating custom tools and investigate more advanced prompting techniques to further enhance agent performance and utility.

Tags: agentic ai, artificial intelligence, machine learning