
My 2026 Take on AI Agents: Starting From Scratch

📖 12 min read•2,313 words•Updated Mar 31, 2026

Hey everyone, Emma here from agent101.net!

It’s April 2026, and if you’re anything like me, you’ve probably spent the last few months feeling like you’re constantly catching up with the AI agent world. Every week, there’s a new framework, a new tool, a new article claiming to have cracked the secret to autonomous agents. It’s exhilarating, yes, but also… a lot. Especially when you’re just starting out, trying to figure out how to even get one of these things to do something useful.

I remember just a few months ago, I was staring at a blank screen, trying to build my first “personal assistant” agent. I had read all the high-level explanations, understood the concepts of planning, memory, and tool use, but when it came down to actually writing the code, I felt like I was back in high school trying to assemble IKEA furniture with no instructions. Lots of pieces, no clear path. My first attempt involved a lot of copy-pasting from various GitHub repos, resulting in a Frankenstein’s monster of code that barely ran and certainly didn’t do anything intelligent.

That’s why today, I want to talk about something incredibly practical: teaching your AI agent to use a tool it wasn’t born with. Forget the grand visions of agents solving world hunger (for now). Let’s start with something tangible, something that immediately makes an agent more useful than just chatting. We’re going to give our agent the ability to interact with the real world – or at least, a tiny piece of it – through a simple, custom-built tool. This isn’t about building a complex RAG system or a multi-agent swarm; it’s about the fundamental “how-to” of extending an agent’s capabilities beyond its core language model.

Why this specific topic? Because in my experience, understanding how to equip an agent with tools is the single biggest “aha!” moment for beginners. It transforms an agent from a glorified chatbot into something that can actually do things. It’s the first step towards true agency, and it’s far less intimidating than you might think.

The “Why” Behind Custom Tools: Beyond Just Talking

Think about it: a large language model (LLM) is brilliant at understanding and generating text. It can summarize, brainstorm, write poetry, and even code. But it can’t, by itself, check the weather, send an email, or query a live database. It lives in a world of words.

Tools are how we bridge that gap. They are functions or APIs that we expose to the agent, allowing it to perform actions in the external world. Common examples include search engines, calendar APIs, email clients, or even just simple Python functions that perform calculations.

When you start with an off-the-shelf agent framework, they often come with a few pre-built tools – maybe a calculator, a web search. But the real power comes when you can build a tool for your specific need. For me, that moment came when I wanted my agent to tell me if my favorite local coffee shop was open right now. There wasn’t a pre-built “coffee shop hours” tool. I had to make one.

This tutorial will focus on a simple, yet illustrative example: teaching our agent to check the current date and time. Yes, an LLM knows what “today” generally means, but it won’t know the exact second. This tool will give it that precision, and the process is applicable to much more complex scenarios.

Choosing Our Agent Framework (Keep it Simple!)

For this walkthrough, I’m going to use a popular and beginner-friendly library: LangChain. There are other fantastic options out there, like LlamaIndex or CrewAI, but LangChain has a really clear way of handling tools that’s perfect for learning the ropes. Don’t worry if you’ve never touched it before; we’ll go step-by-step.

First things first, you’ll need Python installed. If you don’t have it, now’s the time! Head over to python.org. Then, let’s get our environment set up:


pip install langchain langchain-openai langchainhub python-dotenv

You’ll also need an OpenAI API key. Why OpenAI? Because their models are generally very good at understanding tool use instructions without a lot of prompting acrobatics. Set this up as an environment variable in a .env file in your project directory:


OPENAI_API_KEY="your_openai_api_key_here"

And then load it in your script:


from dotenv import load_dotenv

# Reads .env and exports OPENAI_API_KEY into the process environment,
# where LangChain will pick it up automatically
load_dotenv()
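If the agent later fails with an authentication error, the usual culprit is that the key never made it into the environment. Here's a tiny stdlib-only sanity check you can run before any agent code (the fake key injected below is purely for illustration; in your project, `load_dotenv()` does the real loading):

```python
import os

def check_api_key(env_var: str = "OPENAI_API_KEY") -> bool:
    """Returns True if the named environment variable is set and non-empty."""
    value = os.environ.get(env_var, "")
    return bool(value.strip())

# Illustration only: inject a placeholder key so the demo check passes.
os.environ["OPENAI_API_KEY"] = "sk-demo-not-a-real-key"
print(check_api_key())  # True
```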

Okay, setup done. Now for the fun part!

Step 1: Define Our Custom Tool’s Functionality

Our tool needs to do one simple thing: get the current date and time. In Python, this is super easy.


from datetime import datetime

def get_current_datetime() -> str:
    """Returns the current date and time in a human-readable format."""
    now = datetime.now()
    return now.strftime("%Y-%m-%d %H:%M:%S")

print(get_current_datetime())

Run that little snippet, and you’ll see something like 2026-04-01 10:30:45. Perfect! This is the core logic our agent will call.

The Importance of Docstrings and Type Hints

Notice the docstring ("""Returns the current date...""") and the type hint (-> str)? These aren’t just for good programming practice; they are crucial for AI agents. The agent framework (and by extension, the LLM) uses these to understand what the tool does, what arguments it takes, and what it returns. Without clear descriptions, the agent won’t know when or how to use your tool.
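You can see for yourself what a framework has to work with. Python exposes a function's docstring and type hints at runtime, and this is (roughly) the metadata a library like LangChain reads when it builds a tool description. A stdlib-only peek:

```python
import inspect
from datetime import datetime
from typing import get_type_hints

def get_current_datetime() -> str:
    """Returns the current date and time in a human-readable format."""
    now = datetime.now()
    return now.strftime("%Y-%m-%d %H:%M:%S")

# What a framework can introspect about your function at runtime:
description = inspect.getdoc(get_current_datetime)       # the docstring
return_type = get_type_hints(get_current_datetime)["return"]  # <class 'str'>

print(description)
print(return_type)
```

If the docstring is empty, there is literally nothing here for the framework (or the LLM) to read.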

Step 2: “Wrap” the Function as a LangChain Tool

LangChain has a specific way of turning a regular Python function into an agent-callable tool. We use the Tool class.


from langchain.tools import Tool

# ... (keep your get_current_datetime function above) ...

current_datetime_tool = Tool(
    name="get_current_datetime",
    func=get_current_datetime,
    description="Useful for when you need to know the current date and time to answer questions about 'now', 'today', or 'what time is it'."
)

print(current_datetime_tool.name)
print(current_datetime_tool.description)

Let’s break down the Tool constructor:

  • name: This is how the agent will refer to the tool. It should be a concise, descriptive string (e.g., "web_search", "calculator").
  • func: This is the actual Python function that gets executed when the tool is called.
  • description: This is arguably the most important part. It’s a natural language description that tells the LLM when and why to use this tool. Be specific! If your tool takes arguments, you’d describe them here too. For instance, if it were a “get weather” tool, the description might say, “Useful for finding the current weather conditions for a given city. Input should be the city name.”

My first few attempts at custom tools had terrible descriptions. The agent would either never use them, or use them at completely inappropriate times. It took me a while to realize that the LLM isn’t psychic; it only knows what I tell it in that description. Think of it as writing instructions for a very clever but literal intern.
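To make that concrete, here's a sketch of a single-argument tool in the spirit of my coffee-shop example. Everything here is hypothetical (the shop names, hours, and function name are made up), but it shows the pattern: the function takes one string, and the description spells out exactly what that string should be.

```python
# Hypothetical demo data; a real tool would call an API or scrape a site.
SHOP_HOURS = {
    "beanhouse": ("07:00", "18:00"),
    "roast&co": ("08:00", "16:00"),
}

def get_shop_hours(shop_name: str) -> str:
    """Returns today's opening hours for a coffee shop. Input should be the shop's name."""
    hours = SHOP_HOURS.get(shop_name.strip().lower())
    if hours is None:
        return f"Unknown shop: {shop_name}"
    return f"{shop_name} is open from {hours[0]} to {hours[1]} today."

# The description you'd pass to Tool(...) should state the input format explicitly:
shop_hours_description = (
    "Useful for finding a coffee shop's opening hours. "
    "Input should be the shop name as a plain string, e.g. 'beanhouse'."
)

print(get_shop_hours("BeanHouse"))
```

Note the defensive `.strip().lower()` and the "Unknown shop" fallback: LLMs pass slightly messy strings, and returning a readable error message lets the agent recover instead of crashing.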

Step 3: Assemble the Agent with Our New Tool

Now that we have our tool, we need to give it to an agent. We’ll use a simple conversational agent here.


from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_react_agent
from langchain import hub

# ... (keep your get_current_datetime function and current_datetime_tool definition above) ...

# 1. Choose the LLM
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0) # Using gpt-3.5-turbo for cost-effectiveness

# 2. Define the tools the agent can use
tools = [current_datetime_tool]

# 3. Get the prompt template for a ReAct agent
# LangChain hub is a great place for pre-built prompts
prompt = hub.pull("hwchase17/react")

# We can inspect the prompt if we want to see what's inside
# print(prompt.template)

# 4. Create the agent
agent = create_react_agent(llm, tools, prompt)

# 5. Create the AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# 6. Run the agent!
response = agent_executor.invoke({"input": "What is the current date and time?"})
print("\nAgent's Final Answer:", response["output"])

response2 = agent_executor.invoke({"input": "Hello, how are you?"})
print("\nAgent's Final Answer:", response2["output"])

response3 = agent_executor.invoke({"input": "Tell me a fun fact about the number 7."})
print("\nAgent's Final Answer:", response3["output"])

Let’s unpack this code:

  • ChatOpenAI: This is our LLM. I’m using gpt-3.5-turbo because it’s fast and cheap for experimentation. temperature=0 makes it more deterministic, which is good when you want it to reliably use tools.
  • tools = [current_datetime_tool]: This is where we pass our custom tool (and any other tools we want) to the agent.
  • hub.pull("hwchase17/react"): LangChain agents typically use a specific “prompt template” that guides the LLM on how to think and interact with tools. The “ReAct” prompt (Reasoning and Acting) is a very common and effective pattern. It encourages the LLM to loop step-by-step: Thought, Action, Observation, Thought, and so on.
  • create_react_agent: This function wires everything together – our LLM, our tools, and the ReAct prompt – to create an agent.
  • AgentExecutor: This is the runtime that actually executes the agent’s “thoughts” and “actions.” The verbose=True flag is super helpful for debugging. It prints out the agent’s internal monologue, showing you when it decides to use a tool, what arguments it passes, and what the tool returns.

Running and Observing Your Agent

When you run the script, pay close attention to the verbose=True output. For the “What is the current date and time?” query, you should see something like this (simplified):


> Entering new AgentExecutor chain...
Thought: The user is asking for the current date and time. I should use the `get_current_datetime` tool to find this information.
Action: get_current_datetime
Action Input: None
Observation: 2026-04-01 10:30:45
Thought: I have successfully obtained the current date and time. I can now provide this information to the user.
Final Answer: The current date and time is 2026-04-01 10:30:45.
> Finished chain.

Isn’t that cool? You can see the agent’s thought process! It *decided* to use your tool because its description matched the user’s intent. Then it called the tool, got the result, and formulated a response.

For the other queries (“Hello, how are you?” and “Tell me a fun fact…”), the agent should *not* use the tool, because its description doesn’t match the intent. This shows the agent intelligently choosing when to act.

Troubleshooting Tips (Because It WILL Happen)

My journey with agents has been a constant cycle of “it works!” followed by “why isn’t it working anymore?!” Here are my top tips:

  1. Check Your Tool Description: This is 90% of the battle. Is it clear? Is it specific? Does it explicitly state when the tool is useful? Does it mention the expected input format (if any)?
  2. verbose=True is Your Best Friend: Seriously, don’t ever run an agent without it when you’re developing. It reveals the agent’s internal monologue and its decisions. If it’s not using your tool, the “Thought” will tell you why (or at least, what it’s thinking instead).
  3. Simpler Prompts for Debugging: Sometimes the LLM gets confused by complex prompts. While hub.pull("hwchase17/react") is great, if you’re really stuck, try a super basic prompt that just says “You are a helpful assistant with access to tools. Use them when appropriate.” and see if that unblocks it. (Then go back to ReAct!)
  4. LLM Choice Matters: While gpt-3.5-turbo is usually fine, some models are better at tool use than others. If you’re having persistent issues, consider trying gpt-4 or gpt-4-turbo for a bit, even just for debugging, to see if the problem is the model’s reasoning capabilities.
  5. Input Formatting: If your tool takes arguments, ensure the agent is passing them in the correct format. The LLM will try its best to follow the description, but sometimes it needs a little nudge in the prompt or a more explicit description of expected arguments.
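On tip 3: if you want to see what a "super basic" ReAct-style prompt actually looks like, here is a stripped-down sketch. The real hub prompt (hwchase17/react) is richer, but it uses the same four placeholders, and create_react_agent expects all of them. This runs with the stdlib alone; the commented line shows where LangChain would come in.

```python
# A minimal ReAct-style prompt template for debugging. The placeholder names
# ({tools}, {tool_names}, {input}, {agent_scratchpad}) must match what
# create_react_agent expects.
MINIMAL_REACT_TEMPLATE = """You are a helpful assistant with access to these tools:

{tools}

Use this format:
Question: the input question
Thought: reason about what to do
Action: one of [{tool_names}]
Action Input: the input to the action
Observation: the action's result
... (Thought/Action/Observation can repeat)
Final Answer: the answer to the question

Question: {input}
{agent_scratchpad}"""

# With LangChain installed, you'd wrap it like so (sketch):
# prompt = PromptTemplate.from_template(MINIMAL_REACT_TEMPLATE)

# Plain string formatting shows how the slots get filled at runtime:
filled = MINIMAL_REACT_TEMPLATE.format(
    tools="get_current_datetime: returns the current date and time",
    tool_names="get_current_datetime",
    input="What time is it?",
    agent_scratchpad="",
)
print(filled.splitlines()[0])
```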

Actionable Takeaways and Next Steps

You’ve just built your first custom tool for an AI agent! Give yourself a pat on the back. This is a foundational skill that unlocks so much potential. Here’s what you should do next:

  1. Experiment with the Current Tool:
    • Try asking the agent different questions: “What time is it in Tokyo?” (It won’t know, but observe its thought process). “What day is today?” “Tell me a joke, and then tell me the current time.”
    • Modify the get_current_datetime function to return the time in a different format (e.g., just the date, or just the hour) and see how the agent adapts.
  2. Build a Slightly More Complex Tool:
    • A simple calculator: Create a tool that takes two numbers and an operation (add, subtract, multiply, divide) and returns the result. Remember to describe the arguments carefully in the tool’s description.
    • A word counter: A tool that takes a string of text and returns the number of words.
    • A “flip coin” tool: Returns “Heads” or “Tails” randomly.

    This will teach you how to handle tools that take arguments, which is a big leap.

  3. Explore More Tools in LangChain: Look at the LangChain documentation on tools. You’ll see how they integrate with existing APIs like Wikipedia, Google Search, or even local file systems. The principles you learned today apply to all of them.
  4. Think About Your Own Needs: What’s a repetitive task you do on your computer? Can you automate a small part of it with a custom agent tool? Maybe checking a specific website, or moving a file, or sending a pre-formatted message.
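If you want a head start on the calculator exercise, here's one possible shape for it. Since a basic ReAct Tool receives a single string, the sketch parses "number operator number" out of that string; the format and the error messages are my own choices, not the only way to do it.

```python
def calculator(expression: str) -> str:
    """Performs basic arithmetic. Input should be '<number> <op> <number>', e.g. '3 * 4'."""
    try:
        a_str, op, b_str = expression.split()
        a, b = float(a_str), float(b_str)
    except ValueError:
        # Covers both wrong token counts and non-numeric operands.
        return "Invalid input. Expected format: '<number> <op> <number>'."
    if op == "+":
        result = a + b
    elif op == "-":
        result = a - b
    elif op == "*":
        result = a * b
    elif op == "/":
        if b == 0:
            return "Cannot divide by zero."
        result = a / b
    else:
        return f"Unknown operator: {op}"
    return str(result)

print(calculator("3 * 4"))  # 12.0
```

Returning error strings (rather than raising exceptions) matters here: the agent sees the tool's output as an Observation, so a readable message lets it retry with a corrected input.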

This isn’t just theory; it’s the practical application of AI agents. Once you get comfortable creating and integrating custom tools, you’ll start seeing agents not just as conversation partners, but as true digital assistants capable of interacting with the world on your behalf. That’s where the real magic happens.

Keep building, keep experimenting, and remember: start small, iterate, and always keep verbose=True handy! If you build something cool, share it in the comments below or hit me up on Twitter! I’d love to see what you come up with.

Happy building,

Emma

agent101.net
