
My AI Agents Now Use External Tools Seamlessly

📖 11 min read • 2,126 words • Updated Apr 4, 2026

Hey there, agent-builders! Emma here, back from another late-night coding session fueled by lukewarm coffee and the sheer joy of watching something I built… well, do something. Today, I want to talk about something that’s been on my mind a lot lately, especially as the AI agent space keeps evolving at warp speed: getting your agents to play nice with external tools.

If you’re anything like me when I first started tinkering, you probably built a few agents that were brilliant at specific, internal tasks. Maybe an agent that summarized text, or one that generated ideas based on a prompt. Super cool, right? But then you hit that wall: “Okay, this is neat, but how do I get it to, say, send an email? Or update a Google Sheet? Or even just tell me the weather?” That’s where the real magic, and sometimes the real headaches, begin. Today, we’re diving headfirst into how to teach your AI agents to use external tools effectively, turning them from clever internal processors into truly helpful digital assistants.

The “But It Can’t Do That… Yet” Moment

I remember my first real frustration with an agent I built. It was a simple “idea generator” agent. I’d feed it a topic, and it would spit out a list of potential blog post titles. Pretty basic, but it worked! Then I thought, “Wouldn’t it be amazing if this agent could then automatically add those titles to my Trello board, under a ‘Drafts’ list?” I spent hours trying to figure out how to make my Python script, which was just talking to an LLM, suddenly talk to Trello. It felt like trying to teach my dog to play the piano – conceptually appealing, but practically baffling with the tools I had.

The problem wasn’t that the LLM couldn’t understand the concept of “add to Trello.” The problem was that it didn’t have the *means* to do it. It was like giving a brilliant chef a fantastic recipe but no kitchen. This is the core of what we’re tackling today: giving our agents the kitchen, the ingredients, and the instructions to interact with the outside world.

Thinking of Tools as Agent Superpowers

Imagine your agent is a superhero. Initially, it might have one power – super-intelligence, let’s say. That’s cool, but what if it could also fly, or lift heavy objects, or teleport? Those are its tools. For our AI agents, tools are functions or APIs that allow them to perform specific actions outside their immediate processing environment. When an agent needs to do something beyond just thinking or generating text, it reaches for a tool.

The beauty of this approach is that you don’t need to build every capability into your agent from scratch. You just need to show it how to use existing ones. This is a massive simplification, especially for us beginners. Instead of writing complex API calls directly within your agent’s core logic, you define a tool, tell your agent what that tool does, and let the agent decide when and how to use it.

My Go-To Approach: Function Calling (or Tool Use)

Most modern LLM frameworks, like OpenAI’s function calling, Google’s Gemini function calling, or even LangChain’s tool abstraction, make this surprisingly straightforward. The general idea is this:

  1. You describe a function (a “tool”) to the LLM. You tell it what the function does, what arguments it takes, and what kind of data those arguments expect.
  2. When you prompt the LLM, it decides if any of the described tools are relevant to fulfill your request.
  3. If it decides a tool is needed, it doesn’t *execute* the tool. Instead, it tells you (the developer) which tool to call and with what arguments.
  4. You, the developer, then execute the actual function call in your code.
  5. You feed the result of that function call back to the LLM.
  6. The LLM uses that result to continue its reasoning or generate a final response.

This “handoff” is crucial. The LLM isn’t directly making API calls to your bank account or sending emails. It’s asking *you* to do it, based on its understanding of the task and the tools available. This provides a vital safety and control layer.
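Before we wire up a real API, here's the whole six-step loop in miniature. This is a sketch, not any particular SDK: `fake_llm` is a stand-in that hard-codes the decision a real model would make, and `get_time` is a hypothetical tool, so you can see the handoff without any API keys.

```python
def get_time(timezone: str) -> dict:
    """Hypothetical tool: in a real agent this would call a time API."""
    return {"timezone": timezone, "time": "14:30"}

# Step 1: the agent knows about this tool by name.
TOOLS = {"get_time": get_time}

def fake_llm(prompt: str) -> dict:
    # Stand-in for the real LLM call (steps 2-3): it "decides" which
    # tool to use and with what arguments, but does NOT execute it.
    return {"tool": "get_time", "arguments": {"timezone": "Asia/Tokyo"}}

def run_agent(prompt: str) -> dict:
    decision = fake_llm(prompt)            # steps 2-3: LLM picks a tool
    tool = TOOLS[decision["tool"]]         # step 4: WE execute it
    result = tool(**decision["arguments"])
    # Steps 5-6: normally you'd feed `result` back to the LLM for a
    # natural-language answer; here we just return it.
    return result

print(run_agent("What time is it in Tokyo?"))
```

Swap `fake_llm` for a real model call and the shape of the loop stays exactly the same.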

Practical Example 1: Getting Current Weather Data

Let’s build a super simple agent that can tell us the current weather. For this, we’ll need an external weather API. OpenWeatherMap is a popular choice and has a free tier. You’ll need an API key from their website.

Imagine we’re using a generic Python setup that talks to an LLM. Here’s how you’d define a tool for getting the weather:


import requests
import json

# Replace with your actual OpenWeatherMap API key
OPENWEATHER_API_KEY = "YOUR_OPENWEATHERMAP_API_KEY"

def get_current_weather(location: str):
    """
    Fetches the current weather conditions for a specified city.

    Args:
        location (str): The name of the city (e.g., "London", "New York").

    Returns:
        dict: A dictionary containing weather information, or an error message.
    """
    base_url = "http://api.openweathermap.org/data/2.5/weather"
    params = {
        "q": location,
        "appid": OPENWEATHER_API_KEY,
        "units": "metric",  # or "imperial" for Fahrenheit
    }
    try:
        response = requests.get(base_url, params=params)
        # Check for "city not found" before raise_for_status, which would
        # otherwise turn the 404 into an exception first.
        if response.status_code == 404:
            return {"error": "City not found."}
        response.raise_for_status()  # Raise an exception for other HTTP errors
        weather_data = response.json()

        main_info = weather_data.get("main", {})
        weather_desc = weather_data.get("weather", [{}])[0].get("description")
        return {
            "location": location,
            "temperature": main_info.get("temp"),
            "feels_like": main_info.get("feels_like"),
            "humidity": main_info.get("humidity"),
            "description": weather_desc.capitalize() if weather_desc else "N/A",
        }
    except requests.exceptions.RequestException as e:
        return {"error": f"Failed to fetch weather data: {e}"}
    except json.JSONDecodeError:
        return {"error": "Failed to decode weather data (invalid JSON response)."}

# Now, describe this function to your LLM framework.
# This part is highly dependent on the specific LLM SDK you're using.
# For OpenAI, it would look something like this:

tools_for_llm = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city name, e.g., San Francisco",
                    }
                },
                "required": ["location"],
            },
        },
    }
]

# When you make an LLM call:
# response = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "What's the weather like in Tokyo?"}],
#     tools=tools_for_llm,
#     tool_choice="auto",  # Let the LLM decide if it needs a tool
# )

# If the LLM decides to call the tool, its response will look something like:
# tool_calls = response.choices[0].message.tool_calls
# if tool_calls:
#     function_name = tool_calls[0].function.name  # "get_current_weather"
#     function_args = json.loads(tool_calls[0].function.arguments)  # {"location": "Tokyo"}
#
#     # Execute the tool
#     actual_tool_output = get_current_weather(**function_args)
#
#     # Send the tool output back to the LLM for a natural language response.
#     # Note that the assistant message containing the tool call must be
#     # included before the "tool" message.
#     # client.chat.completions.create(
#     #     model="gpt-3.5-turbo",
#     #     messages=[
#     #         {"role": "user", "content": "What's the weather like in Tokyo?"},
#     #         response.choices[0].message,
#     #         {"role": "tool", "tool_call_id": tool_calls[0].id, "content": json.dumps(actual_tool_output)}
#     #     ]
#     # )

This two-step process (LLM requests tool use, you run tool, you feed result back) is fundamental. It gives us control and visibility into what our agent is doing.
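Once you have more than one tool, you'll want a single place that turns the LLM's "please call X with Y" request into an actual function call. A common pattern (sketched here with a stub standing in for the real weather function, so it runs on its own) is a registry dict keyed by tool name:

```python
import json

def get_current_weather(location: str) -> dict:
    """Stub standing in for the real implementation above."""
    return {"location": location, "temperature": 21.0}

# Map the tool names the LLM knows about to the functions that implement them.
TOOL_REGISTRY = {"get_current_weather": get_current_weather}

def execute_tool_call(name: str, arguments_json: str) -> str:
    """Look up a requested tool, run it, and serialize the result for the LLM."""
    func = TOOL_REGISTRY.get(name)
    if func is None:
        # Return the error to the LLM instead of crashing the loop.
        return json.dumps({"error": f"Unknown tool: {name}"})
    args = json.loads(arguments_json)
    return json.dumps(func(**args))

print(execute_tool_call("get_current_weather", '{"location": "Tokyo"}'))
```

The nice part: adding a new capability to your agent becomes "write a function, add it to the registry, describe it in your tools list".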

Practical Example 2: Updating a Simple Text File (A Local “Database”)

Sometimes, you don’t need a fancy API; you just need to save some data locally. Let’s say your agent helps you remember important notes. Instead of asking it to remember, we’ll teach it to write to a file.


def save_note_to_file(note_content: str, filename: str = "agent_notes.txt"):
    """
    Appends a given note to a text file. If the file doesn't exist, it will be created.

    Args:
        note_content (str): The content of the note to save.
        filename (str): The name of the file to save the note in. Defaults to "agent_notes.txt".

    Returns:
        str: A message indicating success or failure.
    """
    try:
        with open(filename, "a") as f:  # "a" means append mode
            f.write(f"{note_content}\n")
        return f"Note successfully saved to {filename}."
    except IOError as e:
        return f"Error saving note: {e}"

def read_notes_from_file(filename: str = "agent_notes.txt"):
    """
    Reads all notes from a text file.

    Args:
        filename (str): The name of the file to read from. Defaults to "agent_notes.txt".

    Returns:
        str: The content of the file, or an error message if the file doesn't exist.
    """
    try:
        with open(filename, "r") as f:
            content = f.read()
        return f"Contents of {filename}:\n{content}"
    except FileNotFoundError:
        return f"File '{filename}' not found. No notes to read yet."
    except IOError as e:
        return f"Error reading notes: {e}"

# Tools description for LLM (simplified):
tools_for_llm_notes = [
    {
        "type": "function",
        "function": {
            "name": "save_note_to_file",
            "description": "Saves a text note to a local file for future reference.",
            "parameters": {
                "type": "object",
                "properties": {
                    "note_content": {
                        "type": "string",
                        "description": "The content of the note to save.",
                    },
                    "filename": {
                        "type": "string",
                        "description": "The name of the file to save the note in (e.g., 'my_thoughts.txt'). Defaults to 'agent_notes.txt'.",
                    },
                },
                "required": ["note_content"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "read_notes_from_file",
            "description": "Reads all notes from a specified local text file.",
            "parameters": {
                "type": "object",
                "properties": {
                    "filename": {
                        "type": "string",
                        "description": "The name of the file to read from (e.g., 'my_thoughts.txt'). Defaults to 'agent_notes.txt'.",
                    },
                },
                "required": [],  # No required arguments
            },
        },
    },
]

# Example interaction:
# User: "Remember this: Buy groceries tomorrow."
# LLM calls save_note_to_file(note_content="Buy groceries tomorrow.")
# You execute, feed result back.
# User: "What notes have I saved?"
# LLM calls read_notes_from_file()
# You execute, feed result back.

This is a fantastic way to give your agent a persistent memory that goes beyond the current conversation. It’s also incredibly flexible – you can replace `open()` with any database interaction, API call, or even a call to another script.
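To show what "replace `open()` with any database interaction" might look like, here's a sketch of the same two tools backed by SQLite from Python's standard library. The table and column names are made up for illustration, and the demo writes to a temporary directory so it doesn't litter your project:

```python
import os
import sqlite3
import tempfile

def save_note_to_db(note_content: str, db_path: str = "agent_notes.db") -> str:
    """Same job as save_note_to_file, but backed by SQLite."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS notes (content TEXT)")
        conn.execute("INSERT INTO notes (content) VALUES (?)", (note_content,))
    return "Note saved."

def read_notes_from_db(db_path: str = "agent_notes.db") -> str:
    """Returns all saved notes, one per line."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT content FROM notes ORDER BY rowid"
        ).fetchall()
    if not rows:
        return "No notes saved yet."
    return "\n".join(row[0] for row in rows)

# Demo in a temporary directory.
demo_db = os.path.join(tempfile.mkdtemp(), "agent_notes.db")
save_note_to_db("Buy groceries tomorrow.", demo_db)
print(read_notes_from_db(demo_db))
```

The tool descriptions you hand the LLM wouldn't need to change at all, which is exactly the point: the agent cares about *what* the tool does, not *how* it's implemented.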

The Art of Describing Your Tools

This is where your human touch really comes in. The LLM relies entirely on the `description` field for each function to understand its purpose. A good description is:

  • Clear and concise: Don’t write a novel. Get straight to the point.
  • Accurate: Make sure the description matches what the function *actually* does.
  • Specific about arguments: Clearly explain what each argument represents and what kind of input it expects.
  • Goal-oriented: Frame the description in terms of the user’s intent. “Get the current weather in a given location” is better than “Call the weather API.”

I’ve personally wasted hours debugging why an agent wasn’t calling a tool, only to find out my description was ambiguous or didn’t clearly state the function’s capabilities. Spend time on these descriptions; it pays off!
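To make the difference concrete, here's the same hypothetical weather tool described two ways. Only the strings differ, but the second gives the model both what the tool returns and when to reach for it:

```python
# Illustrative only: a vague description versus a goal-oriented one.
vague = {
    "name": "get_current_weather",
    "description": "Calls the weather API.",  # says how it works, not when to use it
}

precise = {
    "name": "get_current_weather",
    "description": (
        "Get the current weather conditions (temperature, humidity, "
        "description) for a given city. Use when the user asks about "
        "the weather, temperature, or outdoor conditions."
    ),
}
```

With the vague version, a prompt like "Is it cold in Oslo?" may never trigger the tool, because nothing in the description connects "cold" to the function.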

Advanced Considerations (Once You’re Comfortable)

Once you’ve got the hang of basic tool use, you can start thinking about:

  • Error Handling: What if an API call fails? How does your agent communicate that to the user? Your tool functions should return informative error messages.
  • Tool Chaining: An agent might need to use multiple tools in sequence (e.g., “Find product ID” then “Get product details” then “Update inventory”). This is where multi-turn conversations and state management become important.
  • User Confirmation: For sensitive actions (like sending an email or making a purchase), you might want your agent to ask the user for confirmation *before* you execute the tool call.
  • Dynamic Tool Loading: As your agent grows, you might not want to present *all* tools all the time. You could load tools based on the user’s context or permissions.
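As a taste of the user-confirmation point above, here's one minimal way to gate sensitive tools. The tool names and the injectable `confirm` callable are my own invention for the sketch; in a CLI you'd let `confirm` default to an `input()` prompt:

```python
# Tools in this set only run if the user approves.
SENSITIVE_TOOLS = {"send_email", "make_purchase"}

def guarded_execute(name, func, arguments, confirm=None):
    """Run a tool, asking the user first when the tool is sensitive."""
    if name in SENSITIVE_TOOLS:
        if confirm is None:
            confirm = lambda msg: input(msg + " [y/N] ").strip().lower() == "y"
        if not confirm(f"Agent wants to call {name} with {arguments}. Allow?"):
            return {"error": f"User declined to run {name}."}
    return func(**arguments)

# The user says no, so the hypothetical email tool never runs:
declined = guarded_execute(
    "send_email",
    lambda to: {"sent": to},
    {"to": "bob@example.com"},
    confirm=lambda msg: False,
)
print(declined)
```

Because the gate sits in *your* execution code rather than in the prompt, the LLM can't talk its way past it.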

But honestly, don’t worry about these until you’re solid on the basics. The core concept of defining a function, describing it to the LLM, and then executing it based on the LLM’s prompt is what unlocks a whole new world of agent capabilities.

Actionable Takeaways for Your Next Agent Project:

  1. Start Small: Pick one simple external action your agent could take (like getting weather, saving a note, or looking up a definition).
  2. Identify the API/Function: Find or write the Python function that performs that action. Make sure it’s reliable and handles errors gracefully.
  3. Describe with Precision: Craft a clear, accurate, and goal-oriented description of your function for the LLM. Focus on what the user wants to achieve.
  4. Implement the Handoff: Set up your LLM interaction loop so that when the LLM suggests a tool call, you execute it and feed the result back.
  5. Test, Test, Test: Try different prompts. See if your agent calls the tool when it should, and more importantly, when it shouldn’t! Refine your descriptions based on these tests.

Teaching your AI agents to use external tools is, in my opinion, one of the most exciting and practical skills you can learn in the agent-building journey. It’s the difference between a cool demo and a truly useful assistant. So go forth, give your agents some superpowers, and let me know what incredible things you build!

Happy coding,

Emma

🎓
Written by Jake Chen

AI educator passionate about making complex agent technology accessible. Created online courses reaching 10,000+ students.
