
My April 2026 Take: AI Agents Are Finally Doing Things

📖 13 min read • 2,473 words • Updated Apr 3, 2026

Hey there, agent-in-training! Emma here, back on agent101.net. Can you believe it’s already April 2026? It feels like just yesterday we were all losing our minds over ChatGPT’s initial release, and now we’re talking about AI agents that can actually do things for us. It’s wild.

I’ve been spending way too many late nights (and probably too much money on coffee) playing around with different agent frameworks. And honestly, for a while there, I felt like I was drowning in a sea of acronyms and complex documentation. Every other tutorial I found seemed to assume I had a PhD in computer science. Frustrating, right?

That’s why today, I want to talk about something super specific and, I think, incredibly practical for anyone just dipping their toes into this world: **building your very first AI agent to automate a simple web research task.** We’re not aiming for Skynet here, folks. We’re aiming for a helpful little digital assistant that can save you a few clicks and a bit of brainpower. Think of it as your personal junior researcher, ready to fetch information for you.

This isn’t about some vague, high-level explanation. We’re going to get our hands dirty and actually build something. And trust me, if I can figure this out after accidentally deleting my entire project folder three times last week, you can too.

Why Web Research? Because We All Do It (Or Should!)

Okay, let’s be real. How many times a day do you find yourself opening a browser, typing something into Google, clicking a few links, and then maybe copy-pasting some text? Whether you’re looking up the best local coffee shop, researching a new gadget, or just trying to remember that actor’s name from that one movie, web research is a huge part of our digital lives.

And it’s a perfect first project for an AI agent for a few reasons:

  • **It’s tangible:** You can clearly see the agent going to websites and bringing back information.
  • **It’s iterative:** You can start simple and add complexity.
  • **It highlights agent capabilities:** It demonstrates how an agent can interact with external tools (like a web browser) and process information.
  • **It’s genuinely useful:** Imagine an agent that could quickly summarize the latest news on a topic you care about, or compile a list of specifications for a product you’re considering.

For this tutorial, we’re going to focus on a very specific, slightly mundane, but incredibly common task: **finding the current stock price and a brief company description for a given stock ticker.**

The Tools We’ll Need (Don’t Panic, They’re Friendly!)

We’re going to keep this as straightforward as possible. Here’s what we’ll be using:

  • **Python:** Our programming language of choice. If you don’t have it installed, now’s a great time. Version 3.9+ is fine.
  • **A Large Language Model (LLM) API:** We’ll be using OpenAI’s API for this example because it’s widely accessible and well-documented. You’ll need an API key. Yes, it costs a little bit, but for small projects like this, it’s usually pennies.
  • **CrewAI:** This is the agentic framework we’ll use. It’s relatively new but incredibly intuitive for beginners, focusing on roles, tasks, and processes. I’ve found it a lot easier to grasp than some of the others out there.
  • **beautifulsoup4 and requests:** Python libraries for web scraping. We’ll use these to actually go to a website and pull out information.

Before we dive into the code, make sure you have Python installed. Then, open your terminal or command prompt and run these commands:


```
pip install crewai openai beautifulsoup4 requests python-dotenv
```

The `python-dotenv` library is just to keep your API key out of your main code file, which is good practice!

Setting the Stage: Our Agent’s Mission

Our agent’s mission is simple:

  1. Receive a stock ticker symbol (e.g., “GOOGL” for Alphabet, Google’s parent company).
  2. Use a web search tool to find a reliable source for current stock information (like Yahoo Finance or a similar site).
  3. Extract the current stock price and a short company description.
  4. Present this information in a clear, concise way.

Sounds easy, right? That’s the beauty of starting simple. It builds confidence.
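
Before we hand these steps to an LLM, it helps to see them as plain Python. Here’s a hypothetical sketch of the mission as a pipeline of small functions (all the names here are mine, not CrewAI’s, and the fetch/extract helpers are stubs that the real code later replaces):

```python
# Hypothetical sketch of the four-step mission. fetch_page and
# extract_info are stand-in stubs; the real tool functions come later.

def fetch_page(ticker: str) -> str:
    """Step 2 (stub): find a source and return its raw HTML."""
    return f"<html>fake page for {ticker}</html>"

def extract_info(html: str) -> dict:
    """Step 3 (stub): pull the price and description out of the HTML."""
    return {"price": "N/A", "description": "stub description"}

def format_report(ticker: str, price: str, description: str) -> str:
    """Step 4: present the findings in a clear, concise way."""
    return f"{ticker}: {price}\n{description}"

def run_mission(ticker: str) -> str:
    """Step 1: receive a ticker, then chain steps 2-4."""
    html = fetch_page(ticker)
    data = extract_info(html)
    return format_report(ticker, data["price"], data["description"])

print(run_mission("GOOGL"))
```

The agent version does exactly this, except the LLM decides which step to run next instead of a hard-coded function chain.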

Step 1: Get Your API Key and Environment Ready

First, go to OpenAI’s API keys page and generate a new secret key. Copy it somewhere safe immediately, as you won’t be able to see it again.

Next, create a file named `.env` in the same directory where your Python script will be. Inside it, put:


```
OPENAI_API_KEY="your_openai_api_key_here"
```

Replace `"your_openai_api_key_here"` with the actual key you just generated. This keeps your key secure and separate from your code.

Step 2: Defining Our Agent and Its Tools

With CrewAI, we define agents by giving them a `role`, `goal`, and `backstory`. This helps the LLM understand *who* it is and *what* it’s supposed to do. We also give it `tools` – functions it can call to interact with the outside world.

Let’s start our Python script (I’ll call it `stock_agent.py`):


```python
from crewai import Agent, Task, Crew, Process
from langchain_openai import ChatOpenAI
from langchain.tools import Tool  # CrewAI accepts LangChain-style tools
from dotenv import load_dotenv
import os
import requests
from bs4 import BeautifulSoup

# Load environment variables
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4-turbo-preview", temperature=0.7, openai_api_key=openai_api_key)

# --- Define Tools ---
def search_web_for_stock_info(ticker: str) -> str:
    """Searches the web for stock information for a given ticker, focusing on Yahoo Finance."""
    try:
        search_url = f"https://finance.yahoo.com/quote/{ticker}"
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
        }
        response = requests.get(search_url, headers=headers)
        response.raise_for_status()  # Raise an exception for HTTP errors
        return response.text
    except requests.exceptions.RequestException as e:
        return f"Error searching for stock info: {e}"

def extract_stock_data(html_content: str) -> dict:
    """Extracts stock price and company description from Yahoo Finance HTML."""
    soup = BeautifulSoup(html_content, 'html.parser')

    price = "N/A"
    company_name = "N/A"
    description = "N/A"

    # Try to find the price (this might need adjustment based on Yahoo Finance's ever-changing structure)
    try:
        # Look for the current price element. Yahoo Finance's HTML changes,
        # so this is a common selector but might break.
        price_tag = soup.find('fin-streamer', {'data-field': 'regularMarketPrice'})
        if price_tag:
            price = price_tag.text.strip()
    except Exception as e:
        print(f"Could not find price: {e}")

    # Try to find the company name/description (more generic search)
    try:
        # Often, the company name is in the title or a specific meta tag
        title_tag = soup.find('title')
        if title_tag:
            title_text = title_tag.text
            if '|' in title_text:
                company_name = title_text.split('|')[0].strip()
            else:
                company_name = title_text.replace("Stock Price, News, Quote & History", "").strip()

        # A more robust way to get a description might involve searching for specific paragraphs.
        # This is a very basic attempt. For a real agent, you'd want more sophisticated extraction.
        description_paragraph = soup.find('p', class_='description')  # Example class, might not exist
        if description_paragraph:
            description = description_paragraph.text.strip()
        else:
            # Fallback: look for general paragraphs in the main content area
            main_content = soup.find('div', id='Main-Financials') or soup.find('div', id='quote-header-info')
            if main_content:
                paragraphs = main_content.find_all('p')
                if paragraphs:
                    description = paragraphs[0].text.strip()  # Take the first paragraph as a general description
    except Exception as e:
        print(f"Could not find company info: {e}")

    return {
        "price": price,
        "company_name": company_name,
        "description": description
    }

# Tool definitions: wrap our plain functions so the agent can call them
web_search_tool = Tool(
    name="Web Search for Stock Info",
    func=search_web_for_stock_info,
    description="Useful for finding the raw HTML content of a stock's Yahoo Finance page."
)

data_extraction_tool = Tool(
    name="Extract Stock Data",
    func=extract_stock_data,
    description="Useful for parsing HTML content to extract the stock price and a brief company description."
)

# --- Define Agents ---
researcher = Agent(
    role='Financial Researcher',
    goal='Gather current stock price and a concise company description for a given ticker.',
    backstory="""You are an expert financial researcher with a knack for quickly finding accurate stock information online.
    You prioritize official financial sources like Yahoo Finance.""",
    verbose=True,
    allow_delegation=False,
    llm=llm,
    tools=[web_search_tool, data_extraction_tool]
)

# --- Define Tasks ---
research_task = Task(
    description=(
        "Search for the stock information for '{ticker}'. "
        "First, use the 'Web Search for Stock Info' tool to get the HTML content from Yahoo Finance. "
        "Then, use the 'Extract Stock Data' tool to parse the HTML and get the current price and a brief company description. "
        "Finally, compile this into a structured report."
    ),
    expected_output=(
        "A JSON object containing 'ticker', 'company_name', 'price', and 'description'."
    ),
    agent=researcher
)

# --- Define the Crew ---
financial_crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential,  # Tasks are executed one after the other
    verbose=True
)

# --- Run the Crew ---
if __name__ == "__main__":
    print("Welcome to the Stock Information Agent!")
    ticker_input = input("Please enter a stock ticker (e.g., GOOGL, MSFT, AAPL): ").upper()

    print(f"\n--- Initiating Agent for {ticker_input} ---")
    result = financial_crew.kickoff(inputs={'ticker': ticker_input})

    print("\n--- Agent's Final Report ---")
    print(result)
```

A Quick Walkthrough of the Code

  • **`load_dotenv()` and `ChatOpenAI`**: Sets up our LLM connection using your API key.
  • **`search_web_for_stock_info(ticker)`**: This is our first custom tool. It takes a stock ticker, constructs a Yahoo Finance URL, and fetches the raw HTML. I’ve added a basic `User-Agent` header – some websites block requests without it.
  • **`extract_stock_data(html_content)`**: This is where things get a bit more “scrappy.” We use `BeautifulSoup` to parse the HTML. Finding exact elements like the stock price can be tricky because websites change their HTML structure. I’ve put in a common selector for Yahoo Finance, but be aware that web scraping is a constant cat-and-mouse game. This function tries to get the price, company name, and a general description.
  • **`Tool`**: CrewAI’s way of wrapping our Python functions so the LLM knows how and when to use them.
  • **`Agent`**: We define our `researcher` agent. Notice the `role`, `goal`, and `backstory`. These are crucial for guiding the LLM’s behavior. We also give it our custom `tools`.
  • **`Task`**: This is the specific job our agent needs to do. The `description` tells the agent exactly what steps to take, including which tools to use. The `expected_output` helps guide the LLM to format its final response.
  • **`Crew`**: This orchestrates our agents and tasks. For this simple example, we only have one agent and one task, running `Process.sequential`.
  • **`financial_crew.kickoff()`**: This is where the magic happens! We pass our input (the ticker) to the crew, and it starts executing the tasks.

I know the `extract_stock_data` function might look a bit daunting, and it’s the most fragile part of any web scraping agent. Websites change, and their HTML structures shift. But for a beginner agent, it shows you how to interact with real web data.
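
One way to build confidence in that fragile parsing logic: test it offline against canned HTML before pointing it at the live site. Here’s a small sanity check using the same selectors as `extract_stock_data` (the sample HTML and price are made up by me, just mimicking the shape of a Yahoo Finance page):

```python
# Offline sanity check for the parsing logic: feed it canned HTML that
# mimics Yahoo Finance's price element, so no network call is needed.
from bs4 import BeautifulSoup

sample_html = """
<html><head><title>Alphabet Inc. (GOOGL) Stock Price | Yahoo Finance</title></head>
<body><fin-streamer data-field="regularMarketPrice">172.34</fin-streamer></body></html>
"""

soup = BeautifulSoup(sample_html, "html.parser")

# Same selector the tool uses for the current price
price_tag = soup.find("fin-streamer", {"data-field": "regularMarketPrice"})
price = price_tag.text.strip() if price_tag else "N/A"

# Same title-splitting trick the tool uses for the company name
title = soup.find("title").text
company = title.split("|")[0].strip() if "|" in title else title.strip()

print(price, "-", company)
```

If Yahoo Finance changes its markup, you update the sample HTML and the selector together, and this check tells you immediately whether your fix works.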

Running Your First Agent!

Save the code as `stock_agent.py` (or whatever you like, just make sure it’s in the same folder as your `.env` file). Then, open your terminal or command prompt, navigate to that directory, and run:


```
python stock_agent.py
```

The script will prompt you to enter a stock ticker. Try “GOOGL”, “MSFT”, or “AAPL”.

You’ll see a lot of output! That’s CrewAI’s `verbose=True` setting showing you what the agent is thinking, which tools it’s calling, and the results it’s getting. It’s incredibly helpful for debugging and understanding the agent’s internal monologue.

After a moment, you should see the agent’s final report, hopefully with the current stock price and a description!

My Experience and What to Expect

When I first ran something similar, I was genuinely surprised. It felt a bit like magic, seeing the LLM decide, “Okay, I need to find information. I have a `web_search_tool`. I should use that first.” And then, “Now I have HTML. I need to parse it. I have a `data_extraction_tool` for that!” It’s this intelligent tool selection that really makes agents powerful.

You might encounter issues:

  • **Website Changes:** Yahoo Finance, like any website, can update its HTML. If the `extract_stock_data` function suddenly stops finding the price, it’s likely because the HTML element’s class or ID changed. This is a common challenge in web scraping.
  • **Rate Limiting:** If you run this too many times too quickly, websites or the OpenAI API might temporarily block you.
  • **LLM Hallucinations:** While less likely with specific tool calls, the LLM might sometimes misinterpret instructions or the data it receives.

Don’t get discouraged! Debugging is part of the fun (mostly). The `verbose=True` setting is your best friend here, showing you exactly what the agent is doing and thinking.
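
For the rate-limiting problem specifically, a simple exponential-backoff wrapper goes a long way. Here’s a hedged sketch (the helper name and the 1s/2s/4s schedule are my own choices, not anything built into `requests` or CrewAI):

```python
import time

def fetch_with_backoff(fetch, max_tries: int = 3, base_delay: float = 1.0):
    """Retry a zero-argument callable with exponential backoff.

    `fetch` should raise on failure, e.g.
    lambda: search_web_for_stock_info("GOOGL").
    """
    for attempt in range(max_tries):
        try:
            return fetch()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```

You could wrap the `requests.get` call inside `search_web_for_stock_info` the same way, so a single 429 response doesn’t sink the whole run.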

Actionable Takeaways for Your Agent Journey

So, you’ve built your first basic web research agent. What next?

  1. **Start Simple:** Seriously, this is the biggest tip. Don’t try to build a complex multi-agent system on day one. Pick one small, automatable task.
  2. **Define Clear Goals and Tools:** The more specific you are in your agent’s `goal`, `role`, and the `description` of your `tools`, the better the LLM will perform.
  3. **Embrace Iteration:** Your first version won’t be perfect. Run it, see what breaks, refine your tools, adjust your prompts, and try again.
  4. **Explore Other Tools:** CrewAI has a lot of built-in tools (like a generic `BrowserTools` or `ScrapeWebsiteTool`) that you might find useful for more complex scenarios, potentially simplifying your custom `search_web_for_stock_info` and `extract_stock_data` functions in the future.
  5. **Think About Error Handling:** In a production agent, you’d want more robust error handling in your tools (e.g., what if the website is down, or the element isn’t found?).
  6. **Consider Multi-Agent Crews:** Once you’re comfortable, think about how you could break down a more complex task into roles for multiple agents. For example, one agent to “find sources,” another to “extract data,” and a third to “summarize and report.”
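
That last idea, splitting a job across roles, can be sketched without any framework at all: each “agent” is just a function with one narrow responsibility, chained in sequence (all names here are illustrative, and the URLs are placeholders):

```python
# Framework-free sketch of a three-role pipeline: find, extract, summarize.

def find_sources(topic: str) -> list[str]:
    """Role 1: decide where to look (placeholder URLs)."""
    return [f"https://example.com/{topic}/news", f"https://example.com/{topic}/specs"]

def extract_data(sources: list[str]) -> list[str]:
    """Role 2: pull the useful bits out of each source."""
    return [f"data from {s}" for s in sources]

def summarize(items: list[str]) -> str:
    """Role 3: turn the raw findings into a report."""
    return f"Report covering {len(items)} sources."

print(summarize(extract_data(find_sources("GOOGL"))))
```

A multi-agent crew is this same decomposition, except each step gets its own `role`, `goal`, and tools, and the LLM handles the hand-offs.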

This little stock research agent is just the beginning. Imagine an agent that could:

  • Monitor news headlines for specific keywords and summarize them.
  • Compare product specifications across multiple e-commerce sites.
  • Help you plan a trip by finding flight details, hotel prices, and local attractions.

The possibilities are genuinely exciting. This isn’t just about cool tech; it’s about building personalized digital assistants that can genuinely make our lives a little bit easier. And that, my friends, is why I’m so obsessed with this stuff.

Go forth, build, and break things! That’s how we learn. And as always, if you build something cool, share it with me on social media or in the comments below!

Happy agent building!

Emma Walsh

Blogger at agent101.net
