
My First AI Agent: Tracking My Online Mentions

📖 12 min read•2,242 words•Updated Apr 5, 2026

Hey there, agent-newbies and future AI architects! Emma here, back on agent101.net, and boy, do I have a story for you. Today, we’re not just talking about AI agents; we’re talking about how to *actually* get one to do something useful, without feeling like you need a PhD in computer science. Specifically, we’re going to build a super simple, yet incredibly effective, AI agent that helps you track your online mentions. Think of it as your first step towards building your own digital assistant, without all the jargon that usually comes with it.

For a while now, I’ve been wrestling with how to keep up with what people are saying about agent101.net, or even just my own name, Emma Walsh, across the vast expanse of the internet. Google Alerts are okay, but they often miss things, or send me stuff that’s completely irrelevant. I needed something smarter, something that could not only find mentions but also understand their context a little bit. And that, my friends, is how my journey into building a “Simple Social Listener” agent began.

I remember sitting at my desk late one night, surrounded by half-empty coffee mugs, staring at a blank screen. My initial thought was, “This is going to be so complicated.” But then I broke it down. What does a social listener *do*? It looks for keywords, it checks specific sources, and it tells me what it found. Simple, right? The magic, and where AI agents come in, is automating that process and making it intelligent.

Why Build Your Own Simple Social Listener?

Before we dive into the nitty-gritty, let’s talk about why you’d even bother. You might be thinking, “Emma, there are a million tools for this already.” And you’d be right! But here’s the thing: most of them are expensive, overly complex for a beginner, or don’t quite fit your specific needs. Building your own gives you:

  • Customization: You decide exactly what it looks for and where.
  • Understanding: It’s the best way to grasp how agents work from the ground up.
  • Cost-Effectiveness: For basic needs, it can be practically free.
  • Empowerment: There’s a real thrill in seeing something you built actually work.

My first attempt was, shall we say, less than stellar. It was basically a glorified Python script that scraped a few RSS feeds. It worked, but it wasn’t “smart.” It didn’t understand sentiment, it couldn’t summarize, and it certainly couldn’t adapt. That’s when I realized I needed an agent, not just a script. An agent has goals, it makes decisions, and it uses tools to achieve those goals.

The Core Idea: An Agent with a Mission

Our “Simple Social Listener” agent has one primary mission: “Find and report significant mentions of ‘agent101.net’ or ‘Emma Walsh’ on the internet.”

To achieve this, it needs a few things:

  1. A Brain (LLM): To understand the mission, process information, and decide what to do next.
  2. Eyes (Tools): Ways to access information, like searching the web or checking specific sites.
  3. A Voice (Output): A way to tell us what it found.
  4. A Loop: A way to keep running and checking periodically.

For this beginner-friendly tutorial, we’re going to keep things incredibly straightforward. We’ll use readily available tools and focus on the agent’s logic rather than building complex infrastructure from scratch. My personal preference for starting out is usually Python because of its readability and the vast number of libraries available.

What We’ll Need (Our Agent’s Toolkit)

  • Python: Our programming language.
  • An LLM (Large Language Model) API: I’ll be using OpenAI’s API for this example with a model like gpt-3.5-turbo or GPT-4, because of their general availability and good performance. You’ll need an API key.
  • A Web Search Tool: We’ll simulate this with a simple library that can fetch content from URLs, or for a slightly more advanced approach, use a dedicated search API if you have one (e.g., SerpApi, Google Custom Search API). For simplicity, let’s stick to fetching content from specific, trusted sources first.
  • A Way to Store/Report Findings: A simple text file or printing to the console will do for now.

Let’s imagine our agent’s thought process. It wakes up, thinks, “Okay, I need to find mentions.” It then decides, “Where should I look?” It checks its list of sources. “Found something! Is it relevant? What’s the context? Should I tell Emma about it?” Then it reports, and goes back to sleep until the next check.
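That wake-check-report cycle can be sketched as a plain function before we wire in any real LLM or web calls. Everything here is a placeholder of my own invention (the `fetch`, `analyze`, and `report` callables aren’t from any library); it just shows the shape of the loop we’ll fill in below:

```python
def agent_cycle(keywords, sources, fetch, analyze, report):
    """One wake-up cycle: look at each source, judge relevance, report."""
    findings = []
    for url in sources:
        text = fetch(url)                  # "Where should I look?"
        if text is None:                   # source unreachable -> move on
            continue
        verdict = analyze(text, keywords)  # "Is it relevant? What's the context?"
        if verdict:
            findings.append((url, verdict))
    report(findings)                       # "Should I tell Emma about it?"
    return findings
```

You can test the loop with stub functions long before plugging in `requests` or an LLM, which makes debugging the control flow much less painful.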

Building Our Agent: The “Simple Social Listener”

Here’s a simplified breakdown of the code structure. Don’t worry if it looks like a lot at first; we’ll go through it piece by piece.

Step 1: Setting Up Your Environment

First, make sure you have Python installed. Then, we need a few libraries. Open your terminal or command prompt and run:


pip install openai requests

You’ll also need your OpenAI API key. It’s good practice to store this as an environment variable, but for a quick start, you can put it directly in your script (though be mindful of security for anything beyond personal learning projects!).
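One small pattern I’d suggest for reading the key: fail fast with a clear message instead of hitting a confusing auth error mid-run. This `load_api_key` helper is hypothetical (it’s not part of the OpenAI library), just a minimal sketch:

```python
import os
import sys

def load_api_key(var_name="OPENAI_API_KEY"):
    """Read the API key from the environment, exiting with a hint if unset."""
    key = os.getenv(var_name)
    if not key:
        # A clear message now beats a cryptic 401 later.
        sys.exit(f"Missing {var_name}. Set it first, e.g.: export {var_name}=your_api_key_here")
    return key
```

Calling `load_api_key()` at startup means your agent never gets halfway through a run before discovering it can’t talk to the LLM.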

Step 2: The Agent’s Brain (LLM Interaction)

This is where our agent “thinks.” We’ll define a function that sends a prompt to the LLM and gets a response. The prompt will guide the LLM on what to do with the text it finds.


import openai
import os
import requests
from datetime import datetime

# Set your OpenAI API key from an environment variable or directly (for quick testing)
# It's recommended to set it as an environment variable for security:
# export OPENAI_API_KEY="your_api_key_here"
openai.api_key = os.getenv("OPENAI_API_KEY")

def get_llm_response(prompt_text, model="gpt-3.5-turbo"):
    try:
        response = openai.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "You are a helpful assistant designed to analyze text for mentions of specific entities."},
                {"role": "user", "content": prompt_text}
            ],
            temperature=0.7  # Adjust creativity; 0.7 is a good balance
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f"Error communicating with LLM: {e}")
        return None

My initial attempts at prompting were… well, let’s just say the LLM often got sidetracked. I learned quickly that being *explicit* about the agent’s role (“You are a helpful assistant…”) and its task (“analyze text for mentions…”) is crucial. Think of it like giving clear instructions to a new intern.

Step 3: The Agent’s Eyes (Web Content Fetcher)

For our simple agent, we’ll fetch content from a few predefined URLs. In a real-world scenario, you might integrate with a news API, a social media scraper, or a dedicated search engine API. But for learning, `requests` is perfect.


def fetch_url_content(url):
    try:
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'}
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
        return response.text
    except requests.exceptions.RequestException as e:
        print(f"Error fetching {url}: {e}")
        return None

A little tip I picked up: always include a `User-Agent` header when fetching web content. Some sites block requests that don’t look like they’re coming from a real browser. Also, `timeout` is your friend to prevent your script from hanging indefinitely.

Step 4: The Agent’s Logic (Putting it Together)

Now, let’s combine these parts into our main agent loop. Our agent will have a list of keywords to look for and a list of URLs to check.


def run_social_listener_agent():
    keywords = ["agent101.net", "Emma Walsh", "AI agent beginner", "agent101 blog"]
    target_urls = [
        "https://www.theverge.com/tech",   # Example tech news site
        "https://techcrunch.com/",         # Another example
        "https://news.ycombinator.com/",   # Hacker News often has discussions
        # Add more relevant URLs here, e.g., specific forums, blogs, etc.
    ]

    found_mentions = []

    print(f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] Agent starting its search...")

    for url in target_urls:
        print(f"Checking {url}...")
        content = fetch_url_content(url)

        if content:
            # For simplicity, we'll just send the raw text.
            # In a real app, you'd want to parse HTML to extract main article text.
            # Libraries like BeautifulSoup can help here.

            # Truncate BEFORE building the prompt. Doing it inline inside the
            # f-string with a trailing "#" comment would leak that comment into
            # the prompt text. This also limits tokens and keeps things fast.
            snippet = content[:2000]

            # Craft a prompt for the LLM
            prompt = f"""
Analyze the following text content from {url} for mentions of any of these keywords: {', '.join(keywords)}.

If you find a significant mention (more than just a passing keyword, ideally discussing the entity),
summarize the mention in 2-3 sentences, note the keyword found, and indicate its sentiment (positive, negative, neutral).
If no significant mention is found, state 'No significant mention found.'

Text to analyze:
---
{snippet}
---
"""

            llm_analysis = get_llm_response(prompt)

            if llm_analysis and "No significant mention found" not in llm_analysis:
                found_mentions.append({
                    "url": url,
                    "analysis": llm_analysis,
                    "timestamp": datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                })
                print(f" > Found something at {url}! Analysis: {llm_analysis[:100]}...")  # Print a snippet
            else:
                print(f" > No significant mention found at {url}.")

    print(f"\n[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] Agent finished its search. Found {len(found_mentions)} significant mentions.")

    # Report findings
    if found_mentions:
        print("\n--- Summary of Significant Mentions ---")
        for mention in found_mentions:
            print(f"URL: {mention['url']}")
            print(f"Timestamp: {mention['timestamp']}")
            print(f"Analysis: {mention['analysis']}\n")

        # Optional: Save to a file
        with open("agent_mentions_report.txt", "a") as f:
            f.write(f"\n--- Report from {datetime.now().strftime('%Y-%m-%d %H:%M:%S')} ---\n")
            for mention in found_mentions:
                f.write(f"URL: {mention['url']}\n")
                f.write(f"Timestamp: {mention['timestamp']}\n")
                f.write(f"Analysis: {mention['analysis']}\n\n")
        print("Report saved to agent_mentions_report.txt")
    else:
        print("No significant mentions found in this run.")

# Run the agent
if __name__ == "__main__":
    run_social_listener_agent()

A big lesson I learned here: Don’t just send the entire webpage content to the LLM. It’s expensive, slow, and often hits token limits. Truncate it, or better yet, use a library like `BeautifulSoup` to extract just the main article text before sending it to the LLM. For this example, I’m simply truncating `content[:2000]` to keep it simple, but remember this for bigger projects.
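If you’d like to try text extraction without installing BeautifulSoup, Python’s built-in `html.parser` can do a rough version. To be clear, this is my own stdlib-only sketch, not a substitute for a real article extractor; it keeps visible text and skips script/style/navigation blocks:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Rough text extractor: keeps visible text, skips non-content tags."""
    SKIP = {"script", "style", "nav", "header", "footer"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0  # how many SKIP tags we are currently inside
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep text only when we're outside every skipped region.
        if self._skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

Running the truncation on `html_to_text(content)` instead of raw HTML means the 2,000 characters you send the LLM are mostly article text, not markup.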

Another thing: the prompt for the LLM is where the “intelligence” really comes in. I’ve refined mine over time to be very specific: “summarize in 2-3 sentences,” “note the keyword,” “indicate sentiment.” This helps the LLM give consistent, useful output.
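One further refinement worth experimenting with (not part of the script above, just a direction I’d suggest) is asking the model for JSON instead of free text, then parsing it defensively, since models sometimes wrap the JSON in extra prose. Both the prompt template and `parse_analysis` below are hypothetical sketches:

```python
import json

# Hypothetical prompt template asking for machine-readable output.
ANALYSIS_PROMPT = """Analyze the text for mentions of: {keywords}.
Respond with ONLY a JSON object like:
{{"found": true, "keyword": "...", "summary": "...", "sentiment": "positive|negative|neutral"}}
If nothing significant is found, respond with: {{"found": false}}

Text:
---
{text}
---"""

def parse_analysis(raw):
    """Pull the first JSON object out of an LLM reply, tolerating extra text."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end <= start:
        return {"found": False}
    try:
        return json.loads(raw[start:end + 1])
    except json.JSONDecodeError:
        return {"found": False}
```

With structured output, the fragile `"No significant mention found" not in llm_analysis` string check becomes a simple `result["found"]` lookup.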

Putting It to the Test (My Anecdote)

The first time I ran a version of this, I set it to check a few specific forums where I knew agent101.net was sometimes discussed. I literally held my breath. When the output started showing actual, relevant discussions, summarized and with a sentiment attached, I almost jumped out of my chair! It wasn’t perfect, mind you. Sometimes it flagged a keyword that was just part of an unrelated URL, and the sentiment wasn’t always spot on. But it was *working*. It was a true “aha!” moment for me, realizing that even with relatively simple components, an AI agent could do meaningful work.

My agent once found a forum post where someone mentioned agent101.net in a slightly confused way, asking for clarification on a concept. Google Alerts would have just shown me “agent101.net mentioned,” but my LLM-powered agent summarized the *question*, allowing me to quickly jump in and provide a helpful answer, building community goodwill. That’s the power of context and understanding that an LLM brings.

Next Steps and Actionable Takeaways

So, you’ve built your first simple social listener agent! What now?

Actionable Takeaways:

  1. Run It: Execute the Python script! See what it finds for your keywords. Experiment with different `target_urls` and `keywords`.
  2. Refine Your Prompts: Play with the prompt you send to `get_llm_response`. Can you make it more specific? Ask for different types of analysis?
  3. Expand Your “Eyes”:
    • HTML Parsing: Integrate `BeautifulSoup` to extract cleaner text from webpages instead of sending raw HTML. This will make your LLM’s job much easier and more accurate.
    • More Sources: Add more URLs. Consider using RSS feeds for blogs or news sites for more structured content.
    • Search APIs: For broader searches, look into free tiers of search APIs (e.g., Bing Web Search API, Google Custom Search API, or even social media specific ones if you want to brave that landscape).
  4. Improve Reporting: Instead of just printing or saving to a text file, think about:
    • Sending an email or a Slack message with the findings.
    • Storing results in a simple database (like SQLite) for easier querying.
  5. Add Scheduling: Right now, you run it manually. Use Python’s `schedule` library or `cron` jobs (on Linux/macOS) or Task Scheduler (on Windows) to run your script automatically every few hours or once a day.
  6. Error Handling: Make your error handling more robust. What happens if an API call fails repeatedly?

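For takeaway 6, one common pattern is retrying with exponential backoff: wait 1s, then 2s, then 4s between attempts, and give up after a few failures. This is a generic sketch of the pattern, not something from the script above (the injectable `sleep` parameter is just there to make it easy to test):

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on failure wait base_delay * 2^attempt before retrying."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as e:
            if attempt == attempts - 1:
                raise  # out of attempts: let the caller see the error
            delay = base_delay * (2 ** attempt)
            print(f"Attempt {attempt + 1} failed ({e}); retrying in {delay:.0f}s...")
            sleep(delay)
```

Wrapping the flaky calls, e.g. `with_retries(lambda: fetch_url_content(url))`, makes the agent shrug off transient network hiccups instead of skipping a source.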
This “Simple Social Listener” is just the tip of the iceberg. You’ve now got a foundational understanding of how an AI agent can be broken down into a brain, tools, and a loop. This pattern applies to far more complex agents, whether they’re planning tasks, writing code, or managing your calendar.

The biggest thing is just to start. Don’t let the grand vision of a fully autonomous AI assistant overwhelm you. Start small, build something practical, and iterate. That’s how I got here, and that’s how you’ll build your own incredible AI agents. Happy building, and I’ll catch you next time!

Written by Jake Chen

AI educator passionate about making complex agent technology accessible. Created online courses reaching 10,000+ students.
