Hey there, agent-in-training! Emma here, back on agent101.net, and today we’re exploring something that’s been bubbling under the surface for a while but is finally hitting its stride: making AI agents actually do things for you, even if you’re just starting out. Specifically, we’re going to talk about how to get a simple AI agent to monitor a specific part of the internet for you and tell you when something changes. Think of it as your personal digital bloodhound, but way less messy.
For months, I’ve been messing around with various AI tools, trying to push them beyond just writing me funny poems or summarizing articles. I wanted them to be proactive. I wanted them to be… agents! But every time I looked at a tutorial, it felt like I needed a computer science degree and a spare weekend to even set up the environment. Frustrating, right?
Then, a few weeks ago, I was trying to track a specific product release – a new smart home device that I just had to get my hands on. It was one of those things that would drop without much warning, and the stock would vanish in minutes. Refreshing the page every five minutes was driving me nuts. My fingers ached, my eyes were blurry, and my coffee intake was dangerously high. That’s when it clicked: this is exactly the kind of repetitive, vigilance-requiring task an AI agent is perfect for. And I figured out a way to do it that even I, a self-proclaimed “gets confused by YAML sometimes” tech blogger, could manage. No fancy frameworks, no obscure libraries, just a bit of Python and a good understanding of what an AI assistant can do for you.
So today, we’re going to build a simple AI agent that monitors a webpage for a specific change and notifies you. This isn’t about building the next Skynet; it’s about solving a real, everyday problem with a dash of AI magic. And trust me, if I can do it, you can too.
What Even IS a Simple Monitoring AI Agent?
At its core, a monitoring AI agent is just a program that watches something and reacts when certain conditions are met. In our case, it’s going to watch a webpage. Why “AI agent” and not just “script”? Well, the “AI” part comes in when we start thinking about how it interprets the changes and how it decides what’s important. For this beginner tutorial, we’re keeping the AI pretty light – think of it as using an AI model to help us understand the text, not necessarily making complex decisions. The “agent” part is the loop: observe, think (a little), act, repeat.
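That observe–think–act loop is worth seeing as a skeleton before we fill it in. Here’s a minimal sketch — the `observe`, `decide`, and `act` callables are placeholders I’m inventing for illustration, not part of any library:

```python
import time

def agent_loop(observe, decide, act, iterations, delay_seconds=0):
    """A bare-bones agent loop: observe the world, think a little, act, repeat."""
    history = []
    for _ in range(iterations):
        observation = observe()          # e.g. fetch a webpage
        decision = decide(observation)   # e.g. ask "did anything important change?"
        if decision is not None:
            act(decision)                # e.g. send a notification
        history.append((observation, decision))
        time.sleep(delay_seconds)        # be polite: don't spin at full speed
    return history

# Toy run: "observe" a counter, "decide" only on even numbers, "act" by printing.
counter = iter(range(3))
log = agent_loop(
    observe=lambda: next(counter),
    decide=lambda n: f"even: {n}" if n % 2 == 0 else None,
    act=print,
    iterations=3,
)
# log is [(0, 'even: 0'), (1, None), (2, 'even: 2')]
```

Everything we build below is just this loop with a webpage as the observation and an AI model doing the deciding.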
My goal with this agent was simple: I wanted to know the *instant* the product page for that smart home device changed from “Out of Stock” to “In Stock.” I didn’t want to get spammed with every tiny update, just that one crucial piece of information. This isn’t just for product releases, though. Imagine tracking job postings, news articles about a specific topic, changes in competitor pricing, or even updates to your favorite webcomic. The possibilities are pretty neat once you get the hang of it.
The Tools We’ll Need (Don’t Panic, It’s Minimal)
- Python: If you don’t have it, go grab it. It’s free and relatively easy to use.
- A few Python libraries: We’ll use `requests` for fetching webpages, `BeautifulSoup` for parsing HTML, and potentially a way to send notifications (like `smtplib` for email or a simple webhook).
- An AI model (optional but helpful): For this, I’m going to assume you have access to something like OpenAI’s API or Google’s Gemini API. We’ll use it to intelligently detect changes rather than just simple text matching. This is where the “AI” really comes into play beyond just a script.
- A text editor: VS Code, Sublime Text, even Notepad if you’re feeling old-school.
See? Nothing too intimidating. We’re not spinning up servers or configuring intricate cloud services. This is all local, on your machine, just like I did it.
Step 1: Fetching the Webpage Content
First things first, our agent needs to actually “see” the webpage. We’ll use the requests library for this. It’s like your browser, but without the fancy graphics – it just gets the raw HTML.
```python
import requests

def fetch_page_content(url):
    try:
        # A timeout stops the agent from hanging forever on a slow server
        response = requests.get(url, timeout=10)
        response.raise_for_status()  # Raises HTTPError for bad responses (4xx or 5xx)
        return response.text
    except requests.exceptions.RequestException as e:
        print(f"Error fetching URL {url}: {e}")
        return None

# Let's try it out with a dummy URL (replace with your actual target).
# NEVER scrape without checking a site's robots.txt and terms of service.
# Be respectful and don't hammer servers!
target_url = "https://example.com/product-page"  # REPLACE THIS WITH YOUR ACTUAL TARGET

current_content = fetch_page_content(target_url)
if current_content:
    print("Successfully fetched content. First 200 chars:")
    print(current_content[:200])
else:
    print("Failed to fetch content.")
```
When I first wrote this, I actually tried to fetch a page that blocked automated requests. Got a 403 error. Oops! Had to find a different target site or figure out how to add headers to mimic a browser. For this tutorial, let’s assume the site you’re targeting is okay with basic requests. Always check the site’s robots.txt file (e.g., https://example.com/robots.txt) to see what they allow and disallow for crawlers. Ethical scraping is important!
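You can even check robots.txt programmatically with Python’s standard-library `urllib.robotparser`, and identify your agent honestly with a User-Agent header instead of masquerading as a browser. A small sketch — the robots.txt rules and bot name here are made up so the example is self-contained:

```python
from urllib.robotparser import RobotFileParser

# In real use you'd call rp.set_url("https://example.com/robots.txt") and rp.read();
# here we parse a hypothetical robots.txt inline for illustration.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /admin/",
    "Allow: /",
])

print(rp.can_fetch("my-monitor-bot", "https://example.com/product-page"))  # True
print(rp.can_fetch("my-monitor-bot", "https://example.com/admin/secret"))  # False

# A polite request identifies itself; pass this to requests.get(url, headers=headers)
headers = {"User-Agent": "my-monitor-bot/0.1 (personal stock-watcher)"}
```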
Step 2: Parsing the HTML to Find What Matters
Once we have the raw HTML, it’s a jumbled mess of tags and text. We need to extract the specific part that indicates “In Stock” or “Out of Stock.” This is where BeautifulSoup comes in. It helps us navigate the HTML structure like a map.
This was the trickiest part for me. Every website is different. You have to open the target page in your browser, right-click on the element you’re interested in (like the “In Stock” text), and select “Inspect” or “Inspect Element.” This will show you the HTML code for that specific part. Look for unique identifiers like IDs or class names.
```python
from bs4 import BeautifulSoup

def extract_relevant_info(html_content):
    if not html_content:
        return "No content to parse."
    soup = BeautifulSoup(html_content, 'html.parser')
    # This part is highly specific to the webpage you're monitoring.
    # YOU WILL NEED TO CHANGE THESE SELECTORS.
    # Example: a div with a specific class, or a span with specific text.
    # On my product page, the stock status lived in a tag with a class like "product-status".
    stock_element = soup.find('span', class_='product-status')  # Adjust class name based on your inspection
    if not stock_element:
        stock_element = soup.find('div', id='stock-indicator')  # Adjust ID based on your inspection
    if stock_element:
        return stock_element.get_text(strip=True)
    # If we can't find the specific element, fall back to the whole body text
    # and let the AI model figure it out (noisier, but better than nothing).
    body_content = soup.find('body')
    if body_content:
        return body_content.get_text(separator=' ', strip=True)
    return "Specific element not found and no <body> tag present."

# Assuming `current_content` is from the previous step
relevant_text = extract_relevant_info(current_content)
print(f"Extracted relevant text: {relevant_text[:200]}...")  # Print first 200 chars
```
My initial attempts here were hilarious failures. I tried to just grab the entire page and feed it to the AI, which sometimes worked but was really slow and expensive (API calls aren’t free!). Then I tried to be too specific and missed the element because the class name changed slightly. The key is finding a balance: specific enough to reduce noise, general enough to handle minor layout changes.
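One trick that helped me find that balance: BeautifulSoup also understands CSS selectors via `select_one`, so you can often copy a selector almost straight out of your browser’s inspector. A self-contained sketch with made-up HTML (your page’s structure will differ):

```python
from bs4 import BeautifulSoup

# Hypothetical snippet of a product page, for illustration only
html = """
<div class="product-info">
  <h1>New Widget X</h1>
  <span class="product-status badge">Out of Stock</span>
</div>
"""
soup = BeautifulSoup(html, "html.parser")

# CSS selector syntax: tag.class, tag#id, parent descendant, etc.
status = soup.select_one("div.product-info span.product-status")
print(status.get_text(strip=True))  # Out of Stock
```

Note that `span.product-status` still matches even though the tag has a second class (`badge`) — which is exactly the kind of tolerance that saved me when the site tweaked its styling.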
Step 3: The AI Magic – Detecting Meaningful Change
Now for the fun part! Instead of just doing a simple text comparison (which would trigger on every minor ad change), we’ll use an AI model to tell us if the *meaning* of the relevant text has changed in a significant way regarding our interest (the stock status).
For this, you’ll need an API key for your chosen AI model (OpenAI, Gemini, etc.). Store it securely, not directly in your code!
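A tiny helper I like for this — fail fast and loudly if the key isn’t set, instead of getting a cryptic error mid-run. The function name is my own invention, not part of any SDK:

```python
import os

def load_api_key(var_name="OPENAI_API_KEY"):
    """Read an API key from the environment, failing loudly if it's missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f'{var_name} is not set. Run: export {var_name}="sk-..." before starting the agent.'
        )
    return key

# Demo with a throwaway variable name so we don't touch your real key:
os.environ["DEMO_API_KEY"] = "sk-dummy-for-demo"
print(load_api_key("DEMO_API_KEY"))  # sk-dummy-for-demo
```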
```python
import os
from openai import OpenAI  # or `google.generativeai` if you're using Gemini

# NEVER hardcode API keys! Use environment variables or a config file.
# Set the key in your shell before running: export OPENAI_API_KEY="sk-..."
client = OpenAI()  # Reads OPENAI_API_KEY from the environment

def ask_ai_about_change(old_text, new_text, target_keyword="in stock"):
    prompt = f"""
You are an intelligent assistant monitoring a webpage for changes related to product availability.
I will provide you with two versions of text extracted from a product page.
Your task is to determine if the product availability status has changed significantly,
specifically if it has become '{target_keyword}' when it previously was not,
or if it has changed away from '{target_keyword}'.

Old text:
"{old_text}"

New text:
"{new_text}"

Has the product availability changed to or from '{target_keyword}'?
If yes, briefly explain the change. If no, just say 'No significant change.'
Focus only on availability; ignore minor wording or formatting changes.
"""
    try:
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",  # or "gpt-4", "gpt-4o-mini", etc.
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": prompt},
            ],
            max_tokens=150,
        )
        return completion.choices[0].message.content
    except Exception as e:
        print(f"Error calling AI model: {e}")
        return "AI analysis failed."

# Example usage (you'd normally do this in a loop).
# Assume 'previous_relevant_text' was from an earlier fetch.
previous_relevant_text = "Out of Stock. Expected restock in 2 weeks."
new_relevant_text = "In Stock! Limited quantity available."
ai_response = ask_ai_about_change(previous_relevant_text, new_relevant_text, "In Stock")
print(f"AI's assessment: {ai_response}")

# Simulate no change
new_relevant_text_no_change = "Out of Stock. Expected restock in 3 weeks."
ai_response_no_change = ask_ai_about_change(previous_relevant_text, new_relevant_text_no_change, "In Stock")
print(f"AI's assessment (no change simulation): {ai_response_no_change}")
```
This is where the agent truly becomes “AI.” Instead of me writing complex regex patterns or brittle string comparisons, I just tell the AI what kind of change I’m looking for. It’s like having a little intern who understands context! When I first tried this, I was worried the AI would get confused by slight wording variations. But with a good prompt, it was surprisingly solid. It correctly identified when “Coming Soon” became “Pre-order Now” as a significant change, even though the exact words “In Stock” weren’t present.
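One cost-saving refinement I’d suggest (my own addition, not strictly required): normalize the extracted text before comparing, so trivial whitespace or capitalization shuffles never trigger an API call in the first place. The AI only gets involved when the normalized text actually differs:

```python
import re

def normalize(text):
    """Collapse whitespace and lowercase so cosmetic changes compare equal."""
    return re.sub(r"\s+", " ", text or "").strip().lower()

def worth_asking_ai(old_text, new_text):
    """Only spend an API call when the normalized text actually changed."""
    return normalize(old_text) != normalize(new_text)

print(worth_asking_ai("Out of  Stock", "out of stock\n"))  # False (cosmetic only)
print(worth_asking_ai("Out of Stock", "In Stock!"))        # True  (real difference)
```

Regex and string matching handle the cheap, obvious cases; the AI handles the genuinely ambiguous ones like “Coming Soon” becoming “Pre-order Now.”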
Step 4: Putting It All Together – The Agent Loop and Notification
An agent needs to run continuously. So we’ll wrap our fetching and checking logic in a loop. And when it finds something, it needs to tell us!
```python
import time
# ... (plus the imports and functions from the previous steps) ...

# Global variable to store the last known state
last_known_status_text = ""

def send_notification(message):
    print(f"\n!!! AGENT ALERT !!!\n{message}\n")
    # Here you would integrate with a real notification service:
    #   - email (using smtplib)
    #   - a push service (Pushover, an IFTTT webhook, a Telegram bot)
    #   - an SMS gateway
    # For simplicity, we just print to the console.
    #
    # Example for email (requires configuration):
    # import smtplib
    # from email.mime.text import MIMEText
    # msg = MIMEText(message)
    # msg['Subject'] = 'Webpage Change Detected!'
    # msg['From'] = '[email protected]'
    # msg['To'] = '[email protected]'
    # try:
    #     with smtplib.SMTP_SSL('smtp.example.com', 465) as smtp:
    #         smtp.login('[email protected]', 'your_password')
    #         smtp.send_message(msg)
    #     print("Email notification sent.")
    # except Exception as e:
    #     print(f"Failed to send email: {e}")

def run_agent(url, interval_seconds=300, target_keyword="In Stock"):
    global last_known_status_text
    print(f"Agent starting to monitor {url} every {interval_seconds} seconds.")
    print("Initial fetch...")
    # Initial fetch to set the baseline
    html_content = fetch_page_content(url)
    if html_content:
        last_known_status_text = extract_relevant_info(html_content)
        print(f"Initial status set: {last_known_status_text[:100]}...")
    else:
        # If the initial fetch fails, the agent will try again on the first loop iteration
        print("Could not fetch initial content. Agent starting with empty baseline.")

    while True:
        print(f"\n[{time.strftime('%Y-%m-%d %H:%M:%S')}] Checking {url}...")
        current_html_content = fetch_page_content(url)
        if current_html_content:
            new_relevant_text = extract_relevant_info(current_html_content)
            if not last_known_status_text:  # Handle case where initial fetch failed
                last_known_status_text = new_relevant_text
                print(f"Baseline set after initial failure: {last_known_status_text[:100]}...")
            if new_relevant_text != last_known_status_text:
                print("Potential change detected. Asking AI...")
                ai_analysis = ask_ai_about_change(last_known_status_text, new_relevant_text, target_keyword)
                if "No significant change." not in ai_analysis:
                    send_notification(f"Webpage update for {url}:\n{ai_analysis}\nNew text: {new_relevant_text[:200]}...")
                else:
                    print(f"AI determined: {ai_analysis} (minor, ignored)")
                # Update the baseline either way, so we don't keep re-asking the AI
                # about the same minor difference. Keep the old baseline instead if
                # you want the AI to re-evaluate it on every check.
                last_known_status_text = new_relevant_text
            else:
                print("No textual change detected in relevant section.")
        else:
            print(f"Failed to fetch content from {url} in this iteration.")
        time.sleep(interval_seconds)  # Wait before checking again

# --- Main execution ---
if __name__ == "__main__":
    # Configure your target URL and desired interval.
    # Be mindful of the website's policies and don't make requests too frequently!
    my_target_url = "https://www.some-store.com/new-widget-X"  # <<< REPLACE THIS!
    monitoring_interval = 600  # Check every 10 minutes (600 seconds)
    desired_status = "In Stock"  # The keyword we're watching for

    # IMPORTANT: ensure your OpenAI API key is set as an environment variable!
    # Run `export OPENAI_API_KEY="sk-..."` in your terminal before starting.
    run_agent(my_target_url, monitoring_interval, desired_status)
```
I set my interval to 5 minutes for that elusive smart home device. Every time I saw the "Checking..." message flash, I felt a little pang of excitement. When the notification finally hit – "In Stock! Limited quantity available." – I swear I heard angels sing. I clicked the link, added it to my cart, and checked out within seconds. Success! My little AI agent had saved me from endless refreshing and helped me snag that gadget.
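If console printing isn’t enough for you, a webhook is the lightest real upgrade to `send_notification`. Here’s a sketch using `requests.post` — the webhook URL and payload fields below are placeholders I made up, so check your chosen service’s docs (Pushover, IFTTT, Telegram, etc.) for its actual format:

```python
import requests

def build_alert_payload(url, ai_analysis, new_text, max_chars=200):
    """Assemble the notification message; kept separate so it's easy to test."""
    return {
        "title": "Webpage Change Detected!",
        "message": f"Update for {url}:\n{ai_analysis}\nNew text: {new_text[:max_chars]}",
    }

def send_webhook(payload, webhook_url="https://example.com/your-webhook"):  # placeholder URL
    try:
        resp = requests.post(webhook_url, json=payload, timeout=10)
        resp.raise_for_status()
        return True
    except requests.exceptions.RequestException as e:
        print(f"Webhook failed: {e}")
        return False

payload = build_alert_payload(
    "https://example.com/product-page",
    "Changed to In Stock.",
    "In Stock! Limited quantity available.",
)
print(payload["title"])  # Webpage Change Detected!
# send_webhook(payload)  # uncomment once webhook_url points at a real service
```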
Actionable Takeaways for Your First AI Agent
- Start Small: Don't try to build a complex conversational agent right off the bat. A simple monitoring task is perfect for learning the ropes.
- Inspect Your Target: Understanding the HTML structure of the webpage you're monitoring is crucial. Use your browser's developer tools.
- Prompt Engineering is Key: The better you describe what kind of change you're looking for to the AI model, the more accurate and useful its responses will be. Experiment with your prompts!
- Be Respectful: Don't hammer websites with requests. Use reasonable intervals, check `robots.txt`, and understand a site's terms of service regarding automated access.
- Secure Your Keys: Never hardcode API keys directly into your script. Use environment variables.
- Iterate: Your first attempt might not be perfect. Mine certainly wasn't! Adjust your `BeautifulSoup` selectors, refine your AI prompt, and tweak your notification method until it works for you.
Building this little agent was a huge confidence booster for me. It showed me that AI isn't just for big companies or academic research. It's a tool that we, as individual users and small-time developers, can use to solve our own problems and make our digital lives a little easier. So go forth, pick a webpage you're tired of manually checking, and build your own digital bloodhound. You'll be amazed at what you can achieve!
🕒 Originally published: March 17, 2026