Hey there, agent-in-training! Emma here, back on agent101.net. Today, I want to talk about something that’s been buzzing in my Slack channels and personal projects for the last few months: getting an AI agent to actually do something useful for you, consistently, without needing a PhD in prompt engineering or a server farm in your closet. Specifically, we’re going to explore how to build a super simple, single-task AI agent that fetches and summarizes specific news for you – a little personal news hound, if you will.
I know, I know. “AI agent” still sounds like something out of a sci-fi movie, right? Or maybe you’ve dabbled with ChatGPT and wondered, “Okay, but how do I make this thing run on its own without me typing every single command?” That’s exactly the gap we’re going to bridge today. My goal for this article is to demystify the process and show you that building a small, focused AI agent isn’t nearly as scary as it sounds. Think of it as teaching your digital assistant one very specific, repeatable trick.
My own journey into agents started, honestly, out of pure frustration. I was spending way too much time sifting through tech news, trying to find mentions of specific AI model updates or new agent frameworks. I’d set up Google Alerts, but they were often too broad or too slow. I wanted something that understood context, could filter out the noise, and give me a concise summary. So, like any good lazy programmer (which I proudly claim to be!), I thought, “There has to be a way to automate this with an AI.”
And there was! After a few false starts, some head-scratching over API keys, and a fair bit of trial and error with different large language models (LLMs), I landed on a pattern that works. It’s not a multi-agent system coordinating complex tasks, nor is it going to write your next novel. But it’s a perfect entry point for understanding how these pieces fit together to create something genuinely helpful.
Why a “News Hound” Agent?
When you’re just starting, picking a project that’s too ambitious is a sure-fire way to get discouraged. That’s why I recommend starting with a single-task agent. Our “News Hound” agent is perfect for a few reasons:
- Clear Goal: Find specific news, summarize it. Simple.
- Tangible Output: You get a summary you can actually read.
- Relies on External Data: It teaches you how agents interact with the outside world (fetching information).
- Uses LLMs for Interpretation: The core of most agents is an LLM understanding and generating text.
- Repeatable: Once built, you can run it daily, weekly, or whenever you want.
Imagine waking up, and your agent has already pulled together a short digest of all the latest advancements in “AI ethics in large language models” or “new developments in autonomous driving sensors.” No more endless scrolling through generic tech blogs. That’s the dream, and it’s totally achievable.
The Anatomy of Our Simple News Hound Agent
Every AI agent, no matter how simple, usually has a few core components. For our News Hound, here’s what we’ll need:
- The “Brain” (LLM): This is our large language model. It will understand what we’re looking for and summarize the information. I’ll be using OpenAI’s models for this example because they’re widely accessible and have good documentation, but you could swap in Anthropic’s Claude or even a local open-source model if you’re feeling adventurous.
- The “Eyes” (Information Fetcher): Our agent needs a way to get information from the internet. For news, an RSS feed reader or a simple web scraping library will do the trick. We’ll keep it simple with a news API for consistency.
- The “Instructions” (Prompt): This is how we tell the LLM what to do. It’s crucial for getting good results.
- The “Orchestrator” (Python Script): A simple script to tie everything together, telling the agent when to fetch, when to summarize, and where to output the results.
Don’t worry if those terms sound a bit much right now. We’ll break each piece down.
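Before we write the real code, here’s how those four pieces fit together in miniature. This is a conceptual sketch with stand-in stub functions — none of these names are the ones we’ll actually build below:

```python
# Conceptual sketch of the agent loop. Every function here is a stub
# standing in for a real component -- the names are illustrative only.

def fetch_information(query):
    # The "eyes": a real version would call a news API or read RSS feeds.
    return [f"article about {query}"]

def build_instructions(articles, focus):
    # The "instructions": a real prompt would include the article text.
    return f"Summarize {len(articles)} article(s), focusing on: {focus}"

def ask_llm(prompt):
    # The "brain": a real version would send the prompt to an LLM API.
    return f"[summary based on prompt: {prompt}]"

def run_agent(query, focus):
    # The "orchestrator": wires the other three pieces together.
    articles = fetch_information(query)
    prompt = build_instructions(articles, focus)
    return ask_llm(prompt)

print(run_agent("AI agents", "new frameworks"))
```

If you can follow this toy loop, you already understand the architecture — everything below is just swapping real implementations into each stub.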
What You’ll Need Before We Start
- Python: Make sure you have Python 3.8+ installed.
- An OpenAI API Key: You can get one from the OpenAI platform. There’s a free tier for initial testing, but you’ll likely need to add a payment method for sustained use. Keep this key secret!
- A News API Key (Optional but Recommended): Services like NewsAPI.org or GNews API offer free tiers that are perfect for this. It makes fetching structured news data much easier than raw web scraping. For this tutorial, I’ll assume you have a NewsAPI.org key.
- Basic Text Editor: VS Code, Sublime Text, or even Notepad will do.
Step 1: Setting Up Your Environment
First things first, let’s create a new directory for our project and install the necessary libraries. Open your terminal or command prompt:
```bash
mkdir news_hound_agent
cd news_hound_agent
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install openai requests python-dotenv
```
We’re installing:
- `openai`: To interact with OpenAI’s models.
- `requests`: To make HTTP requests to the news API.
- `python-dotenv`: To safely store our API keys.
Next, create a file named .env in your news_hound_agent directory and add your API keys:
```
OPENAI_API_KEY="your_openai_api_key_here"
NEWS_API_KEY="your_newsapi_key_here"
```
Replace the placeholders with your actual keys. Make sure to add .env to your .gitignore file if you ever put this project into a Git repository!
Step 2: Building the Information Fetcher (The “Eyes”)
Let’s create a Python file named news_fetcher.py. This module will be responsible for grabbing the news articles.
```python
# news_fetcher.py
import os

import requests
from dotenv import load_dotenv

load_dotenv()  # Load environment variables from .env file

NEWS_API_KEY = os.getenv("NEWS_API_KEY")
NEWS_API_URL = "https://newsapi.org/v2/everything"


def fetch_news(query, language='en', sort_by='relevancy', page_size=10):
    """
    Fetches news articles from NewsAPI.org based on a query.
    """
    if not NEWS_API_KEY:
        print("Error: NEWS_API_KEY not found in .env file.")
        return []

    params = {
        'q': query,
        'language': language,
        'sortBy': sort_by,
        'pageSize': page_size,
        'apiKey': NEWS_API_KEY
    }

    try:
        response = requests.get(NEWS_API_URL, params=params)
        response.raise_for_status()  # Raise an exception for HTTP errors (4xx or 5xx)
        data = response.json()
        articles = data.get('articles', [])
        return articles
    except requests.exceptions.RequestException as e:
        print(f"Error fetching news: {e}")
        return []


if __name__ == "__main__":
    # Example usage when running this script directly
    search_term = "AI agents for personal productivity"
    articles = fetch_news(search_term, page_size=5)
    if articles:
        print(f"Found {len(articles)} articles for '{search_term}':")
        for i, article in enumerate(articles):
            print(f"{i+1}. {article.get('title', 'No Title')} - {article.get('url', 'No URL')}")
    else:
        print(f"No articles found for '{search_term}'.")
```
This script defines a function fetch_news that takes a query (e.g., “AI ethics”) and returns a list of article dictionaries. Each dictionary contains information like the title, description, and URL. The if __name__ == "__main__": block is just for testing this module in isolation.
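For reference, each article dictionary comes back in the shape documented by NewsAPI.org — the field names below match their docs, but the values here are invented for illustration:

```python
# A representative article dict in the shape NewsAPI.org returns.
# Field names follow their documented schema; values are made up.
article = {
    "source": {"id": None, "name": "Example Tech Blog"},
    "author": "Jane Doe",
    "title": "New agent framework released",
    "description": "A lightweight toolkit for orchestrating LLM calls.",
    "url": "https://example.com/agent-framework",
    "publishedAt": "2026-03-18T09:00:00Z",
}

# .get() with a default keeps us safe when a field is missing or null.
title = article.get("title", "No Title")
author = article.get("author") or "Unknown author"
print(f"{title} ({author})")
```

That defensive `.get()` pattern matters: real feeds frequently return `null` for `author` or `description`, and you don’t want one sloppy article to crash your agent.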
Step 3: Crafting the Prompt (The “Instructions”)
This is where the magic of the LLM comes in. The better your prompt, the better your summary will be. Let’s create a file called summarizer.py.
```python
# summarizer.py
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))


def summarize_articles(articles, specific_focus="general overview"):
    """
    Summarizes a list of news articles using an OpenAI LLM.
    The specific_focus parameter helps guide the summary.
    """
    if not articles:
        return "No articles provided."

    # Prepare the articles for the LLM
    article_texts = []
    for i, article in enumerate(articles):
        title = article.get('title', 'No Title')
        description = article.get('description', 'No Description')
        url = article.get('url', 'No URL')
        article_texts.append(f"Article {i+1}:\nTitle: {title}\nDescription: {description}\nURL: {url}\n---")
    combined_text = "\n\n".join(article_texts)

    # The prompt for our LLM
    prompt = f"""
You are an expert news analyst. Your task is to review the following news articles and provide a concise summary.
The summary should focus specifically on "{specific_focus}".
Extract the key developments, trends, and important announcements related to this focus.
Keep the summary to under 300 words, using clear and professional language.
If an article is not relevant to the specific focus, you can briefly mention why or omit it.

News Articles:
{combined_text}

Summary focusing on "{specific_focus}":
"""

    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # You can use "gpt-4" for better quality, but it's more expensive
            messages=[
                {"role": "system", "content": "You are a helpful and concise news summarizer."},
                {"role": "user", "content": prompt}
            ],
            max_tokens=500,   # Adjust as needed
            temperature=0.7   # A bit creative, but still factual
        )
        summary = response.choices[0].message.content.strip()
        return summary
    except Exception as e:
        print(f"Error summarizing articles: {e}")
        return "Could not generate summary due to an error."


if __name__ == "__main__":
    # Example dummy articles for testing
    dummy_articles = [
        {"title": "AI in healthcare sees new funding", "description": "Investment surges in startups applying AI for diagnostics.", "url": "http://example.com/ai-health"},
        {"title": "New electric car model released", "description": "Luxury EV brand unveils its latest vehicle with enhanced range.", "url": "http://example.com/ev-car"},
        {"title": "Ethical AI guidelines proposed by EU", "description": "European Union drafts strict rules for responsible AI development.", "url": "http://example.com/eu-ai"},
    ]
    focus = "AI ethics and regulation"
    summary_result = summarize_articles(dummy_articles, focus)
    print(f"\n--- Summary for '{focus}' ---\n{summary_result}")
```
In this script:
- We load the OpenAI API key.
- `summarize_articles` takes a list of articles and a `specific_focus` string. This `specific_focus` is key! It tells the LLM what lens to use when summarizing, preventing generic outputs.
- The prompt is carefully constructed to give the LLM a role (“expert news analyst”), clear instructions (concise summary, specific focus, word limit), and the content to work with.
- We use `gpt-3.5-turbo` as it’s a good balance of cost and performance for this task. Feel free to experiment with `gpt-4` if you want more nuanced summaries.
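One practical gotcha I hit early on: fetch enough articles and the combined text can overflow the model’s context window. A crude character-based cap keeps the prompt bounded — this helper is my own safeguard, not part of the OpenAI API, and a real tokenizer like tiktoken would be more precise:

```python
def cap_text(text, max_chars=12000):
    """Crudely cap LLM input by character count.

    Characters are only a rough proxy for tokens (around 4 characters
    per token for typical English text); a real tokenizer is exact.
    """
    if len(text) <= max_chars:
        return text
    # Cut at the limit and leave a visible marker for the LLM (and you).
    return text[:max_chars] + "\n[...truncated to fit the context window...]"
```

You’d call something like `combined_text = cap_text(combined_text)` inside `summarize_articles`, right before building the prompt.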
Step 4: The Orchestrator (Tying It All Together)
Finally, let’s create our main agent script, main_agent.py, which will call our fetcher and summarizer.
```python
# main_agent.py
import datetime

from news_fetcher import fetch_news
from summarizer import summarize_articles


def run_news_hound_agent(search_query, summary_focus):
    """
    Orchestrates the news fetching and summarization process.
    """
    print(f"[{datetime.datetime.now()}] Starting News Hound Agent...")
    print(f"Searching for: '{search_query}' with focus on '{summary_focus}'")

    # Step 1: Fetch the news articles
    print("Fetching news articles...")
    articles = fetch_news(search_query, page_size=10)  # Fetch 10 articles
    if not articles:
        print("No articles found or an error occurred during fetching. Exiting.")
        return
    print(f"Found {len(articles)} relevant articles.")

    # Step 2: Summarize the articles
    print("Summarizing articles with LLM...")
    summary = summarize_articles(articles, summary_focus)

    # Step 3: Output the results
    print("\n--- Daily News Digest ---")
    print(f"Date: {datetime.date.today()}")
    print(f"Query: {search_query}")
    print(f"Focus: {summary_focus}")
    print("\nSummary:")
    print(summary)
    print("\n--- End of Digest ---")

    # Optional: Save to a file
    output_filename = f"news_digest_{datetime.date.today().isoformat()}.txt"
    with open(output_filename, "w", encoding="utf-8") as f:
        f.write(f"Daily News Digest for {datetime.date.today()}\n")
        f.write(f"Query: {search_query}\n")
        f.write(f"Focus: {summary_focus}\n\n")
        f.write("Summary:\n")
        f.write(summary)
        f.write("\n\nOriginal Article Titles (for reference):\n")
        for article in articles:
            f.write(f"- {article.get('title', 'No Title')}\n")
    print(f"\nDigest saved to {output_filename}")


if __name__ == "__main__":
    # Define what your agent should search for and focus on
    my_search_query = "AI agent frameworks OR LLM orchestration"
    my_summary_focus = "new tools and methods for building AI agents"
    run_news_hound_agent(my_search_query, my_summary_focus)
```
This script orchestrates the whole process:
- It defines a `run_news_hound_agent` function that takes a `search_query` (what to look for in news titles/descriptions) and a `summary_focus` (what the LLM should specifically highlight).
- It calls `fetch_news` to get the raw articles.
- It then passes those articles and the focus to `summarize_articles`.
- Finally, it prints the summary to the console and saves it to a text file for easy reading later.
Running Your News Hound Agent!
Now, open your terminal, make sure your virtual environment is active, and run:
```bash
python main_agent.py
```
You should see output indicating the agent is fetching news, then summarizing, and finally, your personalized news digest printed to the console and saved as a file! The first run might take a few seconds as the LLM processes the request.
What I love about this is how immediate the feedback is. You put in a query, specify a focus, and boom – you get a tailored summary. No more generic headlines. For me, this was a huge “aha!” moment. It wasn’t just talking to an AI; it was getting an AI to perform a specific, valuable task on my behalf.
Tweaking and Expanding Your Agent
This is just the beginning! Here are some ideas to make your News Hound even better:
- Multiple Queries/Foci: Modify `main_agent.py` to run for several different queries or foci and generate multiple digests.
- Scheduling: Use tools like `cron` (Linux/macOS) or Windows Task Scheduler to run `main_agent.py` automatically every morning.
- Different LLMs: Experiment with other LLMs. Maybe Anthropic’s Claude 3 Opus for longer contexts or Llama 3 for local processing.
- Output Formats: Instead of a text file, save the output as HTML, a Markdown file, or even send it to your email or a Slack channel (you’d need to integrate with those APIs).
- Contextual Filtering: Before summarizing, you could add an intermediate step where the LLM (or a simpler text classifier) determines if each fetched article is truly relevant to your specific focus, filtering out noise even further.
- Advanced News Sources: Explore other news APIs or even web scraping specific sites (though be mindful of terms of service!).
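The contextual-filtering idea above doesn’t even need an LLM to get started — a plain keyword score over titles and descriptions already cuts a lot of noise. Here’s an illustrative sketch of mine (the scoring and the 0.25 threshold are arbitrary, tune them to taste):

```python
def keyword_relevance(article, keywords):
    """Fraction of focus keywords found in the article's title or
    description (case-insensitive). Returns a score from 0.0 to 1.0."""
    text = f"{article.get('title', '')} {article.get('description', '')}".lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords) if keywords else 0.0

def filter_articles(articles, keywords, threshold=0.25):
    # Keep only articles matching at least `threshold` of the keywords.
    return [a for a in articles if keyword_relevance(a, keywords) >= threshold]

articles = [
    {"title": "New LLM agent framework ships", "description": "Tooling for agents"},
    {"title": "Quarterly earnings report", "description": "Retail sales climb"},
]
kept = filter_articles(articles, ["agent", "llm", "framework"])
print(len(kept))  # prints 1 -- the earnings story is filtered out
```

Dropping this between fetching and summarizing means you pay the LLM only for articles that survive the cheap filter.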
My own News Hound evolved from this basic setup. I now have it running nightly, scanning for updates on specific agent frameworks I’m watching, and it emails me a summary every morning. It’s saved me hours of sifting through RSS feeds and Twitter threads. It’s truly become my digital research assistant.
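If you want a nightly run like mine, a single crontab entry is enough on Linux/macOS — the paths below are illustrative, so point them at your own project and virtual environment:

```shell
# Run the agent every morning at 07:00 and append all output to a log.
# Edit with `crontab -e`; paths here are examples, not real ones.
0 7 * * * cd /home/you/news_hound_agent && ./venv/bin/python main_agent.py >> agent.log 2>&1
```

Using the venv’s Python directly (`./venv/bin/python`) sidesteps the classic cron pitfall of the job running without your activated environment.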
Actionable Takeaways
So, what should you take away from all this?
- Start Small, Think Big: Don’t try to build the next AGI on your first try. A single-purpose agent is a fantastic learning tool.
- Prompts are Power: The quality of your output is directly tied to the clarity and specificity of your prompts. Spend time refining them.
- APIs Are Your Friends: LLMs are powerful, but they need data. Learning to interact with external APIs (like NewsAPI) is fundamental.
- Code is the Glue: Python (or any scripting language) is what turns a collection of powerful components into a functioning agent.
- Iterate and Experiment: Don’t be afraid to change models, tweak prompts, or try different data sources. That’s how you learn what works best for your specific needs.
Building this News Hound agent is a practical, hands-on way to understand the core concepts behind AI agents. It shows you how to connect an LLM to external tools and automate a useful task. This foundational knowledge is what you’ll build upon as you explore more complex multi-agent systems or integrate agents into larger applications.
Go ahead, give it a try! You might be surprised at how quickly you can get your own little digital assistant up and running. And as always, if you hit a snag or discover a cool new way to extend this, drop a comment below or find me on social media. Happy agent building!
Related Articles
- Learn AI: Your Complete 2026 AI Beginner’s Path
- AI Agents for Beginners: Your Friendly Guide
- Ai Agent Tutorials With Real-World Examples
🕒 Originally published: March 19, 2026