# Tech Trends Agent
A robust, scalable AI-powered web service combining FastAPI, Pydantic-AI, and MCP servers
This project demonstrates how to build a production-ready AI-powered web service by combining three cutting-edge, open-source technologies:
- FastAPI for high-performance asynchronous APIs
- Pydantic-AI for type-safe, schema-driven agent construction
- Model Context Protocol (MCP) servers as plug-and-play tools
A quick glance at the UI: type a question, choose sources (Hacker News and/or Web Search), and get ranked trend cards with scores, links, and an AI-written summary, so you can quickly see what's trending for any topic or technology.
## What You'll Learn
- Advanced data modeling patterns with Pydantic
- Multi-agent AI systems with A2A communication
- MCP server integration for extensible AI tools
- Production-ready FastAPI deployment patterns with routers
- Docker containerization for AI services
- Type-safe AI agent development with a factory pattern
- Configuration for local LLMs like Ollama, LMStudio, and vLLM
## Architecture
The application is architected for clarity, scalability, and separation of concerns.
```mermaid
flowchart TD
    subgraph UI["Web UI + API Docs"]
        U["User"] -->|HTTP| R["FastAPI Routers<br/>(app/routers)"]
    end
    subgraph CORE["Core Application"]
        R --> D["Dependencies<br/>(app/dependencies)"]
        D --> AM["AgentManager"]
        AM --> AF["AgentFactory"]
        AF --> EA["EntryAgent Service"]
        AF --> SA["SpecialistAgent Service"]
    end
    subgraph A2A_LAYER["Agent-to-Agent"]
        EA <--> A2A["A2A Protocol"]
        SA <--> A2A
    end
    subgraph TOOLS["Tooling (via MCP)"]
        EA --> BS["Brave Search MCP"]
        EA --> HN["Hacker News MCP"]
        SA --> GH["GitHub MCP (opt.)"]
    end
    classDef agent fill:#ffffff,color:#111827,stroke:#60a5fa,stroke-width:2px,rx:10,ry:10
    classDef svc fill:#f8fafc,color:#111827,stroke:#0288d1,stroke-width:2px,rx:10,ry:10
    classDef toolActive fill:#ffffff,color:#111827,stroke:#16a34a,stroke-width:2px,rx:10,ry:10
    class EA,SA agent
    class R,D,AM,AF,A2A svc
    class BS,HN,GH toolActive
```
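To make the router → dependency → manager flow concrete, here is a minimal sketch of what the wiring in `app/dependencies.py` could look like. The `get_agent_manager` name and the use of `app.state` are illustrative assumptions, not necessarily the repository's exact code:

```python
# Hypothetical sketch of app/dependencies.py (names and app.state usage are assumptions).
from fastapi import Request

from app.services.agent_manager import AgentManager


def get_agent_manager(request: Request) -> AgentManager:
    # Assumes a single AgentManager is created at startup and stored on app.state.
    return request.app.state.agent_manager
```

Routers then declare `Depends(get_agent_manager)` rather than constructing agents themselves, which keeps agent lifecycle management in one place.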
## Quick Start (Docker, Recommended)
### Prerequisites
- Docker and Docker Compose
- LLM API Key (e.g., from OpenAI) or a local LLM server (Ollama, etc.)
- GitHub token (optional, for enhanced GitHub features)
### 1. Clone and Set Up

```bash
git clone <your-repo-url>
cd Tech_Trends_Agent
```
### 2. Configure Environment

```bash
# Copy the environment template
cp env.example .env

# Edit .env with your API keys and configuration
vi .env  # or your preferred editor
```
### 3. Configure the LLM Provider

This application supports multiple LLM backends. Configure the provider and model with the following environment variables in your `.env` file:

- `LLM_PROVIDER`: The provider to use. Supported values: `openai` (default), `ollama`, `lmstudio`, `vllm`.
- `LLM_MODEL_NAME`: The name of the model to use (e.g., `gpt-4o`, `llama3`, `mistral`).
- `LLM_API_BASE_URL`: The base URL of the provider's API. Required for local models.
  - For Ollama: `http://localhost:11434/v1`
  - For LMStudio: `http://localhost:1234/v1`
  - For vLLM: the URL of your vLLM server's OpenAI-compatible endpoint
- `LLM_API_KEY`: The API key for the provider. For local models that do not require a key, this can be left empty.

Example for Ollama:

```env
LLM_PROVIDER=ollama
LLM_MODEL_NAME=llama3
LLM_API_BASE_URL=http://localhost:11434/v1
LLM_API_KEY=
```
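As a rough illustration of how these variables might be consumed at startup (a sketch only; the actual logic lives in `app/utils/config.py` and may differ):

```python
# Illustrative sketch only; the real configuration logic in app/utils/config.py may differ.
import os

LLM_PROVIDER = os.environ.get("LLM_PROVIDER", "openai")
LLM_MODEL_NAME = os.environ.get("LLM_MODEL_NAME", "gpt-4o")
LLM_API_BASE_URL = os.environ.get("LLM_API_BASE_URL")  # required for local providers
LLM_API_KEY = os.environ.get("LLM_API_KEY") or None    # empty string counts as "no key"

if LLM_PROVIDER in {"ollama", "lmstudio", "vllm"} and not LLM_API_BASE_URL:
    raise RuntimeError(f"LLM_API_BASE_URL is required when LLM_PROVIDER={LLM_PROVIDER}")
```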
### 4. Start the App

```bash
# Start with Docker (recommended)
./docker-start.sh

# Or manually with docker-compose
docker-compose up --build -d
```
### 5. Access the Application
- Web UI: http://localhost:8000/ui
- Interactive API Documentation: http://localhost:8000/docs
- ReDoc Documentation: http://localhost:8000/redoc
- Health Check: http://localhost:8000/health
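Once the containers are up, you can sanity-check the service against the health endpoint listed above; a minimal, standard-library-only check might look like this:

```python
# Quick smoke test against the running service (standard library only).
from urllib.request import urlopen

with urlopen("http://localhost:8000/health") as resp:
    print(resp.status, resp.read().decode())
```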
### 6. Stop the App

```bash
# Stop the application
./docker-stop.sh
```

## Project Structure
```
HN_Github_Agents/
├── app/
│   ├── agents/              # AI agent service classes
│   │   ├── factory.py
│   │   ├── entry_agent.py
│   │   └── specialist_agent.py
│   ├── routers/             # FastAPI routers
│   │   ├── analysis.py
│   │   ├── chat.py
│   │   └── system.py
│   ├── models/              # Pydantic data models
│   │   ├── requests.py
│   │   ├── responses.py
│   │   └── schemas.py
│   ├── services/            # Business logic services
│   │   ├── a2a_service.py
│   │   └── agent_manager.py
│   ├── utils/               # Utilities and configuration
│   │   ├── config.py
│   │   └── logging.py
│   ├── dependencies.py      # Shared dependencies
│   └── main.py              # FastAPI application entrypoint
├── data/                    # Sample data
├── static/                  # Web interface files
├── tests/                   # Test suite
├── docker-compose.yml
├── Dockerfile
└── pyproject.toml
```
## Advanced Features

### Extending Agent Capabilities
The new architecture uses an `AgentFactory` and composition instead of inheritance. To add a new agent service:

1. Create a new service class in `app/agents/`. It does not need to inherit from any base class.

   ```python
   # app/agents/new_agent_service.py
   from pydantic_ai import Agent


   class NewAgentService:
       def __init__(self, agent: Agent):
           self.agent = agent

       async def process(self, input_data: str) -> str:
           # Agent.run returns a result object, not a plain string.
           result = await self.agent.run(f"Process this: {input_data}")
           return result.output  # `.data` in older pydantic-ai versions
   ```

2. Update the `AgentManager` (`app/services/agent_manager.py`) to create and use your new service.

   ```python
   # In AgentManager.__init__
   self.new_agent_service: NewAgentService | None = None

   # In AgentManager.initialize
   new_agent_prompt = "You are a new agent."
   new_pydantic_agent = agent_factory.create_agent(new_agent_prompt)
   self.new_agent_service = NewAgentService(new_pydantic_agent)
   ```

3. Add a new router in `app/routers/` to expose your service via an API endpoint (see the sketch after this list).
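For step 3, a hypothetical router could look like the following; the module name, route path, and `get_agent_manager` dependency are illustrative assumptions rather than the repository's exact code:

```python
# app/routers/new_agent.py -- hypothetical example; names and paths are assumptions.
from fastapi import APIRouter, Depends

from app.dependencies import get_agent_manager  # assumed helper; see the dependencies sketch above
from app.services.agent_manager import AgentManager

router = APIRouter(prefix="/new-agent", tags=["new-agent"])


@router.post("/process")
async def process(input_data: str, manager: AgentManager = Depends(get_agent_manager)) -> str:
    return await manager.new_agent_service.process(input_data)
```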
This approach keeps agent creation and configuration centralized in the `AgentFactory` and makes services modular and easy to test.
## Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Ensure all tests pass (`pytest`)
- Submit a pull request
