
A hands-on learning project implementing a Model Context Protocol server with FastMCP, streamable-http transport, and dynamic tool discovery
Published on January 25, 2026 by Dai Tran
project mcp-server ai-powered-application learning blog
17 min read
This is a learning-focused project designed to gain hands-on experience developing a Model Context Protocol (MCP) server following modern best practices. The project implements a simple Todo API with two core operations (GET and POST), wrapped in a FastMCP server using streamable-http transport—the modern MCP standard that replaced Server-Sent Events (SSE).
The Model Context Protocol is an open standard developed by Anthropic that enables seamless integration between AI applications and external data sources. By building this MCP server, you’ll learn how AI assistants like Claude Desktop, Roo Code, and GitHub Copilot can access and manipulate real-world data through well-defined tools and resources.
This project teaches you how to build an MCP server end to end: defining tools with FastMCP, serving them over streamable-http, and connecting them to AI clients.
MCP servers act as bridges between AI assistants and external systems, extending AI capabilities beyond their base knowledge. This project provides a practical foundation for building MCP servers that connect AI to databases, APIs, file systems, and other data sources, enabling more powerful and useful AI-powered workflows.
The Model Context Protocol (MCP) is an open protocol, developed by Anthropic, that standardizes how AI applications communicate with external systems and data sources, enabling them to interact with real-world data and services through well-defined tools and resources.
The Todo MCP server implements a layered architecture with three main components:
┌─────────────────┐
│ AI Client │ ← Roo Code / Claude Desktop
│ (VS Code) │
└────────┬────────┘
│ HTTP (streamable-http)
▼
┌─────────────────┐
│ MCP Server │ ← Port 8080
│ (FastMCP) │ Exposes MCP tools
└────────┬────────┘
│ HTTP REST API
▼
┌─────────────────┐
│ Backend API │ ← Port 8000
│ (FastAPI) │ Business logic & storage
└─────────────────┘
Backend API endpoints:
- GET /api/todos - List todos with filtering
- POST /api/todos - Create new todo
- GET /health - Health check

MCP tools:
- get_todos - Retrieve todos with optional filtering
- create_todo - Create new todo items

The Backend API provides RESTful endpoints for todo management:
from datetime import datetime
from typing import Literal, Optional

from pydantic import BaseModel


class TodoCreate(BaseModel):
    title: str
    description: Optional[str] = None
    status: Literal["pending", "in-progress", "completed"] = "pending"


class Todo(TodoCreate):
    id: str
    created_at: datetime
    updated_at: datetime
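The models above can be exercised directly to see how defaults and validation behave; this is a quick sketch assuming pydantic v2 is installed (the model is repeated here so the snippet is self-contained):

```python
from typing import Literal, Optional

from pydantic import BaseModel, ValidationError


class TodoCreate(BaseModel):
    title: str
    description: Optional[str] = None
    status: Literal["pending", "in-progress", "completed"] = "pending"


# Valid input: status falls back to the "pending" default
todo = TodoCreate(title="Learn MCP")
print(todo.model_dump())  # {'title': 'Learn MCP', 'description': None, 'status': 'pending'}

# Invalid status values are rejected by the Literal constraint
try:
    TodoCreate(title="Bad", status="done")
except ValidationError as exc:
    print("rejected with", exc.error_count(), "error")
```

Because `Todo` inherits from `TodoCreate`, the create payload and the stored record stay in sync automatically as fields evolve.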
GET /api/todos - List todos with filtering
@app.get("/api/todos")
async def list_todos(
    status: Optional[str] = None,
    search: Optional[str] = None,
    limit: int = 10,
    offset: int = 0
):
    # Returns filtered todos with pagination
    ...
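The post elides the endpoint body, but the filtering and pagination it describes can be sketched in plain Python over the in-memory store (the storage shape here is an assumption; the response shape matches the `{"todos": ..., "total": ..., "limit": ..., "offset": ...}` payload shown later):

```python
from typing import Optional


def filter_todos(
    todos: list[dict],
    status: Optional[str] = None,
    search: Optional[str] = None,
    limit: int = 10,
    offset: int = 0,
) -> dict:
    """Apply status/search filters, then paginate, mirroring GET /api/todos."""
    results = todos
    if status is not None:
        results = [t for t in results if t["status"] == status]
    if search is not None:
        keyword = search.lower()
        results = [
            t for t in results
            if keyword in t["title"].lower()
            or keyword in (t.get("description") or "").lower()
        ]
    page = results[offset:offset + limit]
    return {"todos": page, "total": len(results), "limit": limit, "offset": offset}


todos = [
    {"title": "Learn MCP", "description": "Study MCP protocol", "status": "pending"},
    {"title": "Write tests", "description": None, "status": "completed"},
]
print(filter_todos(todos, search="mcp"))
```

Note that `total` reports the filtered count before pagination, so clients can page through results correctly.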
POST /api/todos - Create new todo
@app.post("/api/todos")
async def create_todo(todo: TodoCreate):
    # Creates and returns new todo
    ...
GET /health - Health check endpoint
@app.get("/health")
async def health():
    return {"status": "healthy"}
The MCP Server exposes tools that AI assistants can invoke:
Retrieves todos with optional filtering by status and search keyword.
@mcp.tool()
async def get_todos(
    status: Optional[str] = None,
    search: Optional[str] = None
) -> str:
    """
    Get all todos with optional filtering.

    Args:
        status: Filter by status (pending, in-progress, completed)
        search: Search keyword in title or description

    Returns:
        JSON string with todos list
    """
    # Calls Backend API GET /api/todos
    # Returns formatted results
    ...
Creates a new todo item.
@mcp.tool()
async def create_todo(
    title: str,
    description: Optional[str] = None,
    status: str = "pending"
) -> str:
    """
    Create a new todo item.

    Args:
        title: Todo title (required)
        description: Optional description
        status: Status (pending, in-progress, completed)

    Returns:
        JSON string with created todo
    """
    # Calls Backend API POST /api/todos
    # Returns created todo details
    ...
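Under the hood, both tool bodies boil down to forwarding their arguments to the backend as query parameters or a JSON body. Here is a standard-library sketch of the read path (the real project uses httpx and an async client; the helper names here are illustrative, not the project's actual API):

```python
import json
import urllib.parse
import urllib.request

TODO_API_URL = "http://localhost:8000"  # matches the default in .env


def build_query(status=None, search=None) -> str:
    """Drop None values so only the filters the caller supplied reach the API."""
    params = {k: v for k, v in {"status": status, "search": search}.items() if v is not None}
    return urllib.parse.urlencode(params)


def fetch_todos(status=None, search=None) -> str:
    """Synchronous stand-in for the async get_todos tool body."""
    url = f"{TODO_API_URL}/api/todos"
    if query := build_query(status, search):
        url = f"{url}?{query}"
    # Network call: requires the Backend API to be running
    with urllib.request.urlopen(url) as resp:
        return json.dumps(json.loads(resp.read()), indent=2)


if __name__ == "__main__":
    print(build_query(status="pending"))
```

Dropping `None` values before encoding matters: sending `status=None` as a literal string would make the backend filter on the text "None" instead of skipping the filter.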
Environment variables can be set in .env file:
# Todo API Configuration
TODO_API_URL=http://localhost:8000
# Logging
LOG_LEVEL=INFO # Options: DEBUG, INFO, WARNING, ERROR
# MCP Server
MCP_PORT=8080
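The post does not show how these settings are consumed; one plausible reading path, using only the standard library with the same defaults as the template above (the real project ships python-dotenv, which would load the .env file into the environment first):

```python
import os

# Defaults mirror the .env template above; environment variables win if set
TODO_API_URL = os.environ.get("TODO_API_URL", "http://localhost:8000")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")
MCP_PORT = int(os.environ.get("MCP_PORT", "8080"))

print(TODO_API_URL, LOG_LEVEL, MCP_PORT)
```

Casting `MCP_PORT` to `int` at load time surfaces a malformed value immediately instead of at bind time.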
CLI options for MCP server:
python -m src.todo_mcp_server.cli \
--api-url http://localhost:8000 \
--log-level DEBUG \
--transport streamable-http \
--port 8080
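The flags above map naturally onto argparse. This is a minimal sketch of what cli.py might look like; the actual file is not shown in the post, so the parser structure and defaults are assumptions inferred from the flags and .env values:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Flag names taken from the CLI invocation shown above
    parser = argparse.ArgumentParser(prog="todo-mcp-server")
    parser.add_argument("--api-url", default="http://localhost:8000",
                        help="Base URL of the backend Todo API")
    parser.add_argument("--log-level", default="INFO",
                        choices=["DEBUG", "INFO", "WARNING", "ERROR"])
    parser.add_argument("--transport", default="streamable-http",
                        choices=["streamable-http", "stdio"])
    parser.add_argument("--port", type=int, default=8080)
    return parser


args = build_parser().parse_args(
    ["--api-url", "http://localhost:8000", "--log-level", "DEBUG",
     "--transport", "streamable-http", "--port", "8080"]
)
print(args.port, args.transport)
```

Constraining `--transport` with `choices` gives users an immediate, readable error for unsupported transports rather than a failure deep inside server startup.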
Verify that Python 3 is installed:
python3 --version
If you have Docker installed, start both services with a single command:
cd todo-mcp-server
docker compose up -d
This starts:
- Backend API on http://localhost:8000
- MCP Server on http://localhost:8080
Skip to the Configuring AI Client section below.
git clone https://github.com/trantdai/genai.git
cd genai/mcp/todo-mcp-server
# Create virtual environment
python3 -m venv venv
# Activate it
# On macOS/Linux:
source venv/bin/activate
# On Windows:
# venv\Scripts\activate
# Verify activation (you should see (venv) in your prompt)
which python # Should point to venv/bin/python
pip install -r requirements.txt
Expected output:
Successfully installed fastapi-0.104.0 uvicorn-0.24.0 pydantic-2.0.0
httpx-0.25.0 python-dotenv-1.0.0 mcp-1.0.0
Open Terminal 1:
cd todo-mcp-server
source venv/bin/activate # Activate venv
uvicorn src.todo_mcp_server.api.main:app --reload --port 8000
You should see:
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process
INFO: Application startup complete.
✅ Backend API is now running on http://localhost:8000
Open Terminal 2:
cd todo-mcp-server
source venv/bin/activate # Activate venv
python -m src.todo_mcp_server.cli --transport streamable-http --port 8080
You should see:
2024-01-04 10:00:00 - todo_mcp_server - INFO - Starting Todo MCP Server
2024-01-04 10:00:00 - todo_mcp_server - INFO - Tools registered successfully
INFO: Uvicorn running on http://127.0.0.1:8080 (Press CTRL+C to quit)
✅ MCP Server is now running on http://localhost:8080/mcp
Create .roo/mcp.json in your project root:
mkdir -p .roo
cat > .roo/mcp.json << 'EOF'
{
"mcpServers": {
"todo": {
"type": "streamable-http",
"url": "http://localhost:8080/mcp",
"disabled": false,
"alwaysAllow": []
}
}
}
EOF
A minimal version of the same configuration:
{
"mcpServers": {
"todo": {
"type": "streamable-http",
"url": "http://localhost:8080/mcp"
}
}
}
Edit the configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Add configuration:
{
"mcpServers": {
"todo": {
"command": "python",
"args": ["-m", "src.todo_mcp_server.cli", "--transport", "stdio"],
"cwd": "/path/to/genai/mcp/todo-mcp-server"
}
}
}
Restart Claude Desktop to load the server.
# Health check
curl http://localhost:8000/health
# Expected: {"status":"healthy"}
# List todos (empty initially)
curl http://localhost:8000/api/todos
# Expected: {"todos":[],"total":0,"limit":10,"offset":0}
# Create a todo
curl -X POST http://localhost:8000/api/todos \
-H "Content-Type: application/json" \
-d '{"title":"Learn MCP","description":"Study MCP protocol"}'
In Roo Code or Claude Desktop, try:
The AI assistant should successfully invoke the MCP tools and display results.
Once connected to Claude Desktop, you can interact with the todo server naturally:
User: “Create a todo to review the MCP documentation”
Claude: Uses the create_todo tool
{
"title": "Review MCP documentation",
"description": "Read through the Model Context Protocol documentation",
"status": "pending"
}
User: “Show me all pending todos”
Claude: Uses the get_todos tool with the status filter
User: “Find all todos related to documentation”
Claude: Uses the get_todos tool with the search keyword
todo-mcp-server/
├── src/
│ └── todo_mcp_server/
│ ├── __init__.py
│ ├── server.py # MCP server with FastMCP
│ ├── cli.py # CLI entry point
│ ├── api/
│ │ ├── main.py # FastAPI application
│ │ └── storage.py # In-memory storage
│ ├── tools/
│ │ ├── get_todos.py # Get todos tool
│ │ └── create_todo.py # Create todo tool
│ └── utils/
│ └── http_client.py # HTTP client for API calls
├── docs/
│ ├── README.md # Main documentation
│ ├── GETTING_STARTED.md # Quick start guide
│ ├── ARCHITECTURE.md # Architecture details
│ └── DEVELOPMENT.md # Development guide
├── tests/ # Test suite
├── docker-compose.yml # Docker configuration
├── Dockerfile # Container image
├── requirements.txt # Python dependencies
├── .env.example # Environment variables template
└── README.md
# Health check
curl http://localhost:8000/health
# List all todos
curl http://localhost:8000/api/todos
# List pending todos only
curl "http://localhost:8000/api/todos?status=pending"
# Search todos
curl "http://localhost:8000/api/todos?search=learn"
# Create a todo
curl -X POST http://localhost:8000/api/todos \
-H "Content-Type: application/json" \
-d '{
"title": "My Todo",
"description": "Todo description",
"status": "pending"
}'
In Roo Code or Claude Desktop:
Create Todos: “Create a todo to learn MCP”
List Todos: “Show me all my pending todos”
Search Todos: “Find todos that mention learning”
For development with auto-reload:
# Backend API with auto-reload
uvicorn src.todo_mcp_server.api.main:app --reload --port 8000
# MCP Server (restart manually after code changes)
python -m src.todo_mcp_server.cli --log-level DEBUG
Error: Address already in use
Solution:
# Find process using port 8000
lsof -i :8000
# Kill the process
kill -9 <PID>
# Or use a different port
uvicorn src.todo_mcp_server.api.main:app --reload --port 8001
Error: ModuleNotFoundError: No module named 'fastapi'
Solution:
# Ensure virtual environment is activated
source venv/bin/activate
# Reinstall dependencies
pip install -r requirements.txt
Error: Connection refused when MCP server tries to reach API
Solution:
# Verify the Backend API is reachable
curl http://localhost:8000/health
If the AI client cannot connect to the MCP server, verify the endpoint responds:
curl http://localhost:8080/mcp
The Todo MCP server seamlessly integrates with Claude Desktop, enabling natural language interactions for todo management:
Potential improvements for the Todo MCP server:
The Todo MCP server demonstrates how to build a practical MCP server that extends AI assistant capabilities. By following the Model Context Protocol specification and layering the MCP server over a simple REST backend with in-memory storage, the project provides a solid foundation for task management through AI assistants, and it illustrates good practices in MCP server development: clear tool definitions, proper error handling, and thorough documentation.
This implementation can serve as a template for building other MCP servers that connect AI assistants to various data sources and services, enabling more powerful and useful AI-powered workflows.