# Weather Agent
Weather agent with OpenWeatherMap integration
The weather-agent template is the smallest useful agent: one LLM, one tool, one loop. The agent reads a natural-language question, decides when to call the weather API, and returns a grounded answer. It’s the right next step after the quickstart — enough moving parts to see tool use and durable reasoning, not so many that the shape is hidden.
## What you’ll build
- A tool-calling agent that answers questions like “what’s the weather in Lisbon tomorrow?”
- A single `get_weather` tool backed by the OpenWeatherMap API
- A durable agent loop where each model call and each tool call is checkpointed
- Both Python and TypeScript variants of the same agent
## Requirements
- Python 3.10+ or Node.js 20+
- `OPENAI_API_KEY` (or `ANTHROPIC_API_KEY` — see Customize)
- `OPENWEATHER_API_KEY` from openweathermap.org
- The AGNT5 CLI
## Install

```bash
curl -LsSf https://agnt5.com/cli.sh | bash
```

## Setup
### Scaffold the project

Python:

```bash
agnt5 create weather-agent my-weather-agent --language python
cd my-weather-agent
```

TypeScript:

```bash
agnt5 create weather-agent my-weather-agent --language typescript
cd my-weather-agent
```

### Set environment variables
```bash
export OPENAI_API_KEY=sk-...
export OPENWEATHER_API_KEY=...
```

### Install dependencies
Python:

```bash
uv sync  # or: pip install -e .
```

TypeScript:

```bash
pnpm install
```

### Run the agent
```bash
agnt5 dev up
agnt5 invoke weather_agent --input '{"question": "Is it raining in Tokyo right now?"}'
```

## How it works
The agent is a workflow that drives a loop: send the conversation to the LLM, inspect the response, and either return the final answer or execute a tool call and append the result back to the conversation. Every LLM request runs inside `ctx.step()`, and every `get_weather` call is its own step. That means each call is journaled, retried on transient failure, and replayed on restart instead of re-executed.
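A minimal sketch of that loop's shape. Only `ctx.step()` is taken from the description above; the assumption that it accepts a callable, the model name, the placeholder tool schema, and `SYSTEM_PROMPT` are all illustrative, and the template's `agent.py` may differ. `get_weather` is the tool sketched in the next section.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY

SYSTEM_PROMPT = "Answer weather questions; call get_weather for live data."  # stand-in for prompts.py

# Placeholder schema; the template keeps its real one next to the tool.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

async def weather_agent(ctx, question: str) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]
    while True:
        # Journaled model call: retried on transient failure,
        # replayed (not re-executed) after a restart.
        msg = (await ctx.step(lambda: client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOLS,
        ))).choices[0].message
        messages.append(msg)
        if not msg.tool_calls:
            return msg.content  # no tool use means a final answer
        for call in msg.tool_calls:
            # Each tool call is its own checkpoint.
            args = json.loads(call.function.arguments)
            result = await ctx.step(lambda: get_weather(**args))
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": json.dumps(result)})
```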
The `get_weather` tool is a thin durable function over the OpenWeatherMap REST endpoint. The model decides when to call it by returning a tool-use message; the workflow dispatches it, captures the response, and feeds the JSON back into the next model turn. Because tool calls are checkpoints, a crash between the API response and the next LLM turn resumes cleanly without double-charging OpenWeatherMap.
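A plausible shape for that tool, against OpenWeatherMap's public current-weather endpoint. The query parameters and response fields below match the real API; the function body is a sketch, not the template's exact file.

```python
import os
import requests

def get_weather(city: str) -> dict:
    """Current conditions for a city. In the template this runs
    inside its own ctx.step() checkpoint."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={
            "q": city,
            "appid": os.environ["OPENWEATHER_API_KEY"],
            "units": "metric",
        },
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Return only what the model needs to ground its answer.
    return {
        "city": data["name"],
        "conditions": data["weather"][0]["description"],
        "temp_c": data["main"]["temp"],
        "humidity_pct": data["main"]["humidity"],
    }
```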
The TypeScript variant mirrors the Python one. Pick whichever your service already uses — there’s no behavioral difference.
## Key files
- `worker.py` (or `worker.ts`) — Registers the agent workflow and tool with AGNT5.
- `tools/weather.py` — The `get_weather` tool wrapping the OpenWeatherMap API.
- `agent.py` — The workflow loop: model call, tool dispatch, termination check.
- `prompts.py` — System prompt that tells the model how to use the tool.
## Customize
**Swap the model.** The OpenAI client is isolated in one step. Replace it with Anthropic, Groq, or any provider — the loop shape doesn’t change. Set `ANTHROPIC_API_KEY` and adjust the client in `agent.py`.
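As a sketch, the isolated model-call step might become something like this. Two real differences in Anthropic's API are worth noting: the system prompt is a top-level parameter, and tool definitions use `input_schema` rather than `parameters`, so the tool schema needs the same adjustment.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

def call_llm(system_prompt: str, messages: list, tools: list):
    # Same loop shape; only this step changes.
    return client.messages.create(
        model="claude-3-5-sonnet-latest",  # any current Claude model ID
        max_tokens=1024,
        system=system_prompt,
        messages=messages,
        tools=tools,
    )
```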
**Add a second tool.** Register a `get_forecast` function alongside `get_weather`, add it to the tool schema, and the same loop handles it. Each new tool is an independent checkpoint.
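For instance, a `get_forecast` sketch against OpenWeatherMap's 5-day/3-hour forecast endpoint (the endpoint and response fields are real; the function and its `hours` parameter are illustrative):

```python
import os
import requests

def get_forecast(city: str, hours: int = 24) -> list[dict]:
    """Forecast for roughly the next `hours`, in 3-hour slots."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/forecast",
        params={
            "q": city,
            "appid": os.environ["OPENWEATHER_API_KEY"],
            "units": "metric",
        },
        timeout=10,
    )
    resp.raise_for_status()
    slots = resp.json()["list"][: max(1, hours // 3)]  # one slot per 3 hours
    return [{
        "time": s["dt_txt"],
        "temp_c": s["main"]["temp"],
        "conditions": s["weather"][0]["description"],
    } for s in slots]
```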
**Change the system prompt.** `prompts.py` is where the agent’s behavior lives. Tighten it for terser answers or broaden it for multi-city comparisons.
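For example, a tightened prompt might look like this (illustrative text, not the template's shipped prompt):

```python
SYSTEM_PROMPT = """You answer weather questions.
Always call get_weather before stating current conditions; never guess numbers.
Reply in at most two sentences."""
```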
## Next steps
- Read /docs/build/agents for the agent execution model
- Try `coding_agent` for a multi-tool agent with sandboxed execution
- Look at `deep_research` to see the workflow pattern for planned fan-out