
Quickstart

Build a durable workflow that survives failures in 5 minutes

What you’ll build

A workflow that fetches data and summarizes it with an LLM. When it fails mid-execution, it resumes from where it left off — not from the beginning.

Prerequisites:

  • Python 3.10+
  • Docker
  • OpenAI API key (for LLM calls)

Install the CLI

curl -LsSf https://agnt5.com/cli.sh | bash

Or, with Homebrew:

brew install agnt5/tap/agnt5

Create your project

agnt5 create --template py-quickstart my-first-workflow
cd my-first-workflow

Set your OpenAI API key:

echo 'OPENAI_API_KEY="sk-..."' > .env

Write your workflow

Open src/main.py and replace its contents:

import httpx
from agnt5 import function, workflow, Context, lm

@function
async def fetch_article(ctx: Context, url: str) -> str:
    """Fetch article content from a URL."""
    ctx.logger.info(f"Fetching {url}...")
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.text

@function
async def summarize(ctx: Context, content: str) -> str:
    """Summarize content using an LLM."""
    ctx.logger.info("Summarizing with LLM...")
    response = await lm.generate(
        model="openai/gpt-4o-mini",
        prompt=f"Summarize this article in 2-3 sentences:\n\n{content[:4000]}",
    )
    return response.text

@workflow
async def research(ctx: Context, url: str) -> dict:
    """Fetch an article and summarize it."""
    # Step 1: Fetch (checkpointed)
    content = await ctx.task(fetch_article, url=url)

    # Step 2: Summarize (checkpointed)
    summary = await ctx.task(summarize, content=content)

    return {"url": url, "summary": summary}

What’s happening:

  • @function makes each step durable — results are checkpointed
  • @workflow orchestrates the steps with automatic recovery
  • ctx.task() invokes functions with durability guarantees
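To make the checkpointing idea concrete, here is a toy sketch of it in plain Python. This is not AGNT5's implementation — the `checkpoints` store and `run_step` helper are invented for illustration — but it shows why a re-run can skip steps that already completed: their results are persisted before the workflow moves on.

```python
# Toy illustration of checkpointed steps (NOT AGNT5 internals).
# Each step's result is stored under a key, so a re-run replays
# completed steps from the store instead of executing them again.

checkpoints: dict[str, object] = {}
calls: list[str] = []  # tracks actual executions

def run_step(name, fn, *args):
    if name in checkpoints:          # already completed: replay from store
        return checkpoints[name]
    result = fn(*args)               # first time: actually execute
    calls.append(name)
    checkpoints[name] = result       # persist before moving on
    return result

def fetch(url):
    return f"<article from {url}>"

def summarize(text):
    return f"summary of {text}"

# First run executes both steps.
content = run_step("fetch", fetch, "https://example.com/article")
summary = run_step("summarize", summarize, content)

# A "re-run" (e.g. after a crash) replays both from the store.
run_step("fetch", fetch, "https://example.com/article")
run_step("summarize", summarize, content)

print(calls)  # → ['fetch', 'summarize'] — each step executed exactly once
```

A real runtime persists the store durably (not in memory) and keys checkpoints by workflow run and step, but the replay logic follows this shape.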

Start the dev server

agnt5 dev up

Open the Dev Dashboard at http://localhost:34180 to see logs and traces.


Run your workflow

In a new terminal:

agnt5 run research --input '{"url": "https://example.com/article"}'

Watch the dashboard — you’ll see:

  1. fetch_article starts and completes
  2. summarize starts and completes
  3. Workflow returns the summary

Experience durability

Now let’s see what makes AGNT5 different. Modify summarize so it fails on its first attempt:

@function
async def summarize(ctx: Context, content: str) -> str:
    """Summarize content using an LLM."""
    ctx.logger.info("Summarizing with LLM...")

    # Simulate a failure on first attempt
    if ctx.attempt == 0:
        raise RuntimeError("LLM API timeout!")

    response = await lm.generate(
        model="openai/gpt-4o-mini",
        prompt=f"Summarize in 2-3 sentences:\n\n{content[:4000]}",
    )
    return response.text

Run the workflow again:

agnt5 run research --input '{"url": "https://example.com/article"}'

Watch the dashboard:

  1. fetch_article completes (checkpointed)
  2. summarize fails
  3. summarize retries automatically
  4. fetch_article is NOT re-executed — it replays from checkpoint

This is durability: failures resume from the last successful step, not from the beginning.
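The retry-plus-replay behavior can be sketched in plain Python. Again, this is a toy model, not AGNT5's internals — `run_step`, `checkpoints`, and the `attempt` parameter are invented names mirroring the `ctx.attempt` check above — but it shows why the failing step retries while the checkpointed step never re-executes.

```python
# Toy sketch of retry + checkpoint replay (NOT AGNT5 internals).
checkpoints = {}
executions = {"fetch": 0, "summarize": 0}  # counts real executions

def run_step(name, fn, *args, max_attempts=3):
    if name in checkpoints:
        return checkpoints[name]     # replay from checkpoint, no re-execution
    for attempt in range(max_attempts):
        try:
            result = fn(*args, attempt)
            checkpoints[name] = result
            return result
        except RuntimeError:
            continue                 # retry with attempt + 1
    raise RuntimeError(f"{name} exhausted retries")

def fetch(url, attempt):
    executions["fetch"] += 1
    return f"<article from {url}>"

def summarize(content, attempt):
    executions["summarize"] += 1
    if attempt == 0:                 # mirrors the ctx.attempt check above
        raise RuntimeError("LLM API timeout!")
    return f"summary of {content}"

content = run_step("fetch", fetch, "https://example.com/article")
summary = run_step("summarize", summarize, content)

print(executions)  # → {'fetch': 1, 'summarize': 2}
```

fetch executes once; summarize executes twice (one failure, one success). A crash between the two steps would have the same effect, since the re-run finds fetch already checkpointed.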


Next steps

You’ve built a durable workflow. Now explore: