Agents
LLM-driven loops — instructions plus model plus tools, hosted inside step boundaries so their non-determinism is contained.
An agent is an `Agent`-class instance: an LLM-driven loop that takes instructions, picks actions, and produces output. Agents are non-deterministic by design and run inside step boundaries so the runtime can journal their result.
```python
from agnt5 import Agent

researcher = Agent(
    name="researcher",
    model="openai/gpt-4o-mini",
    instructions=(
        "Research the topic the user provides. Use the available tools to fetch "
        "facts. Summarize your findings in three sentences."
    ),
    tools=[search_database, fetch_article],
    max_iterations=5,
    temperature=0.3,
)

result = await researcher.run_sync("What is durable execution?")
print(result.output)
```

The `researcher` instance is configured once and called many times. Each call starts a loop: the model proposes an action, the runtime executes it (a tool call, a handoff, or a final answer), and the loop continues until the model produces a final answer or the iteration limit is hit.
The mental model
An agent is configuration plus a loop. Configuration is the constructor: `name`, `model`, `instructions` (the system prompt), `tools` (capabilities the model can invoke), `handoffs` (other agents the model can transfer to), `max_iterations` (the safety limit), and `temperature`. The loop is what `run_sync` (or its async siblings) drives: each iteration, the model sees the conversation state, proposes an action, and the runtime executes it.
There are three kinds of action a model can propose. A tool call invokes a `@tool`-decorated callable from the agent's `tools=[...]` list; the tool runs, its output goes back to the model, and the loop continues. A handoff transfers control to another agent listed in `handoffs=[...]`; the receiving agent takes over and produces the final answer. A final answer ends the loop; the runtime returns an `AgentResult` whose `.output` is the answer.
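The dispatch over those three action kinds can be sketched as a plain-Python mock. This is an illustrative stand-in, not the agnt5 internals: the "model" here is a scripted list of actions, and all names (`run_agent_loop`, `model_steps`) are invented for the sketch.

```python
# Illustrative mock of the agent loop, not the agnt5 runtime.
# Each iteration, the "model" proposes one action and the host dispatches it.

def run_agent_loop(model_steps, tools, max_iterations=5):
    """model_steps is a scripted stand-in for the LLM: a list of proposed actions."""
    transcript = []  # what the model would "see" on the next iteration
    for action in model_steps[:max_iterations]:
        if action["kind"] == "tool_call":
            # Invoke the named tool; its output goes back to the model.
            output = tools[action["name"]](**action["args"])
            transcript.append(("tool", action["name"], output))
        elif action["kind"] == "handoff":
            # The receiving agent takes over and produces the final answer.
            return action["target"](action["input"])
        elif action["kind"] == "final":
            # A final answer ends the loop.
            return action["output"]
    raise RuntimeError("iteration limit hit without a final answer")

tools = {"search": lambda query: f"3 results for {query!r}"}
steps = [
    {"kind": "tool_call", "name": "search", "args": {"query": "durable execution"}},
    {"kind": "final", "output": "Durable execution journals each step."},
]
print(run_agent_loop(steps, tools))
```

The real loop feeds tool output back into the model's context before the next proposal; the mock only records it, but the control flow is the same shape.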
Non-determinism is the defining property. The same input may produce different outputs across runs, different tool calls within one run, different handoff decisions across versions of the same model. AGNT5 reconciles this with deterministic workflows by hosting the agent's call inside a step boundary. The agent runs once, the step journals the `AgentResult`, and the workflow body sees a deterministic value on replay.
Why it works this way
LLM agency requires a loop, and a loop requires a host that contains its non-determinism. AGNT5 puts that host at the step boundary: when a workflow calls a `@function` that runs an agent, the function executes inside `ctx.step`, the agent's loop runs inside the function, and the journal records the function's return value. Replay reads the recorded value; the agent does not run again.
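That run-once-then-replay contract can be sketched in plain Python. This is a toy journal, not the AGNT5 runtime: `random.random` stands in for the agent's non-determinism, and `step` is an invented stand-in for what `ctx.step` does conceptually.

```python
import random

journal = {}  # step name -> recorded result (the runtime persists this)

def step(name, fn):
    """Run fn once and journal its result; on replay, return the record."""
    if name not in journal:
        journal[name] = fn()  # first execution: the agent actually runs
    return journal[name]      # replay: read the recorded value, skip fn

def run_agent():
    # Stand-in for a non-deterministic agent call.
    return f"answer-{random.random()}"

first = step("research", run_agent)
replay = step("research", run_agent)  # the agent does not run again
assert first == replay                # the workflow body sees a stable value
```

Calling `run_agent()` directly twice would almost certainly produce two different strings; routing it through the journal is what makes the workflow body deterministic.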
The constructor pattern (configure once, call many times) is also intentional. Agent configuration includes the system prompt, tools, and model — all of which influence behavior in subtle ways. Centralizing them in one Agent instance means there is one place to audit the agent’s capabilities, one place to tune its temperature, one place to swap its model.
Edge cases and gotchas
- Never call an agent from a workflow body without `ctx.step`. Calling `agent.run_sync(...)` directly inside a `@workflow`-decorated function is a determinism violation: the agent's output will differ across replays. Wrap it in a `@function` and reach it through `ctx.step`.
- `max_iterations` is the safety net for runaway loops. Without it, a model that keeps proposing tool calls without converging will loop indefinitely. Set it explicitly; do not rely on the default.
- Handoffs run inside the original step. When agent A hands off to agent B inside one step, the journal records one result: agent B's output. The handoff is invisible to the workflow above.
- Agents-as-tools follow the same rule. Pass another `Agent` to `tools=[...]` and the parent agent can invoke it as a tool. The whole composition runs inside the step that started it.
- "agent" is lowercase in prose. The Python class is `Agent`; in body text the noun is agent, never "AI agent" or "Agent".
- Streaming changes the API, not the durability model. Streaming variants (`run_stream`, agent streaming events) deliver tokens as they are generated, but the loop and step boundary work the same way; the journal still records the final result.
- `run_sync` is async. The `_sync` suffix refers to the agent's loop completing before the call returns, not to blocking the event loop. Always `await` it.
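The runaway-loop gotcha is easy to reproduce without agnt5: a scripted model that proposes tool calls forever never reaches a final answer, and only the iteration cap stops the host loop. A minimal plain-Python sketch; `looping_model` and `run_with_cap` are invented names for illustration.

```python
def looping_model():
    # A model that proposes a tool call forever and never converges.
    while True:
        yield {"kind": "tool_call", "name": "search", "args": {}}

def run_with_cap(actions, max_iterations):
    for _, action in zip(range(max_iterations), actions):
        if action["kind"] == "final":
            return action["output"]
        # tool call: execute the tool and continue (elided in this sketch)
    raise RuntimeError(f"no final answer after {max_iterations} iterations")

try:
    run_with_cap(looping_model(), max_iterations=5)
except RuntimeError as err:
    print(err)  # the safety net fires instead of looping forever
```

Without the cap, the `for` loop over `looping_model()` would never terminate; with it, the host fails loudly after a bounded number of iterations, which is the behavior `max_iterations` buys you.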
Related concepts
- Tools — the capabilities agents invoke during their loop.
- Functions — the host an agent runs inside, so workflows can checkpoint it.
- Determinism (why workflows have rules) — the reason agents must run inside step boundaries.
- Picking the right primitive — when to reach for an agent versus a workflow.