This example builds a customer support agent that handles ticket-system construction tasks. It uses memory to get better over time: the first ticket system takes a 14-step plan to build, but by the 20th similar request the agent follows a proven pattern in seconds.
A new request arrives: "Set up a ticket routing system with SLA tracking."
```python
memories = client.memory.recall(
    context="ticket routing system with SLA tracking",
    domain="helpdesk",
    limit=5,
)

if memories.pattern:
    print(f"Found pattern: {memories.pattern.name}")
    print(f"Success rate: {memories.pattern.success_rate:.0%}")
    print(f"Based on {memories.total_matches} similar episodes")
```
On the first run, no memories exist. On later runs, the agent might get:
```
Found pattern: ticket-routing-sla
Success rate: 94%
Based on 12 similar episodes
```
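The execution loop below iterates over a `plan` object that this walkthrough never constructs; presumably an intermediate planning call turns the recalled pattern into an ordered list of instructions. A minimal sketch of the shape that object would need, with hypothetical field and method names:

```python
from dataclasses import dataclass

@dataclass
class Instruction:
    action: str
    params: dict
    level: int  # depth in the plan hierarchy; 0 = top level

@dataclass
class Plan:
    instructions: list  # ordered Instruction objects

    @property
    def total_instructions(self):
        return len(self.instructions)

    def last_instruction_at_level(self, level):
        # Final instruction, in plan order, that sits at the given level
        return [i for i in self.instructions if i.level == level][-1]

plan = Plan(instructions=[
    Instruction("add_orbital", {"name": "Ticket"}, level=0),
    Instruction("add_field", {"orbital": "Ticket", "name": "sla_deadline"}, level=1),
    Instruction("add_field", {"orbital": "Ticket", "name": "priority"}, level=1),
])
```

These names are illustrative, not the service's actual API; the point is that `plan.instructions`, `last_instruction_at_level`, and `total_instructions` are all the loop relies on.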
```python
schema = {"name": "HelpdeskApp", "orbitals": []}

for instr in plan.instructions:
    prompt = f"Apply this to the schema: {instr.action}({instr.params})"
    schema = your_llm.execute(prompt, schema=schema)

    # Verify after each level completes
    if instr == plan.last_instruction_at_level(instr.level):
        check = client.verify(schema=schema)
        if not check.valid:
            repairs = client.rank_edits(schema=schema, errors=check.predicted_errors)
            schema = your_llm.apply_repairs(schema, repairs.suggestions[0])
```
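The verify-then-repair step can be illustrated without the service. A self-contained sketch, assuming a toy validity rule (every orbital must carry a name) in place of the real checker:

```python
def verify(schema):
    # Toy rule: every orbital must have a "name" key
    errors = [f"orbital {i} missing name"
              for i, orb in enumerate(schema["orbitals"])
              if "name" not in orb]
    return {"valid": not errors, "predicted_errors": errors}

def rank_edits(schema, errors):
    # One candidate repair per predicted error, best-ranked first
    return [{"index": int(err.split()[1]), "set": {"name": "unnamed"}}
            for err in errors]

def apply_repair(schema, repair):
    schema["orbitals"][repair["index"]].update(repair["set"])
    return schema

schema = {"name": "HelpdeskApp", "orbitals": [{"name": "Ticket"}, {}]}
check = verify(schema)
if not check["valid"]:
    best = rank_edits(schema, check["predicted_errors"])[0]
    schema = apply_repair(schema, best)
```

The real service presumably predicts errors and ranks repairs with learned models rather than string rules, but the control flow is the same: check, take the top-ranked suggestion, re-check.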
```python
final_check = client.verify(schema=schema)

client.memory.store(
    domain="helpdesk",
    context="ticket routing system with SLA tracking",
    schema=schema,
    actions_taken=[i.action for i in plan.instructions],
    outcome="success" if final_check.valid else "failure",
    metadata={"goal": "std-helpdesk", "steps": plan.total_instructions},
)
```
After 20 helpdesk tasks, the agent's recall consistently returns a pattern. Planning confirms the pattern is still valid for the specific request, and execution follows the known-good path. Tasks that took 14 LLM calls on day one take 3-4 calls by week two, because the agent skips exploratory steps and goes straight to what works.
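How recall aggregates stored episodes into a pattern with a match count and success rate can be sketched with a toy in-process store. This uses keyword overlap for matching; the real service presumably uses embeddings, and the episode counts here are illustrative:

```python
class EpisodicMemory:
    def __init__(self):
        self.episodes = []

    def store(self, context, outcome):
        self.episodes.append({"words": set(context.lower().split()),
                              "outcome": outcome})

    def recall(self, context, min_overlap=3):
        query = set(context.lower().split())
        matches = [e for e in self.episodes
                   if len(query & e["words"]) >= min_overlap]
        if not matches:
            return None  # first run: no memories yet
        wins = sum(e["outcome"] == "success" for e in matches)
        return {"total_matches": len(matches),
                "success_rate": wins / len(matches)}

mem = EpisodicMemory()
for i in range(12):
    mem.store("ticket routing system with SLA tracking",
              "success" if i < 11 else "failure")

hit = mem.recall("set up a ticket routing system with SLA tracking")
```

The empty-store case returns `None`, matching the first-run behavior above; once episodes accumulate, every similar request hits the aggregated pattern.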