Agents & Workflows
Typed DAGs, speculative and parallel execution, per-step budgets, capability-aware routing, and machine-readable receipts. Use it standalone, or underneath LangChain, CrewAI, AutoGen, or your own orchestration.
Your first workflow
Pick a template name or ship an inline DAG. Routing, fallbacks, and budget enforcement are handled per step.
```python
from kairosroute import KairosRoute

kr = KairosRoute(api_key="kr-your-key")

result = kr.workflows.execute(
    workflow="research",
    input={"goal": "Summarize the latest SoC announcements from April 2026"},
    budget={"max_cost_usd": 0.25, "on_exceed": "downgrade"},
)

print(result["outputs"]["summary"])
print(f"Cost: ${result['total_cost_usd']:.4f}, Latency: {result['total_latency_ms']}ms")
```

The call returns a WorkflowExecution with per-step model picks, costs, latencies, receipt IDs, and templated outputs, ready to render, store, or hand to the next agent.

Stream step events
Render progress mid-flight. SSE emits step_started, step_completed, step_failed, and a final event carrying the full execution.
```python
for event in kr.workflows.stream(
    workflow="research",
    input={"goal": "Latest AI chips"},
):
    if event["type"] == "step_completed":
        print(f"{event['step_id']}: {event['model_id']} ({event['latency_ms']}ms)")
    elif event["type"] == "final":
        print("Done:", event["result"]["outputs"])
```

Inline DAGs
Templates are just JSON. Ship your own:
```python
result = kr.workflows.execute(
    workflow={
        "name": "refactor-and-test",
        "steps": [
            {"id": "plan", "prompt": "Plan a refactor for: {{goal}}", "tier": "balanced"},
            {"id": "code", "prompt": "Implement: {{steps.plan.output}}", "tier": "high",
             "capabilities": [{"kind": "code_generation", "required": True}]},
            {"id": "test", "prompt": "Write tests for: {{steps.code.output}}",
             "depends_on": ["code"], "tier": "balanced", "strategy": "speculative"},
        ],
        "outputs": {"code": "{{steps.code.output}}", "tests": "{{steps.test.output}}"},
    },
    input={"goal": "Extract auth into a middleware"},
)
```

Step fields:

- `id`: unique step identifier
- `prompt`: supports `{{goal}}`, `{{steps.X.output}}`, and `{{input.var}}`
- `tier`: `fast` | `balanced` | `high`
- `strategy`: `single` | `speculative` | `parallel`
- `capabilities`: array of capability hints
- `depends_on`: step IDs that must finish first
- `condition`: JSONLogic expression for conditional execution
- `budget`: per-step cost/token cap
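The `{{…}}` placeholders above resolve against the workflow input and earlier step outputs. A minimal sketch of that lookup, purely illustrative (the `render` helper and `ctx` shape are assumptions, not the engine KairosRoute ships):

```python
import re

def render(template, ctx):
    """Resolve {{goal}}, {{input.var}}, and {{steps.X.output}} placeholders
    by walking dotted paths through a nested dict (illustrative sketch)."""
    def lookup(match):
        value = ctx
        for part in match.group(1).strip().split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", lookup, template)

# Context as it might look once the "plan" step has finished:
ctx = {
    "goal": "Extract auth into a middleware",
    "steps": {"plan": {"output": "1. Move checks into middleware"}},
}
plan_prompt = render("Plan a refactor for: {{goal}}", ctx)
code_prompt = render("Implement: {{steps.plan.output}}", ctx)
```

Because `{{steps.plan.output}}` appears in the `code` step's prompt, a dependency on `plan` exists even without an explicit `depends_on`.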
Strategies:

- `single`: one model, one call
- `speculative`: draft + verify race; a fast model drafts, a stronger model verifies, and the verifier's output is kept if they disagree
- `parallel`: race N models, keep the first to finish (Promise.any semantics)
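The two non-trivial strategies can be sketched with `asyncio`; the model callables and their answers below are toy stand-ins, not real KairosRoute calls:

```python
import asyncio

async def run_speculative(draft_model, verifier_model, prompt):
    # Fast model drafts; stronger model verifies the draft.
    # If they disagree, the verifier's output wins.
    draft = await draft_model(prompt)
    verified = await verifier_model(prompt, draft)
    return verified if verified != draft else draft

async def run_parallel(models, prompt):
    # Race N models, keep the first to finish (Promise.any-style).
    tasks = [asyncio.create_task(m(prompt)) for m in models]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    return next(iter(done)).result()

# Toy models standing in for real calls (names and outputs are made up):
async def fast(prompt, draft=None):
    await asyncio.sleep(0.01)
    return "draft answer"

async def strong(prompt, draft=None):
    await asyncio.sleep(0.02)
    return "verified answer"

spec_result = asyncio.run(run_speculative(fast, strong, "q"))
par_result = asyncio.run(run_parallel([fast, strong], "q"))
```

Here the speculative run keeps the verifier's answer (the toys disagree), while the parallel race returns the fast model's answer because it finishes first.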
Built-in templates
Ten canned DAGs cover the most common agent shapes. GET /v1/workflows returns their structure.
| Name | Shape | Outputs |
|---|---|---|
| research | gather → synthesize → cite | summary, citations |
| write | outline → draft → polish | draft |
| analyze | classify → extract → reason → report | report |
| code | plan → implement → test + explain (parallel) | code, tests, explanation |
| transform | parse → map → validate | output, schema |
| decide | options → score → justify | decision |
| refactor | analyze → propose → apply | diff |
| classify | single-step classification with capability hints | label |
| qa | retrieve-and-answer with citations | answer |
| extract | structured extraction into typed fields | fields |
Tasks: multi-step state
A task is a logical agent run identified by task_id. Calls sharing a task id roll up into a single snapshot with cumulative cost, latency, call count, and an optional budget envelope. Workflows create tasks automatically; you can also create them implicitly by passing task_id to a chat completion.
```python
# Retrieve the current state of a task
snapshot = kr.tasks.get("task_abc123", include_receipts=True)
print(snapshot["task"]["status"])
print(snapshot["task"]["total_cost_usd"])
for r in snapshot["receipts"]:
    print(r["decision"]["model_id"], "->", r["execution"]["status"])

# Cancel a running task
kr.tasks.cancel("task_abc123")
```

Task states: pending → planning → running → completed / failed / cancelled / budget_exceeded. Paused tasks resume on the next matching call.
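The snapshot's cumulative fields are, conceptually, a fold over the task's receipts. A hedged sketch of that rollup (the real aggregation happens server-side; the `roll_up` helper and receipt values here are illustrative):

```python
def roll_up(task_id, receipts, budget=None):
    """Fold per-call receipts into a task snapshot (illustrative sketch)."""
    snapshot = {
        "task_id": task_id,
        "call_count": len(receipts),
        "total_cost_usd": sum(r["execution"]["cost_usd"] for r in receipts),
        "total_latency_ms": sum(r["execution"]["latency_ms"] for r in receipts),
        "status": "running",
    }
    # A budget envelope on the task flips the state once spend crosses it.
    if budget and snapshot["total_cost_usd"] > budget["max_cost_usd"]:
        snapshot["status"] = "budget_exceeded"
    return snapshot

receipts = [
    {"execution": {"cost_usd": 0.03, "latency_ms": 640, "status": "ok"}},
    {"execution": {"cost_usd": 0.11, "latency_ms": 910, "status": "ok"}},
]
snap = roll_up("task_abc123", receipts, budget={"max_cost_usd": 0.25})
```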
Budget enforcement
Attach a BudgetSpec to any workflow or task. The `on_exceed` field selects one of three enforcement policies; the quickstart example above uses `downgrade`.
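As a sketch of what an `on_exceed` check might look like: only `downgrade` appears in the examples above, so the `halt` branch and the tier ordering below are hypothetical, for illustration only:

```python
def enforce_budget(spent_usd, budget, current_tier):
    """Apply a budget policy once spend crosses the cap (illustrative).

    `downgrade` comes from the example budget spec above; `halt` is a
    hypothetical policy name used here purely for illustration.
    """
    tiers = ["high", "balanced", "fast"]  # cheaper as we move right
    if spent_usd <= budget["max_cost_usd"]:
        return {"action": "continue", "tier": current_tier}
    policy = budget.get("on_exceed", "halt")
    if policy == "downgrade":
        # Step down one tier, bottoming out at the cheapest.
        idx = min(tiers.index(current_tier) + 1, len(tiers) - 1)
        return {"action": "continue", "tier": tiers[idx]}
    return {"action": "stop", "tier": current_tier}

over = enforce_budget(0.30, {"max_cost_usd": 0.25, "on_exceed": "downgrade"}, "high")
under = enforce_budget(0.10, {"max_cost_usd": 0.25, "on_exceed": "downgrade"}, "high")
```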
Capability-aware routing
Declare what a step needs. KR picks a model that satisfies those capabilities at the lowest cost within the quality tier.
Supported capability hints: `tool_use`, `structured_output`, `json_mode`, `function_calling`, `vision`, `audio_in`, `audio_out`, `embeddings`, `reasoning`, `code_generation`, `long_context`, `fast_inference`, `low_cost`, `multilingual`, `streaming`.

Override the default capability → routing-requirement mapping per tenant via POST /v1/capabilities.
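A minimal sketch of the selection rule described above: filter the candidate pool to models advertising every required capability, then take the cheapest match in the requested tier. Model IDs, prices, and the `pick_model` helper are made up for illustration:

```python
def pick_model(candidates, required, tier):
    """Cheapest model in `tier` that advertises all required capabilities."""
    pool = [
        m for m in candidates
        if m["tier"] == tier and set(required) <= set(m["capabilities"])
    ]
    return min(pool, key=lambda m: m["cost_per_1k_tokens"], default=None)

candidates = [
    {"id": "m-alpha", "tier": "high", "capabilities": ["code_generation", "tool_use"], "cost_per_1k_tokens": 0.010},
    {"id": "m-beta", "tier": "high", "capabilities": ["code_generation"], "cost_per_1k_tokens": 0.006},
    {"id": "m-gamma", "tier": "fast", "capabilities": ["code_generation"], "cost_per_1k_tokens": 0.001},
]
choice = pick_model(candidates, ["code_generation"], "high")
no_match = pick_model(candidates, ["vision"], "high")  # no qualifying model
```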
Routing receipts
Every decision KR makes is logged as a machine-readable RoutingReceipt: the candidate pool considered, why each was filtered or chosen, the selected model and fallback chain, the actual cost/latency/status, and classifier diagnostics. Pull them via tasks.get(..., include_receipts=true) or via the API reference. Receipts feed the built-in benchmarks page and the nightly model-scoring refit loop.
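Conceptually, a receipt records every candidate alongside the reason it was kept or filtered. A hedged sketch of assembling one (field names echo the snapshot example above; `build_receipt`, the reason strings, and the exact schema are assumptions, the real schema lives in the API reference):

```python
def build_receipt(candidates, required, chosen_id):
    """Record, per candidate, why it was selected or filtered (illustrative)."""
    pool = []
    for m in candidates:
        missing = [c for c in required if c not in m["capabilities"]]
        if m["id"] == chosen_id:
            reason = "selected: lowest cost among qualifying models"
        elif missing:
            reason = f"filtered: missing {', '.join(missing)}"
        else:
            reason = "eligible, not selected"
        pool.append({"model_id": m["id"], "reason": reason})
    return {"decision": {"model_id": chosen_id}, "candidate_pool": pool}

receipt = build_receipt(
    [
        {"id": "m-alpha", "capabilities": ["vision", "tool_use"]},
        {"id": "m-beta", "capabilities": ["tool_use"]},
    ],
    required=["vision"],
    chosen_id="m-alpha",
)
```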
Ready to build?
Full endpoint and response schemas in the API reference. Grab a key and ship your first DAG.
Get Your API Key →