AI Worker

What Is an AI Worker?

An AI Worker is a bounded execution unit for AI tasks with explicit input and output contracts, explicit constraints, and built-in observability.

Workers are not goal-seeking. They do not decide what to do next. They execute a defined capability when invoked, then return a result and structured metadata about how the work ran.

For the operational details, see AI worker architecture. For the distinction between bounded workers and autonomous agents, start with the AI agent comparison. For concrete specs, browse the worker examples.

Key ideas

  • Workers are invoked with a request; they return a response. No open-ended loops.
  • Contracts make integration safe: schemas, status codes, and artifacts.
  • Constraints keep execution bounded: timeouts, budgets, and permissions.
  • Observability is part of the interface: trace IDs, logs, and metrics.
  • Workers compose into pipelines via orchestration.
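The request/response contract above can be sketched as a pair of typed records. This is a minimal sketch, not a standard API: the names `WorkerRequest` and `WorkerResponse`, and the specific fields shown, are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class WorkerRequest:
    inputs: dict[str, Any]
    constraints: dict[str, Any]  # e.g. {"timeout_s": 30, "budget_usd": 0.05}
    trace_id: str                # correlates logs and metrics across the pipeline

@dataclass
class WorkerResponse:
    status: str                  # e.g. "ok", "invalid_input", "timeout"
    outputs: dict[str, Any]
    logs: list[str] = field(default_factory=list)
    artifacts: dict[str, Any] = field(default_factory=dict)

# A worker is invoked with a request and returns a response -- no open-ended loop.
req = WorkerRequest(inputs={"text": "hello"}, constraints={"timeout_s": 5}, trace_id="t-123")
resp = WorkerResponse(status="ok", outputs={"length": len(req.inputs["text"])})
print(resp.status, resp.outputs)
```

Because both sides of the contract are explicit, an orchestrator can compose workers by matching one worker's outputs against the next worker's input schema.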

Diagram

request
  inputs + constraints + trace_id
    |
    v
+-----------+       +--------------------+
| validate  | --->  | execute capability |  (bounded)
+-----------+       +--------------------+
    |                         |
    v                         v
status + logs + artifacts   outputs
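The validate-then-execute flow in the diagram can be sketched as follows. This is an illustrative sketch: the helper names (`validate`, `run_worker`) and the word-count capability are assumptions, and the time bound is enforced here with a thread pool future as one possible mechanism.

```python
import concurrent.futures

def validate(inputs: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if "text" not in inputs:
        errors.append("missing required field: text")
    return errors

def run_worker(inputs: dict, timeout_s: float) -> dict:
    # Step 1: validate against the input contract before doing any work.
    errors = validate(inputs)
    if errors:
        return {"status": "invalid_input", "logs": errors, "outputs": None}
    # Step 2: execute the capability under a hard time bound.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(lambda: {"word_count": len(inputs["text"].split())})
        try:
            outputs = future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            return {"status": "timeout", "logs": ["exceeded time budget"], "outputs": None}
    # Step 3: return status, logs, and outputs as structured data.
    return {"status": "ok", "logs": [], "outputs": outputs}

print(run_worker({"text": "bounded execution unit"}, timeout_s=2.0))
```

Note that both failure paths (invalid input, timeout) return the same structured shape as success, so callers never need to parse free-form errors.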

See also

New to the terminology? See the AI Worker Glossary for definitions of idempotency, retries, orchestration, contracts, and observability.

FAQ

Is an AI Worker just a prompt?

No. The prompt (or model call) is only one part. A worker also includes schemas, constraints, tool permissions, retries, and observability.

Can a worker call tools?

Yes, if tool usage is explicitly allowed and bounded. Tool permissions are part of the worker’s constraints.
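One way to make tool permissions part of the constraints is an explicit allowlist checked before dispatch. A minimal sketch, assuming illustrative names (`ALLOWED_TOOLS`, `call_tool`) and stub tool implementations:

```python
# Tools this worker is permitted to call, declared up front as a constraint.
ALLOWED_TOOLS = {"search", "calculator"}

def call_tool(name: str, args: dict, allowed: set[str]) -> dict:
    if name not in allowed:
        # Denied calls fail fast with a structured status; nothing executes.
        return {"status": "tool_denied", "tool": name}
    # Dispatch to the (stub) tool implementation.
    if name == "calculator":
        return {"status": "ok", "result": args["a"] + args["b"]}
    return {"status": "ok", "result": None}

print(call_tool("calculator", {"a": 2, "b": 3}, ALLOWED_TOOLS))
print(call_tool("shell", {}, ALLOWED_TOOLS))
```

The denial is reported in the same structured response format as any other outcome, so observability covers refused tool calls too.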

Can a worker be non-LLM?

Yes. A worker can be a classifier, a deterministic validator, a search step, or any bounded capability used in an AI system.
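A deterministic validator worker, for example, involves no model call at all yet exposes the same bounded request/response shape. A sketch, with an illustrative function name and a deliberately simple email pattern:

```python
import re

# Simplified pattern for illustration only; real email validation is more involved.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

def email_validator_worker(inputs: dict) -> dict:
    """Deterministic, non-LLM worker: validate an email and return structured output."""
    value = inputs.get("email", "")
    valid = bool(EMAIL_RE.match(value))
    return {"status": "ok", "outputs": {"valid": valid}}

print(email_validator_worker({"email": "a@example.com"}))
print(email_validator_worker({"email": "not-an-email"}))
```

Because it honors the same contract, an orchestrator can slot this worker into a pipeline next to LLM-backed workers without special-casing it.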