High-level harnesses

The harness engineering chapter covered shaping a single agent’s actions through AGENTS.md, skills, hooks, and subagents. This page is one level of abstraction up — it covers tools and patterns that treat agents as a manageable workforce.

So far in this guide, you have been an engineer — you have worked interactively with a single agent, steering it turn by turn in real time. Now, you will become a manager, delegating work to a fleet of agents running in parallel. Instead of supervising each agent individually, you will manage the output queue — a review inbox, an issue tracker, a PR pipeline. Your coding assistant no longer serves as a conductor, but as an orchestrator.

The key difference is running several agents simultaneously, each on an isolated task. You hand different issues to separate agents at once, come back and review, and merge the ones you like. That is qualitatively different from the sequential, one-task-at-a-time conductor workflow from the previous chapters.

Subagents are also parallel, but they are different: a subagent is spawned by the agent to partition a single task’s context. The agent decides when to spawn one, waits for the result, and folds it back into its own session. You as the human still trigger one top-level session and review one result.

What is described here is different: you spawn multiple fully independent agent sessions, each assigned to a separate task. No session knows about the others. You do not need to wait for any single agent — you come back later and review the queue of results in bulk.

In practice, each agent needs its own isolated workspace — typically a separate Git worktree — so their changes do not interfere. A dashboard or queue then surfaces results as agents finish, letting you review and merge at your own pace.
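The worktree-per-agent setup can be sketched in a few lines. This is a minimal, hypothetical orchestrator: the branch naming scheme (`agent/<task_id>`), the worktree directory layout, and the `run_agent` command are all illustrative assumptions, not any specific product's conventions.

```python
import subprocess
from pathlib import Path

def worktree_commands(repo: Path, task_id: str) -> list[list[str]]:
    """Git commands giving one agent an isolated worktree and branch.

    Hypothetical scheme: branch agent/<task_id>, worktree placed under a
    sibling worktrees/ directory so checkouts never collide.
    """
    worktree = repo.parent / "worktrees" / task_id
    return [
        ["git", "-C", str(repo), "worktree", "add",
         "-b", f"agent/{task_id}", str(worktree)],
    ]

def spawn_agent(repo: Path, task_id: str,
                agent_cmd: list[str]) -> subprocess.Popen:
    """Create the worktree, then start the agent there without waiting."""
    for cmd in worktree_commands(repo, task_id):
        subprocess.run(cmd, check=True)
    worktree = repo.parent / "worktrees" / task_id
    # Each agent process is fully independent; no session shares state.
    return subprocess.Popen(agent_cmd, cwd=worktree)
```

Because `spawn_agent` returns immediately, you can launch one process per issue and come back later to review whatever has finished.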

For example, Conductor (by Melty Labs) is a tool built around this model, running multiple AI coding agents (Claude Code and Codex) in parallel worktrees with a shared review dashboard.

Agents do not always need to wait for you to trigger them — you can set them up in advance to run on a schedule. The pattern is similar to a cron job or a CI pipeline: describe a recurring task, define when it should run, and have an agent execute it in the background. Results land in a review inbox or are auto-archived if nothing needs attention.

This is well-suited for tasks like:

  • Daily issue triage
  • Surfacing and summarizing CI failures
  • Generating release briefs
  • Checking for regressions between versions

With scheduled agents, the process becomes closer to a CI pipeline than a chat window — an agent is no longer a tool you reach for, but a background process.
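The cron-like loop behind scheduled agents can be sketched with the standard library alone. The `ScheduledTask` shape and the fire-and-forget `subprocess.Popen` call are illustrative assumptions; a real setup would use your platform's scheduler and agent CLI.

```python
import subprocess
import time
from dataclasses import dataclass, field

@dataclass
class ScheduledTask:
    """A recurring agent task, analogous to a cron entry (hypothetical shape)."""
    name: str
    every_seconds: int        # how often the task should run
    command: list[str]        # agent invocation, e.g. a CLI call
    last_run: float = field(default=0.0)

def due_tasks(tasks: list[ScheduledTask], now: float) -> list[ScheduledTask]:
    """Return tasks whose interval has elapsed since their last run."""
    return [t for t in tasks if now - t.last_run >= t.every_seconds]

def run_forever(tasks: list[ScheduledTask], poll_seconds: int = 60) -> None:
    """Fire due agent tasks in the background; results land in the inbox."""
    while True:
        now = time.time()
        for task in due_tasks(tasks, now):
            subprocess.Popen(task.command)   # fire and forget
            task.last_run = now
        time.sleep(poll_seconds)
```

The due-check is kept as a pure function so the scheduling logic can be tested without spawning anything.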


A natural extension of scheduled agents is wiring them directly to your issue tracker. Instead of manually assigning tasks to agents, the system monitors a board and automatically spawns an agent for each new issue in scope. Engineers decide what issues belong in scope; the orchestrator handles assignment and execution.

Agent behavior can be defined in a workflow file versioned alongside the code — the same way you version a CI pipeline. When an agent finishes, it gathers evidence (CI results, PR review feedback, complexity analysis) for human review.
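As a sketch of what such a versioned workflow file might contain (the file path, field names, and values below are illustrative assumptions, not any specific product's schema):

```yaml
# .agents/triage.yml — hypothetical workflow file, versioned with the code.
name: issue-triage
trigger:
  board: engineering        # issues in scope on the tracker
  labels: [agent-ok]        # engineers mark which issues agents may take
agent:
  workspace: isolated       # one worktree per issue
evidence:                   # gathered for human review on completion
  - ci-results
  - pr-review-feedback
  - complexity-report
```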

For example, Symphony (by OpenAI) is an open-source orchestration service that implements this pattern, monitoring a Linear board and running a Codex agent per issue in an isolated workspace.

Running multiple agents in parallel may create coordination problems — agents must exchange information without overloading any one context window. Two broad patterns have emerged.

The simpler one is hub-and-spoke orchestration, where a lead agent spawns workers, collects their outputs, and consolidates them. Workers never communicate directly. The benefit is simplicity, as the full picture is present in one place. The cost is that every intermediate result, log line, and failed attempt flows back through the orchestrator’s context, degrading its reasoning quality over time.
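Hub-and-spoke orchestration reduces to a fan-out/fan-in. In this minimal sketch, `run_agent` is a stand-in for launching a worker session; the point is structural: every worker's output is concatenated back into the lead's context.

```python
from concurrent.futures import ThreadPoolExecutor

def hub_and_spoke(lead_prompt: str, subtasks: list[str], run_agent) -> str:
    """Lead agent fans subtasks out to workers and consolidates all results.

    `run_agent(prompt) -> str` is a hypothetical stand-in for an agent call.
    """
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_agent, subtasks))
    # The cost of this pattern: everything, including noise from failed
    # attempts, flows back into the lead's single context window.
    consolidated = "\n".join(results)
    return run_agent(f"{lead_prompt}\n\nWorker findings:\n{consolidated}")
```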

The more capable pattern is collaborative teaming, where agents share a task list, claim work independently, and can send messages directly to one another. A worker can flag a dependency, request a peer review, or broadcast a finding without routing it through the lead. The lead’s context stays clean; coordination happens at the edges.

In practice, most pipelines fall somewhere on a spectrum between these extremes, often organized into three levels:

  1. Isolated workers — each agent runs independently and returns its output to the caller.
  2. Orchestrated workflows — outputs become inputs for the next stage via shared files or aggregated results.
  3. Collaborative teams — agents share a task graph, can send direct or broadcast messages, and notify the lead when work completes.

The right level depends on how tightly coupled the tasks are. Independent parallel tasks — security scans, test runs, lint checks — fit level 1 or 2 well. Tasks that need to challenge or build on each other’s intermediate findings call for level 3.

For reference, Claude Code Agent Teams (by Anthropic) implements level 3 with a shared task list, file-locked claiming, mailboxes for direct and broadcast messages, and idle notifications back to the lead.

Beyond specific products, there is an emerging pattern known as the Code Factory. The idea is a repository setup where agents autonomously write code, open pull requests, and a separate review agent validates those PRs with machine-verifiable evidence. If validation passes, the PR merges without human intervention.

The continuous loop looks like this:

  1. Agent writes code and opens a PR.
  2. Risk-aware CI gates check the change.
  3. A review agent inspects the PR and collects evidence — screenshots, test results, static analysis.
  4. If all checks pass, the PR lands automatically.
  5. If anything fails, the agent retries or flags the issue for human review.
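The merge decision at the heart of the loop can be sketched as a pure function over the collected evidence. The `Evidence` fields and risk labels are illustrative assumptions about what a review agent might attach, not a fixed schema.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """Machine-verifiable evidence a review agent attaches to a PR (sketch)."""
    tests_passed: bool
    static_analysis_clean: bool
    risk: str                 # e.g. "low" | "medium" | "high" from CI gates

def decide(evidence: Evidence, retries_left: int) -> str:
    """Next action in the loop: merge, retry, or escalate to a human."""
    if (evidence.tests_passed and evidence.static_analysis_clean
            and evidence.risk == "low"):
        return "merge"        # step 4: the PR lands without a human
    if retries_left > 0:
        return "retry"        # step 5: the agent tries again
    return "escalate"         # step 5: flag for human review
```

Keeping the gate this explicit is what makes the loop auditable: every auto-merged PR can point at the evidence that satisfied it.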

The code factory pattern is the technical foundation of a broader idea: that a single person with a well-configured agent fleet can operate at the scale that would previously have required a full engineering team.

This requires connecting agents to communication platforms, scheduling systems, and external services — turning a single machine into an always-on runtime that responds to messages, executes tasks, and ships work continuously. As an example of tooling in this space, OpenClaw (by Peter Steinberger) packages infrastructure for exactly this kind of setup.

In From IDEs to AI Agents with Steve Yegge (an interview by Gergely Orosz), Yegge argues that the engineering profession is reorganizing around exactly this spectrum. His framing: most engineers are at the low end of AI adoption today, and those who stay there risk being outcompeted by engineers who learn to orchestrate agent fleets — to act as owners of work queues rather than writers of individual functions.