Going 10x
Agentic engineering is like multicore CPUs. Agents aren’t always faster than humans (sometimes they are, for sure). But a single human can direct several agents in parallel, and that’s where the productivity speedup comes from. This page collects tips for doing that effectively.
Conflict management
- The most popular option is to have each parallel agent run in a Git worktree.
- A Git worktree is an additional working directory attached to the same repository, so you can have multiple branches checked out side by side without cloning the repo again.
- For this to be efficient, your app needs to be easily bootable with a blank or seeded state from a fresh Git checkout.
- Raw Git worktrees can feel annoying when you want to quickly jump into an agent’s changes, switch editor context, or test uncommitted work on a simulator. There is also a Git limitation: each worktree of a single checkout must have a different HEAD (branch/commit/ref) checked out.
- Some coding agents offer built-in worktree management, such as Conductor Spotlight Testing and Codex Handoff, which reduce a lot of the plumbing involved in moving between an isolated agent workspace and the place where you actually run or test the app.
- Models easily handle prompts such as “do this in a /tmp worktree.”
- You can also try keeping several separate full Git checkouts.
- Or, you can go YOLO and actually have multiple agents operate on a single codebase.
- Tell your agents to commit frequently, atomically, and to stage only changes from the current thread. You can put this as a rule in your AGENTS.md file.
- Avoid invoking tasks that would write to the same parts of the repository.
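The worktree workflow above can be sketched in a few commands. This is a minimal, self-contained demo: it creates a throwaway repository so the commands run anywhere; the paths and branch names are illustrative, not a convention from this page.

```shell
set -eu

# Throwaway demo repo standing in for your real checkout.
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

# One worktree per agent, each on its own branch. All worktrees share a
# single object store, so this is much cheaper than a fresh clone, but
# note the limitation: no two worktrees may check out the same branch.
git -C "$repo" worktree add "$repo-agent-1" -b agent/task-1
git -C "$repo" worktree add "$repo-agent-2" -b agent/task-2

# Shows the main checkout plus the two agent worktrees.
git -C "$repo" worktree list
```

When an agent’s task is done, `git -C "$repo" worktree remove "$repo-agent-1"` cleans up the directory while keeping the branch for review or merging.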
Help! My agents are writing spaghetti!!!
This is where vibe coding starts to diverge from agentic engineering. You, as a human, are doing serious work, and you are responsible for it. Models can make a mess because they are rewarded for task completion during training, not for long-term architecture. Architecture is (still) a human’s job.
- Organize the code in rather small modules with strict boundaries, predictable structure, and well-defined input/output.
- Enforce invariants in code, not only in documentation. Strict types, assertions, and tons of tests are your friend here.
- Enforce code patterns mechanically. Tell agents to write dedicated linters and CI jobs.
- What an agent doesn’t see doesn’t exist. Unlike humans, agents have no memory. Make sure all knowledge is either kept in the repository as files or easily reachable through MCP. Periodically verify this knowledge is actually read by agents.
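Mechanical enforcement of code patterns can be as simple as a script an agent writes and wires into CI. Here is a minimal sketch; the rule, paths, and file contents are all illustrative (a hypothetical policy that `print()` is forbidden outside `src/cli/` so everything else goes through a logger):

```shell
set -eu

# Throwaway tree standing in for your repo, so the check is runnable as-is.
demo=$(mktemp -d)
mkdir -p "$demo/src/core" "$demo/src/cli"
printf 'print("debug leftovers")\n' > "$demo/src/core/service.py"
printf 'print("user-facing ok")\n'  > "$demo/src/cli/main.py"

# List files that violate the pattern; `|| true` keeps `set -e` happy
# when grep finds nothing.
violations=$(grep -rl --include='*.py' 'print(' "$demo/src" \
    | grep -v '/src/cli/' || true)

if [ -n "$violations" ]; then
    echo "Pattern violation: use the logger instead of print() in:"
    echo "$violations"
    # In a real CI job you would `exit 1` here; omitted so the demo
    # runs to completion.
fi
```

The point is not the specific rule but the mechanism: once a pattern is checked in CI, agents get corrected by the pipeline instead of by a human reviewer.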
Sounds familiar, doesn’t it? These are the standard practices of large-scale engineering; agentic engineering just amplifies the cost of skipping them. Only one of the tips above is agent-specific enough that you’d be unlikely to adopt it in a 100% human team.
Read more:
- Harness engineering: leveraging Codex in an agent-first world OpenAI. 2026-02-11
- AI Is Forcing Us To Write Good Code Steve Krenzel. 2025-12-29
Automated code reviews
Automated review can run in parallel with implementation and catch obvious issues before a human reviewer spends time on them.
At first, these tools may produce low-quality feedback. That is expected, just as newly hired engineers need ramp-up time before they can review effectively. High-quality review depends on context: familiarity with the codebase, its history, and the team’s rules.
To make automated review agents useful, capture that context in artifacts they can read. There is no shared standard across tools yet, so you need to learn how your chosen review tool is configured and provide context in the format it expects.
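As a sketch, that context might live in a checked-in file your review tool is pointed at. The path, sections, and rules below are purely illustrative; there is no shared standard, so adapt the format to what your tool actually reads:

```markdown
<!-- docs/review-context.md — illustrative path, not a standard -->
# Review context

## Codebase rules
- All database access goes through the repository layer; flag raw SQL
  in request handlers.
- Public API changes require a matching documentation update.

## History and conventions
- The payments module is mid-migration; don't flag duplication between
  the legacy and new implementations there.
- Error messages are user-facing: review them for tone, not just
  correctness.
```

The value is less in any single rule and more in giving the review agent the tribal knowledge a new hire would otherwise absorb over months.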
In-editor / local review workflows:
- Code reviewer subagent in Claude Code
- Reviewing Code with Cursor | Cursor Docs
- Codex CLI features (run local code review)
- Warden
PR review bots:
Use these tools to reduce toil, not to skip ownership: don’t ask teammates to review code you haven’t reviewed yourself.
- Anti-patterns: things to avoid Simon Willison. 2026-03-04