Prompting Techniques
Once you have a sensible workflow, the next question is how to interact with the agent within it. Good prompting is less about writing massive instructions and more about giving the right kind of guidance at the right time. With a good prompt, the agent can often find the right answers on its own.
This page collects tactics that help at different moments of engineering.
Clarify the task first
Many prompting failures begin before the agent writes a single line of code. If the task is underspecified, framed too narrowly, or missing the right reference points, the agent will confidently optimize for the wrong thing. These techniques help you align on intent before execution begins.
Frame first, then implement
If the task is vague, do not jump straight to architecture. First, ask the agent for a short framing brief covering the problem, desired outcome, non-goals, options, and open questions with owners.
A false certainty that the agent knows your intent is a common failure mode. Ask the agent to separate facts, assumptions, and preferences, then review whether its understanding is correct. You can include the following snippet in your prompt:
Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one.

Source: grill-me skill Matt Pocock. 2026-02-25
Socratic prompting
TL;DR: Instead of giving the answer, ask questions that lead the agent to reach it on its own. Example: rather than “The bug is in the condition,” ask “What are the assumptions?”, “Is this condition always true?”, and “What happens for the edge case?” The goal is to surface assumptions, check logic, and arrive at the conclusion through the agent’s own reasoning.
Source: Prompting best practices Anthropic.
Underprompting
Intentionally omit some element of the prompt so that:
- The agent explores that area more thoroughly in its own way.
- You can see what’s missing from AGENTS.md, the codebase, or the docs by watching what the agent tried to read.
Document knowledge
TL;DR: If a fact should survive a reset, handoff, or future task, ask the agent to write it into the repository instead of leaving it in chat.
Treat AGENTS.md as a short map.
Treat repo-local Markdown as the system of record.
This works well for architecture notes, domain terminology, debugging discoveries, migration constraints, and execution plans. A small, focused document is usually better than one giant instruction file. The goal is not “more docs”. The goal is making the right knowledge discoverable, versioned, and loadable on demand.
Sources:
- Knowledge Document - Augmented Coding Patterns Lada Kesseler. 2025-10-20
- Harness engineering: leveraging Codex in an agent-first world OpenAI. 2026-02-11
Borrowing from other code
If you know you’re working on a problem that has been solved somewhere else, tell your agent to find and borrow the solution.
- If this is open-source code, you can tell it to clone the repository into /tmp.
- There is also mcp.grep.app if you’re just looking for code snippets.
- Beware of legal (copyright) concerns!
Depending on the codebase, a useful pattern is to have several golden files that the agent can use as references.
- For example, if you have a component library, tell the agent to follow the implementation of the ui/Button.tsx component.
- If you’re doing repetitive module refactoring, make the changes in one golden module, then ask the agent to reproduce them in other modules.
Steer the execution
Once the task is clear, the next job is to keep execution on useful rails without micromanaging every action. A human does not need to watch every step; much of this can be automated too.
Red/green TDD
TL;DR: Write tests first, confirm they fail (red), then implement until they pass (green). This is a strong fit for coding agents because it reduces broken or unused code and builds a robust test suite.
Source: Red/green TDD Simon Willison. 2026-02-23
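A minimal sketch of the red/green loop in shell, run against a throwaway directory. The slug.py module, test_slug.py, and the slugify function are hypothetical stand-ins for your real code and test suite:

```shell
# Work in a throwaway directory; file and function names are illustrative.
dir=$(mktemp -d) && cd "$dir"

# 1. Red: write the test first, against a stub that cannot pass.
cat > slug.py <<'EOF'
def slugify(s):
    raise NotImplementedError
EOF
cat > test_slug.py <<'EOF'
from slug import slugify
assert slugify("Hello World") == "hello-world"
print("green")
EOF
python3 test_slug.py 2>/dev/null && echo "unexpected pass" || echo "red: failing as expected"

# 2. Green: implement until the same test passes.
cat > slug.py <<'EOF'
def slugify(s):
    return s.strip().lower().replace(" ", "-")
EOF
python3 test_slug.py   # prints "green"
```

The point of running the test before implementing is to prove the test can fail; an agent that skips the red step may happily “pass” a test that never exercised the code.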
Knowledge checkpoint
TL;DR: After planning but before implementation, checkpoint the plan in the repo and maybe commit it. That preserves the expensive part, your planning and explanations, while making failed implementation attempts cheap to retry.
Ask the agent to extract the agreed plan into a short Markdown file, commit that file as a checkpoint, and only then start coding. If the implementation goes sideways, reset to the checkpoint and try again without redoing the planning work. This is especially useful when a model is overeager to jump straight into code before the plan is stable.
For example:
We have the plan. Before you implement, write the agreed approach to a short Markdown file in the repository. Keep it concise and action-oriented. Then make a git commit as a checkpoint. Only after that, start implementation. If the implementation attempt fails, reset to the checkpoint and try again from the saved plan.

Source: Knowledge Checkpoint - Augmented Coding Patterns Lada Kesseler. 2025-10-01
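The checkpoint-and-reset cycle can be sketched in shell. The file name, commit messages, and plan content below are illustrative, and the demo runs in a throwaway repository:

```shell
# Sketch of the checkpoint flow (throwaway repo; names and messages are illustrative).
repo=$(mktemp -d) && cd "$repo" && git init -q
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

# 1. Save the agreed plan and commit it as the checkpoint.
mkdir -p docs
echo "## Plan: extract the auth module" > docs/plan.md
git add docs/plan.md
g commit -q -m "checkpoint: implementation plan"

# 2. The agent implements. If the attempt goes sideways, reset to the checkpoint:
echo "half-finished change" >> docs/plan.md
git reset --hard -q HEAD   # tracked files return to the checkpoint state
cat docs/plan.md           # the plan survives; retry without replanning
```

Because the plan is committed, a failed implementation costs one `git reset`, not a repeat of the planning conversation.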
Parallel implementations
TL;DR: When the task has multiple plausible solutions or a high chance of failure, branch from one checkpoint and let several implementations race in parallel. This works especially well when work involves some degree of creativity, like interface design.
The pattern is:
- Create a checkpoint with the plan and a Git commit.
- Fork into parallel workspaces, for example with Git worktrees. Alternatively, you can spawn multiple subagents and tell each one to write to a different file.
- Launch multiple implementations from the same starting point.
- Review the results side by side.
- Keep the best version or combine the strongest parts.
Source: Parallel Implementations - Augmented Coding Patterns Lada Kesseler. 2025-10-01
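The steps above can be sketched with Git worktrees. The branch names are illustrative, and the one-line “implementations” stand in for real agent runs; the demo uses a throwaway repository:

```shell
# Sketch: race two implementations from one checkpoint (throwaway repo for illustration).
base=$(mktemp -d) && git init -q "$base/repo" && cd "$base/repo"
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
g commit -q --allow-empty -m "checkpoint: plan committed"

# Fork two worktrees from the same starting point.
git worktree add -q ../attempt-a -b attempt-a
git worktree add -q ../attempt-b -b attempt-b

# ...launch one agent per worktree; here each attempt is a one-line stand-in...
(cd ../attempt-a && echo "version A" > impl.txt && git add impl.txt && g commit -q -m "attempt A")
(cd ../attempt-b && echo "version B" > impl.txt && git add impl.txt && g commit -q -m "attempt B")

# Review the results side by side, keep the stronger one, drop the other.
git diff attempt-a attempt-b -- impl.txt
git merge -q attempt-a
git worktree remove ../attempt-b
git branch -q -D attempt-b
```

Worktrees share one object store, so forking and discarding attempts is cheap; each agent still gets a fully isolated working directory.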
When text and code are not enough
Sometimes the fastest way to explain a problem is not another painfully typed paragraph. Other forms of communication can be valuable too.
Multimodal input
Screenshot a bug or broken UI in your app. Or record a short video of something changing over time. Drop it into the prompt and say “fix it”. You can even add arrows and annotations in your screenshot tool. Frontier models handle multimodal input very well.
Put a human in the loop
Ask the agent to prepare a step-by-step reproduction of a complex manual flow (what to click, in what order); you perform the steps (e.g., log in to the browser with your account), report back, and the agent continues or verifies. It is like augmented manual QA: the agent scripts the scenario, and the human does the sensitive or interactive bits.
Just talk to it
If typing is tiring, you can use your voice instead. Some coding agents have built-in voice dictation features. Similar functionality is provided by operating-system accessibility features, like Dictation on macOS or Voice Typing on Windows, and by many third-party apps for all platforms.
Interactive playgrounds
When exploring a new topic or prototyping an algorithm or a component, tell the agent to build an interactive playground that explains the concept visually and exposes quick controls for fine-tuning the parameters.
Take a look at these utilities:
- Playground Claude Plugin Anthropic.
- Introducing Showboat and Rodney, so agents can demo what they’ve built Simon Willison. 2026-02-10
- Feedback Loopable Lewis Metcalf. 2026-02-05
Your exoskeleton
Some prompting patterns are less about a single task and more about extending your reach as an engineer. They turn the agent into an exoskeleton around your normal workflow: surfacing understanding, handling mechanical work, and exploring unfamiliar systems.
Walkthroughs
TL;DR: Tell your agent to explain in plain language what happens in the code, or to help you review its work by having it showcase the changes. Example prompt:
Read the source and then plan a linear walkthrough of the code that explains how it all works in detail. Then create a walkthrough.md file in the repo and build the walkthrough in there, using Mermaid diagrams and commentary notes or whatever you need. Include snippets of code you are talking about.

Sources:
- Linear walkthroughs Simon Willison. 2026-02-25
- Shareable Walkthroughs Amp. 2026-01-29
Models are good at Git
You can tell your agent:
- To make a commit.
- To create a branch/worktree or check out some repo to /tmp.
- To submit a PR.
- To merge/rebase and fix conflicts (make sure to back up, or ask the agent to use git reflog in bad scenarios).
- To use the gh CLI to read issues/PR comments or GitHub Actions logs.
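The reflog escape hatch mentioned above can be sketched in a throwaway repository; the commits and file are illustrative:

```shell
# Sketch: undoing a destructive reset via the reflog (throwaway repo for illustration).
repo=$(mktemp -d) && cd "$repo" && git init -q
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
echo "v1" > file.txt && git add file.txt && g commit -q -m "v1"
echo "v2" > file.txt && git add file.txt && g commit -q -m "v2"

# An over-eager cleanup throws away the latest commit:
git reset --hard -q HEAD~1

# The reflog still records where HEAD was before the reset:
git reflog
git reset --hard -q 'HEAD@{1}'
cat file.txt   # back to "v2"
```

This is why a botched agent merge or rebase is rarely fatal: committed work stays reachable through the reflog for weeks, even after branches move.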
Reverse engineer
Agents are surprisingly good at reverse engineering apps. Minified/compiled code is not such a big cognitive burden for them, and they have broad knowledge of useful tools in this space.
Try pointing your agent at some website or Android APK where there’s something you want to mimic. Initially, you may need to interactively install some tools that the agent would like to use.
Issue pinpointer
Got a huge log file, maybe a screenshot, and you need to identify the bug? Paste everything you have into your agent prompt and tell it to find the exact log lines itself.