---
name: my-agent-builder
description: Turns a vague task description into a complete agent team prompt. Asks clarifying questions, then outputs a copy-pasteable team creation prompt.
argument-hint: [what you want the team to do]
user-invocable: true
disable-model-invocation: true
allowed-tools: Read, Glob
---

The user wants to create an agent team. Their initial description: $ARGUMENTS

Step 1 — Parse intent

If $ARGUMENTS is empty, ask the user: "What do you want the agent team to work on?"

Otherwise use $ARGUMENTS as the starting point.

Step 2 — Identify task type

Based on the description, silently classify the request as one of:

  • implementation — building something new
  • review — reviewing existing code or a PR
  • debug — investigating a bug or competing hypotheses
  • research — evaluating options or comparing approaches

Step 3 — Ask clarifying questions

Ask ALL of the questions for the identified task type in a single message. Do not ask one at a time.

If implementation:

  1. What are the distinct layers or modules? (e.g. frontend, backend, tests, data)
  2. Which files or directories are involved, or should be created?
  3. Are there dependencies between layers? Classify each one:
    • Contract dependency (agents need to agree on a shared API, data structure, or interface) → these are resolved upfront by embedding the agreed contract in every affected agent's prompt, so all agents run in parallel.
    • Runtime dependency (an agent literally cannot start without another's output existing — e.g. a test agent running against real code) → these are true blockers; those agents are spawned in a later wave.
  4. Should teammates submit a plan for approval before writing any code?
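The contract-vs-runtime distinction in question 3 is the key design decision. A contract dependency is just a shared shape that every teammate must agree on; a minimal sketch (the `Tomato` interface below is the hypothetical example reused later in this skill, not a real API) shows what "embedding the agreed contract in every affected agent's prompt" amounts to in practice:

```typescript
// Hypothetical shared contract: the data shape every teammate agrees on.
// Pasting this verbatim into each affected teammate's spawn prompt resolves
// the dependency upfront, so all teammates can start in parallel.
interface Tomato {
  id: number;
  x: number;
  y: number;
  radius: number;
  speed: number;
  squashed: boolean;
  splatTimer: number;
}

// Any teammate can now construct and exchange tomatoes without waiting
// on another teammate's code to exist first.
const sample: Tomato = {
  id: 1,
  x: 0,
  y: 0,
  radius: 12,
  speed: 3,
  squashed: false,
  splatTimer: 0,
};
```

A runtime dependency, by contrast, cannot be resolved by agreement alone — the downstream agent needs the upstream agent's actual output files to exist.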

If review:

  1. What lenses should each reviewer use? (e.g. security, performance, code quality)
  2. Should reviewers share findings with each other before reporting to the lead?
  3. Should the lead produce a final written report? If so, what filename?

If debug:

  1. What are the competing hypotheses or suspected causes?
  2. Should teammates actively try to disprove each other's theories?
  3. Where should the final findings be written?

If research:

  1. What specific options or angles should be explored?
  2. What criteria matter most? (e.g. cost, performance, complexity, ecosystem)
  3. Should teammates challenge each other's conclusions before the lead synthesizes?

Always ask these regardless of type:

  • How many teammates? (suggest a number based on the task — 3 for review/research/debug, 4 for implementation — but let the user override)
  • Any model preferences? (default is Sonnet for all; Opus for deep analysis, Haiku for routine work)
  • Ownership strictness: Should the lead fix small mismatches itself (pragmatic — faster, default) or route all corrections back to the responsible teammate (strict — cleaner ownership, extra round-trips)?

Step 4 — Generate the prompt

Once you have the answers, output a complete, formatted team creation prompt that the user can copy-paste directly into a Claude instance. Use this structure:

Create an agent team to [goal]. [If plan approval requested: Require plan approval before any teammate writes any code.]

Spawn [N] teammates with these exact names and ownership boundaries:

- "[name]": owns [files/dirs] — [specific focus and responsibilities]
- "[name]": owns [files/dirs] — [specific focus and responsibilities]
...

[If contract dependencies exist — shared APIs, data structures, or interfaces multiple teammates need:
Embed the agreed contract directly in every affected teammate's spawn prompt so all can start simultaneously. Example:
  "All teammates use this agreed tomato object shape: { id, x, y, radius, speed, squashed, splatTimer }"]

[If runtime dependencies exist — a teammate truly cannot start until another's output exists (e.g. tests running against real code):
Create those tasks with a dependency on the blocking task. The task system will automatically unblock them when the dependency completes. Do NOT delay spawning the teammate — spawn everyone upfront and let the task dependency handle the sequencing.]

[If teammates should communicate: "[name]" and "[name]" should message each other to agree on [interface/API/findings] before implementing.]

[If research/review: Each teammate should work independently, then share findings with each other before reporting back.]

File ownership is strict — no teammate edits files outside their assigned area.

Lead: spawn all teammates at the same time. Only use task dependencies to sequence work that has a true runtime blocker — not just because teammates share a contract. Coordinate and synthesize only — do not implement anything yourself.

[If pragmatic ownership (default): After all teammates finish, audit the files for mismatches (IDs, class names, shared interfaces). Fix minor issues directly, then write the final summary to [output file or README.md].]

[If strict ownership: You may NOT write or edit any file yourself — not even small fixes. After all teammates finish, audit the files. If mismatches exist, send correction tasks back to the responsible teammate and wait for them to fix their own file before proceeding. Only write [output file or README.md] once all files are confirmed correct.]

After outputting the prompt, add a short note:

Copy the block above and paste it into a new Claude Code instance in your project directory.