Prompting guidelines

High-quality prompting is about clear outcomes, scoped context, and tight feedback loops, not long, vague requests.

Use a simple prompt structure

  • Outcome: one sentence describing exactly what should be true when done.
  • Scope: files, components, or surfaces that should change.
  • Constraints: performance, UX, compatibility, deadlines, or style rules.
  • Acceptance checks: concrete checks the agent must pass before handoff.
  • Out-of-scope: what should not change.
Template:

```text
Outcome: ...
Scope: ...
Constraints: ...
Acceptance checks: ...
Out of scope: ...
```

Bad prompts vs better prompts

```text
Bad: "Improve this."
Better: "Refactor the signup form submit flow to prevent duplicate submits and keep existing keyboard shortcuts unchanged."
```

```text
Bad: "Fix performance."
Better: "Reduce dashboard initial render time by minimizing unnecessary re-renders in the chart panel. Keep current UI behavior identical."
```

```text
Bad: "Add tests."
Better: "Add tests for auth token refresh failure path and expired-session redirect. Include one happy path and two failure-path tests."
```

Avoid context dilution

Large unfiltered dumps dilute attention and raise costs. Provide only the context needed for the current step.

  • Attach only the files or snippets directly tied to the task.
  • If a source is large, include a short summary and only key excerpts.
  • Split broad requests into milestones and review after each milestone.
  • Prefer incremental follow-ups over one giant omnibus prompt.
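As an illustration, a milestone-split follow-up applying these rules might look like this (the file name and task details are hypothetical, borrowed from the dashboard example above):

```text
Milestone 1: extract the chart panel's data-fetching logic into its own module. Stop for review.
Milestone 2 (after review): memoize derived series so unrelated state changes skip re-renders.
Context attached: ChartPanel.tsx only, plus a three-line summary of the dashboard layout.
```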

Declare constraints and non-goals

  • Specify what cannot change (API shape, keyboard behavior, design language).
  • Call out compatibility constraints (macOS/Windows/Linux, browser support, etc.).
  • Include risk boundaries such as no database schema changes or no dependency additions.
Example:

```text
Constraints: keep API responses backwards-compatible; no new dependencies.
Do not change: existing keyboard shortcuts or command palette behavior.
```

Use examples when format matters

If output shape matters, provide one or two examples. This is especially useful for summaries, plans, and review reports.

```text
Return format:
1) One-line summary
2) Key risks
3) Proposed next step
4) What I need from you
```

Prompt in short review loops

  • Start with a focused request, then inspect the result.
  • Give targeted corrective feedback: what was right, what was wrong, what to change next.
  • Ask for a brief plan before implementation on medium/large tasks.
  • Use "continue from this exact point" style follow-ups to preserve momentum.
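A corrective follow-up in this style names what was right, what was wrong, and the next change (the details below are illustrative, continuing the signup-form example):

```text
The duplicate-submit guard is right; keep it. The submit button still re-enables
before the request settles — disable it until the response resolves.
Continue from this exact point; do not revisit the keyboard-shortcut handling.
```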

Practical default

For most tasks: plan first, implement second, verify third. This usually improves reliability and reduces rework.
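The plan-implement-verify default can be made explicit in the prompt itself (wording is illustrative):

```text
Step 1: Propose a short plan: files to touch, risks, open questions. Wait for my approval.
Step 2: Implement only the approved plan.
Step 3: Run the acceptance checks and report pass/fail before handoff.
```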