---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.

Execution steps:

1. **Discover Feature Context**:

   ```bash
   REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)

   # Source features location helper
   SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
   PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
   source "$PLUGIN_DIR/lib/features-location.sh"

   # Initialize features directory
   get_features_dir "$REPO_ROOT"

   # Prefer the feature number from the branch name (e.g., "003-user-auth");
   # fall back to the highest-numbered existing feature
   BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
   FEATURE_NUM=$(echo "$BRANCH" | grep -oE '^[0-9]{3}')
   [ -z "$FEATURE_NUM" ] && FEATURE_NUM=$(list_features "$REPO_ROOT" | grep -oE '^[0-9]{3}' | sort -nr | head -1)

   find_feature_dir "$FEATURE_NUM" "$REPO_ROOT"
   # FEATURE_DIR is now set by find_feature_dir
   FEATURE_SPEC="${FEATURE_DIR}/spec.md"
   ```

   Validate: If FEATURE_SPEC doesn't exist, ERROR: "No spec found. Run `/specify` first."

2. Load the current spec file. Perform a structured ambiguity & coverage scan using the taxonomy below. For each category, mark its status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output the raw map unless no questions will be asked); an example sketch follows at the end of this step.

   Functional Scope & Behavior:
   - Core user goals & success criteria
   - Explicit out-of-scope declarations
   - User roles / personas differentiation

   Domain & Data Model:
   - Entities, attributes, relationships
   - Identity & uniqueness rules
   - Lifecycle/state transitions
   - Data volume / scale assumptions

   Interaction & UX Flow:
   - Critical user journeys / sequences
   - Error/empty/loading states
   - Accessibility or localization notes

   Non-Functional Quality Attributes:
   - Performance (latency, throughput targets)
   - Scalability (horizontal/vertical, limits)
   - Reliability & availability (uptime, recovery expectations)
   - Observability (logging, metrics, tracing signals)
   - Security & privacy (authN/Z, data protection, threat assumptions)
   - Compliance / regulatory constraints (if any)

   Integration & External Dependencies:
   - External services/APIs and failure modes
   - Data import/export formats
   - Protocol/versioning assumptions

   Edge Cases & Failure Handling:
   - Negative scenarios
   - Rate limiting / throttling
   - Conflict resolution (e.g., concurrent edits)

   Constraints & Tradeoffs:
   - Technical constraints (language, storage, hosting)
   - Explicit tradeoffs or rejected alternatives

   Terminology & Consistency:
   - Canonical glossary terms
   - Avoided synonyms / deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition of Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification

   For each category with Partial or Missing status, add a candidate question opportunity unless:
   - Clarification would not materially change implementation or validation strategy
   - Information is better deferred to the planning phase (note internally)
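
   As an illustration only, the internal coverage map might take a form like the following; the layout, statuses, and notes shown here are hypothetical examples, not a prescribed format:

   ```text
   Functional Scope & Behavior:          Clear
   Domain & Data Model:                  Partial - lifecycle/state transitions undefined
   Interaction & UX Flow:                Clear
   Non-Functional Quality Attributes:    Missing - no latency, uptime, or security targets
   Integration & External Dependencies:  Partial - external API failure modes unstated
   Edge Cases & Failure Handling:        Partial - concurrent-edit conflict resolution open
   Constraints & Tradeoffs:              Clear
   Terminology & Consistency:            Clear
   Completion Signals:                   Partial - "intuitive" lacks quantification
   Misc / Placeholders:                  Clear
   ```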

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
   - Maximum of 5 total questions across the whole session.
   - Each question must be answerable with EITHER:
     * A short multiple-choice selection (2–5 distinct, mutually exclusive options), OR
     * A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
   - Ensure category coverage balance: attempt to cover the highest-impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
   - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
   - If more than 5 categories remain unresolved, select the top 5 by an (Impact × Uncertainty) heuristic.

4. Sequential questioning loop (interactive):
   - Present EXACTLY ONE question at a time.
   - For multiple-choice questions, render options as a Markdown table (a filled-in sketch follows this table):

     | Option | Description |
     |--------|-------------|
     | A | <Option A description> |
     | B | <Option B description> |

     (Add rows C–E as needed, up to 5 options.)
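
     For example, a rendered question might look like this; the question topic and option values are hypothetical, shown only to illustrate the rendering:

     ```text
     Q2 (hypothetical): How long must audit log entries be retained?

     | Option | Description |
     |--------|-------------|
     | A | 30 days |
     | B | 1 year |
     | C | 7 years (regulatory archive) |
     ```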