---
description: Create or update the feature specification from a natural language feature description.
---
<!--
ATTRIBUTION CHAIN:
1. Original: GitHub spec-kit (https://github.com/github/spec-kit)
Copyright (c) GitHub, Inc. | MIT License
2. Adapted: SpecKit plugin by Marty Bonacci (2025)
3. Forked: SpecSwarm plugin with tech stack management
by Marty Bonacci & Claude Code (2025)
-->
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume it is always available in this conversation even if `$ARGUMENTS` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
Given that feature description, do this:
1. **Create New Feature Structure** (replaces script execution):
a. **Find Repository Root and Initialize Features Directory**:
```bash
REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
# Source features location helper
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
source "$PLUGIN_DIR/lib/features-location.sh"
# Initialize features directory (handles migration if needed)
ensure_features_dir "$REPO_ROOT"
```
b. **Determine Next Feature Number**:
```bash
# Get next feature number using helper
FEATURE_NUM=$(get_next_feature_number "$REPO_ROOT")
```
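For reference, a rough sketch of the behavior this helper is assumed to provide (the real implementation lives in `lib/features-location.sh`): scan existing `NNN-*` directories and return the next zero-padded number.
```bash
# Illustrative only — not the plugin's actual helper implementation.
get_next_feature_number_sketch() {
  local last
  last=$(ls -d "${FEATURES_DIR}"/[0-9][0-9][0-9]-* 2>/dev/null \
    | sed 's|.*/\([0-9][0-9][0-9]\)-.*|\1|' | sort -n | tail -1)
  printf '%03d\n' $(( 10#${last:-0} + 1 ))
}
```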
c. **Create Feature Slug from Description**:
```bash
# Convert description to kebab-case
# Example: "User Authentication System" → "user-authentication-system"
SLUG=$(echo "$ARGUMENTS" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/--*/-/g' | sed 's/^-//' | sed 's/-$//')
```
d. **Capture Parent Branch and Create Feature Branch** (if git available):
```bash
BRANCH_NAME="${FEATURE_NUM}-${SLUG}"
if git rev-parse --git-dir >/dev/null 2>&1; then
# Capture current branch as parent BEFORE switching
PARENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Branch Setup"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo " Parent branch: $PARENT_BRANCH (current branch)"
echo " Feature branch: $BRANCH_NAME (will be created)"
echo ""
echo " The feature branch will be created from $PARENT_BRANCH."
echo " When complete, it will merge back to $PARENT_BRANCH."
echo ""
read -p "Is this correct? (y/n): " branch_confirm
if [ "$branch_confirm" != "y" ]; then
echo ""
echo "❌ Branch setup cancelled"
echo ""
echo "Please checkout the correct parent branch first, then run:"
echo " /specswarm:specify \"$FEATURE_DESCRIPTION\""
exit 0
fi
echo ""
# Create and switch to new feature branch
git checkout -b "$BRANCH_NAME"
else
# Non-git: use environment variable
export SPECIFY_FEATURE="$BRANCH_NAME"
PARENT_BRANCH="unknown"
fi
```
e. **Create Feature Directory Structure**:
```bash
FEATURE_DIR="${FEATURES_DIR}/${FEATURE_NUM}-${SLUG}"
mkdir -p "${FEATURE_DIR}/checklists"
mkdir -p "${FEATURE_DIR}/contracts"
```
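For illustration, if `FEATURES_DIR` (exported by the sourced `features-location.sh` helper) resolves to `specs/` and the slug from the earlier example is used, the layout created here looks roughly like:
```text
specs/001-user-authentication-system/
├── checklists/   # requirements.md is added during quality validation
└── contracts/    # populated during planning
```
`spec.md` is written into this directory in a later step.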
f. **Set Path Variables** (use these for remainder of command):
```bash
SPEC_FILE="${FEATURE_DIR}/spec.md"
CHECKLISTS_DIR="${FEATURE_DIR}/checklists"
```
2. Load spec template to understand required sections:
- Try to read `templates/spec-template.md` if it exists
- If template missing, use this embedded minimal template:
```markdown
# Feature: [Feature Name]
## Overview
[Brief description of the feature and its purpose]
## User Scenarios
[Key user flows and scenarios]
## Functional Requirements
[What the system must do]
## Success Criteria
[Measurable outcomes that define success]
## Key Entities
[Important data entities involved]
## Assumptions
[Documented assumptions and reasonable defaults]
```
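A minimal sketch of that fallback, assuming the template ships with the plugin under `PLUGIN_DIR` (set in step 1a):
```bash
# Prefer the plugin's spec template; otherwise fall back to the embedded minimal one.
TEMPLATE_FILE="${PLUGIN_DIR}/templates/spec-template.md"
if [ -f "$TEMPLATE_FILE" ]; then
  SPEC_TEMPLATE=$(cat "$TEMPLATE_FILE")
else
  echo "⚠️  spec-template.md not found — using embedded minimal template"
  SPEC_TEMPLATE=""  # use the embedded markdown structure shown above
fi
```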
3. **Parse the user's feature description** from `$ARGUMENTS` and validate:
- If empty: ERROR "No feature description provided"
- Extract key concepts: actors, actions, data, constraints
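A small guard for the empty case might look like this (treating `$ARGUMENTS` the same way the earlier steps do):
```bash
# Fail fast when no feature description was provided.
if [ -z "$(echo "$ARGUMENTS" | tr -d '[:space:]')" ]; then
  echo "ERROR: No feature description provided"
  exit 1
fi
```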
4. **Framework & Dependency Compatibility Check** (for upgrade/migration features):
If the feature description mentions dependency upgrades (PHP, Laravel, Node, framework versions, etc.), perform compatibility validation:
a. **Detect Upgrade Context**:
```bash
# Check if feature involves upgrades
INVOLVES_UPGRADE=$(echo "$ARGUMENTS" | grep -iE '(upgrade|migrat|updat).*(php|laravel|node|framework|version|[0-9]\.[0-9])')
```
b. **Read Project Dependencies** (if upgrade detected):
```bash
if [ -n "$INVOLVES_UPGRADE" ] && [ -f "${REPO_ROOT}/composer.json" ]; then
# Extract PHP requirement
PHP_CURRENT=$(grep -Po '(?<="php":\s")[^"]+' "${REPO_ROOT}/composer.json" 2>/dev/null)
# Extract Laravel/framework version
FRAMEWORK=$(grep -Po '(?<="laravel/framework":\s")[^"]+' "${REPO_ROOT}/composer.json" 2>/dev/null)
fi
```
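If `jq` happens to be available, the extraction is more robust than the grep lookbehind above (an optional alternative, not a requirement):
```bash
if command -v jq >/dev/null 2>&1 && [ -f "${REPO_ROOT}/composer.json" ]; then
  # Both keys may be absent; `// empty` yields an empty string in that case.
  PHP_CURRENT=$(jq -r '.require.php // empty' "${REPO_ROOT}/composer.json")
  FRAMEWORK=$(jq -r '.require["laravel/framework"] // empty' "${REPO_ROOT}/composer.json")
fi
```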
c. **Cross-Reference Compatibility Matrices**:
When upgrade targets are identified in the feature description, check known compatibility constraints:
**Laravel Compatibility Matrix**:
- Laravel 5.8: PHP 7.1.3 - 7.4 ONLY (no PHP 8 support)
- Laravel 6.x: PHP 7.2 - 8.0
- Laravel 7.x: PHP 7.2 - 8.0
- Laravel 8.x: PHP 7.3 - 8.1
- Laravel 9.x: PHP 8.0 - 8.2
- Laravel 10.x: PHP 8.1 - 8.3
- Laravel 11.x: PHP 8.2 - 8.3
**Key Detection Rules**:
- If feature mentions "PHP 8.x upgrade" AND Laravel 5.8 detected → BLOCKER
- If feature mentions "PHP 8.x upgrade" AND Laravel 6-7 detected → WARNING (check target PHP version)
- If feature mentions framework upgrade dependencies → Include in spec
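As a rough shell sketch of how those rules could be applied (illustrative only — real validation should parse the exact composer constraints and the target PHP version from the description):
```bash
# Illustrative pre-check using the variables gathered above.
COMPAT_STATUS="OK"
if echo "$ARGUMENTS" | grep -qiE 'php *8'; then
  case "$FRAMEWORK" in
    *5.8*)      COMPAT_STATUS="BLOCKER" ;;  # Laravel 5.8 has no PHP 8 support
    *6.*|*7.*)  COMPAT_STATUS="WARNING" ;;  # Laravel 6/7 support PHP 8.0 only
  esac
fi
echo "Compatibility pre-check: ${COMPAT_STATUS}"
```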
d. **Add Blockers Section to Spec** (if incompatibilities found):
If compatibility issues detected, add this section AFTER "Overview" and BEFORE "User Scenarios":
```markdown
## ⚠️ CRITICAL BLOCKERS & DEPENDENCIES
### [Framework] Version Incompatibility
**Issue**: [Current framework version] officially supports [compatible versions] only.
**Current State**:
- Framework: [Detected version from composer.json]
- Target: [Upgrade target from feature description]
- Compatibility: ❌ NOT COMPATIBLE
**Resolution Options**:
1. **Recommended**: Upgrade framework first
- Path: [Current] → [Intermediate versions] → [Target compatible version]
- Benefit: Official support, maintained compatibility
- Timeline: [Estimated complexity]
2. **Community Patches**: Use unofficial compatibility patches
- Benefit: Faster, smaller scope
- Risk: Unsupported, may break in production
- Recommendation: NOT recommended for production
3. **Stay on Compatible Version**: Delay target upgrade
- Keep: [Current compatible version]
- Timeline: [Until when it's supported]
- Benefit: Stable, supported
4. **Accept Risk**: Proceed with unsupported configuration
- Risk: High - potential breaking changes
- Required: Extensive testing, acceptance of maintenance burden
- Recommendation: Only if timeline critical and resources available
**Recommended Path**: [Most appropriate option with reasoning]
**Impact on This Feature**: This blocker must be resolved before beginning implementation. Consider creating separate features for:
- Feature XXX: [Framework] upgrade to [compatible version]
- Feature YYY: [Dependency] upgrade (this feature, dependent on XXX)
```
e. **Document Assumptions About Compatibility**:
Even if no blockers found, add relevant assumptions to the Assumptions section:
- "Framework version [X] is compatible with [upgrade target]"
- "Standard upgrade path follows: [path]"
- "Breaking changes from [source docs URL] have been reviewed"
5. Follow this execution flow:
1. For unclear aspects:
- Make informed guesses based on context and industry standards
- Only mark with [NEEDS CLARIFICATION: specific question] if:
- The choice significantly impacts feature scope or user experience
- Multiple reasonable interpretations exist with different implications
- No reasonable default exists
- **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
- Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
2. Fill User Scenarios & Testing section
If no clear user flow: ERROR "Cannot determine user scenarios"
3. Generate Functional Requirements
Each requirement must be testable
Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
4. Define Success Criteria
Create measurable, technology-agnostic outcomes
Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
Each criterion must be verifiable without implementation details
5. Identify Key Entities (if data involved)
6. Return: SUCCESS (spec ready for planning)
6. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
**IMPORTANT: Include YAML Frontmatter**:
The spec.md file MUST start with YAML frontmatter containing metadata:
```yaml
---
parent_branch: ${PARENT_BRANCH}
feature_number: ${FEATURE_NUM}
status: In Progress
created_at: $(date -Iseconds)
---
```
This metadata enables the `/specswarm:complete` command to merge back to the correct parent branch.
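A sketch of how the file could be written with that frontmatter prepended (here `SPEC_BODY` is a hypothetical variable holding the rendered spec markdown):
```bash
# Prepend YAML frontmatter, then write the generated spec body to SPEC_FILE.
{
  echo "---"
  echo "parent_branch: ${PARENT_BRANCH}"
  echo "feature_number: ${FEATURE_NUM}"
  echo "status: In Progress"
  echo "created_at: $(date -Iseconds)"
  echo "---"
  echo ""
  printf '%s\n' "$SPEC_BODY"
} > "$SPEC_FILE"
```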
7. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:
a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:
```markdown
# Specification Quality Checklist: [FEATURE NAME]
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: [DATE]
**Feature**: [Link to spec.md]
## Content Quality
- [ ] No implementation details (languages, frameworks, APIs)
- [ ] Focused on user value and business needs
- [ ] Written for non-technical stakeholders
- [ ] All mandatory sections completed
## Requirement Completeness
- [ ] No [NEEDS CLARIFICATION] markers remain
- [ ] Requirements are testable and unambiguous
- [ ] Success criteria are measurable
- [ ] Success criteria are technology-agnostic (no implementation details)
- [ ] All acceptance scenarios are defined
- [ ] Edge cases are identified
- [ ] Scope is clearly bounded
- [ ] Dependencies and assumptions identified
## Feature Readiness
- [ ] All functional requirements have clear acceptance criteria
- [ ] User scenarios cover primary flows
- [ ] Feature meets measurable outcomes defined in Success Criteria
- [ ] No implementation details leak into specification
## Notes
- Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
```
b. **Run Validation Check**: Review the spec against each checklist item:
- For each item, determine if it passes or fails
- Document specific issues found (quote relevant spec sections)
c. **Handle Validation Results**:
- **If all items pass**: Mark checklist complete and proceed to step 8 (report completion)
- **If items fail (excluding [NEEDS CLARIFICATION])**:
1. List the failing items and specific issues
2. Update the spec to address each issue
3. Re-run validation until all items pass (max 3 iterations)
4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
- **If [NEEDS CLARIFICATION] markers remain**:
1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
3. For each clarification needed (max 3), present options to user in this format:
```markdown
## Question [N]: [Topic]
**Context**: [Quote relevant spec section]
**What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
**Suggested Answers**:
| Option | Answer | Implications |
|--------|--------|--------------|
| A | [First suggested answer] | [What this means for the feature] |
| B | [Second suggested answer] | [What this means for the feature] |
| C | [Third suggested answer] | [What this means for the feature] |
| Custom | Provide your own answer | [Explain how to provide custom input] |
**Your choice**: _[Wait for user response]_
```
4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
- Use consistent spacing with pipes aligned
- Each cell should have spaces around content: `| Content |` not `|Content|`
- Header separator must have at least 3 dashes: `|--------|`
- Test that the table renders correctly in markdown preview
5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
6. Present all questions together before waiting for responses
7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
9. Re-run validation after all clarifications are resolved
d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status
8. Report completion with:
- Branch name (if git repo)
- Feature directory path
- Spec file path
- Checklist results
- Readiness for next phase (`/speckit.clarify` or `/speckit.plan`)
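For example, the final summary might look like this (all names, numbers, and paths below are illustrative):
```text
✅ Feature specification created

  Branch:       003-user-authentication-system (created from main)
  Feature dir:  specs/003-user-authentication-system/
  Spec file:    specs/003-user-authentication-system/spec.md
  Checklist:    checklists/requirements.md — all items passing

  Next: run /speckit.clarify to resolve open questions, or /speckit.plan if none remain.
```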
**NOTE:** This command creates the feature branch (if git), initializes the feature directory structure, and creates the initial spec.md file.
## General Guidelines
- Focus on **WHAT** users need and **WHY**.
- Avoid HOW to implement (no tech stack, APIs, code structure).
- Written for business stakeholders, not developers.
- DO NOT create any checklists embedded in the spec itself; checklist generation is handled by a separate command.
### Section Requirements
- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")
### For AI Generation
When creating this spec from a user prompt:
1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
- Significantly impact feature scope or user experience
- Have multiple reasonable interpretations with different implications
- Lack any reasonable default
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
6. **Common areas needing clarification** (only if no reasonable default exists):
- Feature scope and boundaries (include/exclude specific use cases)
- User types and permissions (if multiple conflicting interpretations possible)
- Security/compliance requirements (when legally/financially significant)
**Examples of reasonable defaults** (don't ask about these):
- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: RESTful APIs unless specified otherwise
### Success Criteria Guidelines
Success criteria must be:
1. **Measurable**: Include specific metrics (time, percentage, count, rate)
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
3. **User-focused**: Describe outcomes from user/business perspective, not system internals
4. **Verifiable**: Can be tested/validated without knowing implementation details
**Good examples**:
- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"
**Bad examples** (implementation-focused):
- "API response time is under 200ms" (too technical, use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)