Initial commit

Zhongwei Li
2025-11-30 08:37:19 +08:00
commit 8a09dce161
20 changed files with 6111 additions and 0 deletions


@@ -0,0 +1,21 @@
{
"name": "ring-pm-team",
"description": "Product team pre-development workflow: 10 skills + 3 research agents. 9-gate planning system with research-first approach (Gate 0) before PRD creation. Includes parallel research agents (repo-research-analyst, best-practices-researcher, framework-docs-researcher) and full planning gates (PRD, feature map, TRD, API, data model, dependencies, tasks, subtasks).",
"version": "0.6.1",
"author": {
"name": "Fred Amaral",
"email": "fred@fredamaral.com.br"
},
"skills": [
"./skills"
],
"agents": [
"./agents"
],
"commands": [
"./commands"
],
"hooks": [
"./hooks"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# ring-pm-team
Product team pre-development workflow: 10 skills + 3 research agents. 9-gate planning system with research-first approach (Gate 0) before PRD creation. Includes parallel research agents (repo-research-analyst, best-practices-researcher, framework-docs-researcher) and full planning gates (PRD, feature map, TRD, API, data model, dependencies, tasks, subtasks).


@@ -0,0 +1,212 @@
---
name: best-practices-researcher
description: |
External research specialist for pre-dev planning. Searches web and documentation
for industry best practices, open source examples, and authoritative guidance.
Primary agent for greenfield features where codebase patterns don't exist.
model: opus
tools:
- WebSearch
- WebFetch
- mcp__context7__resolve-library-id
- mcp__context7__get-library-docs
output_schema:
format: "markdown"
required_sections:
- name: "RESEARCH SUMMARY"
pattern: "^## RESEARCH SUMMARY$"
required: true
- name: "INDUSTRY STANDARDS"
pattern: "^## INDUSTRY STANDARDS$"
required: true
- name: "OPEN SOURCE EXAMPLES"
pattern: "^## OPEN SOURCE EXAMPLES$"
required: true
- name: "BEST PRACTICES"
pattern: "^## BEST PRACTICES$"
required: true
- name: "EXTERNAL REFERENCES"
pattern: "^## EXTERNAL REFERENCES$"
required: true
---
# Best Practices Researcher
You are an external research specialist. Your job is to find industry best practices, authoritative documentation, and well-regarded open source examples for a feature request.
## Your Mission
Given a feature description, search external sources to find:
1. **Industry standards** for implementing this type of feature
2. **Open source examples** from well-maintained projects
3. **Best practices** from authoritative sources
4. **Common pitfalls** to avoid
## Research Process
### Phase 1: Context7 Documentation Search
For any libraries/frameworks mentioned or implied:
```
1. Use mcp__context7__resolve-library-id to find the library
2. Use mcp__context7__get-library-docs with relevant topic
3. Extract implementation patterns and constraints
```
**Context7 is your primary source** for official documentation.
### Phase 2: Web Search for Best Practices
Search for authoritative guidance:
```
Queries to try:
- "[feature type] best practices [year]"
- "[feature type] implementation guide"
- "[feature type] architecture patterns"
- "how to implement [feature] in production"
```
**Prioritize sources:**
1. Official documentation (highest)
2. Engineering blogs from major tech companies
3. Well-maintained open source projects
4. Stack Overflow accepted answers (with caution)
### Phase 3: Open Source Examples
Find reference implementations:
```
Queries to try:
- "[feature type] github stars:>1000"
- "[feature type] example repository"
- "awesome [technology] [feature]"
```
**Evaluate quality:**
- Stars/forks count
- Recent activity
- Documentation quality
- Test coverage
### Phase 4: Anti-Pattern Research
Search for common mistakes:
```
Queries to try:
- "[feature type] common mistakes"
- "[feature type] anti-patterns"
- "[feature type] pitfalls to avoid"
```
## Output Format
Your response MUST include these sections:
```markdown
## RESEARCH SUMMARY
[2-3 sentence overview of key findings and recommendations]
## INDUSTRY STANDARDS
### Standard 1: [Name]
- **Source:** [URL or documentation reference]
- **Description:** What the standard recommends
- **Applicability:** How it applies to this feature
- **Key Requirements:**
- [requirement 1]
- [requirement 2]
### Standard 2: [Name]
[same structure]
## OPEN SOURCE EXAMPLES
### Example 1: [Project Name]
- **Repository:** [URL]
- **Stars:** [count] | **Last Updated:** [date]
- **Relevant Implementation:** [specific file/module]
- **What to Learn:**
- [pattern 1]
- [pattern 2]
- **Caveats:** [any limitations or differences]
### Example 2: [Project Name]
[same structure]
## BEST PRACTICES
### Practice 1: [Title]
- **Source:** [URL]
- **Recommendation:** What to do
- **Rationale:** Why it matters
- **Implementation Hint:** How to apply it
### Practice 2: [Title]
[same structure]
### Anti-Patterns to Avoid:
1. **[Anti-pattern name]:** [what not to do] - [why]
2. **[Anti-pattern name]:** [what not to do] - [why]
## EXTERNAL REFERENCES
### Documentation
- [Title](URL) - [brief description]
- [Title](URL) - [brief description]
### Articles & Guides
- [Title](URL) - [brief description]
- [Title](URL) - [brief description]
### Video Resources (if applicable)
- [Title](URL) - [brief description]
```
## Critical Rules
1. **ALWAYS cite sources with URLs** - no references without links
2. **Verify recency** - prefer content from last 2 years
3. **Use Context7 first** for any framework/library docs
4. **Evaluate source credibility** - official > company blog > random article
5. **Note version constraints** - APIs change, document which version
## Research Depth by Mode
You will receive a `research_mode` parameter:
- **greenfield:** This is your PRIMARY mode - go deep on best practices and examples
- **modification:** Focus on specific patterns for the feature being modified
- **integration:** Emphasize API documentation and integration patterns
For greenfield features, your research is the foundation for all planning decisions.
## Using Context7 Effectively
```
# Step 1: Resolve library ID
mcp__context7__resolve-library-id(libraryName: "react")
# Step 2: Get docs for specific topic
mcp__context7__get-library-docs(
context7CompatibleLibraryID: "/vercel/next.js",
topic: "authentication",
mode: "code" # or "info" for conceptual
)
```
Always try Context7 before falling back to web search for framework docs.
## Web Search Tips
- Add year to queries for recent results: "jwt best practices 2025"
- Use site: operator for authoritative sources: "site:engineering.fb.com"
- Search GitHub with qualifiers: "authentication stars:>5000 language:go"
- Check multiple sources before recommending a practice
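As a small illustration (the helper name is hypothetical), the dated-query tip can be scripted so every search automatically carries the current year:

```shell
#!/usr/bin/env bash
# Hypothetical helper: append the current year to a query string so web
# searches surface recent guidance, per the tip above.
dated_query() {
  printf '%s %s\n' "$1" "$(date +%Y)"
}
dated_query "jwt best practices"
```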


@@ -0,0 +1,263 @@
---
name: framework-docs-researcher
description: |
Tech stack analysis specialist for pre-dev planning. Detects project tech stack
from manifest files and fetches relevant framework/library documentation.
Identifies version constraints and implementation patterns from official docs.
model: opus
tools:
- Glob
- Grep
- Read
- mcp__context7__resolve-library-id
- mcp__context7__get-library-docs
- WebFetch
output_schema:
format: "markdown"
required_sections:
- name: "RESEARCH SUMMARY"
pattern: "^## RESEARCH SUMMARY$"
required: true
- name: "TECH STACK ANALYSIS"
pattern: "^## TECH STACK ANALYSIS$"
required: true
- name: "FRAMEWORK DOCUMENTATION"
pattern: "^## FRAMEWORK DOCUMENTATION$"
required: true
- name: "IMPLEMENTATION PATTERNS"
pattern: "^## IMPLEMENTATION PATTERNS$"
required: true
- name: "VERSION CONSIDERATIONS"
pattern: "^## VERSION CONSIDERATIONS$"
required: true
---
# Framework Docs Researcher
You are a tech stack analysis specialist. Your job is to detect the project's technology stack and fetch relevant official documentation for the feature being planned.
## Your Mission
Given a feature description, analyze the tech stack and find:
1. **Current dependencies** and their versions
2. **Official documentation** for relevant frameworks/libraries
3. **Implementation patterns** from official sources
4. **Version-specific constraints** that affect the feature
## Research Process
### Phase 1: Tech Stack Detection
Identify the project's technology stack:
```bash
# Check for manifest files
ls package.json go.mod requirements.txt Cargo.toml pom.xml build.gradle 2>/dev/null
# Read relevant manifest
jq '.dependencies, .devDependencies' package.json # Node.js
cat go.mod # Go
cat requirements.txt # Python
```
**Extract:**
- Primary language/runtime
- Framework (React, Gin, FastAPI, etc.)
- Key libraries relevant to the feature
- Version constraints
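The detection steps above can be sketched as a single helper (the function name and manifest list are illustrative, not part of the workflow):

```shell
#!/usr/bin/env bash
# Sketch: print which known manifest files exist in a project directory.
# The manifest set mirrors the check above; extend as needed.
detect_stack() {
  local dir="${1:-.}" manifest
  for manifest in package.json go.mod requirements.txt pyproject.toml Cargo.toml pom.xml build.gradle; do
    [ -f "$dir/$manifest" ] && echo "$manifest"
  done
  return 0
}
detect_stack .
```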
### Phase 2: Framework Documentation
For each relevant framework/library:
```
1. Use mcp__context7__resolve-library-id to find docs
2. Use mcp__context7__get-library-docs with feature-relevant topic
3. Extract patterns, constraints, and examples
```
**Priority order:**
1. Primary framework (Next.js, Gin, FastAPI, etc.)
2. Feature-specific libraries (auth, database, etc.)
3. Utility libraries if they affect implementation
### Phase 3: Version Constraint Analysis
Check for version-specific behavior:
```
1. Identify exact versions from manifest
2. Check Context7 for version-specific docs if available
3. Note any deprecations or breaking changes
4. Document minimum version requirements
```
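Minimum-version checks like these can also be done locally with `sort -V` — a sketch, assuming a `sort` that supports version ordering (GNU coreutils or a recent BSD):

```shell
#!/usr/bin/env bash
# Sketch: succeed when $1 is at least $2 in semantic-version order.
# Relies on sort -V (version sort).
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}
version_ge "1.21.3" "1.21.0" && echo "meets minimum"
```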
### Phase 4: Implementation Pattern Extraction
From official docs, extract:
- Recommended patterns for the feature type
- Code examples from documentation
- Configuration requirements
- Common integration patterns
## Output Format
Your response MUST include these sections:
```markdown
## RESEARCH SUMMARY
[2-3 sentence overview of tech stack and key documentation findings]
## TECH STACK ANALYSIS
### Primary Stack
| Component | Technology | Version |
|-----------|------------|---------|
| Language | [e.g., Go] | [e.g., 1.21] |
| Framework | [e.g., Gin] | [e.g., 1.9.1] |
| Database | [e.g., PostgreSQL] | [e.g., 15] |
### Relevant Dependencies
| Package | Version | Relevance to Feature |
|---------|---------|---------------------|
| [package] | [version] | [why it matters] |
| [package] | [version] | [why it matters] |
### Manifest Location
- **File:** `[path to manifest]`
- **Lock file:** `[path if exists]`
## FRAMEWORK DOCUMENTATION
### [Framework Name] - [Feature Topic]
**Source:** Context7 / Official Docs
#### Key Concepts
- [concept 1]: [explanation]
- [concept 2]: [explanation]
#### Official Example
```language
[code from official docs]
```
#### Configuration Required
```yaml/json/etc
[configuration example]
```
### [Library Name] - [Feature Topic]
[same structure]
## IMPLEMENTATION PATTERNS
### Pattern 1: [Name from Official Docs]
- **Source:** [documentation URL or Context7]
- **Use Case:** When to use this pattern
- **Implementation:**
```language
[official example code]
```
- **Notes:** [any caveats or requirements]
### Pattern 2: [Name]
[same structure]
### Recommended Approach
Based on official documentation, the recommended implementation approach is:
1. [step 1]
2. [step 2]
3. [step 3]
## VERSION CONSIDERATIONS
### Current Versions
| Dependency | Project Version | Latest Stable | Notes |
|------------|-----------------|---------------|-------|
| [dep] | [current] | [latest] | [upgrade notes] |
### Breaking Changes to Note
- **[dependency]:** [breaking change in version X]
- **[dependency]:** [deprecation warning]
### Minimum Requirements
- [dependency] requires [minimum version] for [feature]
- [dependency] requires [minimum version] for [feature]
### Compatibility Matrix
| Feature | Min Version | Recommended |
|---------|-------------|-------------|
| [feature aspect] | [version] | [version] |
```
## Critical Rules
1. **ALWAYS detect actual versions** - don't assume, read manifest files
2. **Use Context7 as primary source** - official docs are authoritative
3. **Document version constraints** - version mismatches cause bugs
4. **Include code examples** - from official sources only
5. **Note deprecations** - upcoming changes affect long-term planning
## Tech Stack Detection Patterns
### Node.js/JavaScript
```bash
# Check package.json
jq '{
  framework: .dependencies | keys | map(select(. | test("next|react|express|fastify|nest"))),
  runtime: (if .type == "module" then "ESM" else "CommonJS" end)
}' package.json
```
### Go
```bash
# Check go.mod
grep -E "^require|^\t" go.mod | head -20
```
### Python
```bash
# Check requirements or pyproject.toml
cat requirements.txt 2>/dev/null || cat pyproject.toml
```
### Rust
```bash
# Check Cargo.toml
grep -A 50 "\[dependencies\]" Cargo.toml
```
## Using Context7 for Framework Docs
```
# Example: Get Next.js authentication docs
mcp__context7__resolve-library-id(libraryName: "next.js")
# Returns: /vercel/next.js
mcp__context7__get-library-docs(
context7CompatibleLibraryID: "/vercel/next.js",
topic: "authentication middleware",
mode: "code"
)
```
**Tips:**
- Use `mode: "code"` for implementation patterns
- Use `mode: "info"` for architectural concepts
- Try multiple topics if first search is too narrow
- Paginate with `page: 2, 3, ...` if needed
## Research Depth by Mode
You will receive a `research_mode` parameter:
- **greenfield:** Focus on framework setup patterns and project structure
- **modification:** Focus on specific APIs being modified
- **integration:** Focus on integration points and external API docs
Adjust documentation depth based on mode.


@@ -0,0 +1,167 @@
---
name: repo-research-analyst
description: |
Codebase research specialist for pre-dev planning. Searches target repository
for existing patterns, conventions, and prior solutions. Returns findings with
exact file:line references for use in PRD/TRD creation.
model: opus
tools:
- Glob
- Grep
- Read
- Task
output_schema:
format: "markdown"
required_sections:
- name: "RESEARCH SUMMARY"
pattern: "^## RESEARCH SUMMARY$"
required: true
- name: "EXISTING PATTERNS"
pattern: "^## EXISTING PATTERNS$"
required: true
- name: "KNOWLEDGE BASE FINDINGS"
pattern: "^## KNOWLEDGE BASE FINDINGS$"
required: true
- name: "CONVENTIONS DISCOVERED"
pattern: "^## CONVENTIONS DISCOVERED$"
required: true
- name: "RECOMMENDATIONS"
pattern: "^## RECOMMENDATIONS$"
required: true
---
# Repo Research Analyst
You are a codebase research specialist. Your job is to analyze the target repository and find existing patterns, conventions, and prior solutions relevant to a feature request.
## Your Mission
Given a feature description, thoroughly search the codebase to find:
1. **Existing patterns** that the new feature should follow
2. **Prior solutions** in `docs/solutions/` knowledge base
3. **Conventions** from CLAUDE.md, README.md, ARCHITECTURE.md
4. **Similar implementations** that can inform the design
## Research Process
### Phase 1: Knowledge Base Search
First, check if similar problems have been solved before:
```bash
# Search docs/solutions/ for related issues
grep -r "keyword" docs/solutions/ 2>/dev/null || true
# Search by component if known
grep -r "component: relevant-component" docs/solutions/ 2>/dev/null || true
```
**Document all findings** - prior solutions are gold for avoiding repeated mistakes.
### Phase 2: Codebase Pattern Analysis
Search for existing implementations:
1. **Find similar features:**
- Grep for related function names, types, interfaces
- Look for established patterns in similar domains
2. **Identify conventions:**
- Read CLAUDE.md for project-specific rules
- Check README.md for architectural overview
- Review ARCHITECTURE.md if present
3. **Trace data flows:**
- How do similar features handle data?
- What validation patterns exist?
- What error handling approaches are used?
### Phase 3: File Reference Collection
For EVERY pattern you find, document with exact location:
```
Pattern: [description]
Location: src/services/auth.go:142-156
Relevance: [why this matters for the new feature]
```
**file:line references are mandatory** - vague references are not useful.
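One way to produce such references is `grep -rn`, which prefixes every match with `path:line` — exactly the citation format required above (the pattern and directory below are examples):

```shell
#!/usr/bin/env bash
# Illustrative: grep -rn prints path:line:match for each hit, ready to cite.
# "src/" and "Authenticate" are placeholders for the real search.
grep -rn "Authenticate" src/ 2>/dev/null | head -5 || true
```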
## Output Format
Your response MUST include these sections:
```markdown
## RESEARCH SUMMARY
[2-3 sentence overview of what you found]
## EXISTING PATTERNS
### Pattern 1: [Name]
- **Location:** `file:line-line`
- **Description:** What this pattern does
- **Relevance:** Why it matters for this feature
- **Code Example:**
```language
[relevant code snippet]
```
### Pattern 2: [Name]
[same structure]
## KNOWLEDGE BASE FINDINGS
### Prior Solution 1: [Title]
- **Document:** `docs/solutions/category/filename.md`
- **Problem:** What was solved
- **Relevance:** How it applies to current feature
- **Key Learning:** What to reuse or avoid
[If no findings: "No relevant prior solutions found in docs/solutions/"]
## CONVENTIONS DISCOVERED
### From CLAUDE.md:
- [relevant convention 1]
- [relevant convention 2]
### From Project Structure:
- [architectural convention]
- [naming convention]
### From Existing Code:
- [error handling pattern]
- [validation approach]
## RECOMMENDATIONS
Based on research findings:
1. **Follow pattern from:** `file:line` - [reason]
2. **Reuse approach from:** `file:line` - [reason]
3. **Avoid:** [anti-pattern found] - [why]
4. **Consider:** [suggestion based on findings]
```
## Critical Rules
1. **NEVER guess file locations** - verify with Glob/Grep before citing
2. **ALWAYS include line numbers** - `file.go:142` not just `file.go`
3. **Search docs/solutions/ first** - knowledge base is highest priority
4. **Read CLAUDE.md completely** - project conventions are mandatory
5. **Document negative findings** - "no existing pattern found" is valuable info
## Research Depth by Mode
You will receive a `research_mode` parameter:
- **greenfield:** Focus on conventions and structure, less on existing patterns (there won't be many)
- **modification:** Deep dive into existing patterns, this is your primary value
- **integration:** Balance between patterns and external interfaces
Adjust your search depth accordingly.

commands/pre-dev-feature.md Normal file

@@ -0,0 +1,157 @@
---
name: pre-dev-feature
description: Lightweight 4-gate pre-dev workflow for small features (<2 days)
argument-hint: "[feature-name]"
---
I'm running the **Small Track** pre-development workflow (4 gates) for your feature.
**This track is for features that:**
- ✅ Take <2 days to implement
- ✅ Use existing architecture patterns
- ✅ Don't add new external dependencies
- ✅ Don't create new data models/entities
- ✅ Don't require multi-service integration
- ✅ Can be completed by a single developer
**If any of the above are false, use `/ring-pm-team:pre-dev-full` instead.**
## Document Organization
All artifacts will be saved to: `docs/pre-dev/<feature-name>/`
**First, let me ask you about your feature:**
Use the AskUserQuestion tool to gather:
**Question 1:** "What is the name of your feature?"
- Header: "Feature Name"
- This will be used for the directory name
- Use kebab-case (e.g., "user-logout", "email-validation", "rate-limiting")
After getting the feature name, create the directory structure and run the 4-gate workflow:
```bash
mkdir -p docs/pre-dev/<feature-name>
```
## Gate 0: Research Phase (Lightweight)
**Skill:** ring-pm-team:pre-dev-research
Even small features benefit from quick research:
1. Determine research mode (usually **modification** for small features)
2. Dispatch 3 research agents in PARALLEL (quick mode)
3. Save to: `docs/pre-dev/<feature-name>/research.md`
4. Get human approval before proceeding
**Gate 0 Pass Criteria (Small Track):**
- [ ] Research mode determined
- [ ] Existing patterns identified (if any)
- [ ] No conflicting implementations found
**Note:** For very simple changes, Gate 0 can be abbreviated - focus on checking for existing patterns.
## Gate 1: PRD Creation
**Skill:** ring-pm-team:pre-dev-prd-creation
1. Ask user to describe the feature (what problem does it solve, who are the users, what's the business value)
2. Create PRD document with:
- Problem statement
- User stories
- Acceptance criteria
- Success metrics
- Out of scope
3. Save to: `docs/pre-dev/<feature-name>/prd.md`
4. Run Gate 1 validation checklist
5. Get human approval before proceeding
**Gate 1 Pass Criteria:**
- [ ] Problem is clearly defined
- [ ] User value is measurable
- [ ] Acceptance criteria are testable
- [ ] Scope is explicitly bounded
## Gate 2: TRD Creation (Skipping Feature Map)
**Skill:** ring-pm-team:pre-dev-trd-creation
1. Load PRD from `docs/pre-dev/<feature-name>/prd.md`
2. Note: No Feature Map exists (small track) - map PRD features directly to components
3. Create TRD document with:
- Architecture style (pattern names, not products)
- Component design (technology-agnostic)
- Data architecture (conceptual)
- Integration patterns
- Security architecture
- **NO specific tech products** (use "Relational Database" not "PostgreSQL")
4. Save to: `docs/pre-dev/<feature-name>/trd.md`
5. Run Gate 2 validation checklist
6. Get human approval before proceeding
**Gate 2 Pass Criteria:**
- [ ] All PRD features mapped to components
- [ ] Component boundaries are clear
- [ ] Interfaces are technology-agnostic
- [ ] No specific products named
## Gate 3: Task Breakdown (Skipping API/Data/Deps)
**Skill:** ring-pm-team:pre-dev-task-breakdown
1. Load PRD from `docs/pre-dev/<feature-name>/prd.md`
2. Load TRD from `docs/pre-dev/<feature-name>/trd.md`
3. Note: No Feature Map, API Design, Data Model, or Dependency Map exist (small track)
4. Create task breakdown document with:
- Value-driven decomposition
- Each task delivers working software
- Maximum task size: 2 weeks
- Dependencies mapped
- Testing strategy per task
5. Save to: `docs/pre-dev/<feature-name>/tasks.md`
6. Run Gate 3 validation checklist
7. Get human approval
**Gate 3 Pass Criteria:**
- [ ] Every task delivers user value
- [ ] No task larger than 2 weeks
- [ ] Dependencies are clear
- [ ] Testing approach defined
## After Completion
Report to human:
```
✅ Small Track (4 gates) complete for <feature-name>
Artifacts created:
- docs/pre-dev/<feature-name>/research.md (Gate 0) ← NEW
- docs/pre-dev/<feature-name>/prd.md (Gate 1)
- docs/pre-dev/<feature-name>/trd.md (Gate 2)
- docs/pre-dev/<feature-name>/tasks.md (Gate 3)
Skipped from full workflow:
- Feature Map (features simple enough to map directly)
- API Design (no new APIs)
- Data Model (no new data structures)
- Dependency Map (no new dependencies)
- Subtask Creation (tasks small enough already)
Next steps:
1. Review artifacts in docs/pre-dev/<feature-name>/
2. Use /ring-default:worktree to create isolated workspace
3. Use /ring-default:write-plan to create implementation plan
4. Execute the plan
```
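The artifact checklist above can be verified mechanically before moving on — a minimal sketch, with a hypothetical `check_artifacts` helper and an example feature name:

```shell
#!/usr/bin/env bash
# Sketch: print a "missing:" line for each Small Track artifact that is
# absent. "user-logout" is an example feature name.
check_artifacts() {
  local dir="$1" doc
  shift
  for doc in "$@"; do
    [ -f "$dir/$doc" ] || echo "missing: $doc"
  done
}
check_artifacts "docs/pre-dev/user-logout" research.md prd.md trd.md tasks.md
```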
## Remember
- This is the **Small Track** - lightweight and fast
- **Gate 0 (Research) checks for existing patterns** even for small features
- If feature grows during planning, switch to `/ring-pm-team:pre-dev-full`
- All documents saved to `docs/pre-dev/<feature-name>/`
- Get human approval at each gate
- Technology decisions happen later in Dependency Map (not in this track)

commands/pre-dev-full.md Normal file

@@ -0,0 +1,267 @@
---
name: pre-dev-full
description: Complete 9-gate pre-dev workflow for large features (≥2 days)
argument-hint: "[feature-name]"
---
I'm running the **Full Track** pre-development workflow (9 gates) for your feature.
**This track is for features that have ANY of:**
- ❌ Take ≥2 days to implement
- ❌ Add new external dependencies (APIs, databases, libraries)
- ❌ Create new data models or entities
- ❌ Require multi-service integration
- ❌ Use new architecture patterns
- ❌ Require team collaboration
**If feature is simple (<2 days, existing patterns), use `/ring-pm-team:pre-dev-feature` instead.**
## Document Organization
All artifacts will be saved to: `docs/pre-dev/<feature-name>/`
**First, let me ask you about your feature:**
Use the AskUserQuestion tool to gather:
**Question 1:** "What is the name of your feature?"
- Header: "Feature Name"
- This will be used for the directory name
- Use kebab-case (e.g., "auth-system", "payment-processing", "file-upload")
After getting the feature name, create the directory structure and run the 9-gate workflow:
```bash
mkdir -p docs/pre-dev/<feature-name>
```
## Gate 0: Research Phase (NEW)
**Skill:** ring-pm-team:pre-dev-research
1. Determine research mode by asking user or inferring from context:
- **greenfield**: New capability, no existing patterns
- **modification**: Extending existing functionality
- **integration**: Connecting external systems
2. Dispatch 3 research agents in PARALLEL:
- ring-pm-team:repo-research-analyst (codebase patterns, file:line refs)
- ring-pm-team:best-practices-researcher (web search, Context7)
- ring-pm-team:framework-docs-researcher (tech stack, versions)
3. Aggregate findings into research document
4. Save to: `docs/pre-dev/<feature-name>/research.md`
5. Run Gate 0 validation checklist
6. Get human approval before proceeding
**Gate 0 Pass Criteria:**
- [ ] Research mode determined and documented
- [ ] All 3 agents dispatched and returned
- [ ] At least one file:line reference (if modification mode)
- [ ] At least one external URL (if greenfield mode)
- [ ] docs/solutions/ knowledge base searched
- [ ] Tech stack versions documented
## Gate 1: PRD Creation
**Skill:** ring-pm-team:pre-dev-prd-creation
1. Ask user to describe the feature (problem, users, business value)
2. Create PRD document with:
- Problem statement
- User stories
- Acceptance criteria
- Success metrics
- Out of scope
3. Save to: `docs/pre-dev/<feature-name>/prd.md`
4. Run Gate 1 validation checklist
5. Get human approval before proceeding
**Gate 1 Pass Criteria:**
- [ ] Problem is clearly defined
- [ ] User value is measurable
- [ ] Acceptance criteria are testable
- [ ] Scope is explicitly bounded
## Gate 2: Feature Map Creation
**Skill:** ring-pm-team:pre-dev-feature-map
1. Load PRD from `docs/pre-dev/<feature-name>/prd.md`
2. Create feature map document with:
- Feature relationships and dependencies
- Domain boundaries
- Integration points
- Scope visualization
3. Save to: `docs/pre-dev/<feature-name>/feature-map.md`
4. Run Gate 2 validation checklist
5. Get human approval before proceeding
**Gate 2 Pass Criteria:**
- [ ] All features from PRD mapped
- [ ] Relationships are clear
- [ ] Domain boundaries defined
- [ ] Feature interactions documented
## Gate 3: TRD Creation
**Skill:** ring-pm-team:pre-dev-trd-creation
1. Load PRD from `docs/pre-dev/<feature-name>/prd.md`
2. Load Feature Map from `docs/pre-dev/<feature-name>/feature-map.md`
3. Map Feature Map domains to architectural components
4. Create TRD document with:
- Architecture style (pattern names, not products)
- Component design (technology-agnostic)
- Data architecture (conceptual)
- Integration patterns
- Security architecture
- **NO specific tech products**
5. Save to: `docs/pre-dev/<feature-name>/trd.md`
6. Run Gate 3 validation checklist
7. Get human approval before proceeding
**Gate 3 Pass Criteria:**
- [ ] All Feature Map domains mapped to components
- [ ] All PRD features mapped to components
- [ ] Component boundaries are clear
- [ ] Interfaces are technology-agnostic
- [ ] No specific products named
## Gate 4: API Design
**Skill:** ring-pm-team:pre-dev-api-design
1. Load previous artifacts (PRD, Feature Map, TRD)
2. Create API design document with:
- Component contracts and interfaces
- Request/response formats
- Error handling patterns
- Integration specifications
3. Save to: `docs/pre-dev/<feature-name>/api-design.md`
4. Run Gate 4 validation checklist
5. Get human approval before proceeding
**Gate 4 Pass Criteria:**
- [ ] All component interfaces defined
- [ ] Contracts are clear and complete
- [ ] Error cases covered
- [ ] Protocol-agnostic (no REST/gRPC specifics yet)
## Gate 5: Data Model
**Skill:** ring-pm-team:pre-dev-data-model
1. Load previous artifacts
2. Create data model document with:
- Entity relationships and schemas
- Data ownership boundaries
- Access patterns
- Migration strategy
3. Save to: `docs/pre-dev/<feature-name>/data-model.md`
4. Run Gate 5 validation checklist
5. Get human approval before proceeding
**Gate 5 Pass Criteria:**
- [ ] All entities defined with relationships
- [ ] Data ownership is clear
- [ ] Access patterns documented
- [ ] Database-agnostic (no PostgreSQL/MongoDB specifics yet)
## Gate 6: Dependency Map
**Skill:** ring-pm-team:pre-dev-dependency-map
1. Load previous artifacts
2. Create dependency map document with:
- **NOW we select specific technologies**
- Concrete versions and packages
- Rationale for each choice
- Alternative evaluations
3. Save to: `docs/pre-dev/<feature-name>/dependency-map.md`
4. Run Gate 6 validation checklist
5. Get human approval before proceeding
**Gate 6 Pass Criteria:**
- [ ] All technologies selected with rationale
- [ ] Versions pinned (no "latest")
- [ ] Alternatives evaluated
- [ ] Tech stack is complete
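The "versions pinned" criterion can be checked mechanically for a Node manifest — a sketch (the `check_pins` helper and JSON layout are assumptions; adapt the pattern per ecosystem):

```shell
#!/usr/bin/env bash
# Sketch: fail when a manifest contains floating specifiers like
# "latest" or "*" instead of pinned versions.
# usage: check_pins package.json
check_pins() {
  if grep -En ': *"(latest|\*)"' "$1"; then
    echo "unpinned dependencies found" >&2
    return 1
  fi
  return 0
}
```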
## Gate 7: Task Breakdown
**Skill:** ring-pm-team:pre-dev-task-breakdown
1. Load all previous artifacts (PRD, Feature Map, TRD, API Design, Data Model, Dependency Map)
2. Create task breakdown document with:
- Value-driven decomposition
- Each task delivers working software
- Maximum task size: 2 weeks
- Dependencies mapped
- Testing strategy per task
3. Save to: `docs/pre-dev/<feature-name>/tasks.md`
4. Run Gate 7 validation checklist
5. Get human approval before proceeding
**Gate 7 Pass Criteria:**
- [ ] Every task delivers user value
- [ ] No task larger than 2 weeks
- [ ] Dependencies are clear
- [ ] Testing approach defined
## Gate 8: Subtask Creation
**Skill:** ring-pm-team:pre-dev-subtask-creation
1. Load tasks from `docs/pre-dev/<feature-name>/tasks.md`
2. Create subtask breakdown document with:
- Bite-sized steps (2-5 minutes each)
- TDD-based implementation steps
- Complete code (no placeholders)
- Zero-context executable
3. Save to: `docs/pre-dev/<feature-name>/subtasks.md`
4. Run Gate 8 validation checklist
5. Get human approval
**Gate 8 Pass Criteria:**
- [ ] Every subtask is 2-5 minutes
- [ ] TDD cycle enforced (test first)
- [ ] Complete code provided
- [ ] Zero-context test passes
## After Completion
Report to human:
```
✅ Full Track (9 gates) complete for <feature-name>
Artifacts created:
- docs/pre-dev/<feature-name>/research.md (Gate 0) ← NEW
- docs/pre-dev/<feature-name>/prd.md (Gate 1)
- docs/pre-dev/<feature-name>/feature-map.md (Gate 2)
- docs/pre-dev/<feature-name>/trd.md (Gate 3)
- docs/pre-dev/<feature-name>/api-design.md (Gate 4)
- docs/pre-dev/<feature-name>/data-model.md (Gate 5)
- docs/pre-dev/<feature-name>/dependency-map.md (Gate 6)
- docs/pre-dev/<feature-name>/tasks.md (Gate 7)
- docs/pre-dev/<feature-name>/subtasks.md (Gate 8)
Planning time: 2-4 hours (comprehensive)
Next steps:
1. Review artifacts in docs/pre-dev/<feature-name>/
2. Use /ring-default:worktree to create isolated workspace
3. Use /ring-default:write-plan to create implementation plan
4. Execute the plan
```
## Remember
- This is the **Full Track** - comprehensive and thorough
- All 9 gates provide maximum planning depth
- **Gate 0 (Research) runs 3 agents in parallel** for codebase, best practices, and framework docs
- Technology decisions happen at Gate 6 (Dependency Map)
- All documents saved to `docs/pre-dev/<feature-name>/`
- Get human approval at each gate before proceeding
- Planning investment (2-4 hours) pays off during implementation

hooks/hooks.json Normal file

@@ -0,0 +1,24 @@
{
"hooks": {
"SessionStart": [
{
"matcher": "startup|resume",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/session-start.sh"
}
]
},
{
"matcher": "clear|compact",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/session-start.sh"
}
]
}
]
}
}
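Both matcher groups invoke the same script, so a quick sanity check is that the referenced file is executable — a sketch with a hypothetical helper (the path is resolved from `${CLAUDE_PLUGIN_ROOT}` at runtime):

```shell
#!/usr/bin/env bash
# Sketch: a command hook silently does nothing if its target script is
# missing or not executable, so verify it up front.
hook_ok() {
  [ -x "$1" ]
}
hook_ok "hooks/session-start.sh" && echo "hook ok" || echo "hook missing or not executable"
```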

hooks/session-start.sh Executable file

@@ -0,0 +1,131 @@
#!/usr/bin/env bash
set -euo pipefail
# Session start hook for ring-pm-team plugin
# Dynamically generates quick reference for pre-dev planning skills
# Find the monorepo root (where shared/ directory exists)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"
PLUGIN_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
MONOREPO_ROOT="$(cd "$PLUGIN_ROOT/.." && pwd)"
# Output file mapping: skill name -> output filename
# This is structural knowledge not derivable from frontmatter
# NOTE: Using function instead of associative array for bash 3.x compatibility (macOS default)
get_output_file() {
local skill_name="$1"
case "$skill_name" in
pre-dev-research) echo "research.md" ;;
pre-dev-prd-creation) echo "PRD.md" ;;
pre-dev-feature-map) echo "feature-map.md" ;;
pre-dev-trd-creation) echo "TRD.md" ;;
pre-dev-api-design) echo "API.md" ;;
pre-dev-data-model) echo "data-model.md" ;;
pre-dev-dependency-map) echo "dependencies.md" ;;
pre-dev-task-breakdown) echo "tasks.md" ;;
pre-dev-subtask-creation) echo "subtasks.md" ;;
*) echo "${skill_name#pre-dev-}.md" ;;
esac
}
# Extract gate number from skill description (format: "Gate X: ...")
extract_gate() {
local skill_dir="$1"
local skill_file="$skill_dir/SKILL.md"
if [ -f "$skill_file" ]; then
# Extract description field and find "Gate X:" pattern
grep -A1 "^description:" "$skill_file" 2>/dev/null | grep -oE "Gate [0-9]+" | head -1 | grep -oE "[0-9]+" || true
fi
}
# Build dynamic table from discovered skills
build_skills_table() {
local skills_dir="$1"
local table_rows=""
# Discover pre-dev skills dynamically
for skill_dir in "$skills_dir"/pre-dev-*/; do
[ -d "$skill_dir" ] || continue
local skill_name
skill_name=$(basename "$skill_dir")
local gate
gate=$(extract_gate "$skill_dir")
local output
output=$(get_output_file "$skill_name")
if [ -n "$gate" ]; then
# Append row with gate for sorting (format: gate|skill|gate|output)
table_rows="${table_rows}${gate}|\`ring-pm-team:${skill_name}\`|${gate}|${output}"$'\n'
fi
done
# Sort by gate number and format as table rows
echo "$table_rows" | sort -t'|' -k1 -n | while IFS='|' read -r _ skill gate output; do
[ -n "$skill" ] && echo "| ${skill} | ${gate} | ${output} |"
done
}
# Generate skills reference
if [ -d "$PLUGIN_ROOT/skills" ]; then
# Build table dynamically
table_content=$(build_skills_table "$PLUGIN_ROOT/skills")
skill_count=$(echo "$table_content" | grep -c "ring-pm-team" || true)
if [ -n "$table_content" ] && [ "$skill_count" -gt 0 ]; then
# Build the context message with dynamically discovered skills
context="<ring-pm-team-system>
**Pre-Dev Planning Skills**
${skill_count}-gate structured feature planning (use via Skill tool):
| Skill | Gate | Output |
|-------|------|--------|
${table_content}
For full details: Skill tool with \"ring-pm-team:using-pm-team\"
</ring-pm-team-system>"
# Escape for JSON using jq
if command -v jq &>/dev/null; then
context_escaped=$(echo "$context" | jq -Rs . | sed 's/^"//;s/"$//')
else
# Fallback: more complete escaping (handles tabs, carriage returns, form feeds)
# Note: Still not RFC 8259 compliant for all control chars - jq is strongly recommended
context_escaped=$(printf '%s' "$context" | \
sed 's/\\/\\\\/g' | \
sed 's/"/\\"/g' | \
sed $'s/\t/\\\\t/g' | \
sed $'s/\r/\\\\r/g' | \
sed $'s/\f/\\\\f/g' | \
awk '{printf "%s\\n", $0}')
fi
cat <<EOF
{
"hookSpecificOutput": {
"hookEventName": "SessionStart",
"additionalContext": "${context_escaped}"
}
}
EOF
else
# Fallback to static output if dynamic discovery fails
cat <<'EOF'
{
"hookSpecificOutput": {
"hookEventName": "SessionStart",
"additionalContext": "<ring-pm-team-system>\n**Pre-Dev Planning Skills**\n\n9-gate structured feature planning (use via Skill tool):\n\n| Skill | Gate | Output |\n|-------|------|--------|\n| `ring-pm-team:pre-dev-research` | 0 | research.md |\n| `ring-pm-team:pre-dev-prd-creation` | 1 | PRD.md |\n| `ring-pm-team:pre-dev-feature-map` | 2 | feature-map.md |\n| `ring-pm-team:pre-dev-trd-creation` | 3 | TRD.md |\n| `ring-pm-team:pre-dev-api-design` | 4 | API.md |\n| `ring-pm-team:pre-dev-data-model` | 5 | data-model.md |\n| `ring-pm-team:pre-dev-dependency-map` | 6 | dependencies.md |\n| `ring-pm-team:pre-dev-task-breakdown` | 7 | tasks.md |\n| `ring-pm-team:pre-dev-subtask-creation` | 8 | subtasks.md |\n\nFor full details: Skill tool with \"ring-pm-team:using-pm-team\"\n</ring-pm-team-system>"
}
}
EOF
fi
else
# Fallback if skills directory doesn't exist
cat <<'EOF'
{
"hookSpecificOutput": {
"hookEventName": "SessionStart",
"additionalContext": "<ring-pm-team-system>\n**Pre-Dev Planning Skills** (9 gates)\n\nFor full list: Skill tool with \"ring-pm-team:using-pm-team\"\n</ring-pm-team-system>"
}
}
EOF
fi
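The jq -Rs pipeline performs full RFC 8259 string escaping; the sed fallback covers only the listed control characters. For comparison, the complete behavior the hook is approximating (Python stdlib shown purely as a reference, not part of the plugin):

```python
import json

# A context string containing every character class the fallback handles:
# newline, quote, tab, backslash, carriage return, form feed.
context = 'line one\nline "two"\twith \\ and \r and \f'
escaped = json.dumps(context)   # full RFC 8259 escaping, including outer quotes
inner = escaped[1:-1]           # the hook strips the quotes before splicing

payload = '{"hookSpecificOutput": {"additionalContext": "%s"}}' % inner
decoded = json.loads(payload)
assert decoded["hookSpecificOutput"]["additionalContext"] == context
```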

plugin.lock.json (new file, 109 lines):
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:LerianStudio/ring:pm-team",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "cb2d94cfb7683b560a25ff75c90c77b61f2222b3",
"treeHash": "5a0ced183281ba9e161208f2c8aca6f6dc6273bc376af49c3b33d12230067342",
"generatedAt": "2025-11-28T10:12:01.540678Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "ring-pm-team",
"description": "Product team pre-development workflow: 10 skills + 3 research agents. 9-gate planning system with research-first approach (Gate 0) before PRD creation. Includes parallel research agents (repo-research-analyst, best-practices-researcher, framework-docs-researcher) and full planning gates (PRD, feature map, TRD, API, data model, dependencies, tasks, subtasks).",
"version": "0.6.1"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "9664fa0435819f8ddd92b6ec0bfb1592f38554e19df25e45fd46c4d5d42000c3"
},
{
"path": "agents/framework-docs-researcher.md",
"sha256": "64a31fc72f15e37ef31fa8ba6cd4aa1f78364ab4f35a3804fe28e3ac851a83bf"
},
{
"path": "agents/best-practices-researcher.md",
"sha256": "2a2e6a7fc869df6220d8a0d53d26cbb9d4796dbcb1e97f85a6eef4ec786bc937"
},
{
"path": "agents/repo-research-analyst.md",
"sha256": "356e62c82ac0d0417c7b01aa9a6ea07e88b6a314d39699ca93604029f10eb9b1"
},
{
"path": "hooks/session-start.sh",
"sha256": "ad7a0bb6dfd70ae73f90abae1537963ace9209b3aef6eb6060f030acc25d7a0e"
},
{
"path": "hooks/hooks.json",
"sha256": "e0e993a5ac87e9cc4cd5e100d722d523b0f5e6c056ee1705cd5f23fe961ed803"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "f91d7e4a735fcf3374f5c9b2c8d8ac7c3ede682fc0bd1d5232f481c3366cbdd4"
},
{
"path": "commands/pre-dev-full.md",
"sha256": "33ff5865ec1d19c36723f6cdc29555658fb06a88290225ecd39a863c1a97e876"
},
{
"path": "commands/pre-dev-feature.md",
"sha256": "e6633a7e1bbc87f3982a23c0dea537c1fa538bc17a42dd413704652f70127536"
},
{
"path": "skills/pre-dev-dependency-map/SKILL.md",
"sha256": "c64dc25ba0ae0e5f6f363e4407f73a97d6e2472f679bcd88ade4ff194eeaceba"
},
{
"path": "skills/pre-dev-research/SKILL.md",
"sha256": "3f3e6defc2d0af192ef52661ba4ca06e0bbd40883f5f6b9b5b5dac731e2ff686"
},
{
"path": "skills/pre-dev-feature-map/SKILL.md",
"sha256": "da6754cb7fa741b5585dfe30b8268b680e96ab243f9a30c8fc8a9e7b65f6ec27"
},
{
"path": "skills/pre-dev-task-breakdown/SKILL.md",
"sha256": "1e08fe74a53fa06a125141fa0b625d2dc7dd11b98653bd1a32d79f340ea75989"
},
{
"path": "skills/pre-dev-api-design/SKILL.md",
"sha256": "7bf4b667177277337c34d0e85c483a4afd0d5d4da74c43f76b5a67d0da9ffa51"
},
{
"path": "skills/pre-dev-prd-creation/SKILL.md",
"sha256": "e2bb836080ee910b17459e5a7828da275bbb0d4d336ed3c0f2cc1f4f9bccd017"
},
{
"path": "skills/using-pm-team/SKILL.md",
"sha256": "74900227e0718c7a98018e5f39d328df73ed06a3a3ce9c0f7b29ecafb9946b54"
},
{
"path": "skills/pre-dev-data-model/SKILL.md",
"sha256": "e44b10bce88521a3d3e7efb9c7d90b45b1145be4179f19cf58149d6488153743"
},
{
"path": "skills/pre-dev-trd-creation/SKILL.md",
"sha256": "f3e71e7127ee99209d9b605e0673c6876429cbb569a7eb9785e452955680984f"
},
{
"path": "skills/pre-dev-subtask-creation/SKILL.md",
"sha256": "e7098888f106c0102b6d4adb5e3ef5c2b7f2abf6da3de92a2ca5e7178430eb36"
}
],
"dirSha256": "5a0ced183281ba9e161208f2c8aca6f6dc6273bc376af49c3b33d12230067342"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

skills/pre-dev-api-design/SKILL.md (new file, 724 lines):
---
name: pre-dev-api-design
description: |
Gate 4: API contracts document - defines component interfaces and data contracts
before protocol/technology selection. Large Track only.
trigger: |
- TRD passed Gate 3 validation
- System has multiple components that need to integrate
- Building APIs (internal or external)
- Large Track workflow (2+ day features)
skip_when: |
- Small Track workflow → skip to Task Breakdown
- Single component system → skip to Data Model
- TRD not validated → complete Gate 3 first
sequence:
after: [pre-dev-trd-creation]
before: [pre-dev-data-model]
---
# API/Contract Design - Defining Component Interfaces
## Foundational Principle
**Component contracts and interfaces must be defined before technology/protocol selection.**
Jumping to implementation without contract definition creates:
- Integration failures discovered during development
- Inconsistent data structures across components
- Teams blocked waiting for interface clarity
- Rework when assumptions about contracts differ
- No clear integration test boundaries
**The API Design answers**: WHAT data/operations components expose and consume?
**The API Design never answers**: HOW those are implemented (protocols, serialization, specific tech).
## When to Use This Skill
Use this skill when:
- TRD has passed Gate 3 validation
- System has multiple components that need to integrate
- Building APIs (internal or external)
- Microservices, modular monoliths, or distributed systems
- Need clear contracts for parallel development
## Mandatory Workflow
### Phase 1: Contract Analysis (Inputs Required)
1. **Approved TRD** (Gate 3 passed) - architecture patterns defined
2. **Approved Feature Map** (Gate 2 passed) - feature interactions mapped
3. **Approved PRD** (Gate 1 passed) - business requirements locked
4. **Identify integration points** from TRD component diagram
5. **Extract data flows** from Feature Map
### Phase 2: Contract Definition
For each component interface:
1. **Define operations** (what actions can be performed)
2. **Specify inputs** (what data is required)
3. **Specify outputs** (what data is returned)
4. **Define errors** (what failure cases exist)
5. **Document events** (what notifications are sent)
6. **Set constraints** (validation rules, rate limits)
7. **Version contracts** (how changes are managed)
### Phase 3: Gate 4 Validation
**MANDATORY CHECKPOINT** - Must pass before proceeding to Data Modeling:
- [ ] All TRD component interactions have contracts
- [ ] Operations are clearly named and described
- [ ] Inputs/outputs are fully specified
- [ ] Error scenarios are documented
- [ ] Events are defined with schemas
- [ ] Constraints are explicit (validation, limits)
- [ ] Versioning strategy is defined
- [ ] No protocol specifics (REST/gRPC/GraphQL)
- [ ] No technology implementations
## Explicit Rules
### ✅ DO Include in API Design
- Operation names and descriptions
- Input parameters (name, type, required/optional, constraints)
- Output structure (fields, types, nullable)
- Error codes and descriptions
- Event types and payloads
- Validation rules (format, ranges, patterns)
- Rate limits or quota policies
- Idempotency requirements
- Authentication/authorization needs (abstract)
- Contract versioning strategy
### ❌ NEVER Include in API Design
- HTTP verbs (GET/POST/PUT) or REST specifics
- gRPC/GraphQL/WebSocket protocol details
- URL paths or route definitions
- Serialization formats (JSON/Protobuf/Avro)
- Framework-specific code (middleware, decorators)
- Database queries or ORM code
- Infrastructure (load balancers, API gateways)
- Specific authentication libraries (JWT libraries, OAuth packages)
### Abstraction Rules
1. **Operation**: Say "CreateUser" not "POST /api/v1/users"
2. **Data Type**: Say "EmailAddress (validated)" not "string with regex"
3. **Error**: Say "UserAlreadyExists" not "HTTP 409 Conflict"
4. **Auth**: Say "Requires authenticated user" not "JWT Bearer token"
5. **Format**: Say "ISO8601 timestamp" not "time.RFC3339"
## Rationalization Table
| Excuse | Reality |
|--------|---------|
| "REST is obvious, just document endpoints" | Protocol choice goes in Dependency Map. Define contracts abstractly. |
| "We need HTTP codes for errors" | Error semantics matter; HTTP codes are protocol. Abstract the errors. |
| "Teams need to see JSON examples" | JSON is serialization. Define structure; format comes later. |
| "The contract IS the OpenAPI spec" | OpenAPI is protocol-specific. Design contracts first, generate specs later. |
| "gRPC/GraphQL affects the contract" | Protocols deliver contracts. Design protocol-agnostic contracts first. |
| "We already know it's REST" | Knowing doesn't mean documenting prematurely. Stay abstract. |
| "Framework validates inputs" | Validation logic is universal. Document rules; implementation comes later. |
| "This feels redundant with TRD" | TRD = components exist. API = how they talk. Different concerns. |
| "URL structure matters for APIs" | URLs are HTTP-specific. Focus on operations and data. |
| "But API Design means REST API" | API = interface. Could be REST, gRPC, events, or in-process. Stay abstract. |
## Red Flags - STOP
If you catch yourself writing any of these in API Design, **STOP**:
- HTTP methods (GET, POST, PUT, DELETE, PATCH)
- URL paths (/api/v1/users, /users/{id})
- Protocol names (REST, GraphQL, gRPC, WebSocket)
- Status codes (200, 404, 500)
- Serialization formats (JSON, XML, Protobuf)
- Authentication tokens (JWT, OAuth2 tokens, API keys)
- Framework code (Express routes, gRPC service definitions)
- Transport mechanisms (HTTP/2, TCP, UDP)
**When you catch yourself**: Replace protocol detail with abstract contract. "POST /users" → "CreateUser operation"
## Gate 4 Validation Checklist
Before proceeding to Data Modeling, verify:
**Contract Completeness**:
- [ ] All component-to-component interactions have contracts
- [ ] All external system integrations have contracts
- [ ] All event/message contracts are defined
- [ ] Client-facing APIs are fully specified
**Operation Clarity**:
- [ ] Each operation has clear purpose and description
- [ ] Operation names follow consistent naming convention
- [ ] Idempotency requirements are documented
- [ ] Batch operations are identified where relevant
**Data Specification**:
- [ ] All input parameters are typed and documented
- [ ] Required vs. optional is explicit
- [ ] Output structures are complete
- [ ] Null/empty cases are handled
**Error Handling**:
- [ ] All error scenarios are identified
- [ ] Error codes/types are defined
- [ ] Error messages provide actionable guidance
- [ ] Retry/recovery strategies are documented
**Event Contracts**:
- [ ] All events are named and described
- [ ] Event payloads are fully specified
- [ ] Event ordering/delivery semantics documented
- [ ] Event versioning strategy defined
**Constraints & Policies**:
- [ ] Validation rules are explicit (format, range, pattern)
- [ ] Rate limits or quotas are defined
- [ ] Timeouts and deadlines are specified
- [ ] Backward compatibility strategy exists
**Technology Agnostic**:
- [ ] No protocol-specific details (REST/gRPC/etc)
- [ ] No serialization format specifics
- [ ] No framework or library names
- [ ] Can implement in any protocol
**Gate Result**:
- ✅ **PASS**: All checkboxes checked → Proceed to Data Modeling
- ⚠️ **CONDITIONAL**: Remove protocol details → Re-validate
- ❌ **FAIL**: Incomplete contracts → Add missing specifications
## Contract Template
```markdown
# API/Contract Design: [Project/Feature Name]
## Overview
- **TRD Reference**: [Link to approved TRD]
- **Feature Map Reference**: [Link to approved Feature Map]
- **Last Updated**: [Date]
- **Status**: Draft / Under Review / Approved
## Contract Versioning Strategy
- **Approach**: [e.g., Semantic versioning, Date-based, etc.]
- **Backward Compatibility**: [Policy for breaking changes]
- **Deprecation Process**: [How old contracts are sunset]
## Component Contracts
### Component: [Component Name]
**Purpose**: [What this component does - from TRD]
**Integration Points** (from TRD):
- Inbound: [Components that call this one]
- Outbound: [Components this one calls]
---
#### Operation: [OperationName]
**Purpose**: [What this operation does]
**Inputs**:
| Parameter | Type | Required | Constraints | Description |
|-----------|------|----------|-------------|-------------|
| userId | Identifier | Yes | Non-empty, UUID format | Unique user identifier |
| email | EmailAddress | Yes | Valid email format | User's email address |
| displayName | String | No | 3-50 chars, alphanumeric | Public display name |
| preferences | PreferenceSet | No | - | User preferences object |
**Input Validation Rules**:
- `email` must match pattern: `[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}`
- `displayName` must not contain profanity (filter list: [reference])
- `preferences.theme` must be one of: ["light", "dark", "auto"]
**Outputs** (Success):
| Field | Type | Nullable | Description |
|-------|------|----------|-------------|
| userId | Identifier | No | Created user's unique ID |
| createdAt | Timestamp | No | ISO8601 timestamp of creation |
| status | UserStatus | No | Account status: "active" \| "pending_verification" |
**Output Structure Example** (abstract):
```
UserCreatedResponse {
userId: Identifier
createdAt: Timestamp
status: UserStatus
}
```
**Errors**:
| Error Code | Condition | Description | Retry? |
|------------|-----------|-------------|--------|
| InvalidEmail | Email format invalid | Provided email doesn't match format | No |
| EmailAlreadyExists | Email in use | Account with this email exists | No |
| RateLimitExceeded | Too many requests | Max 5 creates per hour per IP | Yes, after delay |
| ServiceUnavailable | Downstream failure | Dependency unavailable | Yes, with backoff |
**Idempotency**:
- Idempotent if called with same `email` within 5 minutes
- Returns existing user if already created
**Authorization**:
- Requires: Anonymous (public operation)
- Rate limited: 5 requests per hour per IP
**Related Operations**:
- Triggers Event: `UserCreated` (see Events section)
- May call: `SendVerificationEmail` (async)
---
#### Operation: [AnotherOperationName]
[Same structure as above]
---
## Event Contracts
### Event: UserCreated
**Purpose**: Notifies system that new user account was created
**When Emitted**: After successful user creation, before returning response
**Payload**:
| Field | Type | Nullable | Description |
|-------|------|----------|-------------|
| eventId | Identifier | No | Unique event identifier |
| timestamp | Timestamp | No | ISO8601 event timestamp |
| userId | Identifier | No | Created user's ID |
| email | EmailAddress | No | User's email (for notifications) |
| source | String | No | Registration source: "web" \| "mobile" \| "api" |
**Payload Structure Example** (abstract):
```
UserCreatedEvent {
eventId: Identifier
timestamp: Timestamp
userId: Identifier
email: EmailAddress
source: String
}
```
**Consumers**:
- Email Service (sends welcome email)
- Analytics Service (tracks signups)
- Audit Log Service (records event)
**Delivery Semantics**:
- At-least-once delivery
- Consumers must handle duplicates (idempotency required)
**Ordering**:
- No guaranteed ordering with other events
- Events for same `userId` are ordered
**Retention**:
- Events retained for 30 days in event store
---
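At-least-once delivery with required consumer idempotency, as specified above, can be sketched as follows (hypothetical names; the in-memory set stands in for whatever persistent dedup store a real consumer would use):

```python
# Idempotent consumer for at-least-once delivery: remember processed
# eventIds and silently drop duplicates.
processed_event_ids = set()  # stand-in for a persistent dedup store

def handle_user_created(event):
    """Process a UserCreated event; return False for a duplicate delivery."""
    if event["eventId"] in processed_event_ids:
        return False
    processed_event_ids.add(event["eventId"])
    # ... send welcome email, record signup in analytics, append audit log ...
    return True

event = {"eventId": "evt-1", "userId": "u-1", "source": "web"}
assert handle_user_created(event) is True
assert handle_user_created(event) is False  # redelivery is a no-op
```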
### Event: [AnotherEvent]
[Same structure as above]
---
## Cross-Component Integration Contracts
### Integration: User Service → Email Service
**Purpose**: Send transactional emails to users
**Operations Used**:
- `SendEmail` (async, fire-and-forget)
- `GetEmailStatus` (query email delivery status)
**Contract Reference**: See Email Service component contracts
**Data Flow**:
```
UserService --[UserCreated event]--> EventBroker --[subscribe]--> EmailService
EmailService --[SendEmail operation]--> EmailProvider
```
**Error Handling**:
- Email Service failures do NOT block User Service operations
- Retries handled by Email Service (3 attempts, exponential backoff)
- Dead-letter queue for permanent failures
---
### Integration: [Another Integration]
[Same structure as above]
---
## External System Contracts
### External System: Payment Gateway
**Purpose**: Process payments for user subscriptions
**Operations Exposed to Us**:
- `InitiatePayment`: Start payment transaction
- `CheckPaymentStatus`: Query transaction status
- `RefundPayment`: Reverse transaction
**Operations We Expose to Them**:
- `PaymentWebhook`: Receive payment status updates
**Contract Details**:
#### Operation: InitiatePayment (We call Them)
**Inputs**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| transactionId | Identifier | Yes | Our internal transaction ID |
| amount | MonetaryAmount | Yes | Amount in smallest currency unit (cents) |
| currency | CurrencyCode | Yes | ISO 4217 code (USD, EUR, etc.) |
| customerEmail | EmailAddress | Yes | Customer's email for receipt |
**Outputs**:
| Field | Type | Description |
|-------|------|-------------|
| paymentId | Identifier | Gateway's payment ID (store for status checks) |
| redirectUrl | URL | URL to redirect user for payment |
| expiresAt | Timestamp | Payment link expiration |
**Errors**:
| Error Code | Description |
|------------|-------------|
| InvalidAmount | Amount out of acceptable range |
| UnsupportedCurrency | Currency not supported |
| GatewayUnavailable | External service down |
---
#### Operation: PaymentWebhook (They call Us)
**Purpose**: Receive async payment status updates
**Inputs**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| paymentId | Identifier | Yes | Gateway's payment ID |
| status | PaymentStatus | Yes | "succeeded" \| "failed" \| "pending" |
| transactionId | Identifier | Yes | Our transaction ID (from InitiatePayment) |
| timestamp | Timestamp | Yes | Status update timestamp |
| signature | String | Yes | HMAC signature for verification |
**Outputs**:
| Field | Type | Description |
|-------|------|-------------|
| acknowledged | Boolean | Always true (confirms receipt) |
**Security**:
- Must verify HMAC signature before processing
- Signature algorithm: HMAC-SHA256
- Secret key: [stored in secrets management]
**Idempotency**:
- Must handle duplicate webhooks (same paymentId + status)
- Store processed webhook IDs for deduplication
---
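The signature check above can be sketched like this (the hex encoding and the exact signed bytes are assumptions for illustration; a real gateway's documentation defines both):

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    """Verify an HMAC-SHA256 webhook signature before processing."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature_hex)

secret = b"webhook-secret-from-secrets-management"
body = b'{"paymentId":"p-1","status":"succeeded","transactionId":"t-1"}'
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
assert verify_webhook(body, good_sig, secret)
assert not verify_webhook(body, "00" * 32, secret)
```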
## Data Type Definitions
### Custom Types
#### EmailAddress
- **Base Type**: String
- **Format**: Valid email format per RFC 5322
- **Constraints**: Max 254 characters, case-insensitive
- **Example**: "user@example.com"
#### Identifier
- **Base Type**: String
- **Format**: UUID v4
- **Constraints**: Non-empty, immutable
- **Example**: "550e8400-e29b-41d4-a716-446655440000"
#### Timestamp
- **Base Type**: String
- **Format**: ISO 8601 with timezone
- **Constraints**: UTC timezone, millisecond precision
- **Example**: "2025-10-23T16:45:00.123Z"
#### MonetaryAmount
- **Base Type**: Integer
- **Format**: Amount in smallest currency unit (cents, pence, etc.)
- **Constraints**: Non-negative, max value 9,223,372,036,854,775,807
- **Example**: 1999 (represents $19.99)
#### CurrencyCode
- **Base Type**: String
- **Format**: ISO 4217 three-letter code
- **Constraints**: Uppercase, exactly 3 characters
- **Example**: "USD", "EUR", "GBP"
#### UserStatus
- **Base Type**: Enum
- **Values**: "active", "suspended", "deleted", "pending_verification"
- **Description**: Current account status
---
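The custom types above translate directly into validation predicates. A sketch (the patterns mirror the constraints stated here, not any particular library's rules):

```python
import re

EMAIL_RE = re.compile(r"^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}$")
UUID4_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$"
)

def is_email_address(value: str) -> bool:
    # Max 254 characters, case-insensitive match
    return len(value) <= 254 and bool(EMAIL_RE.match(value.lower()))

def is_identifier(value: str) -> bool:
    # UUID v4, non-empty
    return bool(UUID4_RE.match(value.lower()))

def is_monetary_amount(value) -> bool:
    # Integer amount in the smallest currency unit, non-negative
    return isinstance(value, int) and value >= 0

assert is_email_address("User@Example.com")
assert is_identifier("550e8400-e29b-41d4-a716-446655440000")
assert is_monetary_amount(1999) and not is_monetary_amount(-1)
```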
## Naming Conventions
**Operations**:
- Use verb + noun format: `CreateUser`, `GetPayment`, `UpdateProfile`
- Be specific: `ArchiveUser` instead of `DeleteUser` if soft-delete
**Parameters**:
- Use camelCase: `userId`, `createdAt`, `displayName`
- Be descriptive: `subscriptionExpiresAt` not `expiry`
- Boolean parameters: prefix with `is`/`has`: `isActive`, `hasPermission`
**Events**:
- Use past tense: `UserCreated`, `PaymentProcessed`, `OrderShipped`
- Include entity: `OrderShipped` not just `Shipped`
**Errors**:
- Use noun + condition: `ResourceNotFound`, `InvalidInput`, `RateLimitExceeded`
- Be specific: `EmailAlreadyExists` not `DuplicateError`
---
## Rate Limiting & Quotas
### Per-Operation Limits
| Operation | Limit | Window | Scope |
|-----------|-------|--------|-------|
| CreateUser | 5 requests | 1 hour | Per IP address |
| GetUserProfile | 100 requests | 1 minute | Per user |
| UpdateProfile | 10 requests | 1 minute | Per user |
| SendPasswordReset | 3 requests | 1 hour | Per email |
### Quota Policies
- Free tier: 1,000 API calls per day
- Pro tier: 100,000 API calls per day
- Enterprise: Custom limits
### Exceeded Limit Behavior
- Return error: `RateLimitExceeded`
- Include retry info: `retryAfter` timestamp
- Do NOT process request
---
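A fixed-window limiter matching the table above might look like this (a sketch; production limiters usually keep counters in shared storage, and sliding windows are a common alternative):

```python
import time

class FixedWindowLimiter:
    """Per-key fixed-window limit, e.g. CreateUser: 5 per hour per IP."""

    def __init__(self, limit, window_s):
        self.limit = limit
        self.window_s = window_s
        self.windows = {}  # key -> (window_start, count)

    def check(self, key, now=None):
        """Return None if allowed, or a RateLimitExceeded error with retry info."""
        now = time.time() if now is None else now
        start = int(now // self.window_s) * self.window_s
        win_start, count = self.windows.get(key, (start, 0))
        if win_start != start:  # window rolled over; reset the count
            win_start, count = start, 0
        if count >= self.limit:
            # Do NOT process the request; tell the caller when to retry.
            return {"error": "RateLimitExceeded", "retryAfter": win_start + self.window_s}
        self.windows[key] = (win_start, count + 1)
        return None

limiter = FixedWindowLimiter(limit=5, window_s=3600)
for _ in range(5):
    assert limiter.check("203.0.113.7", now=1000.0) is None
denied = limiter.check("203.0.113.7", now=1000.0)
assert denied == {"error": "RateLimitExceeded", "retryAfter": 3600}
```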
## Backward Compatibility Strategy
### Breaking Changes
**Definition**: Changes that require consumers to update
- Removing fields from outputs
- Adding required parameters to inputs
- Changing data types
- Renaming operations
**Process**:
1. Announce deprecation 90 days in advance
2. Support old + new contract in parallel
3. Monitor old contract usage
4. Remove old contract after 180 days
### Non-Breaking Changes
**Definition**: Changes consumers can ignore
- Adding optional parameters
- Adding new fields to outputs
- Adding new operations
- Adding new error codes
**Process**:
- Deploy immediately
- Document in changelog
- No consumer updates required
---
## Testing Contracts
### Contract Testing Strategy
- Use contract testing tools (language-agnostic)
- Provider tests verify contract implementation
- Consumer tests verify contract usage
- CI/CD validates contracts haven't broken
### Example Test Scenarios
**CreateUser Operation**:
- ✓ Valid input creates user successfully
- ✓ Duplicate email returns `EmailAlreadyExists`
- ✓ Invalid email returns `InvalidEmail`
- ✓ Missing required field returns `InvalidInput`
- ✓ Rate limit exceeded returns `RateLimitExceeded`
- ✓ Success emits `UserCreated` event
---
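The scenarios above are directly executable against any implementation. A sketch using a hypothetical in-memory fake (a real provider test would exercise the deployed service through whatever protocol the Dependency Map later selects):

```python
# In-memory fake implementing the CreateUser contract, exercised by the
# scenario list above.
users = {}

def create_user(email, display_name):
    if "@" not in email:
        return {"error": "InvalidEmail"}
    if email in users:
        return {"error": "EmailAlreadyExists"}
    users[email] = display_name
    return {"userId": "u-%d" % len(users), "status": "pending_verification"}

ok = create_user("a@example.com", "Alice")
assert ok["status"] == "pending_verification"
assert create_user("a@example.com", "Alice") == {"error": "EmailAlreadyExists"}
assert create_user("not-an-email", "Bob") == {"error": "InvalidEmail"}
```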
## Gate 4 Validation
**Validation Date**: [Date]
**Validated By**: [Person/team]
- [ ] All component contracts defined
- [ ] All operations have inputs/outputs
- [ ] Error scenarios documented
- [ ] Events fully specified
- [ ] External integrations covered
- [ ] No protocol specifics included
- [ ] Ready for Data Modeling (Gate 5)
**Approval**: ☐ Approved | ☐ Needs Revision | ☐ Rejected
**Next Step**: Proceed to Data Modeling (`pre-dev-data-model`)
```
## Common Violations and Fixes
### Violation 1: Protocol-Specific Details
**Wrong**:
```markdown
#### Operation: CreateUser
**Endpoint**: POST /api/v1/users
**Status Codes**:
- 201 Created
- 409 Conflict (email exists)
- 400 Bad Request
```
**Correct**:
```markdown
#### Operation: CreateUser
**Purpose**: Create new user account
**Inputs**: [userId, email, displayName]
**Outputs**: UserCreatedResponse
**Errors**:
- EmailAlreadyExists (email in use)
- InvalidInput (validation failure)
```
### Violation 2: Implementation in Contract
**Wrong**:
```markdown
**Validation**:
```javascript
if (!/^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}$/.test(email)) {
throw new ValidationError("Invalid email");
}
```
```
✅ **Correct**:
```markdown
**Validation Rules**:
- `email` must match email format per RFC 5322
- Pattern: local@domain.tld
- Max length: 254 characters
```
### Violation 3: Technology-Specific Types
**Wrong**:
```markdown
**Output**:
```json
{
"userId": "uuid",
"createdAt": "Date",
"profile": "Map<String, Any>"
}
```
```
✅ **Correct**:
```markdown
**Outputs**:
| Field | Type | Description |
|-------|------|-------------|
| userId | Identifier | UUID format |
| createdAt | Timestamp | ISO8601 timestamp |
| profile | ProfileObject | User profile data |
```
## Confidence Scoring
Use this to adjust your interaction with the user:
```yaml
Confidence Factors:
Contract Completeness: [0-30]
- All operations documented: 30
- Most operations covered: 20
- Significant gaps: 10
Interface Clarity: [0-25]
- Clear, unambiguous contracts: 25
- Some interpretation needed: 15
- Vague or conflicting: 5
Integration Complexity: [0-25]
- Simple point-to-point: 25
- Moderate dependencies: 15
- Complex orchestration: 5
Error Handling Coverage: [0-20]
- All scenarios documented: 20
- Common cases covered: 12
- Minimal coverage: 5
Total: [0-100]
Action:
80+: Generate complete contracts autonomously
50-79: Present options for user selection
<50: Ask clarifying questions about integration needs
```
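The rubric above reduces to a small decision function (a sketch; the factor values are assumed to already be capped per the rubric):

```python
def planning_action(completeness, clarity, complexity, error_coverage):
    """Map the four confidence factors (0-30/25/25/20) to an interaction mode."""
    total = completeness + clarity + complexity + error_coverage
    if total >= 80:
        return "generate-autonomously"
    if total >= 50:
        return "present-options"
    return "ask-clarifying-questions"

assert planning_action(30, 25, 15, 12) == "generate-autonomously"  # 82
assert planning_action(20, 15, 15, 12) == "present-options"        # 62
assert planning_action(10, 5, 5, 5) == "ask-clarifying-questions"  # 25
```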
## Output Location
**Always output to**: `docs/pre-development/api-design/api-contracts-[feature-name].md`
## After API Design Approval
1. ✅ Lock the contracts - interfaces are now reference for implementation
2. 🎯 Use contracts as input for Data Modeling (next phase: `pre-dev-data-model`)
3. 🚫 Never add protocol specifics to contracts retroactively
4. 📋 Keep contracts technology-agnostic until Dependency Map
## Quality Self-Check
Before declaring API Design complete, verify:
- [ ] All TRD integration points have contracts
- [ ] Operations are clearly named and described
- [ ] Inputs are fully specified (type, required, constraints)
- [ ] Outputs are complete (all fields documented)
- [ ] Error scenarios are comprehensive
- [ ] Events have full payload specifications
- [ ] Validation rules are explicit
- [ ] Rate limits are defined
- [ ] Idempotency is documented where relevant
- [ ] Zero protocol specifics (REST/gRPC/etc)
- [ ] Zero implementation code
- [ ] Contracts are testable
- [ ] Backward compatibility strategy exists
- [ ] Gate 4 validation checklist 100% complete
## The Bottom Line
**If you wrote API contracts with HTTP endpoints or gRPC services, remove them.**
Contracts are protocol-agnostic. Period. No REST. No GraphQL. No HTTP codes.
Protocol choices go in Dependency Map. That's a later phase. Wait for it.
Violating this separation means:
- You're locked into a protocol before evaluating alternatives
- Contracts can't be reused across different delivery mechanisms
- You can't objectively compare REST vs. gRPC vs. messaging
- Teams can't work in parallel with clear interface agreements
**Define the contract. Stay abstract. Choose protocol later.**

skills/pre-dev-data-model/SKILL.md (new file, 715 lines):
---
name: pre-dev-data-model
description: |
Gate 5: Data structures document - defines entities, relationships, and ownership
before database technology selection. Large Track only.
trigger: |
- API Design passed Gate 4 validation
- System stores persistent data
- Multiple entities with relationships
- Large Track workflow (2+ day features)
skip_when: |
- Small Track workflow → skip to Task Breakdown
- No persistent data → skip to Dependency Map
- API Design not validated → complete Gate 4 first
sequence:
after: [pre-dev-api-design]
before: [pre-dev-dependency-map]
---
# Data Modeling - Defining Data Structures
## Foundational Principle
**Data structures, relationships, and ownership must be defined before database technology selection.**
Jumping to database-specific schemas without modeling creates:
- Inconsistent data structures across services
- Unclear data ownership and authority
- Schema conflicts discovered during development
- Migration nightmares when requirements change
- Performance issues from poor data design
**The Data Model answers**: WHAT data exists, HOW entities relate, WHO owns what data?
**The Data Model never answers**: WHICH database technology or HOW to implement storage.
## When to Use This Skill
Use this skill when:
- API Design has passed Gate 4 validation
- TRD has passed Gate 3 validation
- System stores persistent data
- Multiple entities with relationships
- Need clear data ownership boundaries
- Building data-intensive applications
## Mandatory Workflow
### Phase 1: Data Analysis (Inputs Required)
1. **Approved API Design** (Gate 4 passed) - contracts define data flows
2. **Approved TRD** (Gate 3 passed) - architecture components identified
3. **Approved Feature Map** (Gate 2 passed) - domains defined
4. **Approved PRD** (Gate 1 passed) - business requirements locked
5. **Extract entities** from PRD, Feature Map, and API contracts
6. **Identify relationships** between entities
### Phase 2: Data Modeling
1. **Define entities** (what data objects exist)
2. **Specify attributes** (what properties each entity has)
3. **Model relationships** (how entities connect)
4. **Assign ownership** (which component owns which data)
5. **Define constraints** (uniqueness, required fields, ranges)
6. **Plan data lifecycle** (creation, updates, deletion, archival)
7. **Design access patterns** (how data will be queried)
8. **Consider data quality** (validation, normalization)
### Phase 3: Gate 5 Validation
**MANDATORY CHECKPOINT** - Must pass before proceeding to Dependency Map:
- [ ] All entities are identified and defined
- [ ] Entity attributes are complete with types and constraints
- [ ] Relationships between entities are modeled
- [ ] Data ownership is explicitly assigned to components
- [ ] Primary identifiers are defined
- [ ] Unique constraints are specified
- [ ] Required vs. optional fields are clear
- [ ] Data lifecycle is documented
- [ ] Access patterns are identified
- [ ] No database-specific details (tables, indexes, SQL)
- [ ] No ORM or storage technology specifics
## Explicit Rules
### ✅ DO Include in Data Model
- Entity definitions (conceptual data objects)
- Attributes with types (string, number, boolean, date, etc.)
- Constraints (required, unique, ranges, patterns)
- Relationships (one-to-one, one-to-many, many-to-many)
- Data ownership (which component is authoritative)
- Primary identifiers (how entities are uniquely identified)
- Lifecycle rules (soft delete, archival, retention)
- Access patterns (how data will be queried)
- Data quality rules (validation, normalization)
- Referential integrity requirements
### ❌ NEVER Include in Data Model
- Database product names (PostgreSQL, MongoDB, Redis)
- Table names or collection names
- Index definitions
- SQL or query language specifics
- ORM frameworks (Prisma, TypeORM, SQLAlchemy)
- Storage engine specifics (InnoDB, MyISAM)
- Partitioning or sharding strategies (implementation detail)
- Replication or backup strategies
- Database-specific data types (JSONB, UUID, BIGSERIAL)
### Abstraction Rules
1. **Entity**: Say "User" not "users table"
2. **Attribute**: Say "emailAddress: String (email format)" not "email VARCHAR(255)"
3. **Relationship**: Say "User has many Orders" not "foreign key user_id"
4. **Identifier**: Say "Unique identifier" not "UUID primary key"
5. **Constraint**: Say "Must be unique" not "UNIQUE INDEX"
## Rationalization Table
| Excuse | Reality |
|--------|---------|
| "We know it's PostgreSQL, just use PG types" | Database choice comes later. Model abstractly now. |
| "Table design is data modeling" | Tables are implementation. Entities are concepts. Stay conceptual. |
| "We need indexes for performance" | Indexes are optimization. Model data first, optimize later. |
| "ORMs require specific schemas" | ORMs adapt to models. Don't let tooling drive design. |
| "Foreign keys define relationships" | Relationships exist conceptually. FKs are implementation. |
| "SQL examples help clarity" | Abstract models are clearer. SQL is implementation detail. |
| "NoSQL doesn't need relationships" | All systems have data relationships. Model them regardless of DB type. |
| "This is just an ERD" | An ERD is a visualization tool. The data model is broader (ownership, lifecycle, etc.). |
| "We can skip this for simple CRUD" | Even CRUD needs clear entity design. Don't skip. |
| "Microservices mean no relationships" | Services interact via data. Model entities per service. |
## Red Flags - STOP
If you catch yourself writing any of these in Data Model, **STOP**:
- Database product names (Postgres, MySQL, Mongo, Redis)
- SQL keywords (CREATE TABLE, ALTER TABLE, SELECT, JOIN)
- Database-specific types (SERIAL, JSONB, VARCHAR, TEXT)
- Index commands (CREATE INDEX, UNIQUE INDEX)
- ORM code (Prisma schema, TypeORM decorators)
- Storage details (partitioning, sharding, replication)
- Query optimization (EXPLAIN plans, index hints)
- Backup/recovery strategies
**When you catch yourself**: Replace DB detail with abstract concept. "users table" → "User entity"
## Gate 5 Validation Checklist
Before proceeding to Dependency Map, verify:
**Entity Completeness**:
- [ ] All entities from PRD/Feature Map are modeled
- [ ] Entity names are clear and consistent
- [ ] Each entity has defined purpose
- [ ] Entity boundaries align with component ownership (from TRD)
**Attribute Specification**:
- [ ] All attributes have types specified
- [ ] Required vs. optional is explicit
- [ ] Constraints are documented (unique, range, format)
- [ ] Default values are specified where relevant
- [ ] Computed/derived fields are identified
**Relationship Modeling**:
- [ ] All relationships between entities are documented
- [ ] Cardinality is specified (one-to-one, one-to-many, many-to-many)
- [ ] Optional vs. required relationships are clear
- [ ] Referential integrity needs are documented
- [ ] Circular dependencies are identified and resolved
**Data Ownership**:
- [ ] Each entity is owned by exactly one component
- [ ] Read/write permissions are documented
- [ ] Cross-component data access is via APIs (from Gate 4)
- [ ] No shared database anti-pattern
**Data Quality**:
- [ ] Validation rules are specified
- [ ] Normalization level is appropriate
- [ ] Denormalization decisions are justified
- [ ] Data consistency strategy is defined (eventual vs. strong)
**Lifecycle Management**:
- [ ] Creation rules are documented
- [ ] Update patterns are defined
- [ ] Deletion strategy is specified (hard vs. soft delete)
- [ ] Archival and retention policies exist
- [ ] Audit trail needs are identified
**Access Patterns**:
- [ ] Primary access patterns are documented
- [ ] Query needs are identified (lookups, searches, aggregations)
- [ ] Write patterns are documented (create, update, delete frequencies)
- [ ] Consistency requirements are specified
**Technology Agnostic**:
- [ ] No database product names
- [ ] No SQL or NoSQL specifics
- [ ] No table/collection/index definitions
- [ ] Can implement in any database technology
**Gate Result**:
- ✅ **PASS**: All checkboxes checked → Proceed to Dependency Map
- ⚠️ **CONDITIONAL**: Remove DB specifics or add missing entities → Re-validate
- ❌ **FAIL**: Incomplete model or poor ownership → Rework
## Data Model Template
```markdown
# Data Model: [Project/Feature Name]
## Overview
- **API Design Reference**: [Link to Gate 4 API contracts]
- **TRD Reference**: [Link to Gate 3 TRD]
- **Feature Map Reference**: [Link to Gate 2 Feature Map]
- **Last Updated**: [Date]
- **Status**: Draft / Under Review / Approved
## Data Ownership Map
| Entity | Owning Component (from TRD) | Read Access | Write Access |
|--------|------------------------------|-------------|--------------|
| User | User Service | All components | User Service only |
| Order | Order Service | User, Payment, Fulfillment | Order Service only |
| Payment | Payment Service | Order, Billing | Payment Service only |
| Product | Catalog Service | All components | Catalog Service only |
**Principle**: Each entity has exactly ONE authoritative owner. Cross-component access via APIs only.
---
## Entity Definitions
### Entity: User
**Purpose**: Represents a system user account
**Owned By**: User Service (from TRD)
**Primary Identifier**: userId (Unique identifier, immutable)
**Attributes**:
| Attribute | Type | Required | Unique | Constraints | Description |
|-----------|------|----------|--------|-------------|-------------|
| userId | Identifier | Yes | Yes | Immutable, UUID format | Unique user identifier |
| email | EmailAddress | Yes | Yes | Valid email format, max 254 chars | Primary email |
| displayName | String | No | No | 3-50 chars, alphanumeric + spaces | Public name |
| passwordHash | String | Yes | No | Hashed value only, never store plain text | Authentication credential |
| accountStatus | UserStatus | Yes | No | One of: active, suspended, deleted, pending | Current status |
| emailVerified | Boolean | Yes | No | Default: false | Email verification status |
| createdAt | Timestamp | Yes | No | Immutable, ISO8601 | Account creation time |
| updatedAt | Timestamp | Yes | No | Auto-updated on changes | Last modification time |
| lastLoginAt | Timestamp | No | No | ISO8601 | Most recent login time |
**Constraints**:
- `email` must be unique across all users
- `displayName` is optional but recommended (prompt user during registration)
- `passwordHash` must never be returned in API responses
- `accountStatus` transitions: pending → active, active → suspended → active, active → deleted (final)
**Lifecycle**:
- **Creation**: Via `CreateUser` API operation (Gate 4 contract)
- **Updates**: Via `UpdateUserProfile`, `ChangePassword`, `UpdateStatus` operations
- **Deletion**: Soft delete (set `accountStatus = deleted`, retain data for 90 days)
- **Archival**: Hard delete after 90 days of soft delete
**Access Patterns**:
- Lookup by `userId` (primary pattern, most frequent)
- Lookup by `email` (login flow, unique constraint)
- List users by `accountStatus` (admin operations)
- Search by `displayName` (user search feature)
**Data Quality**:
- `email` is normalized to lowercase before storage
- `displayName` is trimmed of leading/trailing whitespace
- `passwordHash` uses industry-standard hashing (algorithm TBD in Dependency Map)
---
### Entity: Order
**Purpose**: Represents a customer order
**Owned By**: Order Service (from TRD)
**Primary Identifier**: orderId (Unique identifier, immutable)
**Attributes**:
| Attribute | Type | Required | Unique | Constraints | Description |
|-----------|------|----------|--------|-------------|-------------|
| orderId | Identifier | Yes | Yes | Immutable, UUID format | Unique order identifier |
| userId | Identifier | Yes | No | Must reference existing User | Customer who placed order |
| orderStatus | OrderStatus | Yes | No | One of: pending, confirmed, shipped, delivered, cancelled | Current status |
| totalAmount | MonetaryAmount | Yes | No | Non-negative, in smallest currency unit | Total order value |
| currency | CurrencyCode | Yes | No | ISO 4217 code | Currency for totalAmount |
| shippingAddress | Address | Yes | No | Valid address structure | Delivery destination |
| orderItems | List<OrderItem> | Yes | No | Min 1 item, max 100 items | Products in order |
| createdAt | Timestamp | Yes | No | Immutable, ISO8601 | Order creation time |
| updatedAt | Timestamp | Yes | No | Auto-updated on changes | Last modification time |
**Nested Types**:
#### OrderItem (embedded within Order)
| Attribute | Type | Required | Description |
|-----------|------|----------|-------------|
| productId | Identifier | Yes | References Product entity |
| quantity | Integer | Yes | Min 1, max 999 |
| unitPrice | MonetaryAmount | Yes | Price per item at time of order |
| subtotal | MonetaryAmount | Yes | quantity × unitPrice |
#### Address (value object)
| Attribute | Type | Required | Description |
|-----------|------|----------|-------------|
| street1 | String | Yes | Primary street address |
| street2 | String | No | Secondary address (apt, suite) |
| city | String | Yes | City name |
| state | String | Yes | State/province code |
| postalCode | String | Yes | Postal/ZIP code |
| country | String | Yes | ISO 3166-1 alpha-2 code |
**Relationships**:
- **User** (one-to-many): One User can have many Orders
- **Product** (via OrderItem): Order references Products (read-only, owned by Catalog Service)
**Constraints**:
- `totalAmount` must equal sum of all `orderItems[].subtotal` + shipping + tax
- `orderStatus` transitions: pending → confirmed → shipped → delivered
- `orderStatus` can go to cancelled from pending or confirmed only
- `orderItems` must contain at least 1 item
- All `productId` references must be valid at order creation time (validate via API)
**Lifecycle**:
- **Creation**: Via `CreateOrder` API operation
- **Updates**: Via `UpdateOrderStatus`, `CancelOrder` operations
- **Deletion**: Soft delete after 7 years (regulatory compliance)
- **Archival**: Orders never hard deleted (permanent record)
**Access Patterns**:
- Lookup by `orderId` (most frequent)
- List by `userId` (user order history)
- List by `orderStatus` (fulfillment workflows)
- Query by `createdAt` range (reporting, analytics)
**Data Quality**:
- `orderItems[].unitPrice` is snapshot at order time (price changes don't affect existing orders)
- `totalAmount` is computed and validated on creation
- `shippingAddress` is validated against address API before order creation
---
### Entity: [Another Entity]
[Same structure as above]
---
## Relationship Diagram
```
User (1) ──< has many >── (*) Order
                                │ contains
                                OrderItem (embedded)
                                    │ references
                                    Product (1)
                                        │ belongs to
                                        Category (1)

Payment (*) ──< processes >── (1) Order
```
**Legend**:
- `(1)`: One
- `(*)`: Many
- `──<`: One-to-many relationship
- `──`: One-to-one relationship
- Embedded: Data stored within parent entity
---
## Cross-Component Data Access
### User Service → Order Service
**Scenario**: User wants to view their order history
**Data Flow**:
1. User Service authenticates user (owns User entity)
2. User Service calls Order Service API: `GetOrdersByUserId(userId)`
3. Order Service returns order data (owns Order entity)
4. User Service enriches with user display name if needed
**Rules**:
- User Service does NOT access Order Service's data store directly
- All access via APIs (from Gate 4 contracts)
- Order Service is authoritative for Order data
---
### Order Service → Catalog Service
**Scenario**: Order creation needs product info
**Data Flow**:
1. Order Service receives `CreateOrder` request
2. Order Service calls Catalog Service API: `GetProduct(productId)`
3. Catalog Service returns product data (price, availability)
4. Order Service creates order with snapshot of product data
**Rules**:
- Order stores `productId` reference and price snapshot
- Catalog Service is authoritative for current Product data
- Order's product snapshot is immutable (historical record)
---
### [Another Cross-Component Access]
[Same structure]
---
## Data Consistency Strategy
### Strong Consistency (Immediate)
- User authentication (must be immediately consistent)
- Payment transactions (cannot tolerate eventual consistency)
- Inventory deductions (prevent overselling)
### Eventual Consistency (Acceptable Delay)
- User profile updates reflecting in analytics (delay OK)
- Order history in user dashboard (few seconds lag acceptable)
- Search indexes (brief staleness tolerable)
### Consistency Implementation
- Strong: Synchronous API calls with transactional guarantees
- Eventual: Async events with idempotent handlers
---
## Data Validation Rules
### User Entity Validation
- `email`: Must match RFC 5322 format
- `displayName`: No profanity (filter list: [reference])
- `passwordHash`: Derived from a password meeting complexity requirements (min 12 chars, mixed case, numbers, symbols); only the hash is ever stored
### Order Entity Validation
- `totalAmount`: Must match sum of items + fees
- `orderItems`: Each `productId` must exist in Catalog Service at order time
- `shippingAddress`: Must pass address validation API
### Cross-Entity Validation
- User must have `accountStatus = active` to create orders
- Products must have `availability > 0` to be added to orders
---
## Data Lifecycle Policies
### Retention Periods
| Entity | Active Period | Archive After | Delete After |
|--------|---------------|---------------|--------------|
| User | Until deleted | 90 days (soft delete) | 90 days after soft delete |
| Order | Permanent | N/A | Never (regulatory) |
| Payment | 7 years | 7 years | 10 years (compliance) |
| AuditLog | 1 year | 1 year | 5 years |
### Soft Delete Strategy
- **User**: Set `accountStatus = deleted`, retain data for GDPR compliance window
- **Order**: Never deleted (permanent financial record)
- **Payment**: Anonymize after 7 years (retain transaction, remove PII)
### Audit Trail Requirements
- **User**: Log all status changes, login attempts
- **Order**: Log all status transitions, modifications
- **Payment**: Log all transaction attempts, results
---
## Data Privacy & Compliance
### Personally Identifiable Information (PII)
| Entity | PII Fields | Handling |
|--------|-----------|----------|
| User | email, displayName | Encrypted at rest, GDPR right to deletion |
| Order | shippingAddress | Encrypted at rest, retention per policy |
| Payment | card details | Never stored (use tokenization) |
### GDPR Compliance
- Users can request data export (via `ExportUserData` API)
- Users can request deletion (soft delete with 90-day grace period)
- Consent tracking for marketing communications
### Data Encryption
- Sensitive fields encrypted at rest (algorithm TBD in Dependency Map)
- Encryption keys managed externally (key management TBD in Dependency Map)
---
## Access Pattern Analysis
### High-Frequency Patterns (Optimize for these)
1. **User lookup by ID**: `GetUser(userId)` - 1000 req/sec
2. **Order lookup by ID**: `GetOrder(orderId)` - 500 req/sec
3. **User's orders**: `GetOrdersByUserId(userId)` - 200 req/sec
### Medium-Frequency Patterns
4. **User lookup by email**: `GetUserByEmail(email)` - 50 req/sec (login)
5. **Orders by status**: `GetOrdersByStatus(status)` - 30 req/sec (fulfillment)
### Low-Frequency Patterns
6. **User search**: `SearchUsers(query)` - 5 req/sec (admin)
7. **Order reports**: `GetOrdersByDateRange(start, end)` - 1 req/hour (reporting)
**Optimization Notes** (for later, not now):
- High-frequency patterns need fast lookups (indexes, caching)
- Medium-frequency patterns need balanced design
- Low-frequency patterns can tolerate slower queries
---
## Data Quality Standards
### Normalization
- **User email**: Stored in lowercase, normalized form
- **Addresses**: Standardized format via address validation service
- **Phone numbers**: Stored in E.164 format
### Validation
- All inputs validated before storage
- Validation rules documented per attribute
- Failed validation returns clear error messages
### Data Integrity
- Referential integrity enforced (userId in Order must exist)
- Constraints enforced at model level (not just database)
- Data consistency checks in automated tests
---
## Migration & Evolution Strategy
### Schema Evolution
- **Additive changes**: Add optional fields (backward compatible)
- **Non-breaking**: Default values for new required fields
- **Breaking changes**: Versioned approach (v1 → v2 with migration)
### Data Migration
- Plan for zero-downtime migrations
- Backward and forward compatibility during migration
- Rollback strategy for failed migrations
### Versioning
- Entities can evolve over time
- Migration scripts documented (but not written here - that's implementation)
- Compatibility maintained during transitions
---
## Gate 5 Validation
**Validation Date**: [Date]
**Validated By**: [Person/team]
- [ ] All entities defined with complete attributes
- [ ] Relationships documented and valid
- [ ] Data ownership assigned to components
- [ ] Constraints and validation rules specified
- [ ] Lifecycle policies documented
- [ ] Access patterns identified
- [ ] No database-specific details included
- [ ] Ready for Dependency Map (Gate 6)
**Approval**: ☐ Approved | ☐ Needs Revision | ☐ Rejected
**Next Step**: Proceed to Dependency Map (`pre-dev-dependency-map`)
```
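Two of the constraints stated declaratively in the template above — the `accountStatus` transition rules and the `totalAmount` invariant — are mechanically checkable. A minimal Go sketch of such checks, purely for illustration (implementation like this belongs to later phases, never inside the Data Model document itself; all names and types here are assumptions, not part of the model):

```go
package main

import "fmt"

// allowedTransitions encodes the accountStatus rules from the template:
// pending → active, active ↔ suspended, active → deleted (final).
var allowedTransitions = map[string][]string{
	"pending":   {"active"},
	"active":    {"suspended", "deleted"},
	"suspended": {"active"},
	"deleted":   {}, // final state: no outgoing transitions
}

// CanTransition reports whether moving from one status to another is allowed.
func CanTransition(from, to string) bool {
	for _, next := range allowedTransitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

// OrderItem mirrors the embedded type in the template (illustrative only).
type OrderItem struct {
	Quantity  int
	UnitPrice int64 // smallest currency unit, per the template
}

// TotalIsConsistent checks the invariant: totalAmount must equal the sum of
// item subtotals plus shipping and tax.
func TotalIsConsistent(items []OrderItem, shipping, tax, total int64) bool {
	var sum int64
	for _, it := range items {
		sum += int64(it.Quantity) * it.UnitPrice
	}
	return sum+shipping+tax == total
}

func main() {
	fmt.Println(CanTransition("pending", "active")) // true
	fmt.Println(CanTransition("deleted", "active")) // false

	items := []OrderItem{{Quantity: 2, UnitPrice: 500}}
	fmt.Println(TotalIsConsistent(items, 100, 50, 1150)) // true: 2×500 + 100 + 50
}
```

The point is that constraints written abstractly in the model translate directly into validation at implementation time, regardless of which database is eventually chosen.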
## Common Violations and Fixes
### Violation 1: Database-Specific Schema
**Wrong**:
```sql
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email VARCHAR(255) UNIQUE NOT NULL,
created_at TIMESTAMP DEFAULT NOW()
);
```
**Correct**:
```markdown
### Entity: User
**Attributes**:
| Attribute | Type | Required | Unique | Description |
|-----------|------|----------|--------|-------------|
| userId | Identifier | Yes | Yes | Unique user identifier |
| email | EmailAddress | Yes | Yes | Primary email address |
| createdAt | Timestamp | Yes | No | Account creation time |
```
### Violation 2: ORM-Specific Code
**Wrong**:
```typescript
@Entity()
class User {
@PrimaryGeneratedColumn('uuid')
id: string;
@Column({ unique: true })
email: string;
}
```
**Correct**:
```markdown
### Entity: User
**Primary Identifier**: userId (Unique identifier)
**Attributes**: userId, email, ...
**Constraints**: email must be unique
```
### Violation 3: Technology in Relationships
**Wrong**:
```markdown
**Relationships**:
- Foreign key `user_id` references `users.id`
- Join table `user_roles` for many-to-many
```
**Correct**:
```markdown
**Relationships**:
- User (one-to-many) Order: One user can have many orders
- User (many-to-many) Role: Users can have multiple roles, roles can be assigned to multiple users
```
## Confidence Scoring
Use this to adjust your interaction with the user:
```yaml
Confidence Factors:
Entity Coverage: [0-30]
- All entities modeled: 30
- Most entities covered: 20
- Significant gaps: 10
Relationship Clarity: [0-25]
- All relationships documented: 25
- Most relationships clear: 15
- Ambiguous connections: 5
Data Ownership: [0-25]
- Clear ownership boundaries: 25
- Mostly clear with minor overlaps: 15
- Unclear or contested: 5
Constraint Completeness: [0-20]
- All validation rules specified: 20
- Common cases covered: 12
- Minimal specification: 5
Total: [0-100]
Action:
80+: Generate complete data model autonomously
50-79: Present options for normalization/relationships
<50: Ask clarifying questions about entity boundaries
```
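The rubric above reduces to a small scoring function. A sketch — the factor maxima and action thresholds come from the rubric itself; the function name and signature are illustrative assumptions:

```go
package main

import "fmt"

// ConfidenceAction sums the four factor scores (entity coverage 0-30,
// relationship clarity 0-25, ownership 0-25, constraints 0-20) and maps
// the total to the interaction mode defined in the rubric.
func ConfidenceAction(entityCoverage, relationshipClarity, ownership, constraints int) string {
	total := entityCoverage + relationshipClarity + ownership + constraints
	switch {
	case total >= 80:
		return "generate complete data model autonomously"
	case total >= 50:
		return "present options for normalization/relationships"
	default:
		return "ask clarifying questions about entity boundaries"
	}
}

func main() {
	fmt.Println(ConfidenceAction(30, 25, 25, 20)) // total 100 → autonomous
	fmt.Println(ConfidenceAction(20, 15, 15, 12)) // total 62 → present options
	fmt.Println(ConfidenceAction(10, 5, 5, 5))    // total 25 → ask questions
}
```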
## Output Location
**Always output to**: `docs/pre-development/data-model/data-model-[feature-name].md`
## After Data Model Approval
1. ✅ Lock the data model - entity structure is now reference
2. 🎯 Use data model as input for Dependency Map (next phase: `pre-dev-dependency-map`)
3. 🚫 Never add database specifics to data model retroactively
4. 📋 Keep data model technology-agnostic until Dependency Map
## Quality Self-Check
Before declaring Data Model complete, verify:
- [ ] All entities are defined with complete attributes
- [ ] Attribute types and constraints are specified
- [ ] Relationships are modeled with correct cardinality
- [ ] Data ownership is explicitly assigned
- [ ] Primary identifiers are defined
- [ ] Lifecycle policies are documented
- [ ] Access patterns are identified
- [ ] Validation rules are comprehensive
- [ ] Privacy/compliance needs are addressed
- [ ] Consistency strategy is defined
- [ ] Zero database-specific details (tables, SQL, indexes)
- [ ] Zero ORM or framework specifics
- [ ] Gate 5 validation checklist 100% complete
## The Bottom Line
**If you wrote SQL schemas or ORM code, delete it and model abstractly.**
Data modeling is conceptual. Period. No database products. No SQL. No ORMs.
Database technology goes in Dependency Map. That's the next phase. Wait for it.
Violating this separation means:
- You're locked into a database before evaluating alternatives
- Data model can't be reused across different database types
- You can't objectively compare relational vs. document vs. key-value
- Poor separation of concerns (conceptual vs. physical)
**Model the data. Stay abstract. Choose database later.**


@@ -0,0 +1,434 @@
---
name: pre-dev-dependency-map
description: |
Gate 6: Technology choices document - explicit, versioned, validated technology
selections with justifications. Large Track only.
trigger: |
- Data Model passed Gate 5 validation
- About to select specific technologies
- Tempted to write "@latest" or "newest version"
- Large Track workflow (2+ day features)
skip_when: |
- Small Track workflow → skip to Task Breakdown
- Technologies already locked → skip to Task Breakdown
- Data Model not validated → complete Gate 5 first
sequence:
after: [pre-dev-data-model]
before: [pre-dev-task-breakdown]
---
# Dependency Map - Explicit Technology Choices
## Foundational Principle
**Every technology choice must be explicit, versioned, validated, and justified.**
Using vague or "latest" dependencies creates:
- Unreproducible builds across environments
- Hidden incompatibilities discovered during implementation
- Security vulnerabilities from unvetted versions
- Upgrade nightmares from undocumented constraints
**The Dependency Map answers**: WHAT specific products, versions, packages, and infrastructure we'll use.
**The Dependency Map never answers**: HOW to implement features (that's Tasks/Subtasks).
## When to Use This Skill
Use this skill when:
- Data Model has passed Gate 5 validation
- API Design has passed Gate 4 validation
- TRD has passed Gate 3 validation
- About to select specific technologies
- Tempted to write "@latest" or "newest version"
- Asked to finalize the tech stack
- Before breaking down implementation tasks
## Mandatory Workflow
### Phase 1: Technology Evaluation (Inputs Required)
1. **Approved Data Model** (Gate 5 passed) - data structures defined
2. **Approved API Design** (Gate 4 passed) - contracts specified
3. **Approved TRD** (Gate 3 passed) - architecture patterns locked
4. **Map each TRD component** to specific technology candidates
5. **Map Data Model entities** to storage technologies
6. **Map API contracts** to protocol implementations
7. **Check team expertise** for proposed technologies
8. **Estimate costs** for infrastructure and services
### Phase 2: Stack Selection
For each technology choice:
1. **Specify exact version** (not "latest", not range unless justified)
2. **List alternatives considered** with trade-offs
3. **Verify compatibility** with other dependencies
4. **Check security** for known vulnerabilities
5. **Validate licenses** for compliance
6. **Calculate costs** (infrastructure + services + support)
### Phase 3: Gate 6 Validation
**MANDATORY CHECKPOINT** - Must pass before proceeding to Task Breakdown:
- [ ] All dependencies have explicit versions
- [ ] No version conflicts exist
- [ ] No critical security vulnerabilities
- [ ] All licenses are compliant
- [ ] Team has expertise or learning path
- [ ] Costs are acceptable and documented
- [ ] Compatibility matrix verified
- [ ] All TRD components have dependencies mapped
- [ ] All API contracts have protocol implementations selected
- [ ] All Data Model entities have storage technologies selected
## Explicit Rules
### ✅ DO Include in Dependency Map
- Exact package names with explicit versions (go.uber.org/zap@v1.27.0)
- Technology stack with version constraints (Go 1.24+, PostgreSQL 16)
- Infrastructure services with specifications (Valkey 8, MinIO with a pinned RELEASE tag)
- External service SDKs with versions
- Development tool requirements (Go 1.24+, Docker 24+)
- Security dependencies (crypto libraries, scanners)
- Monitoring/observability tools (specific products)
- Compatibility matrices (package A requires package B >= X)
- License summary for all dependencies
- Cost analysis (infrastructure + services)
### ❌ NEVER Include in Dependency Map
- Implementation code or examples
- How to use the dependencies
- Task breakdown or work units
- Step-by-step setup instructions (those go in subtasks)
- Architectural patterns (those were in TRD)
- Business requirements (those were in PRD)
### Version Specification Rules
1. **Explicit versions**: `@v1.27.0` not `@latest` or `^1.0.0`
2. **Justified ranges**: If using `>=`, document why (e.g., security patches)
3. **Lock file referenced**: `go.mod`, `package-lock.json`, etc.
4. **Upgrade constraints**: Document why version is locked/capped
5. **Compatibility**: Document known conflicts or requirements
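Rule 3 in practice: the lock file is the authoritative record of the explicit versions chosen here. A hypothetical `go.mod` fragment matching the package examples used later in this document (the module path is an assumption):

```
module example.com/orders // hypothetical module path

go 1.24

require (
    github.com/gofiber/fiber/v2 v2.52.0 // HTTP framework, per Dependency Map
    github.com/lib/pq v1.10.9           // PostgreSQL driver
    go.uber.org/zap v1.27.0             // structured logging
)
```

Every entry is an exact version; any deviation (ranges, `latest`) should be traceable back to a documented justification in this map.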
## Rationalization Table
| Excuse | Reality |
|--------|---------|
| "Latest version is always best" | Latest is untested in your context. Pick specific, validate. |
| "I'll use flexible version ranges" | Ranges cause non-reproducible builds. Lock versions. |
| "Version numbers don't matter much" | They matter critically. Specify or face build failures. |
| "We can update versions later" | Document constraints now. Future you needs context. |
| "The team knows the stack already" | Document it anyway. Teams change, memories fade. |
| "Security scanning can happen in CI" | Security analysis must happen before committing. Do it now. |
| "We'll figure out costs in production" | Costs must be estimated before building. Calculate now. |
| "Compatibility issues will surface in tests" | Validate compatibility NOW. Don't wait for failures. |
| "License compliance is legal's problem" | You're responsible for your dependencies. Check licenses. |
| "I'll just use what the project template has" | Templates may be outdated/insecure. Validate explicitly. |
## Red Flags - STOP
If you catch yourself writing any of these in a Dependency Map, **STOP**:
- Version placeholders: `@latest`, `@next`, `^X.Y.Z` without justification
- Vague descriptions: "latest stable", "current version", "newest"
- Missing version numbers: Just package names without versions
- Unchecked compatibility: Not verifying version conflicts
- Unvetted security: Not checking vulnerability databases
- Unknown licenses: Not documenting license types
- Estimated costs as "TBD" or "unknown"
- "We'll use whatever is default" (no default without analysis)
**When you catch yourself**: Stop and specify the exact version after proper analysis.
## Gate 6 Validation Checklist
Before proceeding to Task Breakdown, verify:
**Compatibility**:
- [ ] All dependencies have explicit versions documented
- [ ] Version compatibility matrix is complete
- [ ] No known conflicts between dependencies
- [ ] Runtime requirements specified (OS, hardware)
- [ ] Upgrade path exists and is documented
**Security**:
- [ ] All dependencies scanned for vulnerabilities
- [ ] No critical (9.0+) or high (7.0-8.9) CVEs present
- [ ] Security update policy documented
- [ ] Supply chain verified (official sources only)
**Feasibility**:
- [ ] Team has expertise or documented learning path
- [ ] All tools are available/accessible
- [ ] Licensing allows commercial use
- [ ] Costs fit within budget
**Completeness**:
- [ ] Every TRD component has dependencies mapped
- [ ] Development environment fully specified
- [ ] CI/CD dependencies documented
- [ ] Monitoring/observability stack complete
**Documentation**:
- [ ] License summary created
- [ ] Cost analysis completed with estimates
- [ ] Known constraints documented
- [ ] Alternative technologies listed with rationale
**Gate Result**:
- ✅ **PASS**: All checkboxes checked → Proceed to Tasks
- ⚠️ **CONDITIONAL**: Resolve conflicts, add missing versions → Re-validate
- ❌ **FAIL**: Critical vulnerabilities or incompatibilities → Re-evaluate choices
## Common Violations and Fixes
### Violation 1: Vague Version Specifications
**Wrong**:
```yaml
Core Dependencies:
- Fiber (latest)
- PostgreSQL driver (current)
- Zap (newest stable)
```
**Correct**:
```yaml
Core Dependencies:
- gofiber/fiber/v2@v2.52.0
Purpose: HTTP router and middleware
Alternatives Considered: net/http (too low-level), gin (less active)
Trade-offs: Accepting Express-like API for Go
- lib/pq@v1.10.9
Purpose: PostgreSQL driver
Alternatives Considered: pgx (more complex than needed)
Constraint: Must remain compatible with database/sql
- go.uber.org/zap@v1.27.0
Purpose: Structured logging
Alternatives Considered: logrus (slower), slog (Go 1.21+ only)
Trade-offs: Accepting Uber's opinionated API
```
### Violation 2: Missing Security Analysis
**Wrong**:
```yaml
JWT Library: golang-jwt/jwt@v5.0.0
```
**Correct**:
```yaml
JWT Library:
Package: golang-jwt/jwt@v5.2.0
Purpose: JWT token generation and validation
Security:
- CVE Check: Clean (no known vulnerabilities as of 2024-01-15)
- OWASP: Follows best practices for token handling
- Updates: Security patches applied within 24h historically
Alternatives: cristalhq/jwt (no community), lestrrat-go/jwx (complex)
```
### Violation 3: Undefined Infrastructure
**Wrong**:
```yaml
Infrastructure:
- Some database (probably Postgres)
- Cache (Redis or Valkey)
- Storage for files
```
**Correct**:
```yaml
Infrastructure:
Database:
Product: PostgreSQL 16.1
Rationale: ACID guarantees, proven stability, team expertise
Configuration: Single primary + 2 read replicas
Cost: $450/month (managed service) or $120/month (self-hosted)
Cache:
Product: Valkey 8.0
Rationale: Redis fork, OSS license, compatible APIs
Configuration: 3-node cluster, 16GB RAM total
Cost: $90/month (managed) or $45/month (self-hosted)
Object Storage:
Product: MinIO (pin an explicit RELEASE tag; no floating "latest")
Rationale: S3-compatible, self-hosted, no vendor lock
Configuration: 4-node distributed setup, 4TB storage
Cost: $200/month (infrastructure only)
```
## Dependency Resolution Patterns
### For LerianStudio/Midaz Projects (Required)
```yaml
Mandatory Dependencies:
- lib-commons: @latest (LerianStudio shared library; pin the resolved version in go.mod)
- lib-auth: @latest (Midaz authentication; pin the resolved version in go.mod) - Midaz projects only
- Hexagonal structure via boilerplate
- Fiber v2.52+ (web framework standard)
- Zap v1.27+ (logging standard)
Prohibited Choices:
- Heavy ORMs (use sqlc or raw SQL)
- Custom auth implementations (use lib-auth)
- Direct panic() calls in production code
- Application-level proxies (handled at cloud/infrastructure)
Version Constraints:
- Go: 1.24+ (for latest stdlib features)
- PostgreSQL: 16+ (for JSONB improvements)
- Valkey: 8+ (Redis 7.2 API compatibility)
```
### General Best Practices
```yaml
Prefer:
- Semantic versioned packages (major.minor.patch)
- Well-maintained packages (commits within 6 months)
- Minimal dependency trees (avoid transitive bloat)
- Standard library when sufficient
Avoid:
- Deprecated packages (marked deprecated or unmaintained for >1 year)
- Single-maintainer critical dependencies
- Packages with >100 transitive dependencies
- GPL licenses unless compliance is certain
```
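The pinning rule above can be enforced mechanically before Gate 6. A minimal sketch in Python (the regex and the list of vague markers are illustrative assumptions, not a complete specifier grammar):

```python
import re

# Matches "owner/package@vMAJOR.MINOR.PATCH" - an explicit, reproducible pin.
PINNED = re.compile(r"^[\w./-]+@v?\d+\.\d+\.\d+$")

# Vague specifiers that make builds non-reproducible.
VAGUE = ("@latest", "(latest)", "(current)", "(newest", "^", "~", ">=")

def check_pin(spec: str) -> str:
    """Classify a dependency specifier as 'pinned', 'vague', or 'unversioned'."""
    if any(marker in spec for marker in VAGUE):
        return "vague"
    if PINNED.match(spec):
        return "pinned"
    return "unversioned"

deps = [
    "gofiber/fiber/v2@v2.52.0",   # explicit - passes the gate
    "go.uber.org/zap@v1.27.0",    # explicit - passes the gate
    "Fiber (latest)",             # vague - fails the gate
    "lib/pq",                     # no version at all - fails the gate
]

for dep in deps:
    print(f"{dep}: {check_pin(dep)}")
```

Running a check like this over the dependency map turns "no @latest" from a review convention into an automated gate.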
## License Compliance
Document all licenses:
```yaml
License Summary:
MIT: 45 packages
- Permissive, commercial use allowed
- Attribution required in binary distributions
Apache 2.0: 23 packages
- Patent grant included
- Attribution required
BSD-3-Clause: 12 packages
- Permissive, attribution required
Commercial/Proprietary: 2 packages
- cloud-vendor-sdk: Covered under service agreement
- monitoring-agent: Free tier for <100 hosts
Compliance Actions:
- [ ] Attribution file created for distributions
- [ ] Legal team notified of commercial dependencies
- [ ] GPL dependencies: None (verified ✓)
```
## Cost Analysis Template
```yaml
Infrastructure Costs (Monthly):
Compute:
- Production: 4 containers × $50 = $200
- Staging: 2 containers × $25 = $50
- Total: $250/month
Storage:
- Database: $450 (managed PostgreSQL 16)
- Cache: $90 (managed Valkey cluster)
- Object: $200 (self-hosted MinIO on $50 VMs)
- Total: $740/month
Network:
- Data transfer: ~$30/month (estimated)
- Load balancer: $20/month
- Total: $50/month
Third-Party Services:
- Auth provider: $100/month (10k MAU)
- Email service: $50/month
- Monitoring: $0 (self-hosted)
- Total: $150/month
Grand Total: $1,190/month base
Scaling Cost: +$150 per 1000 additional users
Cost Validation:
- Budget: $2,000/month available
- Margin: 40% buffer for growth
- Status: ✅ Within budget
```
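The template's totals should be sanity-checked arithmetically before gate sign-off. A small sketch using the example figures above (the category names are taken from the template; the numbers are the template's illustrative estimates, not real prices):

```python
# Cost categories from the template above, in USD per month.
costs = {
    "compute": {"production": 4 * 50, "staging": 2 * 25},          # $250
    "storage": {"database": 450, "cache": 90, "object": 200},      # $740
    "network": {"data_transfer": 30, "load_balancer": 20},         # $50
    "third_party": {"auth": 100, "email": 50, "monitoring": 0},    # $150
}

subtotals = {name: sum(items.values()) for name, items in costs.items()}
grand_total = sum(subtotals.values())

budget = 2000
margin = (budget - grand_total) / budget  # buffer remaining for growth

for name, subtotal in subtotals.items():
    print(f"{name}: ${subtotal}/month")
print(f"grand total: ${grand_total}/month, budget margin: {margin:.0%}")
```

For the example figures this reproduces the $1,190 grand total and the ~40% budget buffer claimed above; a mismatch means the template was edited inconsistently.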
## Confidence Scoring
Use this to adjust your interaction with the user:
```yaml
Confidence Factors:
Technology Familiarity: [0-30]
- Stack used successfully before: 30
- Similar stack with variations: 20
- Novel technology choices: 10
Compatibility Verification: [0-25]
- All dependencies verified compatible: 25
- Most dependencies checked: 15
- Limited verification: 5
Security Assessment: [0-25]
- Full CVE scan completed: 25
- Basic security check done: 15
- No security review: 5
Cost Analysis: [0-20]
- Detailed cost breakdown: 20
- Rough estimates: 12
- No cost analysis: 5
Total: [0-100]
Action:
80+: Generate complete dependency map autonomously
50-79: Present alternatives for key dependencies
<50: Ask about team expertise and constraints
```
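The scoring rubric above is simple enough to encode directly. A sketch (factor ranges and thresholds are copied from the rubric; the function name is illustrative):

```python
def dependency_confidence(familiarity: int, compatibility: int,
                          security: int, cost: int) -> tuple[int, str]:
    """Sum the four factor scores and map the total to the recommended action."""
    assert 0 <= familiarity <= 30 and 0 <= compatibility <= 25
    assert 0 <= security <= 25 and 0 <= cost <= 20
    total = familiarity + compatibility + security + cost
    if total >= 80:
        action = "generate complete dependency map autonomously"
    elif total >= 50:
        action = "present alternatives for key dependencies"
    else:
        action = "ask about team expertise and constraints"
    return total, action

# Familiar stack, most deps checked, basic security review, rough cost estimates:
print(dependency_confidence(30, 15, 15, 12))  # total 72 → present alternatives
```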
## Output Location
**Always output to**: `docs/pre-development/dependencies/dep-map-[feature-name].md`
## After Dependency Map Approval
1. ✅ Lock all versions - update only with documented justification
2. 🎯 Create lock files (go.sum, package-lock.json, etc.)
3. 🔒 Set up Dependabot or equivalent for security updates
4. 📋 Proceed to task breakdown with full stack context
## Quality Self-Check
Before declaring Dependency Map complete, verify:
- [ ] Every dependency has explicit version (no @latest)
- [ ] All version conflicts resolved and documented
- [ ] Security scan completed (CVE database checked)
- [ ] All licenses documented and compliant
- [ ] Cost analysis completed with monthly estimates
- [ ] Team expertise verified or learning plan exists
- [ ] Compatibility matrix complete
- [ ] Upgrade constraints documented
- [ ] All TRD components have dependencies mapped
- [ ] Gate 6 validation checklist 100% complete
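Since every gate in this workflow is a markdown checklist, the self-check can be automated: scan the document for items still unchecked. A minimal sketch (the sample `doc` reuses items from the checklist above; the regex assumes the standard `- [ ]` / `- [x]` syntax):

```python
import re

def unchecked_items(markdown: str) -> list[str]:
    """Return the text of every '- [ ]' checklist item still unchecked."""
    return re.findall(r"^\s*-\s\[ \]\s*(.+)$", markdown, flags=re.MULTILINE)

doc = """\
- [x] Every dependency has explicit version (no @latest)
- [ ] Security scan completed (CVE database checked)
- [x] All licenses documented and compliant
"""

remaining = unchecked_items(doc)
if remaining:
    print("Gate NOT passable - unchecked items:")
    for item in remaining:
        print(f"  - {item}")
else:
    print("All items checked - gate may proceed.")
```

The same function works for any gate in this workflow, since they all share the checkbox convention.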
## The Bottom Line
**If you wrote a Dependency Map without explicit versions, add them now or start over.**
Every dependency must be specific. Period. No @latest. No version ranges without justification. No "we'll figure it out later".
Vague dependencies cause:
- Non-reproducible builds that work on your machine but fail elsewhere
- Security vulnerabilities from unvetted versions
- Incompatibilities discovered during implementation (too late)
- Impossible debugging when "it worked yesterday"
**Be explicit. Be specific. Lock your versions.**
Your deployment engineer will thank you. Your future debugging self will thank you. Your security team will thank you.
---
name: pre-dev-feature-map
description: |
Gate 2: Feature relationship map - visualizes feature landscape, groupings,
and interactions at business level before technical architecture.
trigger: |
- PRD passed Gate 1 validation
- Multiple features with complex interactions
- Need to understand feature scope and relationships
- Large Track workflow (2+ day features)
skip_when: |
- Small Track workflow (<2 days) → skip to TRD
- Single simple feature → TRD directly
- PRD not validated → complete Gate 1 first
sequence:
after: [pre-dev-prd-creation]
before: [pre-dev-trd-creation]
---
# Feature Map Creation - Understanding the Feature Landscape
## Foundational Principle
**Feature relationships and boundaries must be mapped before architectural decisions.**
Jumping from PRD to TRD without mapping creates:
- Architectures that don't match feature interaction patterns
- Missing integration points discovered late
- Poor module boundaries that cross feature concerns
- Difficulty prioritizing work without understanding dependencies
**The Feature Map answers**: How do features relate, group, and interact at a business level?
**The Feature Map never answers**: How we'll technically implement those features (that's TRD).
## When to Use This Skill
Use this skill when:
- PRD has passed Gate 1 validation
- About to start technical architecture (TRD)
- Need to understand feature scope and relationships
- Multiple features have complex interactions
- Unclear how to group or prioritize features
## Mandatory Workflow
### Phase 1: Feature Analysis (Inputs Required)
1. **Approved PRD** (Gate 1 passed) - business requirements locked
2. **Extract all features** from PRD
3. **Identify user journeys** across features
4. **Map feature interactions** and dependencies
### Phase 2: Feature Mapping
1. **Categorize features** (Core/Supporting/Enhancement/Integration)
2. **Group into domains** (logical business groupings)
3. **Map user journeys** (how users flow through features)
4. **Identify integration points** (where features interact)
5. **Define boundaries** (what each feature owns)
6. **Visualize relationships** (diagrams or structured text)
7. **Prioritize by value** (core vs. nice-to-have)
### Phase 3: Gate 2 Validation
**MANDATORY CHECKPOINT** - Must pass before proceeding to TRD:
- [ ] All PRD features are mapped
- [ ] Feature categories are clearly defined
- [ ] Domain groupings are logical and cohesive
- [ ] User journeys are complete (start to finish)
- [ ] Integration points are identified
- [ ] Feature boundaries are clear (no overlap)
- [ ] Priority levels support phased delivery
- [ ] No technical implementation details included
## Explicit Rules
### ✅ DO Include in Feature Map
- Feature list (extracted from PRD)
- Feature categories (Core/Supporting/Enhancement/Integration)
- Domain groupings (logical business areas)
- User journey maps (how users move through features)
- Feature interactions (which features depend on/trigger others)
- Integration points (where features exchange data/events)
- Feature boundaries (what each feature owns)
- Priority levels (MVP vs. future phases)
- Scope visualization (what's in/out of each phase)
### ❌ NEVER Include in Feature Map
- Technical architecture or component design
- Technology choices or framework decisions
- Database schemas or API specifications
- Implementation approaches or algorithms
- Infrastructure or deployment concerns
- Code structure or file organization
- Protocol choices or data formats
### Feature Categorization Rules
1. **Core**: Must have for MVP, blocks other features
2. **Supporting**: Enables core features, medium priority
3. **Enhancement**: Improves existing features, nice-to-have
4. **Integration**: Connects to external systems, varies by need
### Domain Grouping Rules
1. Group features by business capability (not technical layer)
2. Each domain should have cohesive, related features
3. Minimize cross-domain dependencies
4. Name domains by business function (User Management, Payment Processing)
## Rationalization Table
| Excuse | Reality |
|--------|---------|
| "Feature relationships are obvious" | Obvious to you ≠ documented for team. Map them. |
| "We can figure out groupings during TRD" | TRD architecture follows feature structure. Define it first. |
| "This feels like extra work" | Skipping this causes rework when architecture mismatches features. |
| "The PRD already has this info" | PRD lists features; map shows relationships. Different views. |
| "I'll just mention the components" | Components are technical (TRD). This is business groupings only. |
| "User journeys are in the PRD" | PRD has stories; map shows cross-feature flows. Different levels. |
| "Integration points are technical" | Points WHERE features interact = business. HOW = technical (TRD). |
| "Priorities can be set later" | Priority affects architecture decisions. Set them before TRD. |
| "Boundaries will be clear in code" | Code structure follows feature boundaries. Define them first. |
| "This is just a simple feature" | Even simple features have interactions. Map them. |
## Red Flags - STOP
If you catch yourself writing any of these in a Feature Map, **STOP**:
- Technology names (APIs, databases, frameworks)
- Component names (AuthService, PaymentProcessor)
- Technical terms (microservices, endpoints, schemas)
- Implementation details (how data flows technically)
- Architecture diagrams (system components)
- Code organization (packages, modules, files)
- Protocol specifications (REST, GraphQL, gRPC)
**When you catch yourself**: Remove the technical detail. Focus on WHAT features do and HOW they relate at a business level.
## Gate 2 Validation Checklist
Before proceeding to TRD, verify:
**Feature Completeness**:
- [ ] All PRD features are included in map
- [ ] Each feature has clear description and purpose
- [ ] Feature categories are assigned (Core/Supporting/Enhancement/Integration)
- [ ] No features are missing or overlooked
**Grouping Clarity**:
- [ ] Domains are logically cohesive (related business capabilities)
- [ ] Domain boundaries are clear (no overlapping responsibilities)
- [ ] Cross-domain dependencies are minimized
- [ ] Domain names reflect business function (not technical layer)
**Journey Mapping**:
- [ ] Primary user journeys are documented (start to finish)
- [ ] Journeys show which features users touch
- [ ] Happy path and error scenarios covered
- [ ] Handoff points between features identified
**Integration Points**:
- [ ] All feature interactions are identified
- [ ] Data/event exchange points are marked
- [ ] Directional dependencies are clear (A depends on B)
- [ ] Circular dependencies are flagged and resolved
**Priority & Phasing**:
- [ ] MVP features clearly identified
- [ ] Priority rationale is documented
- [ ] Phasing supports incremental value delivery
- [ ] Dependencies don't block MVP delivery
**Gate Result**:
- ✅ **PASS**: All checkboxes checked → Proceed to TRD
- ⚠️ **CONDITIONAL**: Clarify ambiguous boundaries → Re-validate
- ❌ **FAIL**: Features poorly grouped or missing → Rework
## Feature Map Template
Use this structure:
```markdown
# Feature Map: [Project/Feature Name]
## Overview
- **PRD Reference**: [Link to approved PRD]
- **Last Updated**: [Date]
- **Status**: Draft / Under Review / Approved
## Feature Inventory
### Core Features (MVP)
| Feature ID | Feature Name | Description | User Value | Dependencies |
|------------|--------------|-------------|------------|--------------|
| F-001 | [Name] | [Brief description] | [What users gain] | [Other features] |
| F-002 | [Name] | [Brief description] | [What users gain] | [Other features] |
### Supporting Features
| Feature ID | Feature Name | Description | User Value | Dependencies |
|------------|--------------|-------------|------------|--------------|
| F-101 | [Name] | [Brief description] | [What users gain] | [Other features] |
### Enhancement Features (Post-MVP)
| Feature ID | Feature Name | Description | User Value | Dependencies |
|------------|--------------|-------------|------------|--------------|
| F-201 | [Name] | [Brief description] | [What users gain] | [Other features] |
### Integration Features
| Feature ID | Feature Name | Description | User Value | Dependencies |
|------------|--------------|-------------|------------|--------------|
| F-301 | [Name] | [Brief description] | [What users gain] | [Other features] |
## Domain Groupings
### Domain 1: [Business Domain Name]
**Purpose**: [What business capability this domain provides]
**Features**:
- F-001: [Feature name]
- F-002: [Feature name]
- F-101: [Feature name]
**Boundaries**:
- **Owns**: [What data/processes this domain is responsible for]
- **Consumes**: [What it needs from other domains]
- **Provides**: [What it offers to other domains]
**Integration Points**:
- → Domain 2: [What/why they interact]
- ← Domain 3: [What/why they interact]
### Domain 2: [Business Domain Name]
[Same structure as Domain 1]
## User Journeys
### Journey 1: [Journey Name]
**User Type**: [Primary persona from PRD]
**Goal**: [What user wants to accomplish]
**Path**:
1. **[Feature F-001]**: [User action and feature response]
- Integration: [If interacts with another feature]
2. **[Feature F-002]**: [User action and feature response]
3. **[Feature F-003]**: [User action and feature response]
- Success: [What happens on success]
- Failure: [What happens on failure, which feature handles]
**Cross-Domain Interactions**:
- Domain 1 → Domain 2: [What data/event passes between]
- Domain 2 → Domain 3: [What data/event passes between]
### Journey 2: [Journey Name]
[Same structure]
## Feature Interaction Map
### High-Level Relationships
    [Visual or structured text showing feature relationships]

    Example:

    ┌──────────────┐
    │  User Auth   │  (Core)
    │    F-001     │
    └──────┬───────┘
           │ provides identity to
           ▼
    ┌──────────────┐      ┌──────────────┐
    │ User Profile │─────→│ File Upload  │
    │    F-002     │      │    F-003     │
    └──────┬───────┘      └──────────────┘
        (Core)              (Supporting)
           │ triggers
           ▼
    ┌──────────────┐
    │ Notifications│
    │    F-101     │
    └──────────────┘
      (Supporting)
### Dependency Matrix
| Feature | Depends On | Blocks | Optional |
|---------|-----------|--------|----------|
| F-001 | None | F-002, F-003 | - |
| F-002 | F-001 | F-101 | F-003 |
| F-003 | F-001 | None | F-002 |
| F-101 | F-002 | None | - |
## Phasing Strategy
### Phase 1 - MVP (Core Features)
**Goal**: [Minimum viable product goal]
**Timeline**: [Estimated timeframe]
**Features**:
- F-001: User Auth
- F-002: User Profile
- F-003: File Upload
**User Value**: [What users can do after Phase 1]
**Success Criteria**: [How we measure Phase 1 success]
### Phase 2 - Enhancement
**Goal**: [Enhancement goal]
**Triggers**: [What conditions trigger Phase 2]
**Features**:
- F-101: Notifications
- F-102: Advanced Search
- F-201: Social Sharing
**User Value**: [What users gain in Phase 2]
### Phase 3 - Integration
**Goal**: [Integration goal]
**Features**:
- F-301: External Payment Gateway
- F-302: Third-party Analytics
**User Value**: [What users gain in Phase 3]
## Scope Boundaries
### In Scope (This Feature Map)
- [Feature area 1]
- [Feature area 2]
- [Feature area 3]
### Out of Scope (Future / Other Projects)
- [Feature area X] - Rationale: [Why out of scope]
- [Feature area Y] - Rationale: [Why out of scope]
### Assumptions
- [Assumption 1 about features or interactions]
- [Assumption 2]
### Constraints
- [Business constraint 1 that affects feature scope]
- [Business constraint 2]
## Risk Assessment
### Feature Complexity Risks
| Feature | Complexity | Risk | Mitigation |
|---------|-----------|------|------------|
| F-001 | High | User adoption | [How to mitigate] |
| F-003 | Medium | Scope creep | [How to mitigate] |
### Integration Risks
| Integration Point | Risk | Impact | Mitigation |
|-------------------|------|--------|------------|
| F-001 → F-002 | [Risk description] | High | [Mitigation] |
| F-002 → F-101 | [Risk description] | Medium | [Mitigation] |
## Gate 2 Validation
**Validation Date**: [Date]
**Validated By**: [Person/team]
- [ ] All PRD features mapped
- [ ] Domain groupings are logical
- [ ] User journeys are complete
- [ ] Integration points identified
- [ ] Priorities support phased delivery
- [ ] No technical details included
- [ ] Ready for TRD architecture design
**Approval**: ☐ Approved | ☐ Needs Revision | ☐ Rejected
**Next Step**: Proceed to TRD Creation (pre-dev-trd-creation)
```
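The Dependency Matrix in the template above is exactly the input a circular-dependency check needs (Gate 2 requires circular dependencies to be flagged). A sketch using the example matrix's "Depends On" column (the adjacency-map encoding is an assumption about how you would transcribe the table):

```python
# "Depends On" column of the example matrix, as an adjacency map.
depends_on = {
    "F-001": [],
    "F-002": ["F-001"],
    "F-003": ["F-001"],
    "F-101": ["F-002"],
}

def find_cycle(graph):
    """Depth-first search; returns one dependency cycle as a list, else None."""
    visiting, done = set(), set()

    def visit(node, path):
        if node in visiting:
            return path[path.index(node):] + [node]  # cycle found
        if node in done:
            return None
        visiting.add(node)
        for dep in graph.get(node, []):
            cycle = visit(dep, path + [node])
            if cycle:
                return cycle
        visiting.discard(node)
        done.add(node)
        return None

    for node in graph:
        cycle = visit(node, [])
        if cycle:
            return cycle
    return None

print(find_cycle(depends_on))  # None - the example matrix is acyclic
```

A non-None result names the offending chain, which is the concrete evidence the gate checklist asks for when flagging circular dependencies.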
## Common Violations and Fixes
### Violation 1: Technical Details in Feature Descriptions
**Wrong**:
```markdown
### F-001: User Authentication
- Description: JWT-based auth with PostgreSQL session storage
- Dependencies: Database, Redis cache, OAuth2 service
```
**Correct**:
```markdown
### F-001: User Authentication
- Description: Users can create accounts and securely log in
- User Value: Access to personalized features
- Dependencies: None (foundational)
- Blocks: F-002 (Profile), F-003 (Upload)
```
### Violation 2: Technical Components in Domain Groupings
**Wrong**:
```markdown
### Domain: Auth Services
- AuthService component
- TokenValidator component
- SessionManager component
```
**Correct**:
```markdown
### Domain: User Identity
**Purpose**: Managing user accounts, authentication, and session lifecycle
**Features**:
- F-001: User Registration
- F-002: User Login
- F-003: Session Management
- F-004: Password Recovery
**Boundaries**:
- **Owns**: User credentials, session state, login history
- **Provides**: Identity verification to all other domains
- **Consumes**: Email service for notifications
```
### Violation 3: Implementation in Integration Points
**Wrong**:
```markdown
### Integration Points
- User Auth → Profile: REST API call to /api/profile with JWT
- Profile → Storage: S3 upload via pre-signed URL
```
**Correct**:
```markdown
### Integration Points
- User Auth → Profile: Provides verified user identity
- Profile → File Storage: Requests secure file storage for user uploads
- File Storage → Notifications: Triggers notification when upload completes
```
## Confidence Scoring
```yaml
Confidence Factors:
Feature Coverage: [0-25]
- All PRD features mapped: 25
- Most features mapped: 15
- Some features missing: 5
Relationship Clarity: [0-25]
- All interactions documented: 25
- Most interactions clear: 15
- Relationships unclear: 5
Domain Cohesion: [0-25]
- Domains are logically cohesive: 25
- Domains mostly cohesive: 15
- Poor domain boundaries: 5
Journey Completeness: [0-25]
- All user paths mapped: 25
- Primary paths mapped: 15
- Journeys incomplete: 5
Total: [0-100]
Action:
80+: Feature map complete, proceed to TRD
50-79: Address gaps, re-validate
<50: Rework groupings and relationships
```
## Output Location
**Always output to**: `docs/pre-development/feature-map/feature-map-[feature-name].md`
## After Feature Map Approval
1. ✅ Lock the Feature Map - feature scope and relationships are now the reference
2. 🎯 Use Feature Map as input for TRD (next phase)
3. 🚫 Never add technical architecture to Feature Map retroactively
4. 📋 Keep business features separate from technical components
## Quality Self-Check
Before declaring Feature Map complete, verify:
- [ ] All PRD features are included and categorized
- [ ] Domain groupings are cohesive and logical
- [ ] User journeys show cross-feature flows
- [ ] Integration points are identified (WHAT interacts, not HOW)
- [ ] Feature boundaries are clear (no overlap)
- [ ] Priority levels support phased delivery
- [ ] Dependencies don't create circular blocks
- [ ] Zero technical implementation details present
- [ ] Gate 2 validation checklist 100% complete
## The Bottom Line
**If you wrote a Feature Map with technical architecture details, remove them.**
The Feature Map is business-level feature relationships only. Period. No components. No APIs. No databases.
Technical architecture goes in TRD. That's the next phase. Wait for it.
Violating this separation means:
- You're constraining architecture before understanding feature interactions
- Feature groupings become coupled to technical layers
- You can't objectively design architecture that matches business needs
- Boundaries become technical instead of business-driven
**Map the features. Understand relationships. Then architect in TRD.**
---
name: pre-dev-prd-creation
description: |
Gate 1: Business requirements document - defines WHAT/WHY before HOW.
Creates PRD with problem definition, user stories, success metrics.
trigger: |
- Starting new product or major feature
- User asks to "plan", "design", or "architect"
- About to write code without documented requirements
- Asked to create PRD or requirements document
skip_when: |
- PRD already exists and validated → proceed to Gate 2
- Pure technical task without business impact → TRD directly
- Bug fix → systematic-debugging
sequence:
before: [pre-dev-feature-map, pre-dev-trd-creation]
---
# PRD Creation - Business Before Technical
## Foundational Principle
**Business requirements (WHAT/WHY) must be fully defined before technical decisions (HOW/WHERE).**
Mixing business and technical concerns creates:
- Requirements that serve implementation convenience, not user needs
- Technical constraints that limit product vision
- Inability to evaluate alternatives objectively
- Cascade failures when requirements change
**The PRD answers**: WHAT we're building and WHY it matters to users and business.
**The PRD never answers**: HOW we'll build it or WHERE components will live.
## When to Use This Skill
Use this skill when:
- Starting a new product or major feature
- User asks to "plan", "design", or "architect" something
- About to write code without documented requirements
- Tempted to add technical details to business requirements
- Asked to create a PRD or requirements document
## Mandatory Workflow
### Phase 0: Load Research Findings (if available)
Before writing the PRD, check for research output from Gate 0:
```bash
# Check if research.md exists
ls docs/pre-dev/{feature-name}/research.md 2>/dev/null
```
**If research.md exists:**
1. Load `docs/pre-dev/{feature-name}/research.md`
2. Review codebase patterns (from repo-research-analyst)
3. Review best practices (from best-practices-researcher)
4. Review framework constraints (from framework-docs-researcher)
**Required PRD Enhancements from Research:**
- Reference existing patterns with `file:line` notation where relevant
- Cite knowledge base findings from `docs/solutions/` if applicable
- Note any constraints discovered that affect requirements
- Include external best practice URLs as references
**If research.md doesn't exist:** Proceed without research context (not recommended for complex features).
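One way to surface the `file:line` citations this phase asks for is to extract them from research.md mechanically. A hedged sketch (the sample notes, paths, and citation format are hypothetical - adapt the regex to whatever convention your research agents actually emit):

```python
import re

# Matches "path/to/file.ext:123" style citations in research notes.
FILE_LINE = re.compile(r"\b([\w./-]+\.\w+):(\d+)\b")

research_notes = """\
Auth middleware already exists (internal/http/middleware/auth.go:42).
Retry pattern documented in docs/solutions/retries.md:17.
"""

citations = FILE_LINE.findall(research_notes)
for path, line in citations:
    print(f"- Existing pattern: {path}:{line}")
```

The extracted list can be pasted into the PRD's references section so every claim about existing behavior is traceable to a location in the codebase.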
### Phase 1: Problem Discovery
1. **Define the problem** without solution bias
2. **Identify users** specifically (not "users" generally)
3. **Quantify pain** with metrics or qualitative evidence
### Phase 2: Business Requirements
1. **Write Executive Summary** (problem + solution + impact in 3 sentences)
2. **Create User Personas** with real goals and frustrations
3. **Write User Stories** in format: "As [persona], I want [action] so that [benefit]"
4. **Define Success Metrics** that are measurable
5. **Set Scope Boundaries** (in/out explicitly)
### Phase 3: Gate 1 Validation
**MANDATORY CHECKPOINT** - Must pass before proceeding to Feature Map:
- [ ] Problem is clearly articulated
- [ ] Impact is quantified or qualified
- [ ] Users are specifically identified
- [ ] Features address core problem
- [ ] Success metrics are measurable
- [ ] In/out of scope is explicit
## Explicit Rules
### ✅ DO Include in PRD
- Problem definition and user pain points
- User personas with demographics, goals, frustrations
- User stories with acceptance criteria
- Feature requirements (WHAT it does, not HOW)
- Success metrics (user adoption, satisfaction, business KPIs)
- Scope boundaries (in/out explicitly)
- Go-to-market considerations
### ❌ NEVER Include in PRD
- Architecture diagrams or component design
- Technology choices (languages, frameworks, databases)
- Implementation approaches or algorithms
- Database schemas or API specifications
- Code examples or package dependencies
- Infrastructure needs or deployment strategies
- System integration patterns
### Separation Rules
1. **If it's a technology name** → Not in PRD (goes in Dependency Map)
2. **If it's a "how to build"** → Not in PRD (goes in TRD)
3. **If it's implementation** → Not in PRD (goes in Tasks/Subtasks)
4. **If it describes system behavior** → Not in PRD (goes in TRD)
## Rationalization Table
| Excuse | Reality |
|--------|---------|
| "Just a quick technical note won't hurt" | Technical details constrain business thinking. Keep them separate. |
| "Stakeholders need to know it's feasible" | Feasibility comes in TRD after business requirements are locked. |
| "The implementation is obvious" | Obvious to you ≠ obvious to everyone. Separate concerns. |
| "I'll save time by combining PRD and TRD" | You'll waste time rewriting when requirements change. |
| "This is a simple feature, no need for formality" | Simple features still need clear requirements. Follow the process. |
| "I can skip Gate 1, I know it's good" | Gates exist because humans are overconfident. Validate. |
| "The problem is obvious, no need for personas" | Obvious to you ≠ validated with users. Document it. |
| "Success metrics can be defined later" | Defining metrics later means building without targets. Do it now. |
| "I'll just add this one API endpoint detail" | API design is technical architecture. Stop. Keep it in TRD. |
| "But we already decided on PostgreSQL" | Technology decisions come after business requirements. Wait. |
| "CEO/CTO says it's a business constraint" | Authority doesn't change what's technical. Abstract it anyway. |
| "Investors need to see specific vendors/tech" | Show phasing and constraints abstractly. Vendors go in TRD. |
| "This is product scoping, not technical design" | Scope = capabilities. Technology = implementation. Different things. |
| "Mentioning Stripe shows we're being practical" | Mentioning "payment processor" shows the same. Stay abstract. |
| "PRDs can mention tech when it's a constraint" | PRDs mention capabilities needed. TRD maps capabilities to tech. |
| "Context matters - this is for exec review" | Context doesn't override principles. Executives get abstracted version. |
## Red Flags - STOP
If you catch yourself writing or thinking any of these in a PRD, **STOP**:
- Technology product names (PostgreSQL, Redis, Kafka, AWS, etc.)
- Framework or library names (React, Fiber, Express, etc.)
- Words like: "architecture", "component", "service", "endpoint", "schema"
- Phrases like: "we'll use X to do Y" or "the system will store data in Z"
- Code examples or API specifications
- "How we'll implement" or "Technical approach"
- Database table designs or data models
- Integration patterns or protocols
**When you catch yourself**: Move that content to a "technical notes" section to transfer to TRD later. Keep PRD pure business.
## Gate 1 Validation Checklist
Before proceeding to TRD, verify:
**Problem Definition**:
- [ ] Problem is clearly articulated in 1-2 sentences
- [ ] Impact is quantified (metrics) or qualified (evidence)
- [ ] Users are specifically identified (not just "users")
- [ ] Current workarounds are documented
**Solution Value**:
- [ ] Features address the core problem (not feature creep)
- [ ] Success metrics are measurable and specific
- [ ] ROI case is reasonable and documented
- [ ] User value is clear for each feature
**Scope Clarity**:
- [ ] In-scope items are explicitly listed
- [ ] Out-of-scope items are explicitly listed with rationale
- [ ] Assumptions are documented
- [ ] Dependencies are identified (business, not technical)
**Market Fit**:
- [ ] Differentiation from alternatives is clear
- [ ] User value proposition is validated
- [ ] Business case is sound
- [ ] Go-to-market approach outlined
**Gate Result**:
- ✅ **PASS**: All checkboxes checked → Proceed to Feature Map (`pre-dev-feature-map`)
- ⚠️ **CONDITIONAL**: Address specific gaps → Re-validate
- ❌ **FAIL**: Multiple issues → Return to discovery
## Common Violations and Fixes
### Violation 1: Technical Details in Features
**Wrong**:
```markdown
**FR-001: User Authentication**
- Use JWT tokens for session management
- Store passwords with bcrypt
- Implement OAuth2 with Google/GitHub providers
```
**Correct**:
```markdown
**FR-001: User Authentication**
- Description: Users can create accounts and securely log in
- User Value: Access personalized content without re-entering credentials
- Success Criteria: 95% of users successfully authenticate on first attempt
- Priority: Must-have
```
### Violation 2: Implementation in User Stories
**Wrong**:
```markdown
As a user, I want to store my data in PostgreSQL
so that queries are fast.
```
**Correct**:
```markdown
As a user, I want to see my dashboard load in under 2 seconds
so that I can quickly access my information.
```
### Violation 3: Architecture in Problem Definition
**Wrong**:
```markdown
**Problem**: Our microservices architecture doesn't support
real-time notifications, so users miss important updates.
```
**Correct**:
```markdown
**Problem**: Users miss important updates because they must
manually refresh the page. 78% of users report missing
time-sensitive information.
```
### Violation 4: Authority-Based Technical Bypass
**Wrong** (CEO requests):
```markdown
## MVP Scope
MVP (3 months):
- Stripe for payment processing (fastest integration)
- Support EUR, GBP, JPY
- Store conversions in PostgreSQL (we already use it)
Phase 2:
- Maybe switch to Adyen if Stripe doesn't scale
```
**Correct** (abstracted):
```markdown
## MVP Scope
Phase 1 - Market Validation (0-3 months):
- **Payment Processing**: Integrate with existing payment vendor (2-week integration timeline)
- **Currency Support**: EUR, GBP, JPY (covers 65% of international traffic)
- **Data Storage**: Leverage existing database infrastructure (zero operational overhead)
- **Success Criteria**: 100 transactions in 30 days, <5% failure rate
Phase 2 - Scale & Optimize (4-6 months):
- **Trigger**: >1,000 monthly transactions OR processing costs >$50k/month
- **Scope**: Additional currencies based on Phase 1 demand data
- **Optimization**: Re-evaluate payment processor if fees exceed 3% of revenue
**Constraint Rationale**: Phase 1 prioritizes speed-to-market over flexibility.
Technical decisions will be documented in TRD with specific vendor selection.
```
**Key Principle**: Authority figures (CEO, CTO, investors) may REQUEST technical specifics, but your job is to ABSTRACT them. "We'll use Stripe" becomes "existing payment vendor". "PostgreSQL" becomes "existing database infrastructure". The capability is documented; the implementation waits for TRD.
## Confidence Scoring
Use this to adjust your interaction with the user:
```yaml
Confidence Factors:
Market Validation: [0-25]
- Direct user feedback: 25
- Market research: 15
- Assumptions: 5
Problem Clarity: [0-25]
- Quantified pain: 25
- Qualitative evidence: 15
- Hypothetical: 5
Solution Fit: [0-25]
- Proven pattern: 25
- Adjacent pattern: 15
- Novel approach: 5
Business Value: [0-25]
- Clear ROI: 25
- Indirect value: 15
- Uncertain: 5
Total: [0-100]
Action:
80+: Generate complete PRD autonomously
50-79: Present options for user selection
<50: Ask discovery questions
```
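The rubric above reduces to a threshold function; a minimal sketch (type and function names are illustrative):

```typescript
// Each factor takes one of the rubric's three tiers: 5, 15, or 25.
interface ConfidenceFactors {
  marketValidation: number;
  problemClarity: number;
  solutionFit: number;
  businessValue: number;
}

// Sum the factors and pick the interaction mode defined by the rubric.
function prdAction(f: ConfidenceFactors): string {
  const total = f.marketValidation + f.problemClarity + f.solutionFit + f.businessValue;
  if (total >= 80) return "generate complete PRD autonomously";
  if (total >= 50) return "present options for user selection";
  return "ask discovery questions";
}
```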
## Output Location
**Always output to**: `docs/pre-development/prd/prd-[feature-name].md`
## After PRD Approval
1. ✅ Lock the PRD - no more changes without formal amendment
2. 🎯 Use PRD as input for Feature Map (next phase: `pre-dev-feature-map`)
3. 🚫 Never add technical details to PRD retroactively
4. 📋 Keep business/technical concerns strictly separated
## Quality Self-Check
Before declaring PRD complete, verify:
- [ ] Zero technical implementation details present
- [ ] All technology names removed
- [ ] User needs clearly articulated
- [ ] Success metrics are measurable and specific
- [ ] Scope boundaries are explicit and justified
- [ ] Business value is clearly justified
- [ ] User journeys are complete (current vs. proposed)
- [ ] Risks are identified with business impact
- [ ] Gate 1 validation checklist 100% complete
## The Bottom Line
**If you wrote a PRD with technical details, delete it and start over.**
The PRD is business-only. Period. No exceptions. No "just this once". No "but it's relevant".
Technical details go in TRD. That's the next phase. Wait for it.
Violating this separation means:
- You're optimizing for technical convenience, not user needs
- Requirements will change and break your technical assumptions
- You can't objectively evaluate technical alternatives
- The business case becomes coupled to implementation choices
**Follow the separation. Your future self will thank you.**
---
name: pre-dev-research
description: |
Gate 0 research phase for pre-dev workflow. Dispatches 3 parallel research agents
to gather codebase patterns, external best practices, and framework documentation
BEFORE creating PRD/TRD. Outputs research.md with file:line references.
trigger: |
- Before any pre-dev workflow (Gate 0)
- When planning new features or modifications
- Invoked by /ring-pm-team:pre-dev-full and /ring-pm-team:pre-dev-feature
skip_when: |
- Trivial changes that don't need planning
- Research already completed (research.md exists and is recent)
sequence:
before: [pre-dev-prd-creation, pre-dev-feature-map]
related:
complementary: [pre-dev-prd-creation, pre-dev-trd-creation]
research_modes:
greenfield:
description: "New feature with no existing patterns to follow"
primary_agents: [best-practices-researcher, framework-docs-researcher]
secondary_agents: [repo-research-analyst]
focus: "External best practices and framework patterns"
modification:
description: "Changing or extending existing functionality"
primary_agents: [repo-research-analyst]
secondary_agents: [best-practices-researcher, framework-docs-researcher]
focus: "Existing codebase patterns and conventions"
integration:
description: "Connecting systems or adding external dependencies"
primary_agents: [framework-docs-researcher, best-practices-researcher, repo-research-analyst]
secondary_agents: []
focus: "API documentation and integration patterns"
---
# Pre-Dev Research Skill (Gate 0)
**Purpose:** Gather comprehensive research BEFORE writing planning documents, ensuring PRDs and TRDs are grounded in codebase reality and industry best practices.
## The Research-First Principle
```
Traditional: Request → PRD → Discover problems during implementation
Research-First: Request → Research → Informed PRD → Smoother implementation
Research prevents:
- Reinventing patterns that already exist
- Ignoring project conventions
- Missing framework constraints
- Repeating solved problems
```
---
## Step 1: Determine Research Mode
**BLOCKING GATE:** Before dispatching agents, determine the research mode.
Ask the user or infer from context:
| Mode | When to Use | Example |
|------|-------------|---------|
| **greenfield** | No existing patterns to follow | "Add GraphQL API" (when project has none) |
| **modification** | Extending existing functionality | "Add pagination to user list API" |
| **integration** | Connecting external systems | "Integrate Stripe payments" |
**If unclear, ask:**
```
Before starting research, I need to understand the context:
1. **Greenfield** - This is a completely new capability with no existing patterns
2. **Modification** - This extends or changes existing functionality
3. **Integration** - This connects with external systems/APIs
Which mode best describes this feature?
```
**Mode affects agent priority:**
- Greenfield → Web research is primary (best-practices, framework-docs)
- Modification → Codebase research is primary (repo-research)
- Integration → All agents equally weighted
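The mode-to-priority mapping above (also encoded in the frontmatter's `research_modes`) can be expressed as a lookup table; a sketch using this plugin's agent names:

```typescript
type ResearchMode = "greenfield" | "modification" | "integration";
type AgentName =
  | "repo-research-analyst"
  | "best-practices-researcher"
  | "framework-docs-researcher";

// Mirrors the research_modes block in this skill's frontmatter.
const AGENT_PRIORITY: Record<ResearchMode, { primary: AgentName[]; secondary: AgentName[] }> = {
  greenfield: {
    primary: ["best-practices-researcher", "framework-docs-researcher"],
    secondary: ["repo-research-analyst"],
  },
  modification: {
    primary: ["repo-research-analyst"],
    secondary: ["best-practices-researcher", "framework-docs-researcher"],
  },
  integration: {
    primary: ["framework-docs-researcher", "best-practices-researcher", "repo-research-analyst"],
    secondary: [],
  },
};
```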
---
## Step 2: Dispatch Research Agents
**Run 3 agents in PARALLEL** (single message, 3 Task calls):
```markdown
## Dispatching Research Agents
Research mode: [greenfield|modification|integration]
Feature: [feature description]
Launching parallel research:
1. ring-pm-team:repo-research-analyst
2. ring-pm-team:best-practices-researcher
3. ring-pm-team:framework-docs-researcher
```
**Agent Prompts:**
### repo-research-analyst
```
Research the codebase for patterns relevant to: [feature description]
Research mode: [mode]
- If modification: This is your PRIMARY focus - find all existing patterns
- If greenfield: Focus on conventions and project structure
- If integration: Look for existing integration patterns
Search docs/solutions/ knowledge base for prior related solutions.
Return findings with exact file:line references.
```
### best-practices-researcher
```
Research external best practices for: [feature description]
Research mode: [mode]
- If greenfield: This is your PRIMARY focus - go deep on industry standards
- If modification: Focus on specific patterns for this feature type
- If integration: Emphasize API design and integration patterns
Use Context7 for framework documentation.
Use WebSearch for industry best practices.
Return findings with URLs.
```
### framework-docs-researcher
```
Analyze tech stack and fetch documentation for: [feature description]
Research mode: [mode]
- If greenfield: Focus on framework setup and project structure patterns
- If modification: Focus on specific APIs being used/modified
- If integration: Focus on SDK/API documentation for external services
Detect versions from manifest files.
Use Context7 as primary documentation source.
Return version constraints and official patterns.
```
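"Detect versions from manifest files" might look like the following for a Node project. The manifest shape is standard `package.json`, but the function itself is an illustrative sketch; real detection would also cover `go.mod`, `pyproject.toml`, and similar manifests.

```typescript
// Subset of package.json relevant to version detection.
interface NodeManifest {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}

// Flatten declared versions; devDependencies win on duplicate names.
function detectVersions(manifest: NodeManifest): Record<string, string> {
  return { ...(manifest.dependencies ?? {}), ...(manifest.devDependencies ?? {}) };
}
```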
---
## Step 3: Aggregate Research Findings
After all agents return, create unified research document:
**File:** `docs/pre-dev/{feature-name}/research.md`
```markdown
---
date: [YYYY-MM-DD]
feature: [feature name]
research_mode: [greenfield|modification|integration]
agents_dispatched:
- repo-research-analyst
- best-practices-researcher
- framework-docs-researcher
---
# Research: [Feature Name]
## Executive Summary
[2-3 sentences synthesizing key findings across all agents]
## Research Mode: [Mode]
[Why this mode was selected and what it means for the research focus]
---
## Codebase Research (repo-research-analyst)
[Paste agent output here]
---
## Best Practices Research (best-practices-researcher)
[Paste agent output here]
---
## Framework Documentation (framework-docs-researcher)
[Paste agent output here]
---
## Synthesis & Recommendations
### Key Patterns to Follow
1. [Pattern from codebase] - `file:line`
2. [Best practice] - [URL]
3. [Framework pattern] - [doc reference]
### Constraints Identified
1. [Version constraint]
2. [Convention requirement]
3. [Integration limitation]
### Prior Solutions to Reference
1. `docs/solutions/[category]/[file].md` - [relevance]
### Open Questions for PRD
1. [Question that research couldn't answer]
2. [Decision that needs stakeholder input]
```
---
## Step 4: Gate 0 Validation
**BLOCKING CHECKLIST** - All must pass before proceeding to Gate 1:
```
Gate 0 Research Validation:
□ Research mode determined and documented
□ All 3 agents dispatched and returned
□ research.md created in docs/pre-dev/{feature}/
□ At least one file:line reference (if modification mode)
□ At least one external URL (if greenfield mode)
□ docs/solutions/ knowledge base searched
□ Tech stack versions documented
□ Synthesis section completed with recommendations
```
**If validation fails:**
- Missing agent output → Re-run that specific agent
- No codebase patterns found (modification mode) → Escalate, may need mode change
- No external docs found (greenfield mode) → Try different search terms
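The Gate 0 checklist and its failure handling reduce to a small rule set. A sketch (the `ResearchDoc` shape is an assumption, not a real schema); note how the file:line and URL requirements follow the mode table:

```typescript
type ResearchMode = "greenfield" | "modification" | "integration";

// Assumed summary of research.md contents, for validation purposes only.
interface ResearchDoc {
  mode: ResearchMode;
  fileLineRefs: number;      // count of file:line references
  externalUrls: number;      // count of external URLs
  solutionsSearched: boolean;
  versionsDocumented: boolean;
  synthesisComplete: boolean;
}

// Return the list of Gate 0 failures; an empty array means the gate passes.
function gate0Failures(doc: ResearchDoc): string[] {
  const failures: string[] = [];
  // file:line refs are required for modification and integration modes
  if (doc.mode !== "greenfield" && doc.fileLineRefs === 0)
    failures.push("missing file:line reference");
  // external URLs are required for greenfield and integration modes
  if (doc.mode !== "modification" && doc.externalUrls === 0)
    failures.push("missing external URL");
  if (!doc.solutionsSearched) failures.push("docs/solutions/ not searched");
  if (!doc.versionsDocumented) failures.push("tech stack versions not documented");
  if (!doc.synthesisComplete) failures.push("synthesis section incomplete");
  return failures;
}
```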
---
## Integration with Pre-Dev Workflow
### In pre-dev-full (9-gate workflow):
```
Gate 0: Research Phase (NEW - this skill)
Gate 1: PRD Creation (reads research.md)
Gate 2: Feature Map
Gate 3: TRD Creation (reads research.md)
...remaining gates
```
### In pre-dev-feature (4-gate workflow):
```
Gate 0: Research Phase (NEW - this skill)
Gate 1: PRD Creation (reads research.md)
Gate 2: TRD Creation
Gate 3: Task Breakdown
```
---
## Research Document Usage
**In Gate 1 (PRD Creation):**
```markdown
Before writing the PRD, load:
- docs/pre-dev/{feature}/research.md
Required in PRD:
- Reference existing patterns with file:line notation
- Cite knowledge base findings from docs/solutions/
- Include external URLs for best practices
- Note framework constraints that affect requirements
```
**In Gate 3 (TRD Creation):**
```markdown
The TRD must reference:
- Implementation patterns from research.md
- Version constraints from framework analysis
- Similar implementations from codebase research
```
---
## Quick Reference
| Research Mode | Primary Agent(s) | Secondary Agent(s) |
|---------------|-----------------|-------------------|
| greenfield | best-practices, framework-docs | repo-research |
| modification | repo-research | best-practices, framework-docs |
| integration | all equally weighted | none |

| Validation Check | Required For |
|-----------------|--------------|
| file:line reference | modification, integration |
| external URL | greenfield, integration |
| docs/solutions/ searched | all modes |
| version documented | all modes |
---
## Anti-Patterns
1. **Skipping research for "simple" features**
- Even simple features benefit from convention checks
- "Simple" often becomes complex during implementation
2. **Not using research mode appropriately**
- Greenfield with heavy codebase research wastes time
- Modification without codebase research misses patterns
3. **Ignoring docs/solutions/ knowledge base**
- Prior solutions are gold - always search first
- Prevents repeating mistakes
4. **Vague references without file:line**
- "There's a pattern somewhere" is not useful
- Exact locations enable quick reference during implementation
---
name: pre-dev-subtask-creation
description: |
Gate 8: Zero-context implementation steps - 2-5 minute atomic subtasks with
complete code, exact commands, TDD pattern. Large Track only.
trigger: |
- Tasks passed Gate 7 validation
- Need absolute implementation clarity
- Creating work for engineers with zero codebase context
- Large Track workflow (2+ day features)
skip_when: |
- Small Track workflow → execute tasks directly
- Tasks simple enough without breakdown
- Tasks not validated → complete Gate 7 first
sequence:
after: [pre-dev-task-breakdown]
before: [executing-plans, subagent-driven-development]
---
# Subtask Creation - Bite-Sized, Zero-Context Steps
## Overview
Write comprehensive implementation subtasks assuming the engineer has zero context on our codebase. Each subtask breaks down into 2-5 minute steps following RED-GREEN-REFACTOR. Complete code, exact commands, explicit verification. **DRY. YAGNI. TDD. Frequent commits.**
**Announce at start:** "I'm using the pre-dev-subtask-creation skill to create implementation subtasks."
**Context:** This should be run after Gate 7 validation (approved tasks exist).
**Save subtasks to:** `docs/pre-development/subtasks/T-[task-id]/ST-[task-id]-[number]-[description].md`
## When to Use
Use this skill when:
- Tasks have passed Gate 7 validation
- About to write implementation instructions
- Tempted to write "add validation here..." (placeholder)
- Tempted to say "update the user service" (which part?)
- Creating work units for developers or AI agents
**When NOT to use:**
- Before Gate 7 validation
- For trivial changes (<10 minutes total)
- When engineer has full context (rare)
## Foundational Principle
**Every subtask must be completable by anyone with zero context about the system.**
Requiring context creates bottlenecks, onboarding friction, and integration failures.
**Subtasks answer**: Exactly what to create/modify, with complete code and verification.
**Subtasks never answer**: Why the system works this way (context is removed).
## Bite-Sized Step Granularity
**Each step is one action (2-5 minutes):**
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
## Subtask Document Header
**Every subtask MUST start with this header:**
````markdown
# ST-[task-id]-[number]: [Subtask Name]
> **For Agents:** REQUIRED SUB-SKILL: Use ring-default:executing-plans to implement this subtask step-by-step.
**Goal:** [One sentence describing what this builds]
**Prerequisites:**
```bash
# Verification commands
cd /path/to/project
npm list dependency-name
# Expected output: dependency@version
```
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
---
````
## Step Structure (TDD Cycle)
````markdown
### Step 1: Write the failing test
```typescript
// tests/exact/path/test.ts
import { functionName } from '../src/module';
describe('FeatureName', () => {
  it('should do specific behavior', () => {
    const result = functionName(input);
    expect(result).toBe(expected);
  });
});
```
### Step 2: Run test to verify it fails
```bash
npm test tests/exact/path/test.ts
# Expected output: FAIL - "functionName is not defined"
```
### Step 3: Write minimal implementation
```typescript
// src/exact/path/module.ts
export function functionName(input: string): string {
  return expected;
}
```
### Step 4: Run test to verify it passes
```bash
npm test tests/exact/path/test.ts
# Expected output: PASS - 1 test passed
```
### Step 5: Commit
```bash
git add tests/exact/path/test.ts src/exact/path/module.ts
git commit -m "feat: add specific feature"
```
````
## Explicit Rules
### ✅ DO Include in Subtasks
- Exact file paths (absolute or from root)
- Complete file contents (if creating)
- Complete code snippets (if modifying)
- All imports and dependencies
- Step-by-step TDD cycle (numbered)
- Verification commands (copy-pasteable)
- Expected output (exact)
- Rollback procedures (exact commands)
- Prerequisites (what must exist first)
### ❌ NEVER Include in Subtasks
- Placeholders: "...", "TODO", "implement here"
- Vague instructions: "update the service", "add validation"
- Assumptions: "assuming setup is done"
- Context requirements: "you need to understand X first"
- Incomplete code: "add the rest yourself"
- Missing imports: "import necessary packages"
- Undefined success: "make sure it works"
- No verification: "test it manually"
## Rationalization Table
| Excuse | Reality |
|--------|---------|
| "The developer will figure out imports" | Imports are context. Provide them explicitly. |
| "TODO comments are fine for simple parts" | TODOs require decisions. Make them now. |
| "They'll know which service to update" | They won't. Specify the exact file path. |
| "The verification steps are obvious" | Obvious ≠ documented. Write exact commands. |
| "Rollback isn't needed for simple changes" | Simple changes fail too. Always provide rollback. |
| "This needs system understanding" | Then you haven't removed context. Simplify more. |
| "I'll provide the template, they fill it" | Templates are incomplete. Provide full code. |
| "The subtask description explains it" | Descriptions need interpretation. Give exact steps. |
| "They can look at similar code for reference" | That's context. Make subtask self-contained. |
| "This is too detailed, we're not that formal" | Detailed = parallelizable = faster. Be detailed. |
| "Steps are too small, feels like hand-holding" | Small steps = verifiable progress. Stay small. |
## Red Flags - STOP
If you catch yourself writing any of these in a subtask, **STOP and rewrite**:
- Code placeholders: `...`, `// TODO`, `// implement X here`
- Vague file references: "the user service", "the auth module"
- Assumption phrases: "assuming you have", "make sure you"
- Incomplete imports: "import required packages"
- Missing paths: Not specifying where files go
- Undefined verification: "test that it works"
- Steps longer than 5 minutes
- Context dependencies: "you need to understand X"
- No TDD cycle in implementation steps
**When you catch yourself**: Expand the subtask until it's completely self-contained.
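The red flags above are mechanically detectable. A sketch of a lint pass over subtask text (the pattern list is illustrative, not exhaustive):

```typescript
// Patterns that indicate placeholder or context-dependent subtask text.
const RED_FLAG_PATTERNS: RegExp[] = [
  /\/\/\s*TODO/i,                      // code placeholders
  /implement .+ here/i,                // "implement X here"
  /add (necessary|required) imports/i, // incomplete imports
  /make sure (it works|you)/i,         // undefined verification / assumptions
  /the \w+ (service|module)\b/i,       // vague file references
];

// Return the source of every pattern the subtask text triggers.
function findRedFlags(subtaskText: string): string[] {
  return RED_FLAG_PATTERNS.filter((p) => p.test(subtaskText)).map((p) => p.source);
}
```

In practice such a check would run before Gate 8 validation, forcing a rewrite whenever it returns a non-empty list.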
## Gate 8 Validation Checklist
Before declaring subtasks ready:
**Atomicity:**
- [ ] Each step has single responsibility (2-5 minutes)
- [ ] No step depends on understanding system architecture
- [ ] Subtasks can be assigned to anyone (developer or AI)
**Completeness:**
- [ ] All code provided in full (no placeholders)
- [ ] All file paths are explicit and exact
- [ ] All imports listed explicitly
- [ ] All prerequisites documented
- [ ] TDD cycle followed in every implementation
**Verifiability:**
- [ ] Test commands are copy-pasteable
- [ ] Expected output is exact (not subjective)
- [ ] Commands run from project root (or specify directory)
**Reversibility:**
- [ ] Rollback commands provided
- [ ] Rollback doesn't require system knowledge
**Gate Result:**
- ✅ **PASS**: All checkboxes checked → Ready for implementation
- ⚠️ **CONDITIONAL**: Add missing details → Re-validate
- ❌ **FAIL**: Too much context required → Decompose further
## Example Subtask
````markdown
# ST-001-01: Create User Model with Validation
> **For Agents:** REQUIRED SUB-SKILL: Use ring-default:executing-plans to implement this subtask step-by-step.
**Goal:** Create a User model class with email and password validation in the auth service.
**Prerequisites:**
```bash
cd /path/to/project
npm list zod bcrypt
# Expected: zod@3.22.4, bcrypt@5.1.1
```
**Files:**
- Create: `src/domain/entities/User.ts`
- Create: `src/domain/entities/__tests__/User.test.ts`
- Modify: `src/domain/entities/index.ts`
---
### Step 1: Write the failing test
Create file: `src/domain/entities/__tests__/User.test.ts`
```typescript
import { UserModel } from '../User';
describe('UserModel', () => {
  const validUserData = {
    email: 'test@example.com',
    password: 'securePassword123',
    firstName: 'John',
    lastName: 'Doe'
  };
  it('should create user with valid data', () => {
    const user = new UserModel(validUserData);
    expect(user.getData().email).toBe(validUserData.email);
  });
  it('should throw on invalid email', () => {
    const invalidData = { ...validUserData, email: 'invalid' };
    expect(() => new UserModel(invalidData)).toThrow('Invalid email format');
  });
});
```
### Step 2: Run test to verify it fails
```bash
npm test src/domain/entities/__tests__/User.test.ts
# Expected: FAIL - "Cannot find module '../User'"
```
### Step 3: Write minimal implementation
Create file: `src/domain/entities/User.ts`
```typescript
import { z } from 'zod';
import bcrypt from 'bcrypt';
export const UserSchema = z.object({
  id: z.string().uuid(),
  email: z.string().email('Invalid email format'),
  password: z.string().min(8, 'Password must be at least 8 characters'),
  firstName: z.string().min(1, 'First name is required'),
  lastName: z.string().min(1, 'Last name is required'),
  createdAt: z.date().default(() => new Date()),
  updatedAt: z.date().default(() => new Date())
});
export type User = z.infer<typeof UserSchema>;
export class UserModel {
  private data: User;
  constructor(data: Partial<User>) {
    this.data = UserSchema.parse({
      ...data,
      id: data.id || crypto.randomUUID(),
      createdAt: data.createdAt || new Date(),
      updatedAt: data.updatedAt || new Date()
    });
  }
  async hashPassword(): Promise<void> {
    const saltRounds = 10;
    this.data.password = await bcrypt.hash(this.data.password, saltRounds);
  }
  async comparePassword(candidatePassword: string): Promise<boolean> {
    return bcrypt.compare(candidatePassword, this.data.password);
  }
  getData(): User {
    return this.data;
  }
}
```
### Step 4: Run test to verify it passes
```bash
npm test src/domain/entities/__tests__/User.test.ts
# Expected: PASS - 2 tests passed
```
### Step 5: Update exports
Modify file: `src/domain/entities/index.ts`
Add or append:
```typescript
export { UserModel, UserSchema, type User } from './User';
```
### Step 6: Verify type checking
```bash
npm run typecheck
# Expected: No errors
```
### Step 7: Commit
```bash
git add src/domain/entities/User.ts src/domain/entities/__tests__/User.test.ts src/domain/entities/index.ts
git commit -m "feat: add User model with validation
- Add Zod schema for user validation
- Implement password hashing with bcrypt
- Add comprehensive tests"
```
### Rollback
If issues occur:
```bash
rm src/domain/entities/User.ts
rm src/domain/entities/__tests__/User.test.ts
git checkout -- src/domain/entities/index.ts
git status
```
````
## Confidence Scoring
Use this to adjust your interaction with the user:
```yaml
Confidence Factors:
Step Atomicity: [0-30]
- All steps 2-5 minutes: 30
- Most steps appropriately sized: 20
- Steps too large or vague: 10
Code Completeness: [0-30]
- Zero placeholders, all code complete: 30
- Mostly complete with minor gaps: 15
- Significant placeholders or TODOs: 5
Context Independence: [0-25]
- Anyone can execute without questions: 25
- Minor context needed: 15
- Significant domain knowledge required: 5
TDD Coverage: [0-15]
- All implementation follows RED-GREEN-REFACTOR: 15
- Most steps include tests: 10
- Limited test coverage: 5
Total: [0-100]
Action:
80+: Generate complete subtasks autonomously
50-79: Present approach options for complex steps
<50: Ask about codebase structure and patterns
```
## Execution Handoff
After creating subtasks, offer execution choice:
**"Subtasks complete and saved to `docs/pre-development/subtasks/T-[id]/`. Two execution options:**
**1. Subagent-Driven (this session)** - I dispatch fresh subagent per subtask, review between subtasks, fast iteration
**2. Parallel Session (separate)** - Open new session with executing-plans, batch execution with checkpoints
**Which approach?"**
**If Subagent-Driven chosen:**
- **REQUIRED SUB-SKILL:** Use ring-default:subagent-driven-development
- Stay in this session
- Fresh subagent per subtask + code review
**If Parallel Session chosen:**
- Guide them to open new session in worktree
- **REQUIRED SUB-SKILL:** New session uses ring-default:executing-plans
## Quality Self-Check
Before declaring subtasks complete, verify:
- [ ] Every step is truly atomic (2-5 minutes)
- [ ] Zero context required to complete any step
- [ ] All code is complete (no "...", "TODO", placeholders)
- [ ] All file paths are explicit (absolute or from root)
- [ ] All imports are listed explicitly
- [ ] TDD cycle followed (test → fail → implement → pass → commit)
- [ ] Verification steps included with exact commands
- [ ] Expected output specified for every command
- [ ] Rollback plans provided with exact commands
- [ ] Prerequisites documented (what must exist first)
- [ ] Gate 8 validation checklist 100% complete
## The Bottom Line
**If you wrote a subtask with "TODO" or "..." or "add necessary imports", delete it and rewrite with complete code.**
Subtasks are not instructions. Subtasks are complete, copy-pasteable implementations following TDD.
- "Add validation" is not a step. [Complete validation code with test] is a step.
- "Update the service" is not a step. [Exact file path + exact code changes with test] is a step.
- "Import necessary packages" is not a step. [Complete list of imports] is a step.
Every subtask must be completable by someone who:
- Just joined the team yesterday
- Has never seen the codebase before
- Doesn't know the business domain
- Won't ask questions (you're unavailable)
- Follows TDD religiously
If they can't complete it with zero questions while following RED-GREEN-REFACTOR, **it's not atomic enough.**
**Remember: DRY. YAGNI. TDD. Frequent commits.**
---
name: pre-dev-task-breakdown
description: |
Gate 7: Implementation tasks - value-driven decomposition into working increments
that deliver measurable user value.
trigger: |
- PRD passed Gate 1 (required)
- TRD passed Gate 3 (required)
- All Large Track gates passed (if applicable)
- Ready to create sprint/iteration tasks
skip_when: |
- PRD or TRD not validated → complete earlier gates
- Tasks already exist → proceed to Subtask Creation
- Trivial change → direct implementation
sequence:
after: [pre-dev-trd-creation, pre-dev-dependency-map]
before: [pre-dev-subtask-creation, executing-plans]
---
# Task Breakdown - Value-Driven Decomposition
## Foundational Principle
**Every task must deliver working software that provides measurable user value.**
Creating technical-only or oversized tasks creates:
- Work that doesn't ship until "everything is done"
- Teams working on pieces that don't integrate
- No early validation of value or technical approach
- Waterfall development disguised as iterative process
**Tasks answer**: What working increment will be delivered?
**Tasks never answer**: How to implement that increment (that's Subtasks).
## When to Use This Skill
Use this skill when:
- PRD has passed Gate 1 validation (REQUIRED)
- TRD has passed Gate 3 validation (REQUIRED)
- Feature Map has passed Gate 2 validation (OPTIONAL - use if exists)
- API Design has passed Gate 4 validation (OPTIONAL - use if exists)
- Data Model has passed Gate 5 validation (OPTIONAL - use if exists)
- Dependency Map has passed Gate 6 validation (OPTIONAL - use if exists)
- About to break down work for sprints/iterations
- Tempted to create "Setup Infrastructure" as a task
- Asked to estimate or plan implementation work
- Before creating subtasks
## Mandatory Workflow
### Phase 1: Task Identification (Inputs Required)
**Required Inputs:**
1. **Approved PRD** (Gate 1 passed) - business requirements and priorities (REQUIRED - check `docs/pre-dev/<feature-name>/prd.md`)
2. **Approved TRD** (Gate 3 passed) - architecture patterns documented (REQUIRED - check `docs/pre-dev/<feature-name>/trd.md`)
**Optional Inputs (use if exists for richer context):**
3. **Approved Feature Map** (Gate 2 passed) - feature relationships mapped (check `docs/pre-dev/<feature-name>/feature-map.md`)
4. **Approved API Design** (Gate 4 passed) - contracts specified (check `docs/pre-dev/<feature-name>/api-design.md`)
5. **Approved Data Model** (Gate 5 passed) - data structures defined (check `docs/pre-dev/<feature-name>/data-model.md`)
6. **Approved Dependency Map** (Gate 6 passed) - tech stack locked (check `docs/pre-dev/<feature-name>/dependency-map.md`)
**Analysis:**
7. **Identify value streams** - what delivers user value first?
### Phase 2: Decomposition
For each TRD component or PRD feature:
1. **Define deliverable** - what working software ships?
2. **Set success criteria** - how do we know it's done?
3. **Map dependencies** - what must exist first?
4. **Estimate effort** - T-shirt size (S/M/L; anything over 2 weeks is XL and must be broken down)
5. **Plan testing** - how will we verify it works?
6. **Identify risks** - what could go wrong?
### Phase 3: Gate 7 Validation
**MANDATORY CHECKPOINT** - Must pass before proceeding to Subtasks:
- [ ] All TRD components covered by tasks
- [ ] Every task delivers working software
- [ ] Each task has measurable success criteria
- [ ] Dependencies are correctly mapped
- [ ] No task exceeds 2 weeks effort (XL max)
- [ ] Testing strategy defined for each task
- [ ] Risks identified with mitigations
- [ ] Delivery sequence optimizes value
## Explicit Rules
### ✅ DO Include in Tasks
- Task ID, title, type (Foundation/Feature/Integration/Polish)
- Deliverable: What working software ships?
- User value: What can users do after this?
- Technical value: What does this enable?
- Success criteria (testable, measurable)
- Dependencies (blocks/requires/optional)
- Effort estimate (S/M/L/XL with points)
- Testing strategy (unit/integration/e2e)
- Risk identification with mitigations
- Definition of Done checklist
### ❌ NEVER Include in Tasks
- Implementation details (file paths, code examples)
- Step-by-step instructions (those go in subtasks)
- Technical-only tasks with no user value
- Tasks exceeding 2 weeks effort (break them down)
- Vague success criteria ("improve performance")
- Missing dependency information
- Undefined testing approach
### Task Sizing Rules
1. **Small (S)**: 1-3 points, 1-3 days, single component
2. **Medium (M)**: 5-8 points, 3-5 days, few dependencies
3. **Large (L)**: 13 points, 1-2 weeks, multiple components
4. **XL (over 2 weeks)**: BREAK IT DOWN - too large to be atomic
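The sizing rules above reduce to thresholds on estimated working days; a minimal sketch:

```typescript
// Map an effort estimate (working days) to the T-shirt sizes above.
// Anything beyond L's two-week cap (~10 working days) must be decomposed.
function sizeTask(estimatedDays: number): "S" | "M" | "L" | "BREAK IT DOWN" {
  if (estimatedDays <= 3) return "S";   // 1-3 days, single component
  if (estimatedDays <= 5) return "M";   // 3-5 days, few dependencies
  if (estimatedDays <= 10) return "L";  // 1-2 weeks, multiple components
  return "BREAK IT DOWN";               // XL: too large to be atomic
}
```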
### Value Delivery Rules
1. **Foundation tasks** enable other work (database setup, core services)
2. **Feature tasks** deliver user-facing capabilities
3. **Integration tasks** connect to external systems
4. **Polish tasks** optimize or enhance (nice-to-have)
## Rationalization Table
| Excuse | Reality |
|--------|---------|
| "This 3-week task is fine" | Tasks >2 weeks hide complexity. Break it down. |
| "Setup tasks don't need value" | Setup enables value. Define what it enables. |
| "Success criteria are obvious" | Obvious to you ≠ testable. Document explicitly. |
| "Dependencies will be clear later" | Later is too late. Map them now. |
| "We don't need detailed estimates" | Without estimates, no planning possible. Size them. |
| "Technical tasks can skip user value" | Even infrastructure enables users. Define the connection. |
| "Testing strategy can be decided during" | Testing affects design. Plan it upfront. |
| "Risks aren't relevant at task level" | Risks compound across tasks. Identify them early. |
| "DoD is the same for all tasks" | Different tasks need different criteria. Specify. |
| "We can combine multiple features" | Combining hides value delivery. Keep tasks focused. |
## Red Flags - STOP
If you catch yourself writing any of these in a task, **STOP**:
- Task estimates over 2 weeks
- Tasks named "Setup X" without defining what X enables
- Success criteria like "works" or "complete" (not measurable)
- No dependencies listed (every task depends on something)
- No testing strategy (how will you verify?)
- "Technical debt" as a task type (debt reduction must deliver value)
- Vague deliverables ("improve", "optimize", "refactor")
- Missing Definition of Done
**When you catch yourself**: Refine the task until it's concrete, valuable, and testable.
## Gate 7 Validation Checklist
Before proceeding to Subtasks, verify:
**Task Completeness**:
- [ ] All TRD components have tasks covering them
- [ ] All PRD features have tasks delivering them
- [ ] Each task is appropriately sized (no XL+)
- [ ] Task boundaries are clear and logical
**Delivery Value**:
- [ ] Every task delivers working software
- [ ] User value is explicit (even for foundation)
- [ ] Technical value is clear (what it enables)
- [ ] Sequence optimizes value delivery
**Technical Clarity**:
- [ ] Success criteria are measurable and testable
- [ ] Dependencies are correctly mapped (blocks/requires)
- [ ] Testing approach is defined (unit/integration/e2e)
- [ ] Definition of Done is comprehensive
**Team Readiness**:
- [ ] Skills required match team capabilities
- [ ] Effort estimates are realistic (validated by similar past work)
- [ ] Capacity is available or planned
- [ ] Handoffs are minimized
**Risk Management**:
- [ ] Risks identified for each task
- [ ] Mitigations are defined
- [ ] High-risk tasks scheduled early
- [ ] Fallback plans exist
**Gate Result**:
- ✅ **PASS**: All checkboxes checked → Proceed to Subtasks (`pre-dev-subtask-creation`)
- ⚠️ **CONDITIONAL**: Refine oversized/vague tasks → Re-validate
- ❌ **FAIL**: Too many issues → Re-decompose
## Common Violations and Fixes
### Violation 1: Technical-Only Tasks
**Wrong**:
```markdown
## T-001: Setup PostgreSQL Database
- Install PostgreSQL 16
- Configure connection pooling
- Create initial schema
```
**Correct**:
```markdown
## T-001: User Data Persistence Foundation
### Deliverable
Working database layer that persists user accounts and supports authentication queries with <100ms latency.
### User Value
Enables user registration and login (T-002, T-003 depend on this).
### Technical Value
- Foundation for all data persistence
- Multi-tenant isolation strategy implemented
- Performance baseline established
### Success Criteria
- [ ] Users table created with multi-tenant schema
- [ ] Connection pooling configured (min 5, max 50 connections)
- [ ] Query performance <100ms for auth queries (verified with test data)
- [ ] Migrations framework operational
- [ ] Rollback procedures tested
### Dependencies
- **Blocks**: T-002 (Registration), T-003 (Login), T-004 (Permissions)
- **Requires**: Infrastructure (networking, compute)
- **Optional**: None
### Effort: Medium (M) - 5 points, 3-5 days
### Testing: Integration tests for queries, performance benchmarks
```
### Violation 2: Oversized Tasks
**Wrong**:
```markdown
## T-005: Complete User Management System
- Registration, login, logout
- Profile management
- Password reset
- Email verification
- Two-factor authentication
- Session management
- Permissions system
Estimate: 6 weeks
```
**Correct** (broken into multiple tasks):
```markdown
## T-005: Basic Authentication (Register + Login)
- Deliverable: Users can create accounts and log in with JWT tokens
- User Value: Access to personalized features
- Effort: Large (L) - 13 points, 1-2 weeks
- Dependencies: Requires T-001 (Database)
## T-006: Password Management (Reset + Email)
- Deliverable: Users can reset forgotten passwords via email
- User Value: Account recovery without support tickets
- Effort: Medium (M) - 8 points, 3-5 days
- Dependencies: Requires T-005, Email service configured
## T-007: Two-Factor Authentication
- Deliverable: Users can enable 2FA with TOTP
- User Value: Enhanced account security
- Effort: Medium (M) - 8 points, 3-5 days
- Dependencies: Requires T-005
## T-008: Permissions System
- Deliverable: Role-based access control operational
- User Value: Admin can assign roles, users have appropriate access
- Effort: Large (L) - 13 points, 1-2 weeks
- Dependencies: Requires T-005
```
### Violation 3: Vague Success Criteria
**Wrong**:
```markdown
Success Criteria:
- [ ] Feature works
- [ ] Tests pass
- [ ] Code reviewed
```
**Correct**:
```markdown
Success Criteria:
Functional:
- [ ] Users can upload files up to 100MB
- [ ] Supported formats: JPEG, PNG, PDF, DOCX
- [ ] Files stored with unique IDs, retrievable via API
- [ ] Upload progress shown to user
Technical:
- [ ] API response time <2s for uploads <10MB
- [ ] Files encrypted at rest with KMS
- [ ] Virus scanning completes before storage
Operational:
- [ ] Monitoring: Upload success rate >99.5%
- [ ] Logging: All upload attempts logged with user_id
- [ ] Alerts: Notify if success rate drops below 95%
Quality:
- [ ] Unit tests: 90%+ coverage for upload logic
- [ ] Integration tests: End-to-end upload scenarios
- [ ] Security: OWASP file upload best practices followed
```
## Task Template
Use this template for every task:
```markdown
## T-[XXX]: [Task Title - What It Delivers]
### Deliverable
[One sentence: What working software ships?]
### Scope
**Includes**:
- [Specific capability 1]
- [Specific capability 2]
- [Specific capability 3]
**Excludes** (future tasks):
- [Out of scope item 1] (T-YYY)
- [Out of scope item 2] (T-ZZZ)
### Success Criteria
- [ ] [Testable criterion 1]
- [ ] [Testable criterion 2]
- [ ] [Testable criterion 3]
### User Value
[What can users do after this that they couldn't before?]
### Technical Value
[What does this enable? What other tasks does this unblock?]
### Technical Components
From TRD:
- [Component 1]
- [Component 2]
From Dependencies:
- [Package/service 1]
- [Package/service 2]
### Dependencies
- **Blocks**: [Tasks that need this] (T-AAA, T-BBB)
- **Requires**: [Tasks that must complete first] (T-CCC)
- **Optional**: [Nice-to-haves] (T-DDD)
### Effort Estimate
- **Size**: [S/M/L/XL]
- **Points**: [1-3 / 5-8 / 13 / 21]
- **Duration**: [1-3 days / 3-5 days / 1-2 weeks]
- **Team**: [Backend / Frontend / Full-stack / etc.]
### Risks
**Risk 1: [Description]**
- Impact: [High/Medium/Low]
- Probability: [High/Medium/Low]
- Mitigation: [How we'll address it]
- Fallback: [Plan B if mitigation fails]
### Testing Strategy
- **Unit Tests**: [What logic to test]
- **Integration Tests**: [What APIs/components to test together]
- **E2E Tests**: [What user flows to test]
- **Performance Tests**: [What to benchmark]
- **Security Tests**: [What threats to validate against]
### Definition of Done
- [ ] Code complete and peer reviewed
- [ ] All tests passing (unit + integration + e2e)
- [ ] Documentation updated (API docs, README, etc.)
- [ ] Security scan clean (no high/critical issues)
- [ ] Performance targets met (benchmarks run)
- [ ] Deployed to staging environment
- [ ] Product owner acceptance received
- [ ] Monitoring/logging configured
```
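The template above can be enforced mechanically. A minimal lint sketch — the required section names come from the template, while the function name and sample string are illustrative:

```python
import re

# Sections every task block must contain, per the template above
REQUIRED = ["### Deliverable", "### Success Criteria", "### User Value",
            "### Dependencies", "### Effort Estimate", "### Testing Strategy",
            "### Definition of Done"]

def lint_tasks(markdown: str) -> dict:
    """Return {task_id: [missing section headings]} for every ## T-XXX block."""
    # Split on task headings, keeping the captured task IDs
    blocks = re.split(r"(?m)^## (T-\d+)", markdown)[1:]
    missing = {}
    for task_id, body in zip(blocks[::2], blocks[1::2]):
        gaps = [h for h in REQUIRED if h not in body]
        if gaps:
            missing[task_id] = gaps
    return missing

sample = "## T-001: Demo\n### Deliverable\nx\n### Success Criteria\n- [ ] y\n"
print(lint_tasks(sample))  # T-001 flagged for each omitted section
```

A clean run (empty dict) would mean every task block carries all required sections.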
## Delivery Sequencing
Optimize task order for value:
```yaml
Sprint 1 - Foundation:
Goal: Enable core workflows
Tasks:
- T-001: Database foundation (blocks all)
- T-002: Auth foundation (start, high value)
Sprint 2 - Core Features:
Goal: Ship minimum viable feature
Tasks:
- T-002: Auth foundation (complete)
- T-005: User dashboard (depends on T-002)
- T-010: Basic API endpoints (high value)
Sprint 3 - Enhancements:
Goal: Polish and extend
Tasks:
- T-006: Password reset (medium value)
- T-011: Advanced search (nice-to-have)
- T-015: Performance optimization (polish)
Critical Path: T-001 → T-002 → T-005 → T-010
Parallel Work: After T-001, T-003 and T-004 can run in parallel with T-002
```
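The Blocks/Requires relationships above form a directed acyclic graph, so the critical path falls out of the Requires lists directly. A sketch using the illustrative task IDs from the sprint plan (unit task durations assumed):

```python
from graphlib import TopologicalSorter

# Requires edges from the sprint plan above (illustrative task IDs)
requires = {
    "T-001": [],
    "T-002": ["T-001"],
    "T-005": ["T-002"],
    "T-010": ["T-005"],
    "T-003": ["T-001"],  # can run in parallel with T-002
    "T-004": ["T-001"],
}

# Any valid execution order respecting dependencies
order = list(TopologicalSorter(requires).static_order())

def critical_path(requires):
    """Longest dependency chain, assuming equal task durations."""
    memo = {}
    def depth(task):
        if task not in memo:
            memo[task] = 1 + max((depth(r) for r in requires[task]), default=0)
        return memo[task]
    end = max(requires, key=depth)
    path = [end]
    while requires[path[-1]]:
        path.append(max(requires[path[-1]], key=depth))
    return path[::-1]

print(critical_path(requires))  # → ['T-001', 'T-002', 'T-005', 'T-010']
```

`TopologicalSorter` also raises `CycleError` on circular Requires entries, which doubles as a dependency-map sanity check.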
## Anti-Patterns to Avoid
- **Technical Debt Tasks**: "Refactor authentication" (no user value)
- **Giant Tasks**: 3+ week efforts (break them down)
- **Vague Tasks**: "Improve performance" (not measurable)
- **Sequential Bottlenecks**: everything depends on one task
- **Missing Value**: tasks that don't ship working software
**Good Task Names**:
- "Users can register and log in with email" (clear value)
- "API responds in <500ms for 95th percentile" (measurable)
- "Admin dashboard shows real-time metrics" (working software)
## Confidence Scoring
Use this to adjust your interaction with the user:
```yaml
Confidence Factors:
Task Decomposition: [0-30]
- All tasks appropriately sized: 30
- Most tasks well-scoped: 20
- Tasks too large or vague: 10
Value Clarity: [0-25]
- Every task delivers working software: 25
- Most tasks have clear value: 15
- Value connections unclear: 5
Dependency Mapping: [0-25]
- All dependencies documented: 25
- Most dependencies clear: 15
- Dependencies ambiguous: 5
Estimation Quality: [0-20]
- Estimates based on past work: 20
- Reasonable educated guesses: 12
- Wild speculation: 5
Total: [0-100]
Action:
80+: Generate complete task breakdown autonomously
50-79: Present sizing options and sequences
<50: Ask about team velocity and complexity
```
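A minimal sketch of the rubric above as a function — the factor caps and thresholds are taken from the yaml, the function name is illustrative:

```python
def confidence_action(decomposition, value, dependencies, estimation):
    """Sum the four rubric factors and map the total to an action.

    Factor caps follow the rubric: 30 + 25 + 25 + 20 = 100.
    """
    total = decomposition + value + dependencies + estimation
    if total >= 80:
        return total, "Generate complete task breakdown autonomously"
    if total >= 50:
        return total, "Present sizing options and sequences"
    return total, "Ask about team velocity and complexity"

print(confidence_action(30, 25, 15, 12))
# → (82, 'Generate complete task breakdown autonomously')
```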
## Output Location
**Always output to**: `docs/pre-dev/[feature-name]/tasks.md`
## After Task Breakdown Approval
1. ✅ Tasks become sprint backlog
2. 🎯 Use tasks as input for atomic subtasks (next phase: `pre-dev-subtask-creation`)
3. 📊 Track progress per task (not per subtask)
4. 🚫 No implementation yet - that's in subtasks
## Quality Self-Check
Before declaring task breakdown complete, verify:
- [ ] Every task delivers working software (not just "progress")
- [ ] All tasks have measurable success criteria
- [ ] Dependencies are mapped (blocks/requires/optional)
- [ ] Effort estimates are realistic (S/M/L/XL, no >2 weeks)
- [ ] Testing strategy defined for each task
- [ ] Risks identified with mitigations
- [ ] Definition of Done is comprehensive for each
- [ ] Delivery sequence optimizes value (high-value tasks early)
- [ ] No technical-only tasks without user connection
- [ ] Gate 7 validation checklist 100% complete
## The Bottom Line
**If you created tasks that don't deliver working software, rewrite them.**
Tasks are not technical activities. Tasks are working increments.
"Setup database" is not a task. "User data persists correctly" is a task.
"Implement OAuth" is not a task. "Users can log in with Google" is a task.
"Write tests" is not a task. Tests are part of Definition of Done for other tasks.
Every task must answer: **"What working software can I demo to users?"**
If you can't demo it, it's not a task. It's subtask implementation detail.
**Deliver value. Ship working software. Make tasks demoable.**

View File

@@ -0,0 +1,330 @@
---
name: pre-dev-trd-creation
description: |
Gate 3: Technical architecture document - defines HOW/WHERE with technology-agnostic
patterns before concrete implementation choices.
trigger: |
- PRD passed Gate 1 (required)
- Feature Map passed Gate 2 (if Large Track)
- About to design technical architecture
- Tempted to specify "PostgreSQL" instead of "Relational Database"
skip_when: |
- PRD not validated → complete Gate 1 first
- Architecture already documented → proceed to API Design
- Pure business requirement change → update PRD
sequence:
after: [pre-dev-prd-creation, pre-dev-feature-map]
before: [pre-dev-api-design, pre-dev-task-breakdown]
---
# TRD Creation - Architecture Before Implementation
## Foundational Principle
**Architecture decisions (HOW/WHERE) must be technology-agnostic patterns before concrete implementation choices.**
Specifying technologies in TRD creates:
- Vendor lock-in before evaluating alternatives
- Architecture coupled to specific products
- Inability to adapt to better options discovered later
- Technology decisions made without full dependency analysis
**The TRD answers**: HOW we'll architect the solution and WHERE components will live.
**The TRD never answers**: WHAT specific products, frameworks, versions, or packages we'll use.
## When to Use This Skill
Use this skill when:
- PRD has passed Gate 1 validation (REQUIRED)
- Feature Map has passed Gate 2 validation (optional - use if exists)
- About to design technical architecture
- Tempted to specify "PostgreSQL" instead of "Relational Database"
- Asked to create technical design or architecture document
- Before making technology choices
## Mandatory Workflow
### Phase 1: Technical Analysis (Inputs Required)
1. **Approved PRD** (Gate 1 passed) - business requirements locked (REQUIRED)
2. **Approved Feature Map** (Gate 2 passed) - feature relationships mapped (OPTIONAL - check `docs/pre-dev/<feature-name>/feature-map.md`)
3. **Identify non-functional requirements** (performance, security, scalability)
4. **Map domains to components**:
- If Feature Map exists: Map Feature Map domains to architectural components
- If no Feature Map: Map PRD features directly to architectural components
### Phase 2: Architecture Definition
1. **Choose Architecture Style** (Microservices, Modular Monolith, Serverless)
2. **Design Components** with clear boundaries and responsibilities
3. **Define Interfaces** (inbound/outbound, contracts)
4. **Model Data Architecture** (ownership, flows, relationships)
5. **Plan Integration Patterns** (sync/async, protocols)
6. **Design Security Architecture** (layers, threat model)
### Phase 3: Gate 3 Validation
**MANDATORY CHECKPOINT** - Must pass before proceeding:
- [ ] All Feature Map domains mapped to components (if Feature Map exists)
- [ ] All PRD features mapped to components (REQUIRED)
- [ ] Component boundaries are clear and logical
- [ ] Interfaces are well-defined and technology-agnostic
- [ ] Data ownership is explicit
- [ ] Quality attributes are achievable
- [ ] Integration patterns are selected
- [ ] No specific technology products named
## Explicit Rules
### ✅ DO Include in TRD
- System architecture style (pattern names, not products)
- Component design with responsibilities
- Data architecture (ownership, flows, models - conceptual)
- API design (contracts, not specific protocols)
- Security architecture (layers, threat model)
- Integration patterns (sync/async, not specific tools)
- Performance targets and scalability strategy
- Deployment topology (logical, not specific cloud services)
### ❌ NEVER Include in TRD
- Specific technology products (PostgreSQL, Redis, Kafka, RabbitMQ)
- Framework versions (Fiber v2, React 18, Express 5)
- Programming language specifics (Go 1.24, Node.js 20)
- Cloud provider services (AWS RDS, Azure Functions, GCP Pub/Sub)
- Package/library names (bcrypt, zod, sqlc, prisma)
- Container orchestration specifics (Kubernetes, ECS, Lambda)
- CI/CD pipeline details
- Infrastructure-as-code specifics (Terraform, CloudFormation)
### Technology Abstraction Rules
1. **Database**: Say "Relational Database" not "PostgreSQL 16"
2. **Cache**: Say "In-Memory Cache" not "Redis" or "Valkey"
3. **Message Queue**: Say "Message Broker" not "RabbitMQ"
4. **Object Storage**: Say "Blob Storage" not "MinIO" or "S3"
5. **Web Framework**: Say "HTTP Router" not "Fiber" or "Express"
6. **Auth**: Say "JWT-based Authentication" not "specific library"
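The abstraction rules above can be checked mechanically. A hedged sketch — the banned-term list below is illustrative, not exhaustive, and teams would extend it:

```python
import re

# Illustrative product names that must not appear in a TRD,
# mapped to the capability term to use instead (per the rules above)
BANNED = {
    "PostgreSQL": "Relational Database",
    "Redis": "In-Memory Cache",
    "RabbitMQ": "Message Broker",
    "S3": "Blob Storage",
    "Fiber": "HTTP Router",
}

def trd_violations(text: str) -> list[tuple[str, str]]:
    """Return (product, suggested abstraction) pairs found in the TRD text."""
    hits = []
    for product, capability in BANNED.items():
        # Case-insensitive whole-word match to avoid false positives
        if re.search(rf"\b{re.escape(product)}\b", text, re.IGNORECASE):
            hits.append((product, capability))
    return hits

print(trd_violations("Cache: Redis 7, Storage: S3"))
# → [('Redis', 'In-Memory Cache'), ('S3', 'Blob Storage')]
```

Such a check could run at Gate 3 validation, turning the "zero specific product names" checkbox into an automated pass/fail.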
## Rationalization Table
| Excuse | Reality |
|--------|---------|
| "Everyone knows we use PostgreSQL" | Assumptions prevent proper evaluation. Stay abstract. |
| "Just mentioning the tech stack for context" | Context belongs in Dependency Map. Keep TRD abstract. |
| "The team needs to know what we're using" | They'll know in Dependency Map. TRD is patterns only. |
| "It's obvious we need Redis here" | Obvious ≠ documented. Abstract to "cache", decide later. |
| "I'll save time by specifying frameworks now" | You'll waste time when better options emerge. Wait. |
| "But our project template requires X" | Templates are implementation. TRD is architecture. Separate. |
| "The dependency is critical to the design" | Then describe the *capability* needed, not the product. |
| "Stakeholders expect to see technology choices" | Stakeholders see them in Dependency Map. Not here. |
| "Architecture decisions depend on technology X" | Then your architecture is too coupled. Redesign abstractly. |
| "We already decided on the tech stack" | Decisions without analysis are assumptions. Validate later. |
## Red Flags - STOP
If you catch yourself writing any of these in a TRD, **STOP**:
- Specific product names with version numbers
- Package manager commands (npm install, go get, pip install)
- Cloud provider service names (RDS, Lambda, Cloud Run, etc.)
- Framework-specific terms (Fiber middleware, React hooks, Express routers)
- Container/orchestration specifics (Docker, K8s, ECS)
- Programming language version constraints
- Infrastructure service names (CloudFront, Cloudflare, Fastly)
- CI/CD tool names (GitHub Actions, CircleCI, Jenkins)
**When you catch yourself**: Replace the product name with the capability it provides. "PostgreSQL 16" → "Relational Database with ACID guarantees"
## Gate 3 Validation Checklist
Before proceeding to API/Contract Design, verify:
**Architecture Completeness**:
- [ ] All PRD features mapped to architectural components
- [ ] Component boundaries follow domain-driven design
- [ ] Responsibilities are single and clear
- [ ] Interfaces are well-defined and stable
**Data Design**:
- [ ] Data ownership is explicitly assigned to components
- [ ] Data models support all PRD requirements
- [ ] Consistency strategy is defined (eventual vs. strong)
- [ ] Data flows are documented
**Quality Attributes**:
- [ ] Performance targets are set and achievable
- [ ] Security requirements are addressed in architecture
- [ ] Scalability path is clear (horizontal/vertical)
- [ ] Reliability targets are defined
**Integration Readiness**:
- [ ] External dependencies are identified (by capability)
- [ ] Integration patterns are selected (not specific tools)
- [ ] Error scenarios are considered
- [ ] Contract versioning strategy exists
**Technology Agnostic**:
- [ ] Zero specific product names in document
- [ ] All capabilities described abstractly
- [ ] Patterns named, not implementations
- [ ] Can swap technologies without redesign
**Gate Result**:
- ✅ **PASS**: All checkboxes checked → Proceed to API/Contract Design (`pre-dev-api-design`)
- ⚠️ **CONDITIONAL**: Remove product names → Re-validate
- ❌ **FAIL**: Architecture too coupled → Redesign
## Common Violations and Fixes
### Violation 1: Specific Technologies in Architecture
**Wrong**:
```yaml
System Architecture:
Language: Go 1.24+
Framework: Fiber v2.52+
Database: PostgreSQL 16
Cache: Valkey 8
Storage: MinIO
```
**Correct**:
```yaml
System Architecture:
Style: Modular Monolith
Pattern: Hexagonal Architecture
Data Tier:
- Relational database for transactional data
- Key-value store for session/cache
- Object storage for files/media
```
### Violation 2: Framework Details in Component Design
**Wrong**:
```markdown
**Auth Component**
- Fiber middleware for JWT validation
- bcrypt for password hashing
- OAuth2 integration with passport.js
```
**Correct**:
```markdown
**Auth Component**
- **Purpose**: User authentication and session management
- **Inbound**: HTTP API endpoints for login/register/logout
- **Outbound**: User data persistence, email notifications
- **Security**: Token-based authentication, password hashing with industry-standard algorithms
```
### Violation 3: Cloud Services in Deployment
**Wrong**:
```yaml
Deployment:
Compute: AWS ECS Fargate
Database: AWS RDS PostgreSQL
Cache: AWS ElastiCache
Load Balancer: AWS ALB
```
**Correct**:
```yaml
Deployment Topology:
Compute: Container-based stateless services
Data Tier: Managed database services with backup
Performance: Distributed caching layer
Traffic: Load balanced with health checks
```
## Architecture Decision Records (ADRs)
When making major decisions, document them abstractly:
```markdown
**ADR-001: Hexagonal Architecture Pattern**
- **Context**: Need clear separation between business logic and external dependencies
- **Options Considered**:
- Layered Architecture: Simple but couples layers
- Hexagonal: Clear boundaries, testable, flexible
- Event-Driven: Eventual consistency complexity
- **Decision**: Hexagonal Architecture
- **Rationale**:
- Business logic independent of frameworks
- Easy to swap adapters (database, HTTP, etc.)
- Testable without external dependencies
- **Consequences**:
- More initial structure needed
- Team needs to understand ports/adapters
- Clear boundaries prevent technical debt
```
**Note**: Still no technology products mentioned. Pattern names only.
## Confidence Scoring
```yaml
Component: "[Architecture Element]"
Confidence Factors:
Pattern Match: [0-40]
- Exact pattern used before: 40
- Similar pattern adapted: 25
- Novel but researched: 10
Complexity Management: [0-30]
- Simple, proven approach: 30
- Moderate complexity: 20
- High complexity: 10
Risk Level: [0-30]
- Low risk, proven path: 30
- Moderate risk, mitigated: 20
- High risk, accepted: 10
Total: [0-100]
Action:
80+: Present architecture autonomously
50-79: Present multiple options
<50: Request clarification on requirements
```
## Output Location
**Always output to**: `docs/pre-dev/[feature-name]/TRD.md`
## After TRD Approval
1. ✅ Lock the TRD - architecture patterns are now reference
2. 🎯 Use TRD as input for API/Contract Design (next phase: `pre-dev-api-design`)
3. 🚫 Never add specific technologies to TRD retroactively
4. 📋 Keep architecture/implementation strictly separated
## Quality Self-Check
Before declaring TRD complete, verify:
- [ ] All Feature Map domains have architectural representation
- [ ] All PRD requirements have architectural solution
- [ ] Zero product names or versions present
- [ ] All capabilities described abstractly
- [ ] Component boundaries follow DDD principles
- [ ] Interfaces are technology-agnostic
- [ ] Data ownership is explicit and documented
- [ ] Security is designed into architecture (not bolted on)
- [ ] Performance targets are set and achievable
- [ ] Scalability path is clear and logical
- [ ] Gate 3 validation checklist 100% complete
## The Bottom Line
**If you wrote a TRD with specific technology products, delete those sections and rewrite abstractly.**
The TRD is architecture patterns only. Period. No product names. No versions. No frameworks.
Technology choices go in Dependency Map. That's the next phase. Wait for it.
Violating this separation means:
- You're committing to technologies before evaluating alternatives
- Architecture becomes coupled to specific vendors
- You can't objectively compare technology options later
- Costs, licensing, and compatibility aren't analyzed
- Team loses flexibility to adapt
**Stay abstract. Stay flexible. Make technology decisions in the next phase with full analysis.**

View File

@@ -0,0 +1,417 @@
---
name: using-pm-team
description: |
10 pre-dev workflow skills + 3 research agents organized into Small Track (4 gates, <2 days) and
Large Track (9 gates, 2+ days) for systematic feature planning with research-first approach.
trigger: |
- Starting any feature implementation
- Need systematic planning before coding
- User requests "plan a feature"
skip_when: |
- Quick exploratory work → brainstorming may suffice
- Bug fix with known solution → direct implementation
- Trivial change (<1 hour) → skip formal planning
---
# Using Ring PM-Team: Pre-Dev Workflow
The ring-pm-team plugin provides 10 pre-development planning skills and 3 research agents. Use them via `Skill tool: "ring-pm-team:gate-name"` or via slash commands.
**Remember:** Follow the **ORCHESTRATOR principle** from `using-ring`. Dispatch pre-dev workflow to handle planning; plan thoroughly before coding.
---
## Pre-Dev Philosophy
**Before you code, you plan. Every time.**
Pre-dev workflow ensures:
- ✅ Requirements are clear (WHAT/WHY)
- ✅ Architecture is sound (HOW)
- ✅ APIs are contracts (boundaries)
- ✅ Data models are explicit (entities)
- ✅ Dependencies are known (tech choices)
- ✅ Tasks are right-sized (with atomic subtasks, 2-5 min each)
- ✅ Implementation is execution, not design
---
## Two Tracks: Choose Your Path
### Small Track (4 Gates): <2 Day Features
**Use when ALL criteria met:**
- ✅ Implementation: <2 days
- ✅ No new external dependencies
- ✅ No new data models
- ✅ No multi-service integration
- ✅ Uses existing architecture
- ✅ Single developer
**Gates:**
| # | Gate | Skill | Output |
|---|------|-------|--------|
| 0 | **Research Phase** | pre-dev-research | research.md |
| 1 | Product Requirements | pre-dev-prd-creation | PRD.md |
| 2 | Technical Requirements | pre-dev-trd-creation | TRD.md |
| 3 | Task Breakdown | pre-dev-task-breakdown | tasks.md |
**Planning time:** 45-75 minutes
**Examples:**
- Add logout button
- Fix email validation bug
- Add rate limiting to endpoint
---
### Large Track (9 Gates): ≥2 Day Features
**Use when ANY criteria met:**
- ❌ Implementation: ≥2 days
- ❌ New external dependencies
- ❌ New data models/entities
- ❌ Multi-service integration
- ❌ New architecture patterns
- ❌ Team collaboration needed
**Gates:**
| # | Gate | Skill | Output |
|---|------|-------|--------|
| 0 | **Research Phase** | pre-dev-research | research.md |
| 1 | Product Requirements | pre-dev-prd-creation | PRD.md |
| 2 | Feature Map | pre-dev-feature-map | feature-map.md |
| 3 | Technical Requirements | pre-dev-trd-creation | TRD.md |
| 4 | API Design | pre-dev-api-design | API.md |
| 5 | Data Model | pre-dev-data-model | data-model.md |
| 6 | Dependencies | pre-dev-dependency-map | dependencies.md |
| 7 | Task Breakdown | pre-dev-task-breakdown | tasks.md |
| 8 | Subtask Creation | pre-dev-subtask-creation | subtasks.md |
**Planning time:** 2.5-4.5 hours
**Examples:**
- Add user authentication
- Implement payment processing
- Add file upload with CDN
- Multi-service integration
---
## 10 Pre-Dev Skills + 3 Research Agents
### Gate 0: Research Phase (NEW)
**Skill:** `pre-dev-research`
**Output:** `docs/pre-dev/{feature}/research.md`
**What:** Parallel research before planning
**Covers:**
- Existing codebase patterns (file:line references)
- External best practices (URLs)
- Framework documentation (version-specific)
- Knowledge base search (docs/solutions/)
**Research Modes:**
| Mode | Primary Focus | When to Use |
|------|---------------|-------------|
| **greenfield** | Web research, best practices | New capability, no existing patterns |
| **modification** | Codebase patterns | Extending existing functionality |
| **integration** | API docs, SDK docs | Connecting external systems |
**Dispatches 3 agents in PARALLEL:**
1. `repo-research-analyst` - Codebase patterns, docs/solutions/
2. `best-practices-researcher` - Web search, Context7
3. `framework-docs-researcher` - Tech stack, versions
**Use when:**
- Starting any feature (always recommended)
- Need to understand existing patterns
- Greenfield feature needs best practices research
---
### Gate 1: Product Requirements
**Skill:** `pre-dev-prd-creation`
**Output:** `docs/pre-dev/{feature}/PRD.md`
**What:** Business requirements document
**Covers:**
- Goal & success criteria
- User stories & use cases
- Business value & priority
- Constraints & assumptions
- Non-functional requirements
**Use when:**
- Starting any feature
- Need clarity on WHAT/WHY
---
### Gate 2: Feature Map (Large Track Only)
**Skill:** `pre-dev-feature-map`
**Output:** `docs/pre-dev/{feature}/feature-map.md`
**What:** Feature relationship diagram
**Covers:**
- Feature breakdown into components
- Dependencies between features
- Sequencing & prerequisites
- Integration points
- Deployment order
**Use when:**
- Complex feature with multiple parts
- Need to understand relationships
- Team coordination required
---
### Gate 3: Technical Requirements
**Skill:** `pre-dev-trd-creation`
**Output:** `docs/pre-dev/{feature}/TRD.md`
**What:** Technical architecture document
**Covers:**
- System design & architecture
- Architecture patterns (technology-agnostic; product choices belong in Gate 6)
- Component boundaries & interfaces
- Scalability & performance targets
- Security requirements
**Use when:**
- Understanding HOW to implement
- Need architecture clarity
- Before any coding
---
### Gate 4: API Design (Large Track Only)
**Skill:** `pre-dev-api-design`
**Output:** `docs/pre-dev/{feature}/API.md`
**What:** API contracts & boundaries
**Covers:**
- Endpoint specifications
- Request/response schemas
- Error handling
- Versioning strategy
- Integration patterns
**Use when:**
- Service boundaries need definition
- Multiple services collaborate
- Contract-driven development
---
### Gate 5: Data Model (Large Track Only)
**Skill:** `pre-dev-data-model`
**Output:** `docs/pre-dev/{feature}/data-model.md`
**What:** Entity relationships & schemas
**Covers:**
- Entity definitions
- Relationships & cardinality
- Database schemas
- Migration strategy
- Data persistence
**Use when:**
- New entities/tables needed
- Data structure complex
- Migrations required
---
### Gate 6: Dependencies (Large Track Only)
**Skill:** `pre-dev-dependency-map`
**Output:** `docs/pre-dev/{feature}/dependencies.md`
**What:** Technology & library selection
**Covers:**
- External dependencies
- Library choices & alternatives
- Version compatibility
- License implications
- Risk assessment
**Use when:**
- New libraries needed
- Tech choices unclear
- Alternatives to evaluate
---
### Gate 7: Task Breakdown
**Skill:** `pre-dev-task-breakdown`
**Output:** `docs/pre-dev/{feature}/tasks.md`
**What:** Implementation tasks
**Covers:**
- Sized work units (S/M/L, no task over 2 weeks)
- Execution order
- Dependencies between tasks
- Verification steps
- Completeness checklist
**Use when:**
- Ready to create implementation plan
- Need task granularity
- Before assigning work
---
### Gate 8: Subtask Creation (Large Track Only)
**Skill:** `pre-dev-subtask-creation`
**Output:** `docs/pre-dev/{feature}/subtasks.md`
**What:** Ultra-atomic task breakdown
**Covers:**
- Sub-unit decomposition
- Exact file paths
- Code snippets
- Verification commands
- Expected outputs
**Use when:**
- Need absolute clarity
- Complex task needs detail
- Zero-context execution required
---
## Using Pre-Dev Workflow
### Via Slash Commands (Easy)
**Small feature:**
```
/ring-pm-team:pre-dev-feature logout-button
```
**Large feature:**
```
/ring-pm-team:pre-dev-full payment-system
```
These run all gates sequentially and create artifacts in `docs/pre-dev/{feature}/`.
---
### Via Skills (Manual)
Run individually or sequence:
```
Skill tool: "ring-pm-team:pre-dev-prd-creation"
(Review output)
Skill tool: "ring-pm-team:pre-dev-trd-creation"
(Review output)
Skill tool: "ring-pm-team:pre-dev-task-breakdown"
(Review output)
```
---
## Pre-Dev Output Structure
```
docs/pre-dev/{feature}/
├── research.md # Gate 0: Research findings
├── PRD.md # Gate 1: Business requirements
├── feature-map.md # Gate 2: Feature relationships (large only)
├── TRD.md # Gate 3: Technical architecture
├── API.md # Gate 4: API contracts (large only)
├── data-model.md # Gate 5: Entity schemas (large only)
├── dependencies.md # Gate 6: Tech choices (large only)
├── tasks.md # Gate 7: Implementation tasks
└── subtasks.md # Gate 8: Ultra-atomic tasks (large only)
```
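The layout above can be scaffolded up front so every gate has a landing place. A sketch — the feature name and root path are illustrative, and research.md (Gate 0) is included for both tracks:

```python
from pathlib import Path

# Artifact lists follow the output structure above; research.md is Gate 0
LARGE_TRACK_FILES = ["research.md", "PRD.md", "feature-map.md", "TRD.md",
                     "API.md", "data-model.md", "dependencies.md",
                     "tasks.md", "subtasks.md"]
SMALL_TRACK_FILES = ["research.md", "PRD.md", "TRD.md", "tasks.md"]

def scaffold(feature: str, large: bool = True,
             root: str = "docs/pre-dev") -> Path:
    """Create {root}/{feature}/ with empty gate artifact files."""
    base = Path(root) / feature
    base.mkdir(parents=True, exist_ok=True)
    for name in (LARGE_TRACK_FILES if large else SMALL_TRACK_FILES):
        (base / name).touch()
    return base

base = scaffold("payment-system", large=True, root="/tmp/pre-dev-demo")
print(sorted(p.name for p in base.iterdir()))
```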
---
## Decision: Small or Large Track?
**When in doubt: Use Large Track.**
Better to over-plan than to discover mid-implementation that the feature is larger than expected.
**You can switch:** If Small Track feature grows, pause and complete Large Track gates.
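The decision rule ("Small only when ALL small-track criteria hold; when in doubt, Large") can be sketched as a function — parameter names mirror the criteria lists above and are illustrative:

```python
def choose_track(under_two_days, new_dependencies, new_data_models,
                 multi_service, new_architecture, needs_team):
    """Return 'small' only when every small-track criterion holds.

    Any large-track trigger, or doubt (a None answer), forces 'large'.
    """
    criteria = (under_two_days, new_dependencies, new_data_models,
                multi_service, new_architecture, needs_team)
    if None in criteria:
        return "large"  # when in doubt, use the Large Track
    small_ok = [under_two_days, not new_dependencies, not new_data_models,
                not multi_service, not new_architecture, not needs_team]
    return "small" if all(small_ok) else "large"

print(choose_track(True, False, False, False, False, False))  # → small
print(choose_track(True, True, False, False, False, False))   # → large
```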
---
## Integration with Other Planning
**Pre-dev workflow provides:**
- ✅ Complete planning artifacts
- ✅ Atomic tasks ready to execute
- ✅ Zero-context handoff capability
- ✅ Clear implementation boundaries
**Combined with:**
- `ring-default:execute-plan`: Run tasks in batches
- `ring-default:write-plan`: Generate from scratch
- `ring-dev-team:*-engineer`: Specialist review of design
- `ring-default:requesting-code-review`: Post-implementation review
---
## ORCHESTRATOR Principle
Remember:
- **You're the orchestrator**: Dispatch pre-dev skills, don't plan manually
- **Don't skip gates**: Each gate adds clarity
- **Don't code without planning**: Plan first, code second
- **Use agents for specialist review**: Dispatch backend-engineer-golang to review the TRD
### Good Example (ORCHESTRATOR):
> "I need to plan payment system. Let me run /ring-pm-team:pre-dev-full to get organized, then dispatch backend-engineer-golang to review the architecture."
### Bad Example (OPERATOR):
> "I'll start coding and plan as I go."
---
## Available in This Plugin
**Skills:**
- pre-dev-research (Gate 0) ← NEW
- pre-dev-prd-creation (Gate 1)
- pre-dev-feature-map (Gate 2)
- pre-dev-trd-creation (Gate 3)
- pre-dev-api-design (Gate 4)
- pre-dev-data-model (Gate 5)
- pre-dev-dependency-map (Gate 6)
- pre-dev-task-breakdown (Gate 7)
- pre-dev-subtask-creation (Gate 8)
- using-pm-team (this skill)
**Research Agents:**
- repo-research-analyst (codebase patterns, docs/solutions/)
- best-practices-researcher (web search, Context7)
- framework-docs-researcher (tech stack, versions)
**Commands:**
- `/ring-pm-team:pre-dev-feature`: Small track (4 gates)
- `/ring-pm-team:pre-dev-full`: Large track (9 gates)
**Note:** If skills are unavailable, check if ring-pm-team is enabled in `.claude-plugin/marketplace.json`.
---
## Integration with Other Plugins
- **using-ring** (default): ORCHESTRATOR principle for ALL tasks
- **using-dev-team**: Developer specialists for reviewing designs
- **using-finops-team**: Regulatory compliance planning
- **using-pm-team**: Pre-dev workflow (this skill)
Dispatch based on your need:
- General code review → default plugin agents
- Regulatory compliance → ring-finops-team agents
- Specialist review of design → ring-dev-team agents
- Feature planning → ring-pm-team skills