Initial commit

Zhongwei Li
2025-11-30 08:37:27 +08:00
commit 37774aa937
131 changed files with 31137 additions and 0 deletions

# Automatic Analysis Guide
Guide for automatic analysis of project materials and researching 2025 best practices.
---
## Section 1: Analyzing Project Materials
### When to Analyze
Ask user: *"Do you have project materials to analyze? (files, diagrams, docs, code)"*
### Files to Search (use Glob + Read)
**Package managers**: `package.json`, `requirements.txt`, `go.mod`, `pom.xml`, `Gemfile`
**Docker**: `Dockerfile`, `docker-compose.yml`, `docker-compose.test.yml`
**Config**: `tsconfig.json`, `*.env.example`, `.nvmrc`
**Docs**: `README.md`, architecture diagrams
**Code structure**: `src/`, `api/`, `services/`, `tests/`
### Information to Extract
From **package.json / requirements.txt / go.mod**:
- Runtime version (Node 18, Python 3.11, Go 1.21)
- Dependencies → frameworks, databases, auth, cache
- Pre-populate: Q9, Q12
From **Dockerfile**:
- Base image → runtime version
- Multi-stage structure → build optimization
- Pre-populate: Q9, Q12
From **docker-compose.yml**:
- Services → app + db + cache + queue
- Images → database/cache versions
- Volumes → hot-reload setup
- Pre-populate: Q9, Q11
From **docker-compose.test.yml**:
- Test services → db-test, cache-test (isolated)
- Volumes → ./src, ./tests (hot-reload)
- Tmpfs → in-memory test databases
- Command → test framework
- Pre-populate: Q12 (test setup)
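The `package.json` extraction above can be sketched as a small script. A minimal sketch only: it assumes a conventional `engines.node` field and flat `dependencies`/`devDependencies` maps, and the `sample` payload is illustrative.

```python
import json

def extract_stack(package_json_text: str) -> dict:
    """Pull runtime and dependency hints from package.json (pre-populates Q9, Q12)."""
    pkg = json.loads(package_json_text)
    return {
        "runtime": pkg.get("engines", {}).get("node", "unknown"),
        "frameworks": sorted(pkg.get("dependencies", {})),
        "test_deps": sorted(pkg.get("devDependencies", {})),
    }

sample = (
    '{"engines": {"node": ">=20"},'
    ' "dependencies": {"express": "^4.18.0", "prisma": "^5.6.0"},'
    ' "devDependencies": {"jest": "^29.0.0"}}'
)
print(extract_stack(sample))
```

The same pattern extends to `Dockerfile` (regex on `FROM`) and `docker-compose.yml` (YAML parse of `services`).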
### Output Format
```
✓ Analyzed project materials
**Detected**:
- Runtime: [runtime + version]
- Framework: [framework + version]
- Database: [database]
- Architecture: [hints from docker-compose]
**Pre-populated**: Q9, Q12 (partial)
```
---
## Section 2: Researching Best Practices 2025
### When to Research
During **Phase 2, Stage 2** for questions Q9, Q11-Q13.
Ask user first: *"Research best practices automatically? (Y/N)"*
### Research Tools
**MCP Ref** (`mcp__Ref__ref_search_documentation`):
- Query: `"[framework] latest version 2025"`
- Use for: Official docs, version numbers, features
- Then Read: `mcp__Ref__ref_read_url` for details
**WebSearch**:
- Query patterns:
- `"[Tech A] vs [Tech B] 2025 comparison"`
- `"best practices [technology] 2025"`
- `"[pattern] architecture pros cons 2025"`
- Use for: Comparisons, best practices, trends
### Research Strategy by Question
**Q9: Technology Decisions**
1. Check analyzed versions vs 2025 latest
2. MCP Ref: latest stable versions
3. WebSearch: security vulnerabilities, release notes
4. Recommend upgrades if: EOL, security issues, LTS available
**Q11: Architectural Patterns**
1. Identify project type + scale from Stage 1
2. WebSearch: `"[project type] architecture patterns 2025"`
3. Consider scale:
- Small (< 10K users) → Monolith
- Medium (10K-100K) → Microservices
- Large (100K+) → Microservices + Event-Driven
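The scale heuristic above reduces to a tiny lookup. A sketch of the thresholds as listed, not a universal rule:

```python
def suggest_pattern(expected_users: int) -> str:
    """Map expected user scale to the Q11 architecture suggestion."""
    if expected_users < 10_000:
        return "Monolith"
    if expected_users < 100_000:
        return "Microservices"
    return "Microservices + Event-Driven"

print(suggest_pattern(5_000))  # Monolith
```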
**Q12: Libraries and Frameworks**
1. Based on Q9 + Q11
2. MCP Ref: latest versions for each component
3. WebSearch: compatibility, comparisons
4. Check: ORM, testing framework, validation library
5. Verify compatibility matrix
**Q13: Integrations**
1. Identify needs from Q5 (IN SCOPE)
2. WebSearch comparisons:
- Payments: `"Stripe vs PayPal 2025"`
- Email: `"SendGrid vs AWS SES 2025"`
- Auth: `"Auth0 vs Clerk 2025"`
- Storage: `"AWS S3 vs Cloudinary 2025"`
3. Consider: pricing, DX, compliance, popularity
### Dockerfile Generation
Based on Q12 runtime + framework:
- Latest stable base image
- Multi-stage build (dev + prod)
- Security: non-root user, minimal image
- Generate docker-compose.yml with services from Q11
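For a Node.js 20 project, the generated Dockerfile might look like this minimal multi-stage sketch; image tags, the `dist/` layout, and script names are assumptions, not output of the skill:

```dockerfile
# Build stage: install all deps and compile
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage: minimal image, non-root user
FROM node:20-alpine AS prod
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]
```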
---
## Section 3: Transition to Interactive Mode
### When to Ask User
**Pause research when**:
1. **Multiple alternatives** (React vs Vue) → present both, ask preference
2. **Insufficient info** (no files found) → ask directly
3. **Unclear goals** (vague Q5) → ask clarifying questions
4. **Always interactive**: Q10, Q14-Q19 (org-specific)
### Alternative Presentation Template
```
"Researched [Category]:
**Option A**: [Tech A]
Pros: [key benefits]
Cons: [key drawbacks]
**Option B**: [Tech B]
Pros: [key benefits]
Cons: [key drawbacks]
Recommendation: [A/B] because [reason]
Which do you prefer? (A/B/Other)"
```
### Fallback to Full Interactive
If no materials OR user declines research → ask all Q9-Q19 interactively
---
## Section 4: Quality Guidelines
### Verification Checklist
- [ ] Version is 2025-current (< 1 year old)
- [ ] Stable release (not beta)
- [ ] No critical security vulnerabilities
- [ ] Compatible with other tech
- [ ] Active community (GitHub stars, updates)
- [ ] Official docs available
### Red Flags (Don't Recommend)
- Last updated > 2 years ago
- Unpatched security vulnerabilities
- Incompatible with stack
- Beta/experimental (unless requested)
- Obscure (<1000 GitHub stars)
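Applied mechanically, the checklist and red flags reduce to a single predicate. A sketch only: the metadata fields (`last_update_years`, `github_stars`, ...) are assumed inputs, not a real registry API.

```python
def recommendable(lib: dict) -> bool:
    """True only if a candidate passes the checklist and trips no red flag."""
    return (
        lib["last_update_years"] <= 2          # actively maintained
        and not lib["known_vulnerabilities"]   # no unpatched security issues
        and lib["stable_release"]              # not beta/experimental
        and lib["stack_compatible"]            # fits the chosen stack
        and lib["github_stars"] >= 1000        # not obscure
    )

express = {"last_update_years": 0, "known_vulnerabilities": False,
           "stable_release": True, "stack_compatible": True, "github_stars": 60000}
print(recommendable(express))  # True
```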
### Rationale Format
```
Recommendation: [Technology]
Rationale:
1. [Technical reason]
2. [Ecosystem reason]
3. [Project fit reason]
4. [Industry adoption]
```
---
## Section 5: Execution Flow
### Phase 1.5: Material Analysis
```
User provides materials? (Y/N)
├─ Y: Glob + Read files → Extract info → Report findings
└─ N: Skip to Phase 2
```
### Phase 2, Stage 2: Research & Design
```
Stage 1 complete (Q1-Q8 answered)
Research automatically? (Y/N)
├─ Y: Research Q9, Q11-Q13 → Present recommendations → User accepts/modifies
└─ N: Ask Q9-Q13 interactively
Always ask Q10, Q14-Q19 interactively
```
---
**Version:** 1.0.0
**Last Updated:** 2025-10-29

---
# Technical Questions for Project Documentation
These 23 technical questions MUST be answered before creating project documentation. They are grouped into 6 categories for structured technical discovery.
**Focus**: This document is **purely technical** - no business metrics, stakeholder management, or budget planning. For technical teams documenting architecture, requirements, and implementation details.
## Priority Order: Context First, Questions Last
**CRITICAL**: Interactive questions are the LAST RESORT. Follow this priority order:
1. **Auto-Discovery (Phase 1.2)** - ALWAYS attempt first
- Search for: package.json, Dockerfile, docker-compose.yml, requirements.txt, pyproject.toml, pom.xml, build.gradle, go.mod, README.md
- Extract: Q9 (runtime versions), Q12 (frameworks, databases, dependencies)
- Benefits: Zero user effort, reduces questions by 2-4
2. **User Materials (Phase 1.3)** - MANDATORY question
- ALWAYS ask: "Do you have existing materials I should review? (design docs, specs, requirements, legacy docs, diagrams, API contracts)"
- If YES: Read and extract answers for Q1-Q23 from materials
- Benefits: Can reduce questions by 10-18 depending on material completeness
3. **Best Practices Research (Phase 1.4)** - AUTO research
- Use MCP Ref to verify 2025 library versions (supplements Q12)
- Use WebSearch for architectural patterns (supplements Q11)
- Use WebSearch for integration best practices (supplements Q13)
- Benefits: Provides current best practices, reduces technical decision questions
4. **Interactive Questions (Phase 2)** - ONLY for remaining questions
- Ask ONLY questions that could NOT be auto-discovered, extracted from materials, or researched
- Show progress: "Already gathered X/23 questions, asking Y remaining questions"
- Benefits: Minimizes user effort, maximizes efficiency
**Example Impact**:
- Optimal scenario (with materials): 18/23 answered automatically → ask only 5 questions
- Typical scenario (no materials): 4/23 answered automatically → ask 19 questions
- Greenfield project: 6/23 answered from materials + research → ask 17 questions
## Question Metadata (3-Stage Discovery)
| Question | Category | Stage | Mode | Research Tools |
|----------|----------|-------|------|----------------|
| Q1-Q3 | Requirements | 1 | interactive | - |
| Q5-Q8 | Scope | 1 | interactive | - |
| Q9 | Tech Stack | 2 | auto-discoverable | package.json, Dockerfile, docker-compose.yml |
| Q10 | Tech Stack | 2 | interactive | - |
| Q11 | Tech Stack | 2 | auto-researchable | WebSearch |
| Q12 | Tech Stack | 2 | auto-discoverable + researchable | package.json + MCP Ref |
| Q13 | Tech Stack | 2 | auto-researchable | WebSearch |
| D1-D6 | Design Guidelines | 2 | interactive | - |
| O1-O3 | Operations | 2 | semi-auto | Dockerfile, docker-compose.yml |
| R1-R2 | Risks | 2 | interactive | - |
**Stage 0 (Context Research - BEFORE asking questions)**: ALWAYS attempt auto-discovery (Q9, Q12 from package.json/Dockerfile), ALWAYS ask for user materials (may answer Q1-Q23), ALWAYS research best practices (Q11, Q13 via MCP Ref/WebSearch). This stage can reduce interactive questions by 20-75%.
**Stage 1 (Understand Requirements)**: Ask REMAINING questions from Q1-Q3, Q5-Q8 that were NOT answered in Stage 0. Skip questions already extracted from materials.
**Stage 2 (Research & Design)**: Ask REMAINING questions from Q10, D1-D6, O1-O3, R1-R2. Questions Q9, Q11, Q12, Q13 likely already answered in Stage 0 (auto-discovery + research).
---
## Category 1: Requirements (3 questions)
### Q1: What are the high-level technical acceptance criteria?
**Why important**: Defines what "done" looks like from a technical perspective.
**Example answer**: "Users can register via JWT auth, search products with <500ms latency, complete checkout with Stripe payment webhook handling"
### Q2: What is the Minimum Viable Product (MVP) from a technical standpoint?
**Why important**: Defines Phase 1 technical scope and fastest path to functional system.
**Example answer**: "REST API with auth, CRUD for products, shopping cart in Redis, Stripe payment integration, PostgreSQL database"
### Q3: Are all functional requirements technically defined and agreed?
**Why important**: Prevents mid-project requirement discovery and scope changes.
**Example answer**: "Yes, 15 functional requirements documented with IDs (FR-UM-001: User Registration, FR-PM-001: Product Listing, etc.) and technical acceptance criteria"
---
## Category 2: Scope (4 questions)
### Q5: What is technically IN SCOPE?
**Why important**: Defines technical boundaries and prevents misunderstandings about what will be built.
**Example answer**: "Microservices architecture (Product, Order, Payment services), PostgreSQL + Redis, REST API, JWT auth, Stripe integration, AWS ECS deployment"
### Q6: What is technically OUT OF SCOPE?
**Why important**: Manages expectations and prevents technical feature creep.
**Example answer**: "No mobile native apps (web-responsive only), no AI/ML recommendations, no cryptocurrency payments, no GraphQL (REST only), no Kubernetes (ECS Fargate only)"
### Q7: Where are the technical boundaries and integration points?
**Why important**: Clarifies interfaces with external systems and services.
**Example answer**: "External APIs: Stripe (payments), SendGrid (emails), AWS S3 (images). Internal: API Gateway routes to 4 microservices, Redis Pub/Sub for events"
### Q8: Who are the technical user roles and what are their permissions?
**Why important**: Defines authentication and authorization requirements.
**Example answer**: "3 roles: Customer (browse, cart, checkout), Vendor (manage own products, view sales), Admin (platform config, user management)"
---
## Category 3: Technology Stack (5 questions)
### Q9: What technology decisions have already been made?
**Why important**: Identifies constraints and pre-existing technical commitments.
**Example answer**: "Must use: AWS (company standard), PostgreSQL (existing DBA expertise), Node.js (team skillset). Cannot use: MongoDB (no in-house experience)"
### Q10: What are the hard technical constraints?
**Why important**: Defines non-negotiable limitations that affect architecture.
**Example answer**: "Must deploy to AWS us-east-1 (company policy), must comply with PCI DSS Level 1 (no card storage), cannot use Kubernetes (team lacks experience - use ECS Fargate), must integrate with legacy SOAP API (blocking dependency), cannot use serverless (compliance restrictions)"
### Q11: What architectural patterns will be used?
**Why important**: Defines overall system structure and design approach.
**Example answer**: "Microservices architecture (4 services), Event-Driven communication (Redis Pub/Sub), REST API, Stateless services for horizontal scaling"
### Q12: What libraries and frameworks will be used?
**Why important**: Defines technical stack and team training needs.
**Example answer**: "Frontend: React 18 + Next.js 14 + Tailwind CSS. Backend: Node.js 20 + Express + Prisma ORM. Testing: Jest + Supertest + Playwright"
### Q13: What integrations with existing systems are required?
**Why important**: Identifies dependencies and integration complexity.
**Example answer**: "Integrate with: Legacy inventory system (SOAP API), Stripe (REST API), SendGrid (REST API), AWS S3 (SDK), existing user database (read-only PostgreSQL replica)"
---
## Category 4: Design Guidelines (6 questions)
### D1: What typography and color system should be used?
**Why important**: Defines visual consistency and brand identity for frontend projects.
**Example answer**: "Primary font: Inter (body), Poppins (headings). Colors: Primary #FF6B35 (orange), Secondary #004E89 (blue), Accent #1A936F (teal). WCAG 2.1 AA contrast ratios (4.5:1 text)"
### D2: What component library and UI patterns should be implemented?
**Why important**: Ensures consistent user experience and speeds up development.
**Example answer**: "Buttons: Primary/Secondary/Text variants. Forms: Input fields with validation states. Cards: Default/Hover/Interactive. Modals: Backdrop + centered content. Tables: Sortable headers + pagination"
### D3: What are the responsive breakpoints?
**Why important**: Defines how UI adapts across devices.
**Example answer**: "Mobile (<768px): 1-column stacked, Tablet (768-1024px): 2-column grid, Desktop (>1024px): 3-column grid + sidebar. Min tap target: 44x44px"
### D4: What accessibility standards must be met?
**Why important**: Ensures inclusive design for users with disabilities.
**Example answer**: "WCAG 2.1 Level AA compliance: Keyboard navigation (all features), Screen reader support (ARIA labels), Color contrast 4.5:1 (text), Focus indicators (ring-2 ring-primary)"
### D5: What branding and imagery guidelines apply?
**Why important**: Maintains consistent brand identity across the platform.
**Example answer**: "Logo: Min 32px height, clear space equals logo height. Hero images: 16:9 aspect ratio, min 1920x1080px. Stock photos: Professional, diverse, authentic (no clichés)"
### D6: What design system or inspiration should be referenced?
**Why important**: Provides design direction and accelerates UI development.
**Example answer**: "Primary reference: Airbnb Design System (professional, user-friendly). Secondary influences: Material Design (components), Tailwind CSS (utility-first), Carbon Design (enterprise patterns)"
---
## Category 5: Operations (3 questions)
### O1: What is the development environment setup?
**Why important**: Defines how developers run the project locally.
**Example answer**: "Docker Compose with 5 services: app (Node.js), db (PostgreSQL), cache (Redis), queue (RabbitMQ), nginx (reverse proxy). Commands: `docker compose up -d`, `docker compose exec app npm run migrate`"
### O2: What are the deployment procedures?
**Why important**: Defines how code reaches production safely.
**Example answer**: "CI/CD: GitHub Actions → Build Docker image → Push to ECR → Deploy to ECS Fargate (rolling update). Environments: Dev (auto-deploy main), Staging (manual approval), Production (manual approval + rollback plan)"
### O3: What monitoring and troubleshooting tools are used?
**Why important**: Enables rapid incident detection and resolution.
**Example answer**: "Logs: CloudWatch Logs (centralized). Metrics: CloudWatch + Grafana dashboards (latency, errors, throughput). Alerts: PagerDuty (>5% error rate, p95 >500ms). SSH: Bastion host access for production debugging"
---
## Category 6: Technical Risks (2 questions)
### R1: What are the key technical risks?
**Why important**: Identifies potential technical failures that need mitigation.
**Example answer**: "Risk 1: Stripe outage blocks transactions (mitigation: retry logic + queue). Risk 2: Database becomes bottleneck (mitigation: read replicas + Redis caching). Risk 3: Microservice network failures (mitigation: circuit breakers + timeouts)"
### R2: What are the critical technical dependencies?
**Why important**: Identifies external factors that could block or delay development.
**Example answer**: "Hard dependencies: AWS account approval (1 week), Stripe merchant account (2 weeks), Legacy API documentation (blocking integration). Team dependencies: 1 senior Node.js dev (key person risk)"
---
## Question Priority Levels
All 23 questions are **MUST-ANSWER** for complete technical documentation, with one exception: D1-D6 are frontend-specific and can be skipped for backend-only or API-only projects.
---
## How to Use This Document
**IMPORTANT**: Follow Priority Order (Context First, Questions Last) defined above.
1. **Stage 0 - Context Research (Phase 1)**:
- **1.2 Auto-Discovery**: Use Glob to find package.json/Dockerfile/docker-compose.yml → Extract Q9, Q12
- **1.3 User Materials**: ALWAYS ask "Do you have materials to review?" → Read and extract Q1-Q23
- **1.4 Research**: Use MCP Ref for Q12 (2025 versions), WebSearch for Q11 (patterns), Q13 (integrations)
- **Track**: Mark which questions answered in Stage 0
2. **Stage 1-2 - Interactive Questions (Phase 2)**:
- **Review**: Display "Already gathered: X/23 questions" to user
- **Ask ONLY remaining**: Skip questions answered in Stage 0
- **Batch questions**: Show progress "Category X of 6 (asking Y remaining questions)"
- **Capture Answers**: Record with question IDs for traceability
3. **Map to Documents**: Use answers (from Stage 0 or Stage 1-2) to populate:
- `requirements.md` ← Q1, Q3
- `architecture.md` ← Q5, Q6, Q7, Q9, Q10, Q11, Q12, Q13
- `tech_stack.md` ← Q9, Q10, Q11, Q12
- `api_spec.md` ← Q8, Q13
- `database_schema.md` ← Q9, Q12
- `design_guidelines.md` ← D1-D6
- `runbook.md` ← O1-O3
4. **Validate Completeness**: Ensure all relevant questions have clear answers (skip D1-D6 for backend-only projects)
5. **Iterate if Needed**: If answers are unclear or incomplete, ask follow-up questions
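Tracking which questions remain after Stage 0 is simple set arithmetic over this document's question IDs; a sketch:

```python
# All 23 question IDs from this catalog (note: there is no Q4).
ALL_QUESTIONS = (
    [f"Q{i}" for i in (1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13)]
    + [f"D{i}" for i in range(1, 7)]
    + [f"O{i}" for i in range(1, 4)]
    + ["R1", "R2"]
)

def remaining(answered: set) -> list:
    """Questions still to ask interactively, in catalog order."""
    return [q for q in ALL_QUESTIONS if q not in answered]

stage0 = {"Q9", "Q11", "Q12", "Q13"}  # typical auto-discovery + research
print(f"Already gathered {len(stage0)}/23, asking {len(remaining(stage0))} remaining")
```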
---
## Example Discovery Flow (Context-First Approach)
**Stage 0 - Context Research (BEFORE asking questions)**
**Step 1: Auto-Discovery**
```
Assistant: "Starting context research...
✓ Auto-discovered from package.json:
- Q9: Node.js 20, React 18
- Q12: Express, Prisma ORM, PostgreSQL, Jest, Tailwind CSS
✓ Auto-discovered from Dockerfile:
- Q9: Production runtime: Node.js 20 Alpine
```
**Step 2: Request User Materials**
```
Assistant: "Do you have existing materials I should review before asking questions?
Examples: design documents, technical specifications, requirements docs, legacy system documentation, architectural diagrams, API contracts."
User: "Yes, here's our design doc and technical spec."
```
**Step 3: Extract from Materials**
```
Assistant: "Reading materials...
✓ Extracted from design doc:
- Q1: Acceptance criteria defined (user registration, product search <500ms, Stripe checkout)
- Q5: In scope (microservices, PostgreSQL, REST API, JWT auth, Stripe, AWS ECS)
- Q6: Out of scope (no mobile apps, no AI/ML, no cryptocurrency, no GraphQL, no Kubernetes)
- Q7: Integrations: Stripe (payments), SendGrid (emails), AWS S3 (images)
- D1-D6: Full design system documented
✓ Extracted from technical spec:
- Q3: All functional requirements defined with IDs (15 requirements)
- Q10: Hard constraints (AWS us-east-1, PCI DSS Level 1, no serverless)
- Q11: Microservices architecture
- O1-O3: Development environment, deployment, and monitoring procedures documented
- R1-R2: Technical risks and dependencies documented
Total extracted: 18/23 questions"
```
**Step 4: Research Best Practices**
```
Assistant: "Researching 2025 best practices...
✓ Researched via MCP Ref:
- Q12: Verified 2025 versions (Express 5.0, Prisma 6.0, Jest 30.0, Tailwind 4.0)
✓ Researched via WebSearch:
- Q11: Microservices best practices 2025 (event-driven, API Gateway patterns)
- Q13: Stripe API 2025 best practices (webhook security, idempotency)
Total gathered in Stage 0: 21/23 questions"
```
**Stage 1-2 - Interactive Questions (ONLY remaining)**
**Batch 1 - Remaining Questions**
```
Assistant: "✓ Context Research complete!
Information already gathered (21/23 questions):
- Auto-discovered: Q9, Q12 (from package.json/Dockerfile)
- Extracted from materials: Q1, Q3, Q5, Q6, Q7, Q10, Q11, D1-D6, O1-O3, R1-R2
- Researched: Q11, Q12, Q13
Remaining questions: 2 questions
Category 1 of 6: Requirements (asking 1 remaining question)
Q2: What is the Minimum Viable Product (MVP) from a technical standpoint?
Category 2 of 6: Scope (asking 1 remaining question)
Q8: Who are the technical user roles and what are their permissions?
Please provide your answers."
```
---
**Key Insight**: Context-first approach reduced interactive questions from 23 → 2 (91% reduction), saving significant user time and ensuring data accuracy from authoritative sources.
---
**Version:** 4.0.0 (MAJOR: Added Priority Order and Stage 0 Context Research)
**Last Updated:** 2025-11-17
**Changes from v3.0.0:**
- Added "Priority Order: Context First, Questions Last" section at top
- Added Stage 0 (Context Research) before Stage 1-2
- Updated "Question Metadata" to "3-Stage Discovery"
- Updated Stage descriptions to emphasize REMAINING questions only
- Updated "How to Use This Document" with Stage 0 workflow
- Updated examples to show context-first approach (91% question reduction)
- Clarified that interactive questions are LAST RESORT

---
# Example Usage
This document provides detailed examples of using the project-documentation-creator skill in different scenarios.
## Example 1: New Project with Materials
**User Request**:
```
"I want to create technical documentation for an existing Node.js e-commerce API"
```
**Skill Response (Phase 1)**:
```
"I'll help you create comprehensive technical documentation.
1. What type of project is this? → web app, API
2. Is this a new project, rewrite, or enhancement? → existing
```
**Skill Response (Phase 2)**:
```
"Do you have project materials to analyze? (files, diagrams, docs, code)"
User: "Yes, I have package.json, Dockerfile, and docker-compose.yml"
"Analyzing project materials..."
✓ Analyzed project materials
**Detected**:
- Runtime: Node.js 20 LTS
- Framework: Express 4.18
- Database: PostgreSQL 15 (from docker-compose.yml)
- Cache: Redis 7.2 (from docker-compose.yml)
- ORM: Prisma 5.6 (from package.json)
**Pre-populated**: Q9 (partial), Q12 (partial)
"Let's proceed to Stage 1: Understanding Requirements."
```
**Skill Response (Phase 3, Stage 1)**:
```
"Stage 1: Understanding Requirements - Category 1 of 2: Requirements
Q1: What are the high-level technical acceptance criteria?
Q2: What is the Minimum Viable Product (MVP) from a technical standpoint?
Q3: Are all functional requirements technically defined?
Please answer these questions."
```
**Skill Response (Phase 3, Stage 2)**:
```
"Stage 1 complete! Research best practices automatically? (Y/N)"
User: "Y"
"Researching 2025 best practices for your e-commerce API...
**Researched Technology Stack**:
**Runtime**: Node.js 20 LTS (detected, confirmed latest)
**Framework**: Express 4.18 → Recommend upgrade to 4.19 (latest stable, security fixes)
**Database**: PostgreSQL 15 → Recommend PostgreSQL 16 (Sep 2023, 5-year support, performance improvements)
**ORM**: Prisma 5.6 → Recommend Prisma 5.7 (latest stable, bug fixes)
**Cache**: Redis 7.2 (confirmed latest stable)
**Architectural Pattern**: RESTful API with Layered Architecture (Controller → Service → Repository)
Rationale: Matches your current Express setup, clear separation of concerns, testable
**Additional Libraries**:
- Validation: Zod 3.22 (type-safe schemas)
- Testing: Jest 29 + Supertest 6 (unit + integration)
- Security: helmet 7 + express-rate-limit 7
Do you accept these recommendations? (Y/N/Modify)"
```
**After All Phases Complete**:
```
"Technical documentation generation complete!
**Generated Files**:
- docs/project/requirements.md (15 FR)
- docs/project/architecture.md (with C4 diagrams)
- docs/project/technical_specification.md (includes auto-generated Dockerfile + docker-compose.yml)
- docs/project/adrs/ (5 ADRs: Express, PostgreSQL, Prisma, Redis, REST API)
**Next Steps**:
1. Review the generated documentation with your technical team
2. Use these documents as input for epic-creator to start creating Epics
Would you like to review any specific document or make adjustments?"
```
## Example 2: New Project without Materials
**User Request**:
```
"I want to create technical documentation for a new mobile app project"
```
**Skill Response**:
```
"I'll help you create comprehensive technical documentation.
1. What type of project is this? → mobile app
2. Is this a new project, rewrite, or enhancement? → new project
Do you have project materials to analyze? (files, diagrams, docs, code)"
User: "No"
"Let's begin Stage 1: Understanding Requirements."
```
*[Proceeds with fully interactive Q1-Q19 dialog, with automatic research option for Q9, Q11-Q13]*
---
**Version:** 2.0.0
**Last Updated:** 2025-01-31

---
# Template Mappings Reference
This document details how collected answers from Phase 2 (Core Documents Generation) map to specific sections in the 7 generated documentation templates.
## Document 1: requirements.md
**Template File**: `references/templates/requirements_template.md`
**Structure**:
- Functional Requirements (FR) ONLY organized by feature groups
- NO Non-Functional Requirements (NFR removed per project policy)
- Each requirement includes: ID, Description, Priority (MoSCoW), Acceptance Criteria
**Key Mappings**:
| Template Section | Source Questions | Notes |
|-----------------|------------------|-------|
| Section 1.2 Scope | Q1 | High-level technical acceptance criteria |
| Section 2.1 Product Perspective | Q2 | MVP technical scope |
| Section 3 FR by feature groups | Q1, Q2, Q3 | Organize functional requirements by user-facing features |
| Section 5 Constraints | Q10 | Technical and regulatory constraints |
**Format Example**:
```markdown
### FR-AUTH-001: User Registration
**Priority**: MUST
**Description**: Users can register with email and password
**Acceptance Criteria**:
- Email validation (RFC 5322)
- Password strength requirements (min 12 chars, uppercase, lowercase, number, symbol)
- Confirmation email sent within 5 seconds
```
---
## Document 2: architecture.md
**Template File**: `references/templates/architecture_template.md`
**Structure**:
- 11 sections following arc42 (simplified) + C4 Model
- Includes Mermaid diagrams (C4 Context, Container, Component)
- NO Deployment diagrams (moved to runbook.md)
**Key Mappings**:
| Template Section | Source Questions | Notes |
|-----------------|------------------|-------|
| Section 1.1 Requirements Overview | Q1, Q2 | Brief summary linking to requirements.md |
| Section 1.2 Quality Goals | R1 | Top 3-5 quality attributes (performance, security, scalability) |
| Section 2 Constraints | Q10 | Technical, organizational, regulatory constraints |
| Section 3.1 Business Context | Q5, Q6 | External actors and business boundaries |
| Section 3.2 Technical Context | Q7, Q13 | External systems, integrations, interfaces |
| Section 4.1 Technology Decisions | Q9, Q11, Q12 | High-level tech choices with ADR links |
| Section 4.2 Top-Level Decomposition | Q11 | Architecture pattern (layered, microservices, etc.) |
| Section 8 ADRs | Q9, Q11, Q12 | List of all ADR links |
| Section 10 Risks | R2 | Known technical risks and mitigation |
**Diagram Mappings**:
| Diagram Type | Generated From | Purpose |
|-------------|---------------|---------|
| C4 Context | Q7 (boundaries), Q13 (integrations) | System + external actors/systems |
| C4 Container | Q11 (architecture), Q9 (database/cache) | Frontend, Backend, Database, Cache, Queue |
| C4 Component | Q11 (pattern), Q12 (frameworks) | API application breakdown (controllers, services, repositories) |
**Note**: Deployment diagrams removed (now in runbook.md). Quality Goals derived from R1 (risks), NOT from NFR questions.
---
## Document 3: tech_stack.md
**Template File**: `references/templates/tech_stack_template.md`
**Structure**:
- 4 sections: Overview, Technology Stack table, Docker configuration, Naming conventions
- NO API endpoints, NO database schema (those in separate docs)
**Key Mappings**:
| Template Section | Source Questions | Notes |
|-----------------|------------------|-------|
| Section 2.1 Stack Overview Table | Q9, Q11, Q12 | Detailed version table with rationale and ADR links |
| Section 3.1 Dockerfile | Q9, Q12 (auto-discovered) | Auto-generated multi-stage Dockerfile |
| Section 3.2 docker-compose.yml | Q9, Q12 (auto-discovered) | Auto-generated from package.json + researched versions |
| Section 3.3 docker-compose.test.yml | Q9, Q12 | Test environment configuration |
| Section 4 Naming Conventions | Q8 | File structure, naming patterns |
**Stack Table Format**:
```markdown
| Layer | Technology | Version | Rationale | ADR |
|-------|-----------|---------|-----------|-----|
| Frontend | Next.js | 14.0 | SSR for SEO, team expertise | ADR-001 |
| Backend | Node.js | 20 LTS | JavaScript fullstack, async I/O | ADR-002 |
| Database | PostgreSQL | 16 | ACID, JSON support, maturity | ADR-003 |
```
**Auto-Discovery**: Phase 1.3a extracts Q9 (runtime versions) and Q12 (frameworks) from package.json/Dockerfile before asking user.
---
## Document 4: api_spec.md *(conditional: API/Backend projects only)*
**Template File**: `references/templates/api_spec_template.md`
**Structure**:
- 6 sections: Overview, Authentication, API Endpoints, Request/Response schemas, Error Codes, Rate Limiting
- OpenAPI 3.0 compatible structure
**Key Mappings**:
| Template Section | Source Questions | Notes |
|-----------------|------------------|-------|
| Section 2 Authentication | Q11, Q12 | JWT/OAuth2/API keys implementation |
| Section 3 API Endpoints | Q1, Q3 (derived) | RESTful endpoints derived from functional requirements |
| Section 5 Error Codes | - | Standard error code taxonomy |
| Section 6 Rate Limiting | Q10 | API rate limits from constraints |
**Endpoint Table Format**:
```markdown
| Method | Endpoint | Description | Auth Required | Request Body | Response |
|--------|----------|-------------|---------------|--------------|----------|
| POST | /auth/register | Register new user | No | RegisterDTO | UserDTO |
| GET | /products | List products | No | - | ProductDTO[] |
| POST | /orders | Create order | Yes | OrderDTO | OrderDTO |
```
**Auto-Generation**: Endpoints derived from FR requirements in Q1, Q3 (e.g., "User Registration" → POST /auth/register).
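That derivation can be sketched as a lookup from requirement titles to REST routes. The mapping table below is purely illustrative of the idea, not the skill's actual algorithm:

```python
# Hypothetical mapping from functional-requirement titles to REST endpoints.
FR_TO_ENDPOINT = {
    "User Registration": ("POST", "/auth/register"),
    "Product Listing": ("GET", "/products"),
    "Order Creation": ("POST", "/orders"),
}

def derive_endpoint(fr_title: str) -> str:
    """Render the endpoint row for a given functional requirement."""
    method, path = FR_TO_ENDPOINT[fr_title]
    return f"{method} {path}"

print(derive_endpoint("User Registration"))  # POST /auth/register
```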
---
## Document 5: database_schema.md *(conditional: projects with database)*
**Template File**: `references/templates/database_schema_template.md`
**Structure**:
- 8 sections: Overview, ER Diagram, Data Dictionary (tables), Indexes, Migrations, Relationships, Constraints, Seed Data
- Mermaid ER diagrams
**Key Mappings**:
| Template Section | Source Questions | Notes |
|-----------------|------------------|-------|
| Section 2 ER Diagram | Q3 (derived) | Entities + relationships from functional requirements |
| Section 3 Data Dictionary | Q3 (derived) | Table definitions with columns, types, constraints |
| Section 4 Indexes | Q3, Q9 | Performance optimization indexes |
| Section 5 Migrations | Q9 | Migration strategy (Prisma/TypeORM/Alembic) |
| Section 8 Seed Data | Q3 | Sample data for development |
**ER Diagram Example**:
```mermaid
erDiagram
USERS ||--o{ ORDERS : places
ORDERS ||--|{ ORDER_ITEMS : contains
PRODUCTS ||--o{ ORDER_ITEMS : "ordered in"
USERS {
uuid id PK
string email UK
string password_hash
timestamp created_at
}
```
**Auto-Generation**: Entities derived from Q3 analysis (e.g., "User Registration" → USERS table, "Product Catalog" → PRODUCTS table).
---
## Document 6: design_guidelines.md *(conditional: Frontend/Full-stack projects only)*
**Template File**: `references/templates/design_guidelines_template.md`
**Structure**:
- 6 sections: Overview, Core Design Elements (typography, colors, spacing, components), Accessibility, Responsive Design, Brand Assets, Design Tokens
- Based on WCAG 2.1 Level AA standards
**Key Mappings**:
| Template Section | Source Questions | Notes |
|-----------------|------------------|-------|
| Section 2.1 Typography | D1 | Font families, sizes, weights, line heights |
| Section 2.2 Color System | D1 | Primary/secondary/semantic colors with hex codes |
| Section 2.3 Spacing System | D1, D3 | 8px base grid, spacing scale (4, 8, 12, 16, 24, 32, 48, 64) |
| Section 2.4 Component Library | D2 | Buttons, Forms, Cards, Modals with Tailwind/MUI classes |
| Section 3 Accessibility | D4 | WCAG compliance, ARIA labels, keyboard navigation |
| Section 4 Responsive Design | D3 | Breakpoints (mobile, tablet, desktop) |
| Section 5 Brand Assets | D5 | Logo usage, imagery guidelines |
| Section 6 Design Tokens | D6 | CSS variables or design system reference |
**Component Example**:
```markdown
#### Buttons
| Variant | Classes | Usage |
|---------|---------|-------|
| Primary | bg-primary text-white hover:bg-primary-dark px-6 py-3 rounded-lg | Primary CTAs |
| Secondary | bg-secondary text-gray-800 hover:bg-secondary-dark px-6 py-3 rounded-lg | Secondary actions |
```
**Skipped for**: Backend-only projects (no frontend).
---
## Document 7: runbook.md *(conditional: Docker-based projects)*
**Template File**: `references/templates/runbook_template.md`
**Structure**:
- 9 sections covering ALL environments: local development, testing, production operations
- Includes Docker commands, troubleshooting, SSH access, deployment procedures
**Key Mappings**:
| Template Section | Source Questions | Notes |
|-----------------|------------------|-------|
| Section 2.1 Required Tools | O1 (auto-discovered) | Docker, Docker Compose, Node.js, Git versions |
| Section 3 Local Development | O1, Q9 (auto-discovered) | Docker commands extracted from docker-compose.yml |
| Section 4 Testing | O1 | Test commands (unit, integration, e2e) |
| Section 5 Build & Deployment | O2 | Production build and deployment procedures |
| Section 6 Production Operations | O2, O3 | SSH access, health checks, monitoring, logs |
| Section 7 Troubleshooting | O3 | Common issues and resolutions |
| Appendix A | O1 (auto-discovered) | Environment variables from .env.example |
**Docker Commands Example**:
```bash
# Start all services
docker compose up -d
# Rebuild after code changes
docker compose down
docker compose build --no-cache app
docker compose up -d
# View logs
docker compose logs -f app
```
**Auto-Discovery**: Docker commands, environment variables, and service names extracted from Dockerfile and docker-compose.yml in Phase 1.3a.
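Service-name extraction can be sketched without a YAML dependency. This minimal version assumes 2-space indentation under a top-level `services:` key; a production implementation would use a real YAML parser.

```python
import re

def extract_service_names(compose_text: str) -> list:
    """Pull top-level service names out of docker-compose.yml text.

    Indentation-based sketch: collect 2-space-indented keys under
    'services:' and stop at the next top-level key.
    """
    names, in_services = [], False
    for line in compose_text.splitlines():
        if re.match(r"^services:\s*$", line):
            in_services = True
            continue
        if in_services:
            if re.match(r"^\S", line):  # next top-level key ends the block
                break
            m = re.match(r"^  (\w[\w-]*):\s*$", line)
            if m:
                names.append(m.group(1))
    return names

compose = """\
services:
  app:
    image: node:18
  db:
    image: postgres:16
volumes:
  data:
"""
print(extract_service_names(compose))  # ['app', 'db']
```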
---
## Template Placeholder Format
All templates use explicit placeholder mapping with comments for traceability:
```markdown
**Technology Stack:** {{TECHNOLOGY_STACK}}
<!-- From Q9: What database technology will you use? (auto-discovered from package.json) -->
**Architecture Pattern:** {{ARCHITECTURE_PATTERN}}
<!-- From Q11: What architectural patterns will be used? (auto-researched via WebSearch) -->
**Color System:** {{COLOR_SYSTEM}}
<!-- From D1: What typography and color system should be used? -->
**Docker Commands:** {{DOCKER_COMMANDS}}
<!-- From O1: What is the development environment setup? (auto-discovered from docker-compose.yml) -->
```
This format allows clear traceability from questions to documentation sections. Auto-discovery annotations indicate when data is extracted automatically in Phase 1.3a.
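The substitution step can be sketched with a regex over `{{PLACEHOLDER}}` tokens. This is an illustrative sketch, not the skill's actual templating code; note that unknown tokens are left in place so unanswered questions stay visible in the generated document.

```python
import re

def fill_template(template: str, answers: dict) -> str:
    """Replace {{PLACEHOLDER}} tokens with answers.

    Tokens without an answer are left intact, so gaps remain
    visible as TBD markers in the output.
    """
    def sub(m):
        return answers.get(m.group(1), m.group(0))
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

template = (
    "**Technology Stack:** {{TECHNOLOGY_STACK}}\n"
    "**Color System:** {{COLOR_SYSTEM}}"
)
answers = {"TECHNOLOGY_STACK": "Node.js 18 + PostgreSQL 16"}
print(fill_template(template, answers))
```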
---
## Appendix: ADRs (Auto-Generated)
**Note**: ADRs are still auto-generated as part of the documentation suite, but are NOT one of the 7 main documents.
**Template File**: `references/templates/adr_template.md`
**Generated ADRs** (3-5 per project):
| ADR | Title | Source Questions | When Generated |
|-----|-------|------------------|---------------|
| ADR-001 | Frontend Framework Choice | Q11, Q12 | If frontend framework specified |
| ADR-002 | Backend Framework Choice | Q11, Q12 | Always (every project has backend) |
| ADR-003 | Database Choice | Q9 | If database specified |
| ADR-004 | Additional Technology 1 | Q12 | If significant library chosen (ORM, cache, queue) |
| ADR-005 | Additional Technology 2 | Q12 | If multiple significant choices made |
**Format**: Michael Nygard's ADR format (Context, Decision, Rationale, Consequences, Alternatives Considered)
**Location**: `docs/reference/adrs/adr-NNN-*.md`
---
**Version:** 2.0.0 (BREAKING: Updated for 7-document structure. Added tech_stack, api_spec, database_schema, design_guidelines, runbook mappings. Removed NFR mappings. Updated question IDs: Q1-Q13, D1-D6, O1-O3, R1-R2.)
**Last Updated:** 2025-11-16

# Troubleshooting Guide
This document provides solutions to common issues when using the x-docs-creator skill.
## Issue 1: User Doesn't Know Answers
**Problem**: User doesn't know answers to some technical questions during Phase 3 discovery.
**Solution**:
- Mark questions as "TBD" and flag for follow-up
- Generate documents with placeholders (e.g., `{{TODO: Define performance requirements}}`)
- Skill can be re-run later to update documentation once answers are available
- Documents remain valid with TBD placeholders for initial planning
## Issue 2: Project Too Small
**Problem**: Project is very small (1-2 person team, simple app) and 19 questions seem excessive.
**Solution**:
- Skip optional questions that don't apply to small projects
- Generate minimal viable technical documentation:
- Requirements document (simplified FR only - critical functional requirements)
- Simplified Architecture (basic tech stack + deployment diagram)
- Skip detailed technical specifications and ADRs if not needed
- Focus on Q1-Q8 (requirements + scope) as minimum viable documentation
## Issue 3: Auto-Research Returns Outdated Technologies
**Problem**: Phase 3 Stage 2 auto-research recommends outdated or deprecated technologies.
**Solution**:
- **Verify the Research Date**: Confirm the skill used the current year (2025) in its research queries
- **Check MCP Ref Results**: Review specific library documentation returned
- **Manually Verify**: Cross-check recommendations with official docs
- **Override if Needed**: Select "Modify" option to override recommendations
- **Report Issue**: If persistent, check skill version and update
---
**Version:** 2.0.0
**Last Updated:** 2025-01-31