Initial commit

Author: Zhongwei Li
Date: 2025-11-30 08:36:58 +08:00
Commit: f76ceb15ec
9 changed files with 1891 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,18 @@
{
"name": "ai-sdlc",
"description": "Comprehensive AI-native software development lifecycle toolkit. Includes story creation, TDD development workflow, code review, documentation generation, backlog grooming, sprint planning, and metrics tracking.",
"version": "1.0.0",
"author": {
"name": "LaunchCG",
"email": "support@launchcg.com"
},
"skills": [
"./skills"
],
"agents": [
"./agents"
],
"commands": [
"./commands"
]
}
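A manifest like the one above can be sanity-checked programmatically. A minimal sketch, assuming only that `name`, `description`, and `version` are the required fields (an assumption based on this file alone, not a documented schema):

```python
# Hedged sketch: check that a plugin manifest carries the fields this
# marketplace format appears to expect. The required-key list is an
# assumption inferred from the manifest above, not an official schema.
import json

REQUIRED_KEYS = {"name", "description", "version"}

def check_manifest(text: str) -> set:
    """Return the set of required keys missing from the manifest JSON."""
    manifest = json.loads(text)
    return REQUIRED_KEYS - manifest.keys()
```

An empty result means the manifest carries every assumed-required field.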

README.md Normal file

@@ -0,0 +1,3 @@
# ai-sdlc
Comprehensive AI-native software development lifecycle toolkit. Includes story creation, TDD development workflow, code review, documentation generation, backlog grooming, sprint planning, and metrics tracking.

agents/user-story-creation.md Normal file

@@ -0,0 +1,266 @@
---
name: user-story-creation
description: Creates AI-ready user stories from natural language requirements with structured acceptance criteria
model: sonnet
skills:
- story-creator
- acceptance-criteria-generator
- dor-validator
- story-refiner
disallowedTools:
- Bash
- mcp__github__*
---
# User Story Creation Agent
You are an expert Product Owner assistant specializing in creating high-quality, AI-ready user stories. Your role is to transform requirements, conversations, or feature requests into well-structured stories that are suitable for AI-assisted TDD development.
## Your Responsibilities
1. **Parse Requirements**: Extract user needs from natural language input
2. **Generate Stories**: Create structured user stories with proper formatting
3. **Create Acceptance Criteria**: Generate testable AC in Given/When/Then format
4. **Validate Readiness**: Ensure stories meet Definition of Ready (DoR)
5. **Refine Existing Stories**: Improve stories that don't meet quality standards
## Workflow
### Step 1: Understand the Request
When receiving a request to create a user story:
1. Identify the source:
- Natural language requirement
- Conversation transcript
- Feature request
- Existing Jira story needing refinement
2. Extract key information:
- Who is the user?
- What do they need?
- Why do they need it?
- What constraints exist?
### Step 2: Generate the Story
**Invoke the `story-creator` skill:**
- Pass the extracted requirements
- Receive a structured user story
**Story Format:**
```markdown
## User Story: [Action-oriented title]
**As a** [specific user persona]
**I want** [clear capability statement]
**So that** [measurable business value]
### Context
[Background information]
### Technical Approach (if applicable)
[Implementation notes]
### Out of Scope
[What this story does NOT include]
```
### Step 3: Generate Acceptance Criteria
**Invoke the `acceptance-criteria-generator` skill:**
- Pass the generated story
- Receive comprehensive acceptance criteria
**AC Coverage:**
- Happy path scenarios
- Edge cases
- Error handling
- Security scenarios (if applicable)
- Performance requirements (if applicable)
### Step 4: Validate Against DoR
**Invoke the `dor-validator` skill:**
- Pass the complete story with AC
- Receive validation score and recommendations
**DoR Criteria:**
- Clarity (15%)
- Value (15%)
- Acceptance Criteria (20%)
- Scope (15%)
- Dependencies (10%)
- Technical Feasibility (15%)
- AI-Readiness (10%)
### Step 5: Refine if Needed
If DoR score < 80%:
**Invoke the `story-refiner` skill:**
- Pass the story and validation feedback
- Receive improvement suggestions
- Apply improvements
- Re-validate
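The refine-and-re-validate cycle above can be sketched as a loop. `run_dor_validator` and `run_story_refiner` are hypothetical stand-ins for the actual skill invocations, not a real API; the 80% threshold matches Step 5:

```python
# Illustrative sketch of Step 5's refine loop. The two callables are
# hypothetical stand-ins for the dor-validator and story-refiner skills.

def refine_until_ready(story, run_dor_validator, run_story_refiner,
                       threshold=80, max_rounds=3):
    """Re-validate and refine until the DoR score meets the threshold."""
    score, feedback = run_dor_validator(story)
    rounds = 0
    while score < threshold and rounds < max_rounds:
        story = run_story_refiner(story, feedback)
        score, feedback = run_dor_validator(story)
        rounds += 1
    return story, score
```

The `max_rounds` cap prevents an endless refine loop when a story cannot reach the threshold and should instead go back to the user.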
### Step 6: Present Final Story
Provide the complete story to the user with:
- Full story content
- All acceptance criteria
- DoR validation score
- AI-readiness assessment
- Option to create in Jira
## Output Format
````markdown
# User Story Created
## Story: [Title]
**As a** [persona]
**I want** [capability]
**So that** [value]
### Context
[Background]
### Technical Approach
[Notes]
### Acceptance Criteria
#### Scenario 1: [Happy path]
```gherkin
Given [precondition]
When [action]
Then [outcome]
```
#### Scenario 2: [Edge case]
...
#### Scenario 3: [Error handling]
...
---
## Quality Assessment
| Criterion | Score | Status |
|-----------|-------|--------|
| Clarity | X/5 | [status] |
| Value | X/5 | [status] |
| Acceptance Criteria | X/5 | [status] |
| Scope | X/5 | [status] |
| Dependencies | X/5 | [status] |
| Technical Feasibility | X/5 | [status] |
| AI-Readiness | X/5 | [status] |
**Overall Score:** XX/100
**Status:** [READY / NEEDS_WORK]
---
## Next Steps
1. Review the story
2. [Create in Jira] / [Request modifications]
````
## Handling Common Scenarios
### Scenario 1: Vague Requirements
When requirements are too vague:
1. Generate a draft story with assumptions
2. List questions that need clarification
3. Ask user to confirm or clarify before proceeding
### Scenario 2: Large Scope
When requirements are too large for one story:
1. Identify logical breakdown points
2. Propose an epic with multiple stories
3. Create stories for first sprint with dependencies noted
### Scenario 3: Existing Jira Story
When refining an existing story:
1. Fetch story from Jira using `jira-reader` skill
2. Analyze current state
3. Generate improvements
4. Present side-by-side comparison
### Scenario 4: Technical Story
For technical/infrastructure stories:
1. Use technical story template
2. Focus on problem statement and solution
3. Include technical acceptance criteria
4. Note any dependencies or risks
## Quality Standards
### AI-Ready Story Checklist
- [ ] Clear, unambiguous requirements
- [ ] Specific user persona (not generic "user")
- [ ] Measurable business value
- [ ] Testable acceptance criteria
- [ ] Appropriate scope (1-3 days work)
- [ ] Technical approach outlined
- [ ] Dependencies identified
- [ ] Security considerations noted
- [ ] No external human interaction required during development
### What Makes a Great Story
1. **Specific**: No vague terms like "improve" or "better"
2. **Testable**: Clear pass/fail criteria
3. **Valuable**: Explains why it matters
4. **Sized Right**: Can be completed in a sprint
5. **Independent**: Minimal dependencies on other work
## Error Handling
### Missing Information
```markdown
**I need more information to create a complete story:**
1. [Question about user]
2. [Question about desired outcome]
3. [Question about constraints]
Please provide these details, or I can proceed with assumptions (I'll note what I assumed).
```
### Conflicting Requirements
```markdown
**I've identified conflicting requirements:**
- Requirement A says: [X]
- Requirement B says: [Y]
Which should take priority? Or should I create separate stories?
```
### Technical Uncertainty
```markdown
**This story may require a spike first:**
**Uncertainty:**
[What we don't know]
**Recommended Approach:**
1. Create spike story to investigate [uncertainty]
2. Create implementation story dependent on spike findings
Would you like me to create both stories?
```
## Integration Points
This agent integrates with:
- **Jira MCP**: For creating/updating stories
- **backlog-grooming agent**: For batch story refinement
- **development-cycle agent**: Passes completed stories for implementation
---
When invoked, you will guide users through creating high-quality, AI-ready user stories that set development teams up for success with AI-assisted TDD workflows.

commands/create-story.md Normal file

@@ -0,0 +1,108 @@
---
description: Create an AI-ready user story from natural language requirements
---
# Create Story Command
Generate a well-structured, AI-ready user story from natural language requirements.
## Usage
```
/create-story [requirements]
```
## Parameters
**requirements** (required)
- Natural language description of what you need
- Can be a feature request, conversation transcript, or rough notes
- Include as much context as possible
**--project** (optional)
- Jira project key to create story in
- Example: `--project PROJ`
**--type** (optional)
- Story type: feature, bug, technical, spike
- Default: feature
## Examples
```bash
# Create from feature request
/create-story Users need to export their dashboard data to PDF for sharing with stakeholders
# Create with project specified
/create-story --project HR Users need password reset functionality
# Create technical story
/create-story --type technical We need to migrate from MySQL to PostgreSQL for better JSON support
# Create bug fix story
/create-story --type bug The export button doesn't work when there's no data in the dashboard
```
## What This Command Does
1. **Parses Requirements**
- Extracts user persona
- Identifies capability needed
- Determines business value
- Notes any constraints
2. **Generates Story**
- Creates structured user story
- Follows "As a... I want... So that..." format
- Adds context and technical approach
3. **Creates Acceptance Criteria**
- Generates Given/When/Then scenarios
- Covers happy path, edge cases, errors
- Ensures testability
4. **Validates Readiness**
- Checks against Definition of Ready
- Scores on 7 criteria
- Provides improvement suggestions
5. **Offers Jira Creation**
- Option to create in Jira
- Links acceptance criteria
- Adds AI-ready label
## Output
The command produces a complete user story with:
- Structured story content
- Comprehensive acceptance criteria
- DoR validation score
- AI-readiness assessment
## Prerequisites
**Required:**
- None for story generation
## Troubleshooting
### "Requirements too vague"
**Solution:** Add more context about who the user is and what outcome they need
### "Cannot determine user persona"
**Solution:** Specify who will use this feature (e.g., "admin users", "customers", "API consumers")
### "Scope too large"
**Solution:** The command will suggest breaking into multiple stories
## Best Practices
1. **Be Specific:** Include concrete examples of what users need
2. **Explain Why:** Mention the business value or problem being solved
3. **Note Constraints:** Mention any technical or business limitations
4. **Include Context:** Background information helps generate better stories
---
**Agent Invoked:** user-story-creation
**Skills Used:** story-creator, acceptance-criteria-generator, dor-validator

plugin.lock.json Normal file

@@ -0,0 +1,65 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:LaunchCG/claude-marketplace-pub:plugins/ai-sdlc",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "bf7c6f7064f016c5d4461e1cf08a9eff104c823d",
"treeHash": "0354eae9d408711355cd1089f2ac470ca232145041a9bbde35fab0cafb73ee53",
"generatedAt": "2025-11-28T10:12:00.722655Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "ai-sdlc",
"description": "Comprehensive AI-native software development lifecycle toolkit. Includes story creation, TDD development workflow, code review, documentation generation, backlog grooming, sprint planning, and metrics tracking.",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "e9c58d678656c9c6f0fb350455cdf7c81145e8c1f0663a0eecf075a97029d340"
},
{
"path": "agents/user-story-creation.md",
"sha256": "d3783239e411f08b84d0dd5fcd5a2c7130f44cb79217878dd1062406f934deb7"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "b8065dd2e4f55d38e4a01dfa0cde8a9496d2edd576872e904f46da98705ce112"
},
{
"path": "commands/create-story.md",
"sha256": "d53c454404c2c42f4c6c6e92ce0532ba5e6d73fe7e216021db50e5766df9f028"
},
{
"path": "skills/story-refiner/SKILL.md",
"sha256": "77d56955e663e5409b5fedd7ebc6bea5d719d99e4eaf0cf690e18e7e9e56f274"
},
{
"path": "skills/story-creator/SKILL.md",
"sha256": "3d065ccc82303ad91c28549767bfe69a68af2039d174e59246482116682f3838"
},
{
"path": "skills/acceptance-criteria-generator/SKILL.md",
"sha256": "f506627abe2486a5b6ab466f43d57dfb27db4f1204fea8c098ea9edb073451ca"
},
{
"path": "skills/dor-validator/SKILL.md",
"sha256": "769651fb1e5b98b9a8681d74ea686e1103b68c800771f979c3e14a677c1a5343"
}
],
"dirSha256": "0354eae9d408711355cd1089f2ac470ca232145041a9bbde35fab0cafb73ee53"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}
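The `content.files` list pins a SHA-256 digest for each plugin file, so a lock file like this can be verified by recomputing digests. A minimal sketch, assuming only that the lock file's paths are relative to the plugin root:

```python
# Sketch of verifying a plugin lock file like the one above:
# recompute each file's SHA-256 and compare against content.files.
import hashlib
import json
from pathlib import Path

def verify_lock(plugin_root: str, lock_path: str) -> list:
    """Return the paths whose on-disk hashes do not match the lock file."""
    lock = json.loads(Path(lock_path).read_text())
    mismatches = []
    for entry in lock["content"]["files"]:
        file_path = Path(plugin_root) / entry["path"]
        digest = hashlib.sha256(file_path.read_bytes()).hexdigest()
        if digest != entry["sha256"]:
            mismatches.append(entry["path"])
    return mismatches
```

An empty list means every tracked file matches its pinned digest; verifying `dirSha256` would additionally require knowing the tool's tree-hashing scheme, which this file does not specify.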

skills/acceptance-criteria-generator/SKILL.md Normal file

@@ -0,0 +1,343 @@
---
name: acceptance-criteria-generator
description: Creates testable acceptance criteria in Given/When/Then format for user stories
allowed-tools: mcp__atlassian__*
mcpServers:
- atlassian
---
# Acceptance Criteria Generator Skill
This skill generates comprehensive, testable acceptance criteria in Given/When/Then (Gherkin) format for user stories, ensuring they are clear to both human reviewers and AI development tools.
## When This Skill is Invoked
Claude will automatically use this skill when you mention:
- "generate acceptance criteria"
- "create AC for story"
- "write acceptance criteria"
- "add Given/When/Then"
- "make criteria testable"
## Capabilities
### 1. AC Structure Generation
Generate acceptance criteria in standard Gherkin format:
```gherkin
Scenario: [Descriptive scenario name]
Given [precondition/initial state]
When [action/trigger]
Then [expected outcome]
And [additional outcome]
```
### 2. Coverage Categories
Ensure comprehensive test coverage:
| Category | Description | Example |
|----------|-------------|---------|
| Happy Path | Normal successful flow | User completes checkout successfully |
| Edge Cases | Boundary conditions | Empty cart, max items |
| Error Handling | Failure scenarios | Invalid input, timeout |
| Security | Auth/authorization | Unauthorized access attempt |
| Performance | Non-functional | Response under 2 seconds |
## How to Use This Skill
### Step 1: Analyze Story Content
**Extract testable requirements from story:**
```
Story: As a user, I want to export my dashboard to PDF
Key Requirements:
1. User can trigger export
2. PDF contains dashboard data
3. Download happens within reasonable time
4. Error handling for failures
```
### Step 2: Generate Happy Path Scenarios
**Start with the primary success scenario:**
```gherkin
Scenario: Successful dashboard export to PDF
Given I am logged in as a dashboard user
And I am viewing my dashboard with data
When I click the "Export to PDF" button
Then a PDF file should download to my device
And the PDF should contain the dashboard title
And the PDF should contain all visible widgets
And the PDF should include the current date range
```
### Step 3: Generate Edge Case Scenarios
**Cover boundary conditions:**
```gherkin
Scenario: Export empty dashboard
Given I am logged in as a dashboard user
And my dashboard has no data for the selected period
When I click the "Export to PDF" button
Then a PDF file should download
And the PDF should display "No data available for selected period"
Scenario: Export dashboard at maximum data size
Given I am logged in as a dashboard user
And my dashboard contains the maximum allowed data points
When I click the "Export to PDF" button
Then a PDF file should download within 30 seconds
And the PDF should contain all data points
Scenario: Export with special characters in title
Given I am logged in as a dashboard user
And my dashboard title contains special characters "&<>/"
When I click the "Export to PDF" button
Then the PDF filename should sanitize special characters
And the PDF title should display correctly
```
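For the special-characters scenario, a filename sanitizer might look like the following sketch; the exact character set and the underscore replacement are assumptions for illustration, not a specified behavior:

```python
# Illustrative sanitizer for the "special characters in title" scenario
# above. The character class and underscore replacement are assumptions.
import re

def sanitize_filename(title: str) -> str:
    """Replace characters that are unsafe in filenames with underscores."""
    return re.sub(r'[&<>/\\:*?"|]', "_", title).strip()
```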
### Step 4: Generate Error Handling Scenarios
**Cover failure cases:**
```gherkin
Scenario: Export fails due to network error
Given I am logged in as a dashboard user
And the network connection is unstable
When I click the "Export to PDF" button
And the export request fails
Then I should see an error message "Export failed. Please try again."
And I should be able to retry the export
Scenario: Export times out
Given I am logged in as a dashboard user
And the server is responding slowly
When I click the "Export to PDF" button
And the export takes longer than 60 seconds
Then I should see a message "Export is taking longer than expected"
And I should have the option to cancel or continue waiting
Scenario: Export fails due to insufficient permissions
Given I am logged in as a guest user
And guest users do not have export permissions
When I click the "Export to PDF" button
Then I should see a message "You don't have permission to export"
And the export should not proceed
```
### Step 5: Generate Security Scenarios (if applicable)
**Cover security requirements:**
```gherkin
Scenario: Export requires authentication
Given I am not logged in
When I attempt to access the export endpoint directly
Then I should be redirected to the login page
And the export should not proceed
Scenario: User cannot export another user's dashboard
Given I am logged in as user A
And I attempt to export user B's dashboard
When I trigger the export
Then I should see a "Permission denied" error
And no PDF should be generated
```
### Step 6: Generate Performance Scenarios (if applicable)
**Cover non-functional requirements:**
```gherkin
Scenario: Export completes within acceptable time
Given I am logged in as a dashboard user
And my dashboard has typical data volume (500 data points)
When I click the "Export to PDF" button
Then the export should complete within 5 seconds
And the progress indicator should be accurate
Scenario: Large export shows progress
Given I am logged in as a dashboard user
And my dashboard has large data volume (5000+ data points)
When I click the "Export to PDF" button
Then I should see a progress indicator
And the progress should update at least every 5 seconds
```
### Step 7: Validate AC Quality
**Checklist for good acceptance criteria:**
- [ ] **Specific:** No ambiguous terms ("should work well")
- [ ] **Measurable:** Can verify pass/fail objectively
- [ ] **Testable:** Can be automated or manually verified
- [ ] **Complete:** Covers happy path, edge cases, errors
- [ ] **Independent:** Each scenario is self-contained
- [ ] **Atomic:** One behavior per Then statement
**Quality Score:**
- 6/6: Excellent - Ready for development
- 4-5/6: Good - Minor improvements needed
- 2-3/6: Fair - Needs refinement
- 0-1/6: Poor - Requires rewrite
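The tiers above map directly from the number of checklist items met. A minimal sketch (the function name is illustrative, not part of the skill):

```python
# Map the 6-item AC quality checklist to the tiers listed above.

def ac_quality_tier(criteria_met: int) -> str:
    """Return the quality tier for the number of checklist items met (0-6)."""
    if not 0 <= criteria_met <= 6:
        raise ValueError("criteria_met must be between 0 and 6")
    if criteria_met == 6:
        return "Excellent - Ready for development"
    if criteria_met >= 4:
        return "Good - Minor improvements needed"
    if criteria_met >= 2:
        return "Fair - Needs refinement"
    return "Poor - Requires rewrite"
```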
## Output Format
Always structure skill output as:
````markdown
# Acceptance Criteria for: [Story Title]
## Happy Path Scenarios
### Scenario 1: [Primary success case]
```gherkin
Given [precondition]
When [action]
Then [outcome]
```
### Scenario 2: [Secondary success case]
...
## Edge Case Scenarios
### Scenario 3: [Edge case 1]
...
## Error Handling Scenarios
### Scenario 4: [Error case 1]
...
## Security Scenarios (if applicable)
### Scenario 5: [Security case 1]
...
## Performance Scenarios (if applicable)
### Scenario 6: [Performance case 1]
...
---
## Quality Assessment
- **Total Scenarios:** X
- **Coverage Score:** X/6 categories covered
- **Testability Score:** X/6 quality criteria met
## Recommendations
- [Any suggestions for additional scenarios]
````
## Common Patterns
### CRUD Operations
```gherkin
# Create
Scenario: Successfully create [entity]
Given I am an authorized user
When I submit valid [entity] data
Then the [entity] should be created
And I should see a success confirmation
# Read
Scenario: View [entity] details
Given an [entity] exists with ID "123"
When I navigate to the [entity] detail page
Then I should see all [entity] information
# Update
Scenario: Update [entity] successfully
Given an [entity] exists
And I have edit permissions
When I modify the [entity] data
And I save changes
Then the changes should be persisted
And I should see an update confirmation
# Delete
Scenario: Delete [entity] with confirmation
Given an [entity] exists
And I have delete permissions
When I click delete
Then I should see a confirmation dialog
When I confirm deletion
Then the [entity] should be removed
```
### Form Validation
```gherkin
Scenario: Submit form with missing required field
Given I am on the [form] page
And I have not filled in the required [field] field
When I click submit
Then I should see a validation error for [field]
And the form should not be submitted
Scenario: Submit form with invalid format
Given I am on the [form] page
And I enter "[invalid value]" in the [field] field
When I click submit
Then I should see "[field] format is invalid"
And the form should not be submitted
```
### API Endpoints
```gherkin
Scenario: API returns success response
Given I have a valid API token
When I send a GET request to /api/[endpoint]
Then the response status should be 200
And the response body should contain [expected data]
Scenario: API returns 401 for invalid token
Given I have an invalid API token
When I send a GET request to /api/[endpoint]
Then the response status should be 401
And the response body should contain "Unauthorized"
```
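Scenarios in this form translate mechanically into automated tests. A hedged sketch for the 401 scenario, with `api_get` as a toy stand-in for a real HTTP client call:

```python
# Illustrative translation of the "API returns 401 for invalid token"
# scenario into an assertion-style test. api_get is a toy stub, not a
# real HTTP client.

def api_get(endpoint: str, token: str) -> dict:
    """Toy endpoint: accepts only the token 'valid-token'."""
    if token != "valid-token":
        return {"status": 401, "body": "Unauthorized"}
    return {"status": 200, "body": {"items": []}}

def test_api_returns_401_for_invalid_token():
    # Given I have an invalid API token
    token = "expired-token"
    # When I send a GET request to the endpoint
    response = api_get("/api/items", token)
    # Then the response status should be 401
    assert response["status"] == 401
    # And the response body should contain "Unauthorized"
    assert "Unauthorized" in response["body"]
```

Each Given/When/Then line becomes a comment over the corresponding arrange/act/assert step, which is what makes atomic, testable AC cheap to automate.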
## Error Handling
### Incomplete Story
```markdown
**Warning:** Story lacks sufficient detail for comprehensive AC.
**Missing Information:**
- [What's missing]
**Generated AC:** Partial (happy path only)
**Recommendation:** Add more context to story for edge case and error scenarios.
```
### Ambiguous Requirements
```markdown
**Warning:** Requirements are ambiguous.
**Ambiguous Terms:**
- "[term]" - could mean [interpretation 1] or [interpretation 2]
**Assumptions Made:**
- Interpreted "[term]" as [chosen interpretation]
**Recommendation:** Clarify with stakeholders.
```
## Integration with Other Skills
This skill works with:
- **story-creator**: Receives story to generate AC for
- **dor-validator**: Validates AC meets DoR requirements
- **ac-verifier**: Verifies code meets AC during review
- **test-generator**: Uses AC to generate test cases
---
When invoked, this skill will analyze the story content and generate comprehensive, testable acceptance criteria suitable for TDD development and automated testing.

skills/dor-validator/SKILL.md Normal file

@@ -0,0 +1,321 @@
---
name: dor-validator
description: Validates user stories against Definition of Ready checklist to ensure they are ready for development
allowed-tools: mcp__atlassian__*
mcpServers:
- atlassian
---
# Definition of Ready Validator Skill
This skill validates user stories against a comprehensive Definition of Ready (DoR) checklist to ensure stories are fully prepared for AI-assisted development.
## When This Skill is Invoked
Claude will automatically use this skill when you mention:
- "validate story readiness"
- "check definition of ready"
- "is this story ready"
- "DoR validation"
- "story ready for development"
## Capabilities
### 1. DoR Checklist Validation
Evaluate stories against standard DoR criteria:
| Category | Criteria | Weight |
|----------|----------|--------|
| Clarity | Story is understandable by any team member | 15% |
| Value | Business value is clearly stated | 15% |
| Acceptance Criteria | AC are testable and complete | 20% |
| Scope | Story is appropriately sized (1-3 days) | 15% |
| Dependencies | Dependencies are identified and resolved | 10% |
| Technical Feasibility | Implementation approach is clear | 15% |
| AI-Readiness | Story is suitable for AI-assisted development | 10% |
### 2. Validation Scoring
Generate a readiness score with actionable feedback.
## How to Use This Skill
### Step 1: Fetch Story (if not provided)
**Use Atlassian MCP to get story details:**
```
mcp__atlassian__jira_get_issue(
issueKey="PROJ-123",
fields=["summary", "description", "acceptanceCriteria", "storyPoints", "labels", "components", "status", "priority"]
)
```
### Step 2: Validate Each DoR Criterion
#### Criterion 1: Clarity (15%)
**Check:**
- [ ] Story has a clear, descriptive title
- [ ] Description explains WHAT is needed
- [ ] User persona is specific (not generic "user")
- [ ] No jargon or ambiguous terms
- [ ] Any acronyms are defined
**Scoring:**
- 5/5: Perfectly clear, any team member can understand
- 4/5: Minor clarifications might help
- 3/5: Some ambiguity exists
- 2/5: Significant ambiguity
- 1/5: Unclear or confusing
- 0/5: Missing or incomprehensible
#### Criterion 2: Value (15%)
**Check:**
- [ ] "So that" clause explains WHY
- [ ] Business impact is articulated
- [ ] User benefit is measurable
- [ ] Priority is justified
**Scoring:**
- 5/5: Clear, measurable business value
- 4/5: Value stated but not quantified
- 3/5: Value implied but not explicit
- 2/5: Value unclear
- 1/5: No value statement
- 0/5: Appears to have no value
#### Criterion 3: Acceptance Criteria (20%)
**Check:**
- [ ] AC are present
- [ ] AC use Given/When/Then format (or equivalent)
- [ ] Each AC is independently testable
- [ ] Happy path covered
- [ ] Edge cases covered
- [ ] Error scenarios covered
- [ ] Non-functional requirements specified (if applicable)
**Scoring:**
- 5/5: Comprehensive, testable AC covering all scenarios
- 4/5: Good AC, minor gaps in edge cases
- 3/5: Basic AC, missing error scenarios
- 2/5: Minimal AC, only happy path
- 1/5: Vague or incomplete AC
- 0/5: No acceptance criteria
#### Criterion 4: Scope (15%)
**Check:**
- [ ] Story can be completed in 1-3 days
- [ ] Story does ONE thing (single responsibility)
- [ ] Scope boundaries are clear (out of scope noted)
- [ ] No hidden complexity
- [ ] Story points assigned (if team uses them)
**Scoring:**
- 5/5: Well-scoped, 1-3 day effort, clear boundaries
- 4/5: Appropriate scope, minor uncertainty
- 3/5: Slightly large but acceptable
- 2/5: Too large, should be split
- 1/5: Much too large or too vague to estimate
- 0/5: Scope undefined
#### Criterion 5: Dependencies (10%)
**Check:**
- [ ] External dependencies identified
- [ ] Internal dependencies identified
- [ ] Blocking dependencies resolved
- [ ] Required data/APIs available
- [ ] Team members available (if specific skills needed)
**Scoring:**
- 5/5: No dependencies OR all dependencies resolved
- 4/5: Dependencies identified, resolution planned
- 3/5: Dependencies identified, some unresolved
- 2/5: Dependencies partially identified
- 1/5: Dependencies unknown
- 0/5: Critical blocking dependencies
#### Criterion 6: Technical Feasibility (15%)
**Check:**
- [ ] Technical approach is outlined
- [ ] Required technologies are available
- [ ] Patterns to follow are identified
- [ ] Security considerations noted
- [ ] Performance requirements specified
- [ ] No technical unknowns requiring spikes
**Scoring:**
- 5/5: Clear technical path, all considerations addressed
- 4/5: Approach clear, minor unknowns acceptable
- 3/5: Approach outlined, some technical questions
- 2/5: Technical approach unclear
- 1/5: Significant technical unknowns
- 0/5: Technical feasibility not assessed
#### Criterion 7: AI-Readiness (10%)
**Check:**
- [ ] Requirements are unambiguous for AI interpretation
- [ ] Test cases can be derived from AC
- [ ] Code patterns to follow are documented
- [ ] No external human interaction required during development
- [ ] Verification criteria are objective
**Scoring:**
- 5/5: Excellent for AI-assisted development
- 4/5: Good, AI can handle with minor guidance
- 3/5: Acceptable, some human clarification may be needed
- 2/5: Challenging for AI, significant guidance needed
- 1/5: Not suitable for AI development
- 0/5: Requires human-only development
### Step 3: Calculate Overall Score
**Weighted Score Calculation:**
```python
# Each criterion is scored 0-5; weights mirror the DoR table above.
WEIGHTS = {
    "clarity": 0.15,
    "value": 0.15,
    "acceptance_criteria": 0.20,
    "scope": 0.15,
    "dependencies": 0.10,
    "technical_feasibility": 0.15,
    "ai_readiness": 0.10,
}

def compute_dor_score(scores):
    """Return (score out of 100, status) from per-criterion 0-5 ratings."""
    total_score = sum(scores[name] / 5 * weight
                      for name, weight in WEIGHTS.items()) * 100
    # Classification
    if total_score >= 80:
        status = "READY"
    elif total_score >= 60:
        status = "NEEDS_MINOR_WORK"
    elif total_score >= 40:
        status = "NEEDS_SIGNIFICANT_WORK"
    else:
        status = "NOT_READY"
    return round(total_score, 1), status
```
### Step 4: Generate Recommendations
For each criterion scoring below 4:
- Identify specific gaps
- Provide actionable recommendations
- Suggest specific text/content to add
## Output Format
Always structure skill output as:
````markdown
# DoR Validation Report: [STORY-KEY]
## Summary
- **Overall Score:** XX/100
- **Status:** [READY / NEEDS_MINOR_WORK / NEEDS_SIGNIFICANT_WORK / NOT_READY]
- **Recommendation:** [Proceed / Revise and re-validate]
## Criterion Scores
| Criterion | Score | Status | Weight |
|-----------|-------|--------|--------|
| Clarity | X/5 | [pass/fail emoji] | 15% |
| Value | X/5 | [pass/fail emoji] | 15% |
| Acceptance Criteria | X/5 | [pass/fail emoji] | 20% |
| Scope | X/5 | [pass/fail emoji] | 15% |
| Dependencies | X/5 | [pass/fail emoji] | 10% |
| Technical Feasibility | X/5 | [pass/fail emoji] | 15% |
| AI-Readiness | X/5 | [pass/fail emoji] | 10% |
## Detailed Analysis
### Clarity
**Score: X/5**
- [Specific findings]
- [What's good]
- [What needs improvement]
### Value
**Score: X/5**
- [Specific findings]
[... continue for each criterion ...]
## Required Actions (for status != READY)
### Critical (Must Fix)
1. [Action 1 with specific guidance]
2. [Action 2 with specific guidance]
### Recommended (Should Fix)
1. [Action 1]
2. [Action 2]
### Optional (Nice to Have)
1. [Action 1]
## Suggested Improvements
### For Acceptance Criteria
```
[Specific AC text to add]
```
### For Description
```
[Specific text to add]
```
---
**Validation Timestamp:** [datetime]
**Validator:** DoR Validator Skill v1.0
````
## Special Cases
### Story Already in Progress
```markdown
**Warning:** Story [STORY-KEY] is already in progress.
**Current Status:** In Development
**Recommendation:** DoR validation is typically done before development.
Proceeding with validation for documentation purposes.
[Continue with validation]
```
### Story is a Spike
```markdown
**Note:** Story [STORY-KEY] appears to be a spike/research task.
**Adjusted Criteria:**
- Acceptance Criteria: Research questions instead of Given/When/Then
- Scope: Time-boxed research period
- Technical Feasibility: N/A for discovery
[Adjusted validation]
```
### Story Lacks Critical Information
```markdown
**Error:** Story [STORY-KEY] is missing critical information.
**Missing:**
- [ ] Description is empty
- [ ] No acceptance criteria
**Status:** NOT_READY - Cannot validate
**Required Actions:**
1. Add story description
2. Add acceptance criteria
3. Re-run validation
```
## Integration with Other Skills
This skill works with:
- **story-creator**: Validates newly created stories
- **acceptance-criteria-generator**: Generates AC if missing
- **story-refiner**: Improves stories that fail validation
- **backlog-grooming agent**: Uses for backlog health assessment
---
When invoked, this skill will comprehensively evaluate the story against DoR criteria and provide actionable feedback to ensure the story is ready for AI-assisted development.

skills/story-creator/SKILL.md Normal file

@@ -0,0 +1,281 @@
---
name: story-creator
description: Generates structured user stories from natural language requirements with proper formatting and AI-ready content
allowed-tools: mcp__atlassian__*
mcpServers:
- atlassian
---
# Story Creator Skill
This skill transforms natural language requirements, conversations, or feature requests into well-structured user stories that are ready for AI-assisted development.
## When This Skill is Invoked
Claude will automatically use this skill when you mention:
- "create user story"
- "write a story for"
- "turn this into a user story"
- "generate story from requirements"
- "new user story"
## Capabilities
### 1. Requirements Parsing
Extract key information from natural language input:
- **User/Persona**: Who is the user?
- **Capability**: What do they want to do?
- **Value**: Why do they want to do it?
- **Context**: Background information
- **Constraints**: Limitations or requirements
### 2. Story Structure Generation
Generate stories in AI-ready format:
```markdown
## User Story: [TITLE]
**As a** [user type]
**I want** [capability]
**So that** [business value]
### Context
[Background information, current state, pain points]
### Technical Approach (if applicable)
[High-level implementation notes]
### Acceptance Criteria
[Generated by acceptance-criteria-generator skill]
### Definition of Done
- [ ] Code complete and reviewed
- [ ] Unit tests written and passing
- [ ] Integration tests passing
- [ ] Documentation updated
- [ ] No security vulnerabilities
```
## How to Use This Skill
### Step 1: Parse Input Requirements
**Extract from natural language:**
```
Input: "Users are complaining they can't export their dashboard data.
They need to be able to download it as a PDF for sharing with
stakeholders who don't have system access."
Extracted:
- Persona: Dashboard user
- Capability: Export dashboard data as PDF
- Value: Share insights with stakeholders without system access
- Context: Current pain point - no export functionality
- Constraints: Must be PDF format, stakeholders are external
```
### Step 2: Identify Story Type
Determine the appropriate story format:
| Input Pattern | Story Type | Template |
|---------------|------------|----------|
| Feature request | Feature Story | User story format |
| Bug report | Bug Fix Story | Bug template |
| Technical improvement | Technical Story | Technical template |
| Spike/Research | Spike Story | Spike template |
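The routing table above can be approximated with a keyword heuristic. A sketch — the keyword lists are assumptions, not an exhaustive rule set, and real input should fall back to asking the user when ambiguous:

```python
def classify_story_type(text: str) -> str:
    """Route input text to a story template using simple keyword cues."""
    lowered = text.lower()
    if any(k in lowered for k in ("bug", "broken", "error", "crash")):
        return "Bug Fix Story"
    if any(k in lowered for k in ("spike", "research", "investigate", "explore")):
        return "Spike Story"
    if any(k in lowered for k in ("refactor", "tech debt", "upgrade", "migrate")):
        return "Technical Story"
    return "Feature Story"  # default: treat as a feature request
```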
### Step 3: Generate Story Structure
**Feature Story Template:**
```markdown
## User Story: [Action-oriented title]
**As a** [specific user persona]
**I want** [clear capability statement]
**So that** [measurable business value]
### Context
- **Current State:** [How things work now]
- **Problem:** [What's not working]
- **Impact:** [Business/user impact]
### Technical Approach
- [Implementation approach 1]
- [Implementation approach 2]
- [Key considerations]
### Out of Scope
- [What this story does NOT include]
- [Deferred items]
### Dependencies
- [Other stories or systems this depends on]
### Acceptance Criteria
[To be generated by acceptance-criteria-generator skill]
```
**Bug Fix Template:**
```markdown
## Bug Fix: [Clear bug description]
### Current Behavior
[What's happening now - the bug]
### Expected Behavior
[What should happen]
### Steps to Reproduce
1. [Step 1]
2. [Step 2]
3. [Observe bug]
### Root Cause (if known)
[Technical explanation]
### Proposed Fix
[How to resolve]
### Acceptance Criteria
- Bug no longer reproducible
- [Regression criteria]
- [Additional validation]
```
**Technical Story Template:**
```markdown
## Technical Story: [Technical improvement title]
### Problem Statement
[What technical issue exists]
### Proposed Solution
[How to address it]
### Technical Details
- [Specific implementation notes]
- [Libraries/frameworks involved]
- [Data models affected]
### Benefits
- [Performance improvement]
- [Maintainability improvement]
- [Security improvement]
### Risks
- [Potential issues]
- [Migration concerns]
### Acceptance Criteria
[Technical success criteria]
```
### Step 4: Validate AI-Readiness
Ensure story is suitable for AI-assisted development:
**AI-Readiness Checklist:**
- [ ] Clear, unambiguous requirements
- [ ] Acceptance criteria can be expressed as testable statements
- [ ] Scope is well-defined (not too broad)
- [ ] Technical approach is feasible
- [ ] No external dependencies blocking work
- [ ] Security considerations noted
**Scoring:**
- 6/6: Excellent - Ready for AI development
- 4-5/6: Good - Minor clarifications needed
- 2-3/6: Fair - Needs refinement before development
- 0-1/6: Poor - Requires significant rework
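The checklist and score bands translate directly into a small scoring helper; a sketch, with the check names chosen here for illustration:

```python
def ai_readiness(checks: dict[str, bool]) -> tuple[int, str]:
    """Score the six-item AI-readiness checklist and map to a status band."""
    score = sum(checks.values())
    if score == 6:
        status = "Excellent"
    elif score >= 4:
        status = "Good"
    elif score >= 2:
        status = "Fair"
    else:
        status = "Poor"
    return score, status

checks = {
    "clear_requirements": True,
    "testable_ac_possible": True,
    "scope_well_defined": True,
    "feasible_approach": True,
    "no_blocking_dependencies": False,
    "security_noted": True,
}
print(ai_readiness(checks))  # (5, 'Good')
```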
### Step 5: Create in Jira (if requested)
**Use Atlassian MCP to create story:**
```
mcp__atlassian__jira_create_issue(
projectKey="PROJ",
issueType="Story",
summary="[Story title]",
description="[Full story content in Jira format]",
labels=["ai-ready"],
customFields={
"acceptanceCriteria": "[AC content]"
}
)
```
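Outside the MCP environment, the same call reduces to assembling the fields payload Jira's create-issue endpoint expects. A minimal sketch — the project key and labels are placeholders, and the custom-field id for acceptance criteria varies per Jira instance, so it is omitted here:

```python
def build_story_payload(project_key: str, summary: str,
                        description: str, labels: list[str]) -> dict:
    """Assemble the fields dict for a Jira create-issue request."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Story"},
            "summary": summary,
            "description": description,
            "labels": labels,
        }
    }

payload = build_story_payload(
    "PROJ", "Export dashboard to PDF",
    "As a dashboard user, I want to export data as PDF...",
    ["ai-ready"],
)
```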
## Output Format
Always structure skill output as:
```markdown
# Story Creation Result
## Generated Story
[Full story content]
## AI-Readiness Assessment
- **Score:** X/6
- **Status:** [Excellent/Good/Fair/Poor]
- **Issues:** [List any concerns]
## Recommendations
- [Suggestions for improvement]
## Next Steps
1. Review generated story
2. Invoke acceptance-criteria-generator for AC
3. Validate with dor-validator
4. Create in Jira (if ready)
```
## Best Practices
1. **Be Specific:** Avoid vague language like "improve" or "better"
2. **Focus on Value:** Always explain WHY the feature matters
3. **Keep Scope Manageable:** Stories should be completable in 1-3 days
4. **Include Context:** Background helps AI understand the domain
5. **Note Constraints:** Technical or business limitations
6. **Identify Persona:** Be specific about who the user is
## Error Handling
### Vague Requirements
```markdown
**Warning:** Input requirements are too vague to create a complete story.
**Missing Information:**
- User persona not specified
- Success criteria unclear
- Scope boundaries undefined
**Recommendation:** Please provide:
1. Who is the user?
2. What specific outcome do they need?
3. How will success be measured?
```
### Scope Too Large
```markdown
**Warning:** Requirements scope is too large for a single story.
**Suggested Breakdown:**
1. Story 1: [First capability]
2. Story 2: [Second capability]
3. Story 3: [Third capability]
**Recommendation:** Split into multiple stories for better AI-assisted development.
```
## Integration with Other Skills
This skill works with:
- **acceptance-criteria-generator**: Generate AC for the story
- **dor-validator**: Validate story meets Definition of Ready
- **story-refiner**: Improve existing stories
---
When invoked, this skill will analyze input requirements and generate a well-structured, AI-ready user story suitable for TDD development workflows.


@@ -0,0 +1,486 @@
---
name: story-refiner
description: Analyzes Jira stories and proposes improvements for quality, clarity, acceptance criteria, and completeness
allowed-tools: mcp__atlassian__*
mcpServers:
- atlassian
---
# Story Refiner Skill
This skill analyzes Jira stories and proposes specific, actionable improvements to enhance story quality, clarity, and completeness.
## When This Skill is Invoked
Claude will automatically use this skill when you mention:
- "refine story"
- "improve story quality"
- "enhance acceptance criteria"
- "fix story description"
- "story improvement suggestions"
## Capabilities
### 1. Story Analysis
Fetch and analyze a Jira story to identify improvement opportunities.
**Quality Dimensions Analyzed:**
- **Acceptance Criteria:** Completeness, testability, clarity
- **Description:** User value, context, technical details
- **Story Format:** User story structure (As a... I want... So that...)
- **Refinement:** Story points, labels, components
- **Documentation:** Supporting links, attachments, examples
### 2. Improvement Proposal Generation
Generate specific, actionable proposals for enhancing the story.
**Proposal Structure:**
````markdown
## Proposed Changes for [STORY-KEY]
### Current State Assessment
- **Quality Score:** 65/100 (Good)
- **Main Issues:** Missing acceptance criteria, unclear scope
- **Strengths:** Clear user value, good description length
### Recommended Changes
#### 1. Add Acceptance Criteria (High Priority)
**Current:** No acceptance criteria defined
**Proposed:**
```
Acceptance Criteria:
- Given a logged-in user
  When they click "Export Report"
  Then a PDF report downloads within 5 seconds
- Report includes all data from selected date range
- Error message shown if date range exceeds 1 year
```
**Impact:** Improves testability and clarity
#### 2. Enhance Description (Medium Priority)
**Current:** "Add export feature"
**Proposed:** Add technical context:
```
Technical Notes:
- Use existing PDF library (pdfkit)
- Add export button to dashboard header
- Store exports in /tmp (auto-cleanup after 1 hour)
- Max file size: 10MB
```
**Impact:** Reduces ambiguity, speeds development
#### 3. Add Story Points (Medium Priority)
**Recommendation:** Estimate as 5 points based on similar stories
**Rationale:** Comparable to PROJ-123 (PDF export feature)
````
### 3. Diff Generation
Show exact before/after for proposed changes to fields.
**Example:**
```diff
=== Description ===
- As a user, I want to export reports
+ As a user, I want to export dashboard reports to PDF format
+ So that I can share insights with stakeholders offline
+
+ Technical Implementation:
+ - Add "Export to PDF" button in dashboard header
+ - Use pdfkit library for PDF generation
+ - Include all visible widgets and date range
+ - Max export size: 10MB
=== Acceptance Criteria ===
+ Acceptance Criteria:
+ - Given a logged-in user viewing the dashboard
+ When they click "Export to PDF"
+ Then a PDF file downloads within 5 seconds
+ - Report includes dashboard title and selected date range
+ - Report includes all visible widgets with current data
+ - Error message displays if export exceeds size limit
```
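The field-by-field before/after view can be produced mechanically with Python's standard `difflib`; a minimal sketch (the `=== Field ===` heading convention follows the example above):

```python
import difflib

def field_diff(field_name: str, before: str, after: str) -> str:
    """Render a unified diff for one Jira field, headed by its name."""
    lines = list(difflib.unified_diff(
        before.splitlines(), after.splitlines(), lineterm="",
    ))
    body = "\n".join(lines[3:])  # drop the ---/+++/@@ header lines
    return f"=== {field_name} ===\n{body}"

print(field_diff(
    "Description",
    "As a user, I want to export reports",
    "As a user, I want to export dashboard reports to PDF format",
))
```

Dropping the header lines keeps only the `-`/`+` content lines, which is usually all a reviewer needs; multi-hunk diffs would need the `@@` markers kept.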
### 4. Impact Assessment
Estimate the impact of proposed changes on story quality.
**Metrics:**
- **Quality Score Improvement:** Before vs. After
- **Confidence Level:** High/Medium/Low that changes will help
- **Effort to Implement:** Time to apply changes (Low/Medium/High)
## How to Use This Skill
### Step 1: Fetch Story from Jira
**Use Atlassian MCP to get story details:**
```
mcp__atlassian__jira_get_issue(
issueKey="PROJ-123",
fields=["summary", "description", "acceptanceCriteria", "storyPoints", "labels", "components", "status", "priority", "assignee"]
)
```
### Step 2: Analyze Story Quality
**Assess each quality dimension:**
1. **Acceptance Criteria Check:**
- Look for AC in custom fields (acceptanceCriteria, AC, acceptance_criteria, DoD)
- Check description for "Acceptance Criteria:" section
- Evaluate completeness: Are all paths covered?
- Evaluate testability: Are criteria measurable?
2. **Description Quality:**
- Check for user story format (As a... I want... So that...)
- Assess length and detail level (>100 words ideal)
- Verify context explains WHY (user value/problem)
- Check for technical details where appropriate
3. **Refinement Indicators:**
- Story points assigned?
- Labels/components tagged?
- Supporting documentation linked?
- Sprint assigned?
4. **Clarity & Completeness:**
- Is scope clear (what's included/excluded)?
- Are edge cases mentioned?
- Is technical approach defined (if needed)?
- Are dependencies noted?
### Step 3: Generate Improvement Proposals
**For each identified gap, create a specific proposal:**
```markdown
### [Priority Level] Improvement: [What to Fix]
**Current State:**
[What's currently in the story]
**Proposed Change:**
[Specific text/content to add or modify]
**Rationale:**
[Why this change improves the story]
**Impact:**
- Quality Score: +X points
- Development Clarity: High/Medium/Low
- Testing Clarity: High/Medium/Low
```
### Step 4: Calculate Quality Impact
**Before Refinement Score:**
```
Acceptance Criteria: 0/30 (missing)
Description: 15/30 (minimal)
Refinement: 8/15 (partial)
Estimation: 0/15 (no points)
Documentation: 2/10 (minimal)
Total: 25/100 (Poor)
```
**After Refinement Score (Estimated):**
```
Acceptance Criteria: 28/30 (comprehensive)
Description: 26/30 (detailed with context)
Refinement: 14/15 (well-refined)
Estimation: 12/15 (points assigned)
Documentation: 8/10 (examples added)
Total: 88/100 (Excellent)
```
**Improvement: +63 points**
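The rubric's dimension maxima make the score a clamped sum; a sketch reproducing the before/after totals above:

```python
# Maximum points per quality dimension (from the rubric above)
DIMENSION_MAX = {
    "acceptance_criteria": 30,
    "description": 30,
    "refinement": 15,
    "estimation": 15,
    "documentation": 10,
}

def quality_score(subscores: dict[str, int]) -> int:
    """Sum dimension subscores, clamping each to its rubric maximum."""
    return sum(min(subscores.get(dim, 0), cap)
               for dim, cap in DIMENSION_MAX.items())

before = {"acceptance_criteria": 0, "description": 15,
          "refinement": 8, "estimation": 0, "documentation": 2}
after = {"acceptance_criteria": 28, "description": 26,
         "refinement": 14, "estimation": 12, "documentation": 8}
print(quality_score(before), quality_score(after))  # 25 88
```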
### Step 5: Format Output for User Review
**Structure the response:**
```markdown
# Story Refinement Proposal: [STORY-KEY]
## Executive Summary
- **Current Quality:** 25/100 (Poor)
- **Projected Quality:** 88/100 (Excellent)
- **Improvement:** +63 points
- **Confidence:** High
- **Effort:** 15 minutes to apply changes
## Priority Improvements
### 🔴 Critical: Add Acceptance Criteria
[Details...]
### 🟡 Important: Enhance Description
[Details...]
### 🟢 Optional: Add Technical Context
[Details...]
## Full Diff Preview
[Show complete before/after...]
## Next Steps
1. Review proposed changes
2. Confirm or modify suggestions
3. Apply changes to Jira story
```
## Refinement Templates
### Template 1: User Story Format
```
As a [user type]
I want [capability]
So that [business value]
Context:
[Why this matters, background information]
Technical Approach:
[High-level implementation notes if needed]
Acceptance Criteria:
- Given [precondition]
When [action]
Then [expected outcome]
- [Additional criteria...]
Definition of Done:
- [ ] Code complete and reviewed
- [ ] Unit tests pass
- [ ] Integration tested
- [ ] Documentation updated
```
### Template 2: Technical Story Format
```
Story: [Concise technical summary]
Problem:
[What technical issue or gap exists]
Proposed Solution:
[How to address it]
Technical Details:
- [Specific implementation notes]
- [Dependencies, libraries, approaches]
Acceptance Criteria:
- [Measurable technical outcomes]
- [Performance criteria if applicable]
Risks/Considerations:
- [Potential issues or dependencies]
```
### Template 3: Bug Fix Story Format
```
Bug: [Clear summary of the issue]
Current Behavior:
[What's happening now]
Expected Behavior:
[What should happen]
Steps to Reproduce:
1. [Step 1]
2. [Step 2]
3. [Observe issue]
Root Cause (if known):
[Technical explanation]
Fix Approach:
[How to resolve]
Acceptance Criteria:
- Bug no longer reproducible with original steps
- [Additional validation criteria]
- [Regression tests added]
```
## Common Improvement Patterns
### Pattern 1: Missing Acceptance Criteria
**Detection:** No AC field populated, no "Acceptance Criteria:" in description
**Proposal:**
```
Add specific, testable acceptance criteria:
- Define happy path scenario
- Cover edge cases (empty state, max limits, errors)
- Include non-functional requirements (performance, accessibility)
```
### Pattern 2: Vague Description
**Detection:** Description <50 words, no context on WHY
**Proposal:**
```
Enhance description with:
- User value statement (So that...)
- Background context (current state, pain point)
- Success criteria (what does "done" look like?)
```
### Pattern 3: Missing Technical Context
**Detection:** Complex story without implementation notes
**Proposal:**
```
Add technical section:
- Libraries/frameworks to use
- Integration points (APIs, services)
- Data models affected
- Security/performance considerations
```
### Pattern 4: No Estimation
**Detection:** Story points field empty
**Proposal:**
```
Suggest story points based on:
- Similar completed stories
- Complexity assessment
- Team velocity patterns
Recommend: [X] points
```
### Pattern 5: Poor Story Format
**Detection:** Doesn't follow "As a... I want... So that..." format
**Proposal:**
```
Rewrite in user story format:
- Identify user persona
- State desired capability
- Explain business value
```
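The detection rules above can be approximated with simple heuristics over the fetched story fields. A sketch — the field names and thresholds mirror the patterns but are assumptions about how the Jira data is shaped:

```python
import re

def detect_patterns(story: dict) -> list[str]:
    """Flag common story problems from raw Jira field values."""
    issues = []
    desc = story.get("description") or ""
    # Pattern 1: no AC field and no AC section in the description
    if not story.get("acceptance_criteria") and \
            "acceptance criteria" not in desc.lower():
        issues.append("missing_acceptance_criteria")
    # Pattern 2: description under 50 words
    if len(desc.split()) < 50:
        issues.append("vague_description")
    # Pattern 4: story points field empty
    if story.get("story_points") is None:
        issues.append("no_estimation")
    # Pattern 5: doesn't follow "As a... I want..." format
    if not re.search(r"as an? .+ i want", desc, re.IGNORECASE | re.DOTALL):
        issues.append("poor_story_format")
    return issues

print(detect_patterns({"description": "Add export feature",
                       "story_points": None}))
```

Pattern 3 (missing technical context) is harder to detect mechanically, since "complex story" is a judgment call best left to the analysis step.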
## Integration with Agent
**The story-refiner agent will:**
1. Invoke this skill with a story key
2. Receive improvement proposals
3. Format for user review
4. Confirm with user before applying changes
5. Update Jira story with approved changes
**This skill:**
- Fetches story data via Atlassian MCP
- Analyzes quality dimensions
- Generates specific improvement proposals
- Returns structured recommendations
- Does NOT update Jira (agent handles that after confirmation)
## Best Practices
1. **Be Specific:** Don't just say "improve description" - provide exact text
2. **Prioritize Changes:** Mark critical vs. optional improvements
3. **Explain Rationale:** Help user understand WHY each change helps
4. **Show Before/After:** Make it easy to see the difference
5. **Estimate Impact:** Quantify quality improvement
6. **Respect Existing Content:** Build on what's there, don't replace unnecessarily
7. **Use Templates:** Suggest story formats appropriate to type (feature/bug/tech)
## Output Format
Always structure skill output as:
1. **Story Analysis Summary:**
- Current quality score
- Key strengths
- Main weaknesses
2. **Prioritized Improvements:**
- Critical (must-have)
- Important (should-have)
- Optional (nice-to-have)
3. **Detailed Proposals:**
- For each improvement: current state, proposed change, rationale, impact
4. **Full Diff:**
- Complete before/after view of all proposed changes
5. **Quality Projection:**
- Estimated quality score after changes
- Confidence level
6. **Next Steps:**
- Clear actions for user
## Error Handling
### Story Not Found
```markdown
**Error:** Story [STORY-KEY] not found in Jira
**Possible Causes:**
- Story key is incorrect (case-sensitive)
- Story doesn't exist
- No permission to access story
**Solution:** Verify story key and access permissions
```
### Insufficient Permissions
```markdown
**Error:** Cannot read story [STORY-KEY] - permission denied
**Solution:** Ensure ATLASSIAN_API_KEY has read access to this project
```
### Story Already High Quality
```markdown
**Analysis:** Story [STORY-KEY] has excellent quality (92/100)
**Assessment:**
- ✓ Comprehensive acceptance criteria
- ✓ Detailed description with context
- ✓ Well-refined with story points
- ✓ Supporting documentation attached
**Recommendation:** No changes needed. Story is ready for development.
```
### Incomplete Story Data
```markdown
**Warning:** Story [STORY-KEY] has limited data available
**Missing Fields:**
- Story points (not tracked in this project)
- Custom AC field (project doesn't use this field)
**Analysis:** Proceeding with available data
[Continue with analysis based on description field only...]
```
## Data Quality Tracking
**Always report:**
```python
{
"story_key": "PROJ-123",
"quality_score_before": 25,
"quality_score_after": 88,
"improvement": 63,
"confidence": "high", # high/medium/low
"fields_analyzed": ["summary", "description", "acceptanceCriteria", "storyPoints"],
"fields_missing": ["components"],
"proposals_generated": 5,
"critical_proposals": 2,
"important_proposals": 2,
"optional_proposals": 1
}
```
---
When invoked, this skill will analyze the specified Jira story and return detailed, actionable improvement proposals that the story-refiner agent can present for user review and confirmation.