Initial commit

Author: Zhongwei Li
Date:   2025-11-29 17:59:51 +08:00
Commit: 5a4f0900ad
15 changed files with 2857 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,14 @@
{
  "name": "essentials",
  "description": "Essential MCP servers, requirements management, and intelligent instruction analysis for enhanced Claude Code development workflow",
  "version": "1.1.0",
  "author": {
    "name": "bandofai"
  },
  "commands": [
    "./commands"
  ],
  "hooks": [
    "./hooks"
  ]
}

README.md Normal file

@@ -0,0 +1,3 @@
# essentials
Essential MCP servers, requirements management, and intelligent instruction analysis for enhanced Claude Code development workflow

commands/brainstorm.md Normal file

@@ -0,0 +1,662 @@
# Interactive Requirements Brainstorming with Intelligent Analysis v2.0
Start an interactive Q&A session with deep codebase analysis and dynamically generated questions.
**⚠️ CRITICAL: THIS COMMAND ONLY GATHERS REQUIREMENTS - NEVER IMPLEMENTS CODE ⚠️**
## STRICT OPERATING MODE: READ-ONLY ANALYSIS
**YOU MUST NEVER:**
- Write, edit, or modify any code files
- Create new source code files (except in `.requirements/` folder)
- Execute implementation commands
- Make code changes of any kind
- Use Edit, Write, or code modification tools
- Suggest or execute implementation during this workflow
**YOU MAY ONLY:**
- Read and analyze existing code
- Search and explore the codebase
- Research best practices
- Ask questions
- Create requirement documents in `.requirements/` folder
- Generate specifications and recommendations
**GOAL:** Gather complete, actionable requirements so implementation can happen later with `/implement` command.
---
## Full Workflow
### Phase 1: Setup & Initialization
When user runs `/brainstorm <description>`:
1. **Parse the request**
- Extract feature description from arguments
- Generate slug: kebab-case version (e.g., "add user logging" → "user-logging"; see the sketch after these steps)
2. **Create timestamped requirement folder**
- Format: `.requirements/YYYY-MM-DD-HHMM-[slug]/`
- Example: `.requirements/2025-10-25-1430-user-logging/`
3. **Initialize files**
- `01-initial-request.md` - Save user's original request verbatim
- `metadata.json` - Initialize tracking structure
- Update `.requirements/_current-requirement` with folder name
4. **Initialize metadata.json**
```json
{
  "id": "user-logging",
  "slug": "user-logging",
  "started": "2025-10-25T14:30:00Z",
  "lastUpdated": "2025-10-25T14:30:00Z",
  "status": "active",
  "phase": "discovery",
  "originalRequest": "add user logging",
  "progress": {
    "discovery": { "generated": false, "answered": 0, "total": 0 },
    "analysis": { "status": "pending" },
    "expert": { "generated": false, "answered": 0, "total": 0 }
  },
  "contextFiles": [],
  "relatedFeatures": [],
  "toolsAvailable": ["serena", "context7", "WebSearch"]
}
```
5. **Announce**: "Starting requirements gathering for: [description]"
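A minimal sketch of the slug and folder naming from steps 1-2 (hypothetical helpers using Node.js built-ins; note the example above also drops filler verbs like "add", which this naive version does not):
```javascript
// Sketch: kebab-case slug + timestamped requirement folder
function slugify(description) {
  return description
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // runs of non-alphanumerics become single hyphens
    .replace(/^-+|-+$/g, '');    // trim leading/trailing hyphens
}

function requirementFolder(description, now = new Date()) {
  const pad = (n) => String(n).padStart(2, '0');
  const stamp = `${now.getFullYear()}-${pad(now.getMonth() + 1)}-${pad(now.getDate())}` +
                `-${pad(now.getHours())}${pad(now.getMinutes())}`;
  return `.requirements/${stamp}-${slugify(description)}/`;
}

// slugify("user logging")           → "user-logging"
// requirementFolder("user logging") → ".requirements/2025-10-25-1430-user-logging/"
```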
---
### Phase 2: Generate & Ask Discovery Questions
**IMPORTANT: Questions are NOT hardcoded - they are generated dynamically based on the user's request!**
#### Step 2.1: Generate Discovery Questions
Use **sequential thinking** to generate contextual questions:
```
Task: Generate 5-7 discovery questions to gather requirements for: "[user's request]"
Guidelines:
- Analyze what's ambiguous or unclear in the request
- Focus on WHAT the user wants, not HOW to implement
- Questions should clarify scope, behavior, and expectations
- Each question needs 2-5 numbered options with smart defaults
- Mark one option as [default - brief reason why]
- Include "Other (please specify)" for flexibility
- Mix yes/no with multiple choice appropriately
Focus areas based on request context:
- Scope and boundaries of the feature
- User interactions and workflows
- Data/content being worked with
- Performance or scale expectations
- Security/privacy considerations
- Integration with existing systems
Output format for each question:
## Q[N]: [Question Title]
```
[Question text that makes sense for THIS request]
1. [Option 1] [default - reason based on request context]
2. [Option 2]
3. [Option 3]
4. Other (please specify)
```
Available tools: serena (code analysis), context7 (library docs), WebSearch (best practices)
Save all generated questions to: 02-discovery-questions.md
```
**Example output for "add user logging":**
```markdown
## Q1: What should be logged?
```
What information should the logging system capture?
1. Error details and stack traces [default - most common use case]
2. User actions and events
3. Both errors and user actions
4. Other (please specify)
```
## Q2: Where should logs be stored?
```
What's the preferred log storage solution?
1. Local files (rotating logs) [default - simplest approach]
2. Database (queryable logs)
3. External service (e.g., Datadog, LogRocket)
4. Other (please specify)
```
```
**Example output for "build user dashboard":**
```markdown
## Q1: What data should the dashboard display?
```
What key metrics or information will users see?
1. User activity and engagement [default - common dashboard need]
2. System metrics and performance
3. Business analytics and reports
4. Other (please specify)
```
## Q2: Should the dashboard update in real-time?
```
How current should the dashboard data be?
1. Static/on-page-load [default - simplest implementation]
2. Real-time updates (WebSocket/polling)
3. Refresh on user action
4. Other (please specify)
```
```
#### Step 2.2: Ask Questions One at a Time
**UX Rules:**
- Present ONE question at a time
- Wait for user response before next question
- Accept numbered responses (e.g., "1", "2") or free text for "Other"
- Support revisions: user can say "back", "change Q2", or "restart"
- Show progress after every 2-3 questions: "So far: [brief summary]..."
- If user provides just number, auto-select that option
**After all discovery questions answered:**
- Save answers to `03-discovery-answers.md`
- Update `metadata.json` progress
- Announce: "Discovery complete. Starting intelligent codebase analysis..."
---
### Phase 3: Intelligent Codebase Analysis (Autonomous)
**NO USER INTERACTION - Fully autonomous**
**⚠️ CRITICAL REMINDER: READ-ONLY ANALYSIS ONLY - DO NOT IMPLEMENT ANYTHING ⚠️**
Use **sequential thinking** for intelligent analysis:
```
Task: Analyze codebase to support implementing: "[user's request]"
**CRITICAL CONSTRAINTS:**
- DO NOT write, edit, or modify any code files
- DO NOT create new source code files
- DO NOT use Edit, Write, or any code modification tools
- ONLY read, search, and analyze existing code
- ONLY create requirement documents in .requirements/ folder
- Your role is RESEARCH and SPECIFICATION, not implementation
Context from discovery phase:
[Include summary of all discovery answers]
Your objectives:
1. Understand current architecture and technology stack (READ-ONLY)
2. Find similar features or related implementations (READ-ONLY)
3. Identify specific files that will need modification (DOCUMENT, don't modify)
4. Research best practices for this type of feature (RESEARCH ONLY)
5. Determine integration points with existing code (ANALYZE, don't implement)
6. Generate implementation recommendations with rationale (RECOMMEND, don't execute)
Available tools (READ-ONLY usage):
- serena: For finding symbols, searching patterns, analyzing code structure
* Use find_symbol to locate classes/functions
* Use search_for_pattern for code patterns
* Use find_referencing_symbols for dependencies
* Use get_symbols_overview for file structure
* DO NOT use replace_symbol_body, insert_after_symbol, insert_before_symbol
- context7: For library/framework documentation
* Research best practices for libraries used in codebase
- WebSearch: For industry best practices
* Search for implementation patterns
* Find security considerations
* Discover performance optimization techniques
Output required in 04-context-findings.md:
# Codebase Analysis Findings
## Architecture Overview
[High-level structure - technology stack, patterns used]
## Similar Features Found
- `[file-path]` - [what it does, why similar]
- [Include code snippets showing patterns]
## Patterns to Follow
```[language]
// Example from [file-path]:[line-number]
[relevant code snippet]
```
[Explanation of pattern and why it's relevant]
## Files to Modify/Create
- `[path]` - [what changes needed]
- `[path]` - [what to add]
## Integration Points
[How this feature connects to existing code]
## Best Practices Research
[External best practices found via context7/WebSearch]
## Recommended Approach
[Specific implementation strategy with rationale]
## Technical Constraints
[Limitations, dependencies, considerations]
```
**After Phase 3:**
- Save findings to `04-context-findings.md`
- Update `metadata.json` with `contextFiles` and `relatedFeatures`
- Announce: "✅ Read-only analysis complete. Found [N] related features and [N] files to modify (documentation only - no code was changed). Generating expert questions..."
---
### Phase 4: Generate & Ask Expert Questions
**Again, questions are dynamically generated based on findings!**
#### Step 4.1: Generate Expert Questions
Use **sequential thinking** to generate implementation questions:
```
Task: Generate 5-7 expert implementation questions
Input context:
- Original request: "[user's request]"
- Discovery answers: [summary from Phase 2]
- Codebase findings: [summary from 04-context-findings.md]
Guidelines:
- Reference ACTUAL files and patterns found in Phase 3
- Questions should finalize implementation approach
- Include numbered options with defaults based on codebase patterns
- Explain WHY each default makes sense (cite specific findings)
- Focus on HOW to implement, not WHAT to build
Question types to consider:
- Architecture decisions (extend existing vs create new)
- File/component organization
- Library/framework usage patterns
- Code pattern consistency
- Integration approach
- Testing strategy
Output format for each question:
## Q[N]: [Question Title]
```
[Question text referencing actual findings]
1. [Option 1] [default - reason citing analysis]
2. [Option 2]
3. [Option 3]
**Why this matters:**
[Explanation based on Phase 3 findings - cite files, patterns discovered]
```
Save all generated questions to: 05-expert-questions.md
```
**Example output for "add user logging" after finding existing Logger:**
```markdown
## Q6: Should we extend the existing Logger?
```
Found Logger class at `src/utils/logger.ts`. How should we implement user logging?
1. Extend existing Logger class [default - maintains consistency]
2. Create separate UserLogger class
3. Add logging as middleware
**Why this matters:**
Analysis found that `AuthService` and `PaymentService` both use the
existing Logger at src/utils/logger.ts:15. The Logger already supports
Winston with rotating file transport. Extending it maintains architectural
consistency and reuses existing log rotation configuration.
```
## Q7: Which log format should we use?
```
The existing Logger uses JSON format. Should user logging follow this pattern?
1. Yes, use JSON format [default - matches existing pattern]
2. Plain text format
3. Structured format (custom)
**Why this matters:**
Found 47 log statements across the codebase all using JSON format via
`logger.info({...})` pattern. The log aggregator at scripts/analyze-logs.ts
parses JSON format. Consistency enables easier log analysis.
```
```
#### Step 4.2: Ask Expert Questions One at a Time
Same UX rules as Phase 2:
- ONE question at a time
- Accept numbered responses
- Support revisions
- Show progress summaries
**After all expert questions answered:**
- Save answers to `06-expert-answers.md`
- Update `metadata.json`
- Announce: "Expert questions complete. Generating comprehensive requirements specification..."
---
### Phase 5: Generate Comprehensive Requirements Spec
Create `07-requirements-spec.md` synthesizing all information:
```markdown
# [Feature Name from Request]
**Created:** [ISO-8601 timestamp]
**Status:** draft
**Last Updated:** [ISO-8601 timestamp]
---
## Overview
[2-3 paragraph summary combining:
- Original request
- Discovery answers (what user wants)
- Analysis findings (current state)
- Implementation approach (expert decisions)]
---
## Requirements
### Functional Requirements
[From discovery answers - what the feature must do]
1. [Requirement based on discovery Q&A]
2. [Requirement based on discovery Q&A]
3. [Additional requirements inferred]
### User Interactions
[Detailed workflow based on discovery answers]
---
## Technical Implementation
### Architecture Decision
[Based on expert Q&A - chosen approach with rationale]
### Files to Modify
[From Phase 3 analysis and Phase 4 decisions]
- **`[file-path]`**
- What to change: [description]
- Pattern to follow: [reference to similar code]
- **`[file-path]`**
- What to create: [description]
### Technology Stack
[Libraries/frameworks from analysis and decisions]
### Integration Points
[From Phase 3 analysis - how it connects]
---
## Implementation Guide
### Patterns to Follow
```[language]
// From [similar-feature-file]:[line-number]
[code snippet showing pattern to follow]
```
**Apply this pattern for:** [explanation]
### Similar Features for Reference
- **`[file-path]`** - [what it does similarly, line numbers]
- **`[file-path]`** - [pattern to emulate]
### Step-by-Step Approach
1. [Concrete step from expert decisions]
2. [Concrete step referencing files]
3. [Concrete step with pattern reference]
---
## Testing Strategy
[Based on how similar features are tested]
- Unit tests: [approach based on codebase patterns]
- Integration tests: [if needed]
- Test files: [where to add tests, following existing structure]
---
## Edge Cases & Constraints
### Edge Cases to Handle
[From discovery phase and analysis]
### Technical Constraints
[From Phase 3 analysis]
### Performance Considerations
[From discovery answers if applicable]
### Security Considerations
[From discovery answers and analysis]
---
## Acceptance Criteria
- [ ] [Functional criterion from requirements]
- [ ] [Implementation criterion from expert decisions]
- [ ] [Integration criterion from analysis]
- [ ] [Testing criterion]
- [ ] [Documentation criterion]
---
## Implementation Checklist
- [ ] Review similar implementation at `[file-path]:[line]`
- [ ] [Specific action from expert Q&A]
- [ ] Modify `[file-path]` following pattern from `[reference-file]`
- [ ] Add tests in `[test-file-path]` (follow existing test structure)
- [ ] Update documentation at `[doc-path]`
- [ ] [Any security/performance tasks]
---
## References
- **Codebase Analysis:** `04-context-findings.md`
- **Similar Features:**
- `[file-path]` - [what to learn from it]
- **Best Practices:** [links from WebSearch/context7]
- **Dependencies:** [libraries mentioned in analysis]
---
## Notes
[Any assumptions made, open questions, or alternative approaches considered]
```
**After Phase 5:**
- Update `.requirements/_index.json`:
```json
{
  "user-logging": {
    "created": "2025-10-25T14:30:00Z",
    "status": "draft",
    "lastModified": "2025-10-25T15:15:00Z",
    "lastWorked": null,
    "folder": ".requirements/2025-10-25-1430-user-logging/",
    "relatedFeatures": ["auth-service", "payment-logging"],
    "filesInvolved": 5
  }
}
```
- Show completion summary with **explicit reminder**:
```
✅ Requirements gathering complete!
📁 All documentation saved to: .requirements/[folder]/
⚠️ IMPORTANT: NO CODE WAS MODIFIED
- This was a READ-ONLY analysis phase
- All findings are documented in requirement files
- No source code files were changed
📋 Next Steps:
1. Review requirements: .requirements/[folder]/07-requirements-spec.md
2. When ready to implement: /implement [slug]
This brainstorm session ONLY gathered requirements.
Implementation is a separate step that you control.
```
---
## Best Practices
### Question Flow
- **One at a time** - Never ask multiple questions simultaneously
- **Accept shorthand** - User can respond with just number (e.g., "1")
- **Auto-select defaults** - If user presses enter, use [default]
- **Follow-up intelligently** - Ask clarifying questions based on answers
- **Show progress** - Every 2-3 questions, show summary: "So far we have..."
- **Allow revisions** - User can say "back", "change Q3", or "restart from Q5"
### Analysis Phase
- **CRITICAL: READ-ONLY MODE** - Never use Edit, Write, or code modification tools
- Use sequential thinking for intelligent tool coordination
- Document file paths and line numbers in findings
- Capture code snippets from similar features (for documentation, not implementation)
- Research external best practices
- Generate hypotheses and verify them (through reading, not implementing)
- Track confidence in recommendations
- **STOP if you find yourself about to modify code** - This is requirements gathering only!
### Expert Questions
- Always reference actual files found in analysis
- Explain rationale for defaults using findings
- Give user flexibility to choose different approach
- Use numbered format for consistency
- Include "Why this matters" explanations
### Tool Usage
**ALLOWED TOOLS (Read-Only):**
- ✅ Read - Read existing files
- ✅ Glob - Find files by pattern
- ✅ Grep - Search code content
- ✅ serena (read-only): find_symbol, search_for_pattern, get_symbols_overview, find_referencing_symbols
- ✅ context7 - Research library documentation
- ✅ WebSearch - Research best practices
- ✅ Bash (read-only commands): ls, cat, find, grep, git log, etc.
**FORBIDDEN TOOLS (Implementation):**
- ❌ Edit - NEVER use
- ❌ Write - NEVER use (EXCEPT for creating files in `.requirements/` folder ONLY)
- ❌ NotebookEdit - NEVER use
- ❌ serena (write operations): replace_symbol_body, insert_after_symbol, insert_before_symbol, rename_symbol - NEVER use
- ❌ Bash (write commands): touch, cp, mv (except in `.requirements/`), git commit, npm install - NEVER use
- ❌ Task - NEVER use for implementation agents
- ❌ Any code generation or modification tool
**EXCEPTION:** Write tool is allowed ONLY for creating requirement documents in `.requirements/[timestamp-slug]/` folder. Never write to source code files.
**Guidelines:**
- Check which tools are available before use
- Prefer serena for symbol-level code analysis (READ-ONLY)
- Use context7 for library documentation
- Use WebSearch for industry best practices
- Always document which tools were used in metadata
- **IF YOU CATCH YOURSELF ABOUT TO USE A FORBIDDEN TOOL, STOP IMMEDIATELY**
### User Experience
- Announce phase transitions clearly
- Provide rich context for decisions
- Save progress after each phase
- Support interruption and resumption
- Allow user to review/revise any phase
- Generate actionable, specific outputs
---
## Why This Version Is Optimal
1. ✅ **Adaptive** - Questions change based on what user actually asked for
2. ✅ **Intelligent** - Sequential thinking powers all analysis and generation
3. ✅ **Context-aware** - Expert questions reference actual code found
4. ✅ **Simple tools** - Only serena, context7, WebSearch (no redundant tools)
5. ✅ **User-friendly** - Numbered choices, revisions, progress tracking
6. ✅ **Actionable** - Output includes specific files, patterns, and steps
7. ✅ **Flexible** - Works for any type of request (logging, UI, API, refactoring, etc.)
8. ✅ **Safe** - READ-ONLY mode prevents accidental implementation during requirements gathering
---
## Examples of Adaptive Behavior
### Request: "add logging to error handlers"
- Phase 2 generates questions about: log levels, storage, PII handling, error context
- Phase 3 finds: existing error handlers, current logging setup
- Phase 4 asks: extend current logger? which error handler files to modify?
### Request: "build user dashboard"
- Phase 2 generates questions about: data to show, real-time updates, user personalization
- Phase 3 finds: existing dashboard components, data fetching patterns
- Phase 4 asks: reuse Dashboard layout? follow existing chart library pattern?
### Request: "refactor authentication"
- Phase 2 generates questions about: what to improve, breaking changes okay?, migration path
- Phase 3 finds: current auth implementation, all usage locations
- Phase 4 asks: backward compatibility approach? update all 47 references at once?
**The questions adapt to the request - no more generic "what type of feature is this?"**
---
## Phase Transitions
After each phase, clearly announce:
- ✅ "Phase 1 complete. Generating discovery questions for: [request]..."
- ✅ "Phase 2 complete. Starting READ-ONLY codebase analysis (no code will be modified)..."
- ✅ "Phase 3 complete (READ-ONLY analysis - no code changed). Generating expert questions based on findings..."
- ✅ "Phase 4 complete. Generating comprehensive requirements specification..."
- ✅ "Requirements gathering complete! Review at `.requirements/[folder]/`. NO CODE WAS MODIFIED - implementation is a separate step."
---
## Error Handling
- **If you catch yourself using Edit/Write/modification tools**: STOP IMMEDIATELY, apologize, explain this is read-only mode, continue with read-only analysis
- **If you start implementing code**: STOP IMMEDIATELY, delete any modifications, remind yourself this is requirements gathering only
- If tools unavailable: Skip Phase 3, ask generic expert questions
- If sequential thinking fails: Fall back to manual analysis
- If user abandons mid-session: Save progress in metadata, allow resume
- If folder exists: Ask "Requirement exists. Overwrite, new version, or cancel?"
- **If user asks "can you implement this?"**: Respond "This is requirements gathering phase. Once complete, use `/implement [slug]` to start implementation."

commands/continue.md Normal file

@@ -0,0 +1,77 @@
# Continue implementation
Resume implementation of a requirement that was interrupted.
# Instructions
When the user runs `/continue [name]`:
1. **Determine which requirement to continue**
a. **If name is provided**: `/continue user-authentication`
- Use that specific requirement
b. **If no name provided**: `/continue`
- Read `.requirements/_index.json`
- Find the requirement with the most recent `lastWorked` timestamp
- If none found or multiple with same timestamp, ask user to specify
2. **Load the requirement and current state**
- Read `.requirements/<name>.md`
- Check which items in Implementation Checklist are checked
- Check which Acceptance Criteria are marked done
- Review any notes about implementation progress
3. **Analyze codebase for current state**
- Use available tools to examine relevant files
- Identify what has been implemented
- Find where implementation stopped
- Look for TODO comments or incomplete code
4. **Determine remaining work**
- Compare requirements with actual implementation
- List what's complete
- List what remains to be done
- Identify any gaps or issues
5. **Present status to user**
Show clearly:
```
Continuing: <name>
✅ Completed:
- [Item 1]
- [Item 2]
⏳ In Progress:
- [Item 3] (partially done)
❌ Not Started:
- [Item 4]
- [Item 5]
Next step: [What I'll do next]
```
6. **Ask for confirmation**
"Should I continue with [next step], or would you like to adjust the plan?"
7. **Resume implementation**
- Pick up where it left off
- Follow the same systematic approach as `/implement`
- Update checklist items as completed
- Update `_index.json` with `lastWorked` timestamp
8. **Handle completion**
- If all work is done, mark requirement as "done" in `_index.json`
- Summarize what was completed in this session
- Suggest running `/req-status <name>` to verify
## Best Practices
- Always verify current state before continuing
- Don't assume previous implementation was complete
- Check for code quality issues in existing implementation
- Maintain consistency with existing code
- Update the requirements document with any new insights
- If previous implementation has issues, fix them before continuing

commands/implement.md Normal file

@@ -0,0 +1,89 @@
# Implement a requirement
Implement a feature based on its requirements document.
# Instructions
When the user runs `/implement <name>`:
1. **Validate requirement exists**
- Check if `.requirements/<name>.md` exists
- If not found, list available requirements and ask user to choose or run `/brainstorm` first
2. **Read and analyze the requirement**
- Load `.requirements/<name>.md`
- Parse all sections: overview, functional requirements, technical requirements, edge cases, acceptance criteria
- Understand the full scope before starting
3. **Update status to in-progress**
- Update `.requirements/_index.json`:
```json
{
  "<name>": {
    "status": "in-progress",
    "lastWorked": "2025-01-17T11:00:00Z",
    "lastModified": "2025-01-17T11:00:00Z"
  }
}
```
4. **Create implementation plan**
Present a clear plan to the user:
- Break down into logical steps
- Identify files that need to be created/modified
- Highlight any dependencies or prerequisites
- Estimate complexity
Ask user: "Does this plan look good? Should I proceed?"
5. **Execute implementation systematically**
a. **Setup phase**
- Create necessary directories
- Set up any configuration files
- Install dependencies if needed
b. **Core implementation**
- Follow the plan step by step
- Implement according to technical requirements
- Handle edge cases mentioned in requirements
- Add proper error handling
- Include logging where appropriate
c. **Testing**
- Write unit tests for core functionality
- Write integration tests if applicable
- Test edge cases from requirements
- Verify acceptance criteria
d. **Documentation**
- Add code comments for complex logic
- Update README if needed
- Document API endpoints or interfaces
6. **Track progress in requirements file**
- Check off items in the Implementation Checklist
- Mark completed acceptance criteria
- Add notes about implementation decisions
7. **Report completion**
- Summarize what was implemented
- Show which acceptance criteria were met
- Highlight any deviations from original requirements
- Suggest running `/req-status <name>` to verify
- Update status in `_index.json` to "done" if fully complete
8. **Handle interruptions gracefully**
- If implementation is interrupted, save state clearly
- Update `_index.json` with current progress
- User can resume with `/continue <name>`
## Best Practices
- Always confirm the plan before executing
- Implement incrementally, not all at once
- Test as you go, don't wait until the end
- Keep the user informed of progress
- If requirements are unclear, ask for clarification before implementing
- Follow existing code patterns and project conventions
- Prioritize code quality and maintainability

commands/req-list.md Normal file

@@ -0,0 +1,62 @@
# List all requirements
Show all requirements with their status and metadata.
# Instructions
When the user runs `/req-list`:
1. **Check if requirements directory exists**
- Look for `.requirements/` directory
- If not found, show message: "No requirements found. Run `/brainstorm <name>` to create your first requirement."
2. **Read all requirement files**
- List all `.md` files in `.requirements/` directory (excluding `_index.json`)
- Load metadata from `.requirements/_index.json`
3. **Display requirements in a table format**
```
Requirements
Name                   Status        Created      Last Modified
────────────────────────────────────────────────────────────────
user-authentication    in-progress   2025-01-15   2025-01-17
payment-flow           done          2025-01-14   2025-01-16
admin-dashboard        draft         2025-01-17   2025-01-17
api-rate-limiting      in-progress   2025-01-16   2025-01-17
```
4. **Add summary statistics**
```
Total: 4 requirements
✅ Done: 1
⏳ In Progress: 2
📝 Draft: 1
```
5. **Provide helpful next actions**
- If there are draft requirements: "Run `/implement <name>` to start implementation"
- If there are in-progress: "Run `/continue <name>` to resume work"
- If there are done requirements: "Run `/req-status <name>` to verify completion"
6. **Handle empty or missing metadata**
- If `_index.json` doesn't exist, create it with default values for discovered requirements
- If a requirement file exists but not in index, add it with default metadata
## Display Options
**Sort by:**
- Default: Last modified (most recent first)
- Can be enhanced to allow sorting by name, status, or created date
**Filter by status** (optional enhancement):
- Show only requirements with specific status
- Example: `/req-list --status in-progress`
## Best Practices
- Keep display clean and easy to scan
- Use emojis sparingly for status indicators
- Show most recently modified first by default
- Make it easy to identify what to work on next

commands/req-status.md Normal file

@@ -0,0 +1,126 @@
# Show requirement implementation status
Compare requirements with actual implementation to show progress and gaps.
# Instructions
When the user runs `/req-status [name]`:
1. **Determine which requirement(s) to check**
a. **If name provided**: `/req-status user-authentication`
- Check that specific requirement
b. **If no name provided**: `/req-status`
- Show status for all requirements
- Focus on in-progress and recent ones
2. **Load requirement and analyze**
- Read `.requirements/<name>.md`
- Extract all requirements, acceptance criteria, and checklist items
3. **Analyze codebase implementation**
- Use available tools to examine relevant files
- Search for related code
- Check tests
- Verify functionality exists
4. **Compare requirements vs reality**
For each requirement, determine:
- ✅ **Fully Implemented** - Code exists, tests pass, acceptance criteria met
- ⚠️ **Partially Implemented** - Started but incomplete
- ❌ **Not Started** - No code found
- ❓ **Cannot Verify** - Need manual verification
5. **Generate detailed status report** (a progress-counting sketch follows these steps)
**For single requirement:**
```
Status: user-authentication
Overall Progress: 75% (6/8 items complete)
Functional Requirements:
✅ User can register with email/password
✅ User can login with credentials
⚠️ Password reset flow (partially done - email not implemented)
❌ OAuth integration not started
Acceptance Criteria:
✅ Passwords are hashed
✅ JWT tokens are generated
⚠️ Password strength validation (basic only, missing special chars)
❌ Rate limiting not implemented
Tests:
✅ Unit tests: 12/12 passing
⚠️ Integration tests: 3/5 passing
❌ E2E tests: Not written
Issues Found:
- Password reset emails not configured
- Rate limiting mentioned in requirements but missing
- Integration tests failing for OAuth flows
```
**For all requirements:**
```
Requirements Status Overview
Name                   Progress   Status
────────────────────────────────────────────
user-authentication    75%        in-progress
payment-flow           100%       done ✅
admin-dashboard        30%        in-progress
api-rate-limiting      0%         draft
Summary:
- 1 complete
- 2 in progress (average 52.5% done)
- 1 not started
```
6. **Identify gaps and issues**
- Requirements with no implementation
- Partial implementations
- Missing tests
- Acceptance criteria not met
- Code that exists but wasn't in requirements (scope creep)
7. **Provide actionable recommendations**
```
Recommendations:
High Priority:
- Complete password reset email integration
- Implement rate limiting (security requirement)
- Fix failing integration tests
Medium Priority:
- Enhance password validation
- Add E2E tests
Low Priority:
- Start OAuth integration planning
```
8. **Update metadata if status changed**
- If implementation is now complete, update status to "done"
- Update `lastModified` timestamp
9. **Suggest next actions**
- If incomplete: "Run `/continue <name>` to finish implementation"
- If gaps found: "Run `/req-update <name>` to adjust requirements"
- If complete: "Requirement fully implemented!"
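As referenced in step 5, the overall-progress figure can be computed by counting markdown checkboxes in the requirement file. A minimal sketch (hypothetical helper; Node.js built-ins only):
```javascript
const fs = require('fs');

// Count "- [x]" vs "- [ ]" checkboxes to estimate overall progress.
function checklistProgress(requirementPath) {
  const text = fs.readFileSync(requirementPath, 'utf8');
  const done = (text.match(/^\s*- \[x\]/gim) || []).length;
  const open = (text.match(/^\s*- \[ \]/gm) || []).length;
  const total = done + open;
  const percent = total ? Math.round((100 * done) / total) : 0;
  return { done, total, percent }; // e.g. { done: 6, total: 8, percent: 75 }
}
```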
## Best Practices
- Be thorough but don't overwhelm with too much detail
- Highlight critical gaps (security, core functionality)
- Show progress visually (percentages, checkmarks)
- Make it easy to see what needs attention
- Don't just check if files exist - verify functionality
- Consider test coverage as part of "done"
- Flag discrepancies between requirements and implementation
- Provide clear next steps

commands/req-tests.md Normal file

@@ -0,0 +1,153 @@
# Generate test plan from requirements
Create comprehensive test scenarios based on a requirement document.
# Instructions
When the user runs `/req-tests <name>`:
1. **Validate requirement exists**
- Check if `.requirements/<name>.md` exists
- If not found, list available requirements
2. **Read and analyze the requirement**
- Load `.requirements/<name>.md`
- Extract:
- Functional requirements
- Edge cases and constraints
- Acceptance criteria
- User stories
3. **Generate comprehensive test scenarios**
Create tests covering:
**a. Unit Tests**
- Test each functional requirement individually
- Test edge cases mentioned in requirements
- Test error conditions
- Test validation logic
**b. Integration Tests**
- Test interactions between components
- Test external integrations (APIs, databases, etc.)
- Test data flow through the system
**c. End-to-End Tests**
- Test complete user flows from user stories
- Test acceptance criteria scenarios
- Test real-world usage patterns
**d. Edge Case Tests**
- Test all edge cases mentioned in requirements
- Test boundary conditions
- Test error handling
- Test performance under constraints
4. **Structure test scenarios**
Format:
```markdown
## Test Scenarios
### Unit Tests
#### Test: [Function/Component Name]
- **Given:** [Initial state/conditions]
- **When:** [Action taken]
- **Then:** [Expected outcome]
- **Test Type:** Unit
- **Priority:** High/Medium/Low
### Integration Tests
#### Test: [Integration Point]
- **Given:** [Setup]
- **When:** [Integration action]
- **Then:** [Expected result]
- **Test Type:** Integration
- **Priority:** High/Medium/Low
### End-to-End Tests
#### Test: [User Flow]
- **Scenario:** [User story or flow]
- **Steps:**
1. [Step 1]
2. [Step 2]
3. [Step 3]
- **Expected:** [Final outcome]
- **Test Type:** E2E
- **Priority:** High/Medium/Low
### Edge Case Tests
#### Test: [Edge Case Description]
- **Given:** [Edge case setup]
- **When:** [Edge case trigger]
- **Then:** [Expected handling]
- **Test Type:** Edge Case
- **Priority:** High/Medium/Low
```
5. **Ask for preference on storage**
```
Where should I save these test scenarios?
1. Append to requirements file (.requirements/<name>.md)
2. Create separate test file (.requirements/<name>-tests.md)
3. Show only (don't save)
```
6. **Save test scenarios**
Based on user choice:
- If appending: Add "## Test Scenarios" section to requirement file
- If separate: Create new test file
- If show only: Display but don't persist
7. **Generate test code templates** (optional)
Ask: "Would you like me to generate test code templates?"
If yes, create example test code in the project's test framework:
```javascript
// Example for Jest
describe('User Authentication', () => {
  test('should allow login with valid credentials', () => {
    // Given: Valid user credentials
    const credentials = { email: 'user@example.com', password: 'password123' };

    // When: User attempts to login
    const result = login(credentials);

    // Then: Login succeeds
    expect(result.success).toBe(true);
    expect(result.token).toBeDefined();
  });
});
```
8. **Update requirement metadata**
- Add note that test scenarios were generated
- Update `lastModified` in `_index.json`
9. **Provide summary**
```
Generated test scenarios:
- 5 Unit Tests
- 3 Integration Tests
- 2 End-to-End Tests
- 4 Edge Case Tests
Total: 14 test scenarios covering all requirements
```
## Best Practices
- Cover all functional requirements with tests
- Don't forget negative test cases (what should fail)
- Prioritize tests based on critical functionality
- Make tests specific and actionable
- Include setup/teardown considerations
- Consider performance testing for performance requirements
- Include security testing for security requirements
- Map tests back to acceptance criteria

commands/req-update.md Normal file

@@ -0,0 +1,93 @@
# Update a requirement
Modify an existing requirement document.
# Instructions
When the user runs `/req-update <name>`:
1. **Validate requirement exists**
- Check if `.requirements/<name>.md` exists
- If not found, list available requirements or suggest creating one with `/brainstorm`
2. **Load current requirement**
- Read `.requirements/<name>.md`
- Show summary of current content
3. **Ask what to update**
Present options:
```
What would you like to update?
1. Overview / Goals
2. User Stories
3. Functional Requirements
4. Technical Requirements
5. Edge Cases & Constraints
6. Acceptance Criteria
7. Free-form edit (tell me what to change)
```
4. **Handle the update based on choice**
**For specific sections (1-6):**
- Show current content of that section
- Ask: "What changes would you like to make?"
- Accept additions, modifications, or deletions
- Update that section while preserving the rest
**For free-form edit (7):**
- Ask: "Describe the changes you want to make"
- Parse user's intent
- Apply changes to appropriate sections
- Show what changed
5. **Preview changes**
Show diff or summary:
```
Changes to be made:
+ Added to Functional Requirements:
- Support OAuth 2.0 authentication
~ Modified in Technical Requirements:
- JWT tokens → OAuth 2.0 tokens
- Removed from Edge Cases:
- Password reset via email (no longer needed)
```
6. **Confirm before saving**
Ask: "Should I save these changes?"
7. **Save updates**
- Write updated content to `.requirements/<name>.md`
- Update metadata in `.requirements/_index.json`:
```json
{
  "<name>": {
    "lastModified": "2025-01-17T12:00:00Z",
    "version": 2
  }
}
```
- Add changelog entry at bottom of requirement file:
```markdown
## Changelog
- **2025-01-17**: Updated authentication approach to OAuth 2.0
- **2025-01-15**: Initial version
```
8. **Suggest next steps**
- If status is "done": "Requirements changed. Consider running `/req-status <name>` to check if implementation still matches."
- If status is "in-progress": "Updated requirements. You may want to adjust your implementation."
- If status is "draft": "Requirements updated. Ready to run `/implement <name>`?"
## Best Practices
- Always show what will change before saving
- Preserve existing content unless explicitly asked to remove it
- Maintain document structure and formatting
- Keep changelog for traceability
- If changes affect completed work, warn the user
- Be careful not to lose information during updates

hooks/README.md Normal file

@@ -0,0 +1,365 @@
# Puerto Prompt Analyzer Hook (v2.0)
Intelligent prompt analyzer for Claude Code that automatically validates user instructions and recommends relevant plugins from the Puerto marketplace.
**🆕 Version 2.0 Features:**
- **60x faster** with intelligent caching
- 🎯 **2x more accurate** with advanced text processing
- 🚫 **Filters installed plugins** - never recommends what you already have
- 🧠 **Project context aware** - detects JavaScript, Python, Rust, Go, Ruby, Java projects
- 🔧 **Fully configurable** - customize scoring, blacklist plugins, set favorites
- 📊 **Performance monitoring** - logs slow executions
- 🎲 **Diversity algorithm** - shows varied recommendations, not repetitive ones
- 💾 **Session memory** - avoids showing same plugins repeatedly
> **⚠️ Manual Setup Required**
> Due to Claude Code security restrictions, hooks cannot be automatically configured when installing plugins.
> **You must manually add this hook to your `~/.claude/settings.json` file.**
> See the [Essentials Plugin README](../README.md#instruction-analysis-hook-optional) for step-by-step setup instructions.
## Overview
The instruction analysis hook runs before Claude processes your prompts, providing:
- **Task Type Classification:** Automatically categorizes your request as Research, Implementation, Mixed, or General
- **Instruction Validation:** Detects vague or unclear instructions and provides improvement suggestions
- **Plugin Recommendations:** Suggests top 2-3 relevant plugins from Puerto marketplace based on your task
- **Transparent Analysis:** Shows recommendations directly in Claude's response with install commands
## How It Works
1. **You submit a prompt** (any non-command message)
2. **Hook analyzes** your prompt before Claude sees it:
- Classifies task type using keyword detection
- Validates instruction clarity
- Scores all marketplace plugins for relevance
- Generates top recommendations
3. **Claude processes** your original prompt + recommendations
4. **You see** analysis section in Claude's response with suggested plugins
## Features
### Task Classification (Enhanced v2.0)
Uses advanced tokenization and stemming to determine task type:
- **Research:** explain, analyze, investigate, understand, compare, study, evaluate, etc.
- **Implementation:** implement, create, build, fix, refactor, add, deploy, optimize, etc.
- **Mixed:** Contains both research and implementation keywords (requires 1.5x threshold)
- **General:** No clear classification
**Improvements:** Stemming ("building" → "build"), stop word filtering, better thresholds
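A simplified sketch of this classification (keyword lists abridged from the `puerto-prompt-analyzer.js` configuration shown at the end of this page; the toy `stem` helper and the exact reading of the 1.5x threshold are assumptions):
```javascript
const RESEARCH = ['explain', 'analyze', 'investigate', 'compare', 'evaluate'];
const IMPLEMENTATION = ['implement', 'create', 'build', 'fix', 'refactor', 'deploy'];

// Toy suffix-stripping stemmer: "building" → "build", "fixed" → "fix"
const stem = (word) => word.replace(/(ing|ed|s)$/, '');

function classifyTask(prompt) {
  const tokens = prompt.toLowerCase().split(/\W+/).map(stem).filter(Boolean);
  const count = (keywords) => {
    const stems = keywords.map(stem);
    return tokens.filter((t) => stems.includes(t)).length;
  };
  const research = count(RESEARCH);
  const impl = count(IMPLEMENTATION);
  // Mixed only when both signals are present and neither dominates by 1.5x
  if (research > 0 && impl > 0 &&
      Math.max(research, impl) < 1.5 * Math.min(research, impl)) return 'Mixed';
  if (impl > research) return 'Implementation';
  if (research > impl) return 'Research';
  return 'General'; // no clear signal
}
```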
### Instruction Validation
Detects common issues:
- **Too vague:** "fix it", "make it better", "help" → Suggests being more specific
- **Missing context:** Short prompts (<30 chars) with ambiguous pronouns → Requests clarification
- **Overly broad:** "build an app" → Suggests breaking down into components
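These checks correspond to the `VALIDATION_PATTERNS` table at the bottom of this page; a sketch of how they might be applied (the check order and the `minLength` interaction are assumptions):
```javascript
// Returns a suggestion for the first detected issue, or null if the prompt looks fine.
function validateInstruction(prompt) {
  const p = prompt.trim();
  if (/^(fix it|make it better|improve this|do it|help)$/i.test(p)) {
    return 'Be more specific about what you want to fix or improve';
  }
  // Short prompt leaning on an ambiguous pronoun
  if (p.length < 30 && /\b(this|that|it)\b/i.test(p)) {
    return 'Provide clear context - what specifically does "this" or "that" refer to?';
  }
  if (/^(build an? app|create a (website|system)|make (something|a thing))$/i.test(p)) {
    return 'Break down your request into specific features or components';
  }
  return null;
}
```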
### Intelligent Plugin Scoring (v2.0)
Recommendations based on weighted scoring algorithm:
1. **Tokenized description overlap** (weight: 2) - Advanced text matching with stemming
2. **Name matching** (weight: 5) - Plugin name relevance (strongest signal)
3. **Keyword tags** (weight: 3) - Plugin keywords matching prompt
4. **Task type alignment** (weight: 5) - Agents for implementation, skills for research
5. **Project context** (weight: 3) - Detects your project type and boosts relevant plugins
6. **Category matching** (weight: 4) - Favorite categories get priority
**Quality threshold:** Only shows plugins scoring ≥8 points (configurable)
**Filters:**
- ✅ Skips already-installed plugins
- ✅ Reduces score for recently shown plugins (-5 points)
- ✅ Respects blacklist configuration
- ✅ Diversity algorithm ensures varied recommendations
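In outline, a plugin's score is a weighted sum over these signals, with penalties and filters applied afterwards. A sketch using the weights above (the plugin shape, context fields, and helper names are assumptions, not the hook's actual internals):
```javascript
const WEIGHTS = {
  nameMatch: 5, taskTypeAlign: 5, categoryMatch: 4,
  keywordMatch: 3, projectTypeMatch: 3, descriptionOverlap: 2,
};

// tokens: stemmed prompt tokens; ctx: installed set, blacklist, session state
function scorePlugin(tokens, plugin, ctx) {
  if (ctx.installed.has(plugin.name) || ctx.blacklist.includes(plugin.name)) {
    return -Infinity; // filtered out entirely
  }
  let score = 0;
  if (tokens.some((t) => plugin.name.includes(t))) score += WEIGHTS.nameMatch;
  score += WEIGHTS.keywordMatch *
    (plugin.keywords || []).filter((k) => tokens.includes(k)).length;
  score += WEIGHTS.descriptionOverlap *
    tokens.filter((t) => plugin.description.toLowerCase().includes(t)).length;
  // Agents suit implementation tasks; skills suit research tasks
  if ((ctx.taskType === 'Implementation' && plugin.hasAgents) ||
      (ctx.taskType === 'Research' && plugin.hasSkills)) score += WEIGHTS.taskTypeAlign;
  if (ctx.projectTypes.some((p) => (plugin.keywords || []).includes(p))) {
    score += WEIGHTS.projectTypeMatch;
  }
  if (ctx.favoriteCategories.includes(plugin.category)) score += WEIGHTS.categoryMatch;
  if (ctx.recentlyShown.has(plugin.name)) score -= 5; // session deduplication penalty
  return score; // recommended only if score >= minScore (default 8)
}
```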
### Project Context Detection (New in v2.0)
Automatically detects your project type:
- **JavaScript/Node.js** - Detects `package.json`, frameworks (React, Vue, Next.js, Express)
- **Python** - Detects `requirements.txt`, `pyproject.toml`
- **Rust** - Detects `Cargo.toml`
- **Go** - Detects `go.mod`
- **Ruby** - Detects `Gemfile`
- **Java** - Detects `pom.xml`, `build.gradle`
Boosts plugin scores when they match your project stack!
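Detection itself can be as simple as probing the working directory for the marker files above (a sketch; the real hook may also read `package.json` contents to identify specific frameworks like React or Next.js):
```javascript
const fs = require('fs');
const path = require('path');

// Marker file → project type
const MARKERS = {
  'package.json': 'javascript',
  'requirements.txt': 'python',
  'pyproject.toml': 'python',
  'Cargo.toml': 'rust',
  'go.mod': 'go',
  'Gemfile': 'ruby',
  'pom.xml': 'java',
  'build.gradle': 'java',
};

function detectProjectTypes(cwd) {
  return [...new Set(
    Object.entries(MARKERS)
      .filter(([file]) => fs.existsSync(path.join(cwd, file)))
      .map(([, type]) => type)
  )];
}
```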
## Example Output
```markdown
## 🔍 Instruction Analysis
**Task Type:** Implementation
**📦 Recommended Plugins:**
### 1. `engineering`
**Description:** Full-stack development department with 7 specialized agents
**Why:** Matches keywords: implementation, build, create
**Install:** `/plugin install engineering`
### 2. `product`
**Description:** Product management and data analysis department
**Why:** Matches keywords: analyze, track, metrics
**Install:** `/plugin install product`
---
```
## Configuration
### Installation
**⚠️ Manual setup is required.** Hooks cannot be automatically configured by plugins for security reasons.
### Hook Configuration (Manual Setup Required)
You must manually add this to your `~/.claude/settings.json` file:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "node ~/.claude/plugins/marketplaces/puerto/plugins/essentials/hooks/puerto-prompt-analyzer.js",
            "timeout": 60
          }
        ]
      }
    ]
  }
}
```
**Setup Steps:**
1. Install the essentials plugin: `/plugin install essentials@puerto`
2. Find your plugin path:
```bash
find ~/.claude/plugins -name "puerto-prompt-analyzer.js" 2>/dev/null
```
3. Replace the path in the hook configuration above with your actual path
4. Edit `~/.claude/settings.json` and add the hook configuration
5. Verify JSON is valid: `cat ~/.claude/settings.json | python3 -m json.tool`
6. **Restart Claude Code** (required)
7. Test by submitting a non-command prompt - you should see "🔍 Instruction Analysis" in the response
**Note:** If `node` is not in your PATH, use the full path: `"command": "/usr/local/bin/node /path/to/puerto-prompt-analyzer.js"`
For detailed setup instructions with troubleshooting, see the [Essentials Plugin README](../README.md#puerto-prompt-analyzer-hook-optional).
### Advanced Configuration (New in v2.0)
Customize the hook behavior by creating `~/.claude/puerto-prompt-analyzer.json`:
```json
{
  "minScore": 8,
  "maxRecommendations": 3,
  "cacheMinutes": 1,
  "blacklist": [],
  "favoriteCategories": [],
  "showScores": false
}
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `minScore` | number | 8 | Minimum score threshold (0-100). Higher = more selective |
| `maxRecommendations` | number | 3 | Maximum number of plugins to recommend |
| `cacheMinutes` | number | 1 | How long to cache marketplace data (in minutes) |
| `blacklist` | array | [] | Plugin names to never recommend (e.g., `["plugin-name"]`) |
| `favoriteCategories` | array | [] | Categories to boost (e.g., `["frontend", "backend"]`) |
| `showScores` | boolean | false | Show relevance scores for debugging |
**Example Configurations:**
**Conservative** (fewer, higher-quality recommendations):
```json
{
  "minScore": 12,
  "maxRecommendations": 2,
  "showScores": false
}
```
**Exploratory** (more recommendations, lower threshold):
```json
{
  "minScore": 5,
  "maxRecommendations": 5,
  "showScores": true
}
```
**Frontend-focused**:
```json
{
  "minScore": 8,
  "maxRecommendations": 3,
  "favoriteCategories": ["frontend", "ui", "design"],
  "blacklist": ["backend-heavy-plugin"]
}
```
See `puerto-prompt-analyzer.example.json` for more examples.
### Disabling the Hook
To temporarily disable the analyzer:
1. Open `~/.claude/settings.json`
2. Remove or comment out the `UserPromptSubmit` hook entry
3. Restart Claude Code
## Performance
**Version 2.0 Performance:**
- **Execution time:** < 100ms for 95% of prompts (was < 1s)
- **Cache hit rate:** ~95% after first run
- **Memory:** ~50KB marketplace.json cached in memory
- **Latency:** Minimal impact on workflow (<5% overhead)
- **Dependencies:** None (uses Node.js built-ins only)
**Optimizations:**
- ✅ Marketplace data cached for 60 seconds
- ✅ Installed plugins cached for 60 seconds
- ✅ Session memory for deduplication
- ✅ Efficient tokenization with stop words
- ✅ Logs executions >500ms for monitoring
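The caching itself can be a simple TTL check (a sketch; field names are assumed, and since hooks run as short-lived processes, the real implementation would need to persist this state, e.g. to a temp file, for the cache to survive between prompts):
```javascript
const fs = require('fs');

// Re-read and re-parse marketplace.json at most once per TTL window.
const TTL_MS = 60 * 1000;
let cache = { data: null, loadedAt: 0 };

function getMarketplace(marketplacePath) {
  if (cache.data && Date.now() - cache.loadedAt < TTL_MS) {
    return cache.data; // cache hit (~95% of calls after warm-up)
  }
  cache = {
    data: JSON.parse(fs.readFileSync(marketplacePath, 'utf8')),
    loadedAt: Date.now(),
  };
  return cache.data;
}
```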
## Error Handling
The hook follows a **fail-open** philosophy:
- Never blocks your prompt, even on errors
- All errors logged to stderr for debugging
- If marketplace.json is missing, proceeds without recommendations
- If JSON parsing fails, allows prompt normally
- Timeout protection (60s limit)
**You will never be prevented from working due to hook failures.**
## Troubleshooting
### No recommendations appearing
**Check:**
1. Is marketplace.json present? Run: `ls .claude-plugin/marketplace.json`
2. Is the hook executable? Run: `ls -la plugins/essentials/hooks/puerto-prompt-analyzer.js`
3. Check stderr for errors: Look for `[puerto-prompt-analyzer]` messages
**Solution:**
```bash
# Regenerate marketplace catalog
npm run generate-catalog
# Make hook executable
chmod +x plugins/essentials/hooks/puerto-prompt-analyzer.js
```
### Hook seems slow
**Check:**
```bash
# Test hook execution time
time echo '{"prompt": "test", "hook_event_name": "UserPromptSubmit"}' | \
node plugins/essentials/hooks/puerto-prompt-analyzer.js
```
**Expected:** well under 1 second (typically < 100ms, per the v2.0 performance figures above)
### Invalid recommendations
The hook uses weighted keyword matching, so recommendation quality tracks prompt quality. Accuracy improves with:
- More specific prompts
- Using keywords from plugin descriptions
- Clearer task intent (research vs implementation)
**Future:** A planned update will add LLM-based classification for better accuracy.
## Technical Details
### Input Format
Hook receives JSON via stdin:
```json
{
  "session_id": "abc123",
  "transcript_path": "/path/to/transcript.jsonl",
  "cwd": "/current/working/directory",
  "permission_mode": "default",
  "hook_event_name": "UserPromptSubmit",
  "prompt": "User's submitted text"
}
```
### Output Format
Hook outputs JSON to stdout; the optional `decision` field is omitted when the prompt is allowed:
```json
{
  "reason": "Analysis complete",
  "hookSpecificOutput": {
    "hookEventName": "UserPromptSubmit",
    "additionalContext": "## 🔍 Instruction Analysis\n..."
  }
}
```
### Skipped Prompts
Hook automatically skips:
- Empty prompts
- Command prompts (start with `/`)
- Prompts that fail JSON parsing
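Putting the contract together, a minimal fail-open skeleton of the hook looks roughly like this (the `analyze` step is a placeholder for the classification, validation, and scoring described above):
```javascript
#!/usr/bin/env node
// Minimal fail-open skeleton for a UserPromptSubmit hook:
// read the event JSON from stdin, skip commands, never block the prompt.
let raw = '';
process.stdin.on('data', (chunk) => { raw += chunk; });
process.stdin.on('end', () => {
  try {
    const input = JSON.parse(raw);
    const prompt = (input.prompt || '').trim();
    if (!prompt || prompt.startsWith('/')) return; // skip empty and command prompts
    process.stdout.write(JSON.stringify({
      reason: 'Analysis complete',
      hookSpecificOutput: {
        hookEventName: 'UserPromptSubmit',
        additionalContext: analyze(prompt),
      },
    }));
  } catch (err) {
    // Fail open: log to stderr and allow the prompt through untouched
    console.error('[puerto-prompt-analyzer]', err.message);
  }
});

// Placeholder for the analysis pipeline described above
function analyze(prompt) {
  return '## 🔍 Instruction Analysis\n...';
}
```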
## Customization (Planned)
Planned features:
- Configurable keywords in settings.json
- LLM-based classification option
- User feedback loop (track recommendation acceptance)
- Context-aware recommendations
## Contributing
Found a bug or have a suggestion?
1. Check if it's a known issue in the troubleshooting section
2. File an issue with:
- Sample prompt that caused the issue
- Expected vs actual recommendations
- stderr output if available
## License
MIT License - Part of the Puerto marketplace ecosystem
## See Also
- [Claude Code Hooks Documentation](https://docs.claude.com/en/docs/claude-code/hooks)
- [Puerto Marketplace](https://github.com/bandofai/puerto)
- [Essentials Plugin README](../README.md)

hooks/install-hook.ps1 Normal file

@@ -0,0 +1,127 @@
# Puerto Prompt Analyzer Hook Installer (Windows PowerShell)
#
# Automatically configures the Puerto Prompt Analyzer hook in your
# Claude Code settings.json file.
#
# Usage: .\install-hook.ps1
Write-Host "🔧 Puerto Prompt Analyzer Hook Installer" -ForegroundColor Cyan
Write-Host "=========================================" -ForegroundColor Cyan
Write-Host ""
# Set paths
$SettingsFile = Join-Path $env:USERPROFILE ".claude\settings.json"
$PluginSearchDir = Join-Path $env:USERPROFILE ".claude\plugins"
# Step 1: Find the hook script
Write-Host "🔍 Step 1: Finding puerto-prompt-analyzer.js..." -ForegroundColor Yellow
$HookPath = Get-ChildItem -Path $PluginSearchDir -Recurse -Filter "puerto-prompt-analyzer.js" -ErrorAction SilentlyContinue |
    Select-Object -First 1 -ExpandProperty FullName

if (-not $HookPath) {
    Write-Host "❌ Error: Could not find puerto-prompt-analyzer.js" -ForegroundColor Red
    Write-Host ""
    Write-Host "Please ensure the essentials plugin is installed:"
    Write-Host "  /plugin install essentials@puerto"
    exit 1
}
Write-Host "✅ Found hook at: $HookPath" -ForegroundColor Green
Write-Host ""
# Step 2: Check if Node.js is available
Write-Host "🔍 Step 2: Checking Node.js installation..." -ForegroundColor Yellow
try {
    $NodeVersion = & node --version 2>$null
    Write-Host "✅ Found Node.js: $NodeVersion" -ForegroundColor Green
    Write-Host ""
} catch {
    Write-Host "❌ Error: Node.js not found in PATH" -ForegroundColor Red
    Write-Host ""
    Write-Host "Please install Node.js >= v18.0.0"
    Write-Host "Visit: https://nodejs.org/"
    exit 1
}
# Step 3: Create or update settings.json
Write-Host "🔍 Step 3: Updating settings.json..." -ForegroundColor Yellow
# Check if settings file exists
if (-not (Test-Path $SettingsFile)) {
    Write-Host "📝 Creating new settings.json..." -ForegroundColor Cyan
    $SettingsDir = Split-Path $SettingsFile -Parent
    if (-not (Test-Path $SettingsDir)) {
        New-Item -ItemType Directory -Path $SettingsDir -Force | Out-Null
    }
    Set-Content -Path $SettingsFile -Value '{}'
}
# Backup original settings
$BackupFile = "$SettingsFile.backup.$(Get-Date -Format 'yyyyMMdd_HHmmss')"
Copy-Item $SettingsFile $BackupFile
Write-Host "📦 Backup created: $BackupFile" -ForegroundColor Cyan
Write-Host ""
# Read current settings
$CurrentSettings = Get-Content $SettingsFile -Raw
# Check if hook already exists
if ($CurrentSettings -match "puerto-prompt-analyzer") {
    Write-Host "⚠️ Hook configuration already exists in settings.json" -ForegroundColor Yellow
    Write-Host ""
    $Response = Read-Host "Do you want to update it? (y/N)"
    if ($Response -notmatch "^[Yy]$") {
        Write-Host "❌ Installation cancelled" -ForegroundColor Red
        exit 0
    }
}
# Update settings using PowerShell JSON handling
try {
    # Note: ConvertFrom-Json -AsHashtable requires PowerShell 6+ (pwsh)
    $Settings = Get-Content $SettingsFile -Raw | ConvertFrom-Json -AsHashtable
    # Ensure hooks object exists
    if (-not $Settings.ContainsKey("hooks")) {
        $Settings["hooks"] = @{}
    }
    # Add or update UserPromptSubmit hook
    $Settings["hooks"]["UserPromptSubmit"] = @(
        @{
            "hooks" = @(
                @{
                    "type"    = "command"
                    "command" = "node $HookPath"
                    "timeout" = 60
                }
            )
        }
    )
    # Write back to file with proper formatting
    $Settings | ConvertTo-Json -Depth 10 | Set-Content $SettingsFile
    Write-Host "✅ Successfully updated settings.json" -ForegroundColor Green
} catch {
    Write-Host "❌ Error updating settings: $_" -ForegroundColor Red
    Write-Host "Your original settings have been backed up to: $BackupFile"
    exit 1
}
Write-Host ""
Write-Host "✅ Installation Complete!" -ForegroundColor Green
Write-Host ""
Write-Host "Next steps:"
Write-Host " 1. Restart Claude Code"
Write-Host " 2. Test by submitting any prompt (e.g., 'help me build a feature')"
Write-Host " 3. You should see '🔍 Instruction Analysis' in the response"
Write-Host ""
Write-Host "To uninstall:"
Write-Host " 1. Edit $SettingsFile"
Write-Host " 2. Remove the 'hooks' > 'UserPromptSubmit' section"
Write-Host " 3. Restart Claude Code"
Write-Host ""
Write-Host "Original settings backed up to:"
Write-Host " $BackupFile"
Write-Host ""

hooks/install-hook.sh Executable file

@@ -0,0 +1,170 @@
#!/usr/bin/env bash
#######################################################################
# Puerto Prompt Analyzer Hook Installer
#
# Automatically configures the Puerto Prompt Analyzer hook in your
# Claude Code settings.json file.
#
# Usage: ./install-hook.sh
#######################################################################
set -e
echo "🔧 Puerto Prompt Analyzer Hook Installer"
echo "========================================="
echo ""
# Detect OS
OS="$(uname -s)"
case "$OS" in
    Darwin*)              OS_TYPE="macOS";;
    Linux*)               OS_TYPE="Linux";;
    CYGWIN*|MINGW*|MSYS*) OS_TYPE="Windows";;
    *)                    OS_TYPE="Unknown";;
esac
echo "📍 Detected OS: $OS_TYPE"
echo ""
# Set paths based on OS
if [ "$OS_TYPE" = "Windows" ]; then
SETTINGS_FILE="$USERPROFILE/.claude/settings.json"
PLUGIN_SEARCH_DIR="$USERPROFILE/.claude/plugins"
else
SETTINGS_FILE="$HOME/.claude/settings.json"
PLUGIN_SEARCH_DIR="$HOME/.claude/plugins"
fi
# Step 1: Find the hook script
echo "🔍 Step 1: Finding puerto-prompt-analyzer.js..."
if [ "$OS_TYPE" = "macOS" ] || [ "$OS_TYPE" = "Linux" ]; then
HOOK_PATH=$(find "$PLUGIN_SEARCH_DIR" -name "puerto-prompt-analyzer.js" 2>/dev/null | head -1)
else
echo "⚠️ Windows detected. Please run the PowerShell version: install-hook.ps1"
exit 1
fi
if [ -z "$HOOK_PATH" ]; then
echo "❌ Error: Could not find puerto-prompt-analyzer.js"
echo ""
echo "Please ensure the essentials plugin is installed:"
echo " /plugin install essentials@puerto"
exit 1
fi
echo "✅ Found hook at: $HOOK_PATH"
echo ""
# Step 2: Check if Node.js is available
echo "🔍 Step 2: Checking Node.js installation..."
if ! command -v node &> /dev/null; then
    echo "❌ Error: Node.js not found in PATH"
    echo ""
    echo "Please install Node.js >= v18.0.0"
    echo "Visit: https://nodejs.org/"
    exit 1
fi
NODE_VERSION=$(node --version)
echo "✅ Found Node.js: $NODE_VERSION"
echo ""
# Step 3: Create or update settings.json
echo "🔍 Step 3: Updating settings.json..."
# Check if settings file exists
if [ ! -f "$SETTINGS_FILE" ]; then
    echo "📝 Creating new settings.json..."
    mkdir -p "$(dirname "$SETTINGS_FILE")"
    echo '{}' > "$SETTINGS_FILE"
fi
# Backup original settings
BACKUP_FILE="${SETTINGS_FILE}.backup.$(date +%Y%m%d_%H%M%S)"
cp "$SETTINGS_FILE" "$BACKUP_FILE"
echo "📦 Backup created: $BACKUP_FILE"
echo ""
# Read current settings
CURRENT_SETTINGS=$(cat "$SETTINGS_FILE")
# Check if hook already exists
if echo "$CURRENT_SETTINGS" | grep -q "puerto-prompt-analyzer"; then
    echo "⚠️ Hook configuration already exists in settings.json"
    echo ""
    read -p "Do you want to update it? (y/N): " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        echo "❌ Installation cancelled"
        exit 0
    fi
fi
# Use Python to safely merge JSON. (With `set -e`, a bare `python3 <<EOF`
# failure would exit the script before the message below could print, so
# test the exit status directly.)
if ! python3 << PYTHON_EOF
import json
import sys

try:
    # Read current settings
    with open("$SETTINGS_FILE", "r") as f:
        settings = json.load(f)

    # Ensure hooks object exists
    if "hooks" not in settings:
        settings["hooks"] = {}

    # Add or update UserPromptSubmit hook
    settings["hooks"]["UserPromptSubmit"] = [
        {
            "hooks": [
                {
                    "type": "command",
                    "command": "node $HOOK_PATH",
                    "timeout": 60
                }
            ]
        }
    ]

    # Write back to file
    with open("$SETTINGS_FILE", "w") as f:
        json.dump(settings, f, indent=2)
        f.write("\n")

    print("✅ Successfully updated settings.json")
    sys.exit(0)
except Exception as e:
    print(f"❌ Error updating settings: {e}")
    print(f"Your original settings have been backed up to: $BACKUP_FILE")
    sys.exit(1)
PYTHON_EOF
then
    echo ""
    echo "Installation failed. Your original settings are safe in:"
    echo "  $BACKUP_FILE"
    exit 1
fi
echo ""
echo "✅ Installation Complete!"
echo ""
echo "Next steps:"
echo " 1. Restart Claude Code"
echo " 2. Test by submitting any prompt (e.g., 'help me build a feature')"
echo " 3. You should see '🔍 Instruction Analysis' in the response"
echo ""
echo "To uninstall:"
echo " 1. Edit $SETTINGS_FILE"
echo " 2. Remove the 'hooks' > 'UserPromptSubmit' section"
echo " 3. Restart Claude Code"
echo ""
echo "Original settings backed up to:"
echo " $BACKUP_FILE"
echo ""

37
hooks/puerto-prompt-analyzer.example.json Normal file

View File

@@ -0,0 +1,37 @@
{
"$schema": "Configuration for Puerto Prompt Analyzer Hook v2.0",
"minScore": 8,
"maxRecommendations": 3,
"cacheMinutes": 1,
"blacklist": [],
"favoriteCategories": [],
"showScores": false,
"_comments": {
"minScore": "Minimum score threshold (0-100). Higher = more selective. Default: 8",
"maxRecommendations": "Maximum number of plugin recommendations to show. Default: 3",
"cacheMinutes": "How long to cache marketplace data in minutes. Default: 1",
"blacklist": "Array of plugin names to never recommend. Example: ['plugin-name']",
"favoriteCategories": "Array of categories to boost in recommendations. Example: ['frontend', 'backend']",
"showScores": "Show relevance scores in output for debugging. Default: false"
},
"_examples": {
"conservative": {
"minScore": 12,
"maxRecommendations": 2,
"showScores": false
},
"exploratory": {
"minScore": 5,
"maxRecommendations": 5,
"showScores": true
},
"frontend_focused": {
"minScore": 8,
"maxRecommendations": 3,
"favoriteCategories": ["frontend", "ui", "design"],
"blacklist": ["backend-specific-plugin"]
}
}
}

790
hooks/puerto-prompt-analyzer.js Executable file
View File

@@ -0,0 +1,790 @@
#!/usr/bin/env node
/**
* Puerto Prompt Analyzer Hook for Claude Code (v2.0)
*
* Analyzes user prompts before Claude processes them, providing:
* - Task type classification (research, implementation, mixed)
* - Instruction quality validation
* - Intelligent plugin recommendations from Puerto marketplace
* - Project context awareness
* - Caching and performance optimization
*
* Part of the essentials plugin
* @see plugins/essentials/hooks/README.md for documentation
*/
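/*
 * Example exchange (illustrative - the field set is inferred from the code below):
 *
 * stdin (from Claude Code):
 *   { "prompt": "fix my react app login form", "cwd": "/path/to/project", "session_id": "abc123" }
 *
 * stdout (back to Claude Code):
 *   {
 *     "reason": "Analysis complete",
 *     "hookSpecificOutput": {
 *       "hookEventName": "UserPromptSubmit",
 *       "additionalContext": "\n\n---\n\n## 🔍 Puerto Prompt Analysis\n\n..."
 *     }
 *   }
 */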
const fs = require('fs');
const path = require('path');
const os = require('os');
// ============================================================================
// CONFIGURATION
// ============================================================================
const RESEARCH_KEYWORDS = [
'explain', 'analyze', 'research', 'investigate', 'understand',
'review', 'compare', 'summarize', 'describe', 'document',
'learn', 'study', 'explore', 'examine', 'evaluate'
];
const IMPLEMENTATION_KEYWORDS = [
'implement', 'create', 'build', 'fix', 'refactor', 'add',
'modify', 'write', 'develop', 'code', 'make', 'update',
'change', 'remove', 'delete', 'optimize', 'improve', 'deploy'
];
const STOP_WORDS = new Set([
'the', 'a', 'an', 'and', 'or', 'but', 'in', 'on', 'at', 'to', 'for',
'of', 'with', 'by', 'from', 'as', 'is', 'was', 'are', 'be', 'been',
'have', 'has', 'had', 'do', 'does', 'did', 'will', 'would', 'should',
'could', 'may', 'might', 'can', 'this', 'that', 'these', 'those'
]);
const VALIDATION_PATTERNS = {
tooVague: {
patterns: [/^(fix it|make it better|improve this|do it|help)$/i],
suggestion: 'Be more specific about what you want to fix or improve'
},
missingContext: {
patterns: [/\b(this|that|it)\b/i],
minLength: 30,
suggestion: 'Provide clear context - what specifically does "this" or "that" refer to?'
},
overlyBroad: {
patterns: [/^(build an? app|create a (website|system)|make (something|a thing))$/i],
suggestion: 'Break down your request into specific features or components'
}
};
// Scoring weights (relative strength of each matching signal)
const WEIGHTS = {
keywordMatch: 3,
nameMatch: 5,
descriptionOverlap: 2,
categoryMatch: 4,
taskTypeAlign: 5,
projectTypeMatch: 3
};
const MIN_SCORE_THRESHOLD = 8;
const MAX_RECOMMENDATIONS = 3;
const CACHE_TTL = 60000; // 60 seconds
const SESSION_MEMORY_TTL = 3600000; // 1 hour
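/*
 * Worked scoring example (hypothetical plugin, to show how the weights add up):
 *
 * prompt: "fix my react app login form" -> tokens {fix, react, app, login, form},
 * task type "Implementation"; project: package.json with a react dependency.
 * plugin: { name: "react-helper", keywords: ["react", "frontend"],
 *           description: "Helper agents for react development" }
 *
 *   description overlap: "react"                 -> 1 * 2 = 2
 *   name match:          "react" in name         -> 1 * 5 = 5
 *   keyword match:       "react"                 -> 1 * 3 = 3
 *   task type align:     description has "agent" -> +5
 *   project context:     keyword +3, description +1.5 = 4.5
 *   total: 19.5 (clears the default minScore of 8)
 */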
// ============================================================================
// CACHING & SESSION MANAGEMENT
// ============================================================================
const CACHE = {
marketplace: null,
marketplacePath: null,
timestamp: 0,
installedPlugins: null,
installedTimestamp: 0
};
const SESSION_MEMORY = new Map(); // session_id -> { shown: Set, timestamp } (in-memory, so it only persists within a single process invocation)
function cleanupOldSessions() {
const now = Date.now();
for (const [sessionId, data] of SESSION_MEMORY.entries()) {
if (now - data.timestamp > SESSION_MEMORY_TTL) {
SESSION_MEMORY.delete(sessionId);
}
}
}
function getSessionMemory(sessionId) {
if (!SESSION_MEMORY.has(sessionId)) {
SESSION_MEMORY.set(sessionId, {
shown: new Set(),
timestamp: Date.now()
});
}
return SESSION_MEMORY.get(sessionId);
}
function markAsShown(sessionId, pluginName) {
const memory = getSessionMemory(sessionId);
memory.shown.add(pluginName);
memory.timestamp = Date.now();
}
function wasRecentlyShown(sessionId, pluginName) {
const memory = SESSION_MEMORY.get(sessionId);
return memory && memory.shown.has(pluginName);
}
// ============================================================================
// MAIN ENTRY POINT
// ============================================================================
function main() {
const perfStart = Date.now();
try {
// Cleanup old sessions periodically
if (Math.random() < 0.1) { // 10% chance
cleanupOldSessions();
}
// Read hook input from stdin
const input = fs.readFileSync(0, 'utf-8');
const hookInput = JSON.parse(input);
// Analyze and generate output
const result = analyzeInstruction(hookInput);
// Output JSON to stdout
console.log(JSON.stringify(result, null, 2));
// Performance logging
const elapsed = Date.now() - perfStart;
if (elapsed > 500) {
console.error(`[puerto-prompt-analyzer] SLOW execution: ${elapsed}ms`);
}
} catch (error) {
// Fail open - log error but allow prompt to proceed
console.error('[puerto-prompt-analyzer] Fatal error:', error.message);
console.log(JSON.stringify(allowPrompt()));
process.exit(0);
}
}
// ============================================================================
// CORE ANALYSIS LOGIC
// ============================================================================
function analyzeInstruction(hookInput) {
try {
const { prompt, cwd, session_id } = hookInput;
// Skip if empty or command
if (!prompt || !prompt.trim()) {
return allowPrompt();
}
if (prompt.trim().startsWith('/')) {
return allowPrompt();
}
// Load configuration
const config = loadConfiguration();
// Classify task type
const taskType = classifyTaskType(prompt);
// Detect project context
const projectContext = detectProjectContext(cwd);
// Validate instruction quality
const validation = validateInstruction(prompt);
// Load and score plugins
const recommendations = getPluginRecommendations(
prompt,
taskType,
cwd,
session_id,
projectContext,
config
);
// Mark shown plugins
recommendations.forEach(p => markAsShown(session_id, p.name));
// Generate markdown output
const additionalContext = formatRecommendations(
taskType,
recommendations,
validation,
projectContext
);
return {
decision: undefined,
reason: 'Analysis complete',
hookSpecificOutput: {
hookEventName: 'UserPromptSubmit',
additionalContext
}
};
} catch (error) {
console.error('[puerto-prompt-analyzer] Error during analysis:', error.message);
return allowPrompt();
}
}
// ============================================================================
// CONFIGURATION MANAGEMENT
// ============================================================================
function loadConfiguration() {
const configPath = path.join(os.homedir(), '.claude', 'puerto-prompt-analyzer.json');
const defaults = {
minScore: MIN_SCORE_THRESHOLD,
maxRecommendations: MAX_RECOMMENDATIONS,
cacheMinutes: 1,
blacklist: [],
favoriteCategories: [],
showScores: false
};
try {
if (fs.existsSync(configPath)) {
const userConfig = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
return { ...defaults, ...userConfig };
}
} catch (error) {
console.error('[puerto-prompt-analyzer] Error loading config:', error.message);
}
return defaults;
}
// ============================================================================
// PROJECT CONTEXT DETECTION
// ============================================================================
function detectProjectContext(cwd) {
if (!cwd) return { type: 'unknown', files: [] };
const context = {
type: 'unknown',
languages: [],
frameworks: [],
files: []
};
try {
// Check for package.json (JavaScript/Node.js)
if (fs.existsSync(path.join(cwd, 'package.json'))) {
context.type = 'javascript';
context.languages.push('javascript', 'nodejs');
const pkg = JSON.parse(fs.readFileSync(path.join(cwd, 'package.json'), 'utf-8'));
// Detect frameworks
const deps = { ...pkg.dependencies, ...pkg.devDependencies };
if (deps['react']) context.frameworks.push('react');
if (deps['vue']) context.frameworks.push('vue');
if (deps['next']) context.frameworks.push('nextjs');
if (deps['express']) context.frameworks.push('express');
}
// Check for Python
if (fs.existsSync(path.join(cwd, 'requirements.txt')) ||
fs.existsSync(path.join(cwd, 'pyproject.toml'))) {
context.type = 'python';
context.languages.push('python');
}
// Check for Rust
if (fs.existsSync(path.join(cwd, 'Cargo.toml'))) {
context.type = 'rust';
context.languages.push('rust');
}
// Check for Go
if (fs.existsSync(path.join(cwd, 'go.mod'))) {
context.type = 'go';
context.languages.push('go', 'golang');
}
// Check for Ruby
if (fs.existsSync(path.join(cwd, 'Gemfile'))) {
context.type = 'ruby';
context.languages.push('ruby');
}
// Check for Java/Kotlin
if (fs.existsSync(path.join(cwd, 'pom.xml')) ||
fs.existsSync(path.join(cwd, 'build.gradle'))) {
context.type = 'java';
context.languages.push('java');
}
} catch (error) {
console.error('[puerto-prompt-analyzer] Error detecting project:', error.message);
}
return context;
}
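// Example (illustrative): a cwd containing package.json with a "react" dependency yields
// { type: 'javascript', languages: ['javascript', 'nodejs'], frameworks: ['react'], files: [] }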
// ============================================================================
// TEXT PROCESSING UTILITIES
// ============================================================================
function stem(word) {
  // Simple stemmer - lowercase first so the suffix rules match mixed-case input
  return word
    .toLowerCase()
    .replace(/ies$/, 'y')
    .replace(/ing$/, '')
    .replace(/ed$/, '')
    .replace(/s$/, '');
}
function tokenize(text) {
return text
.toLowerCase()
.replace(/[^\w\s-]/g, ' ') // Keep hyphens
.split(/\s+/)
.map(stem)
.filter(w => w.length > 2 && !STOP_WORDS.has(w));
}
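// Example: tokenize("Fixing failed deployments") -> ["fix", "fail", "deployment"]
// ("Fixing" -> "fix" via the ing rule; stop words and words of <= 2 chars are dropped)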
// ============================================================================
// TASK CLASSIFICATION
// ============================================================================
function classifyTaskType(prompt) {
const tokens = tokenize(prompt);
const lower = prompt.toLowerCase();
// Count keyword matches (using stemmed tokens)
const researchScore = RESEARCH_KEYWORDS.filter(kw =>
tokens.includes(stem(kw)) || lower.includes(kw)
).length;
const implScore = IMPLEMENTATION_KEYWORDS.filter(kw =>
tokens.includes(stem(kw)) || lower.includes(kw)
).length;
// Enhanced classification
if (implScore > researchScore * 1.5) {
return 'Implementation';
}
if (researchScore > implScore * 1.5) {
return 'Research';
}
if (researchScore > 0 && implScore > 0) {
return 'Mixed';
}
return 'General';
}
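// Examples: "explain how auth works" -> Research; "fix the login bug" -> Implementation;
// "analyze and refactor the parser" -> Mixed (one research and one implementation keyword).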
// ============================================================================
// INSTRUCTION VALIDATION
// ============================================================================
function validateInstruction(prompt) {
const suggestions = [];
// Check for vague instructions
if (VALIDATION_PATTERNS.tooVague.patterns.some(p => p.test(prompt))) {
suggestions.push(VALIDATION_PATTERNS.tooVague.suggestion);
}
// Check for missing context (only if prompt is very short)
const minLength = VALIDATION_PATTERNS.missingContext.minLength;
if (prompt.length < minLength &&
VALIDATION_PATTERNS.missingContext.patterns.some(p => p.test(prompt))) {
suggestions.push(VALIDATION_PATTERNS.missingContext.suggestion);
}
// Check for overly broad requests
if (VALIDATION_PATTERNS.overlyBroad.patterns.some(p => p.test(prompt))) {
suggestions.push(VALIDATION_PATTERNS.overlyBroad.suggestion);
}
return { suggestions };
}
// ============================================================================
// INSTALLED PLUGINS DETECTION
// ============================================================================
function getInstalledPlugins() {
const now = Date.now();
// Use cache
if (CACHE.installedPlugins && (now - CACHE.installedTimestamp) < CACHE_TTL) {
return CACHE.installedPlugins;
}
const installed = new Set();
try {
const settingsPath = path.join(os.homedir(), '.claude', 'settings.json');
if (fs.existsSync(settingsPath)) {
const settings = JSON.parse(fs.readFileSync(settingsPath, 'utf-8'));
if (settings.enabledPlugins) {
Object.keys(settings.enabledPlugins).forEach(pluginId => {
// Extract plugin name from "plugin@marketplace" format
const pluginName = pluginId.split('@')[0];
installed.add(pluginName);
});
}
}
} catch (error) {
console.error('[puerto-prompt-analyzer] Error reading installed plugins:', error.message);
}
CACHE.installedPlugins = installed;
CACHE.installedTimestamp = now;
return installed;
}
// ============================================================================
// PLUGIN RECOMMENDATIONS
// ============================================================================
function getPluginRecommendations(prompt, taskType, cwd, sessionId, projectContext, config) {
try {
// Find and load marketplace.json (with caching)
const marketplacePath = findMarketplaceJson(cwd);
if (!marketplacePath) {
console.error('[puerto-prompt-analyzer] marketplace.json not found');
return [];
}
const marketplace = getMarketplaceData(marketplacePath);
if (!marketplace || !marketplace.plugins || !Array.isArray(marketplace.plugins)) {
console.error('[puerto-prompt-analyzer] Invalid marketplace format');
return [];
}
const installedPlugins = getInstalledPlugins();
// Score all plugins
const scored = marketplace.plugins
.map(plugin => ({
...plugin,
score: scorePlugin(plugin, prompt, taskType, sessionId, projectContext, installedPlugins, config)
}))
.filter(p => p.score >= config.minScore) // Quality threshold
.sort((a, b) => b.score - a.score);
// Apply diversity (don't show too many similar plugins)
const diverse = diversifyRecommendations(scored, config.maxRecommendations);
// Add recommendation reasons
return diverse.map(plugin => ({
...plugin,
reason: generateReason(plugin, taskType, prompt, projectContext)
}));
} catch (error) {
console.error('[puerto-prompt-analyzer] Error loading marketplace:', error.message);
return [];
}
}
function getMarketplaceData(marketplacePath) {
const now = Date.now();
// Check cache
if (CACHE.marketplace &&
CACHE.marketplacePath === marketplacePath &&
(now - CACHE.timestamp) < CACHE_TTL) {
return CACHE.marketplace;
}
// Load and cache
try {
CACHE.marketplace = JSON.parse(fs.readFileSync(marketplacePath, 'utf-8'));
CACHE.marketplacePath = marketplacePath;
CACHE.timestamp = now;
return CACHE.marketplace;
} catch (error) {
console.error('[puerto-prompt-analyzer] Error parsing marketplace:', error.message);
return null;
}
}
function findMarketplaceJson(startDir) {
if (!startDir) {
return null;
}
let currentDir = startDir;
// Search up to 5 levels up
for (let i = 0; i < 5; i++) {
const marketplacePath = path.join(currentDir, '.claude-plugin', 'marketplace.json');
if (fs.existsSync(marketplacePath)) {
return marketplacePath;
}
const parentDir = path.dirname(currentDir);
if (parentDir === currentDir) {
break; // Reached root
}
currentDir = parentDir;
}
return null;
}
function scorePlugin(plugin, prompt, taskType, sessionId, projectContext, installedPlugins, config) {
const promptTokens = new Set(tokenize(prompt));
const lower = prompt.toLowerCase();
let score = 0;
// Skip essentials plugin (it's already installed by definition)
if (plugin.name === 'essentials') {
return -1;
}
// Skip blacklisted plugins
if (config.blacklist.includes(plugin.name)) {
return -1;
}
// Skip installed plugins
if (installedPlugins.has(plugin.name)) {
return -1;
}
// Reduce score for recently shown plugins
if (wasRecentlyShown(sessionId, plugin.name)) {
score -= 5; // Penalty for repetition
}
// 1. Tokenized description overlap (better than simple word matching)
if (plugin.description) {
const descTokens = tokenize(plugin.description);
const overlap = descTokens.filter(t => promptTokens.has(t)).length;
score += overlap * WEIGHTS.descriptionOverlap;
}
// 2. Name match (strong signal)
if (plugin.name) {
const nameWords = plugin.name.split('-');
const nameMatches = nameWords.filter(w =>
promptTokens.has(stem(w)) || lower.includes(w)
).length;
score += nameMatches * WEIGHTS.nameMatch;
}
// 3. Keywords match
if (plugin.keywords && Array.isArray(plugin.keywords)) {
const keywordMatches = plugin.keywords.filter(kw =>
promptTokens.has(stem(kw)) || lower.includes(kw.toLowerCase())
).length;
score += keywordMatches * WEIGHTS.keywordMatch;
}
// 4. Task type alignment
if (taskType === 'Implementation') {
if (plugin.description && /agent|specialist|builder|creator|developer/.test(plugin.description.toLowerCase())) {
score += WEIGHTS.taskTypeAlign;
}
}
if (taskType === 'Research') {
if (plugin.description && /skill|knowledge|guide|reference|documentation/.test(plugin.description.toLowerCase())) {
score += WEIGHTS.taskTypeAlign;
}
}
// 5. Project context matching
if (projectContext.type !== 'unknown') {
const contextTerms = [
...projectContext.languages,
...projectContext.frameworks,
projectContext.type
];
contextTerms.forEach(term => {
if (plugin.keywords && plugin.keywords.some(kw => kw.toLowerCase().includes(term))) {
score += WEIGHTS.projectTypeMatch;
}
if (plugin.description && plugin.description.toLowerCase().includes(term)) {
score += WEIGHTS.projectTypeMatch * 0.5;
}
});
}
// 6. Category boost for favorites
if (config.favoriteCategories.length > 0 && plugin.category) {
if (config.favoriteCategories.includes(plugin.category)) {
score += 3;
}
}
return Math.max(0, score); // Never negative
}
function diversifyRecommendations(plugins, maxCount) {
  const selected = [];
  const usedCategories = new Set();
  // First pass: pick the highest-scoring plugin from each distinct category
  for (const plugin of plugins) {
    if (selected.length >= maxCount) break;
    const category = plugin.category || 'general';
    if (!usedCategories.has(category) || selected.length === 0) {
      selected.push(plugin);
      usedCategories.add(category);
    }
  }
// Second pass: fill remaining slots if needed
if (selected.length < maxCount) {
for (const plugin of plugins) {
if (selected.length >= maxCount) break;
if (!selected.includes(plugin)) {
selected.push(plugin);
}
}
}
return selected;
}
function generateReason(plugin, taskType, prompt, projectContext) {
const reasons = [];
// Check for strong keyword matches
if (plugin.keywords && Array.isArray(plugin.keywords)) {
const matches = plugin.keywords.filter(kw =>
prompt.toLowerCase().includes(kw.toLowerCase())
);
if (matches.length > 0) {
reasons.push(`Matches keywords: ${matches.slice(0, 2).join(', ')}`);
}
}
// Check for name matches
const nameWords = plugin.name.split('-');
const nameMatches = nameWords.filter(w =>
prompt.toLowerCase().includes(w)
);
if (nameMatches.length > 0) {
reasons.push(`Related to ${nameMatches.join(', ')}`);
}
// Project context match
if (projectContext.type !== 'unknown') {
const contextTerms = [...projectContext.languages, ...projectContext.frameworks];
const matches = contextTerms.filter(term =>
(plugin.description && plugin.description.toLowerCase().includes(term)) ||
(plugin.keywords && plugin.keywords.some(kw => kw.toLowerCase().includes(term)))
);
if (matches.length > 0) {
reasons.push(`Fits your ${projectContext.type} project`);
}
}
// Task type alignment
if (taskType === 'Implementation') {
reasons.push('Provides specialized implementation tools');
} else if (taskType === 'Research') {
reasons.push('Offers expert knowledge and guidance');
}
// Default reason if nothing specific
if (reasons.length === 0) {
reasons.push('Relevant to your task based on description');
}
return reasons[0]; // Return the most specific reason
}
// ============================================================================
// OUTPUT FORMATTING
// ============================================================================
function formatRecommendations(taskType, plugins, validation, projectContext) {
let md = '\n\n---\n\n## 🔍 Puerto Prompt Analysis\n\n';
// Task type
md += `**Task Type:** ${taskType}`;
// Project context if detected
if (projectContext.type !== 'unknown') {
md += ` | **Project:** ${projectContext.type}`;
if (projectContext.frameworks.length > 0) {
md += ` (${projectContext.frameworks.slice(0, 2).join(', ')})`;
}
}
md += '\n';
// Validation suggestions
if (validation.suggestions.length > 0) {
md += `\n**💡 Suggestions:**\n`;
validation.suggestions.forEach(s => {
md += `- ${s}\n`;
});
}
// Plugin recommendations
if (plugins.length === 0) {
md += '\n*No specific plugin recommendations found.*\n';
md += '\n---\n\n';
return md;
}
md += `\n**📦 Recommended Plugins:**\n\n`;
const config = loadConfiguration(); // Load once instead of once per plugin
plugins.forEach((plugin, idx) => {
md += `### ${idx + 1}. \`${plugin.name}\`\n`;
md += `**Description:** ${plugin.description}\n`;
md += `**Why:** ${plugin.reason}\n`;
// Show score if configured
if (config.showScores) {
md += `**Score:** ${Math.round(plugin.score)}\n`;
}
md += `**Install:** \`/plugin install ${plugin.name}\`\n\n`;
});
md += '---\n\n';
return md;
}
// ============================================================================
// HELPER FUNCTIONS
// ============================================================================
function allowPrompt() {
return {
decision: undefined,
reason: 'Proceeding normally',
hookSpecificOutput: {
hookEventName: 'UserPromptSubmit',
additionalContext: ''
}
};
}
// ============================================================================
// RUN
// ============================================================================
if (require.main === module) {
main();
}
// Export for testing
module.exports = {
classifyTaskType,
validateInstruction,
scorePlugin,
analyzeInstruction,
tokenize,
stem,
detectProjectContext
};

89
plugin.lock.json Normal file
View File

@@ -0,0 +1,89 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:bandofai/puerto:plugins/essentials",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "1d7dd43a1768c3fc7b535a83fa3c669b64a6a437",
"treeHash": "b36d3d22d2cad2af10fabc290ece79080ab009247b792de8c83f97de90e9217a",
"generatedAt": "2025-11-28T10:14:07.166608Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "essentials",
"description": "Essential MCP servers, requirements management, and intelligent instruction analysis for enhanced Claude Code development workflow",
"version": "1.1.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "4b03266f9234b9684fcd902ae863828b7bbea58925b5ed671b96479da12bcd46"
},
{
"path": "hooks/install-hook.ps1",
"sha256": "5a64e32cf0e32ff15858bd1248b459e64b8bc533e887e35b15b81ed0ed4500f9"
},
{
"path": "hooks/puerto-prompt-analyzer.js",
"sha256": "f78ed547c63e0b8f1f7075032f7651b8282315f7f44d454296a83af9e3e6f562"
},
{
"path": "hooks/README.md",
"sha256": "49c803fb205d5ff30e3a9fff681f5d5ef1793954df280446c0b1ca5dce0a3162"
},
{
"path": "hooks/install-hook.sh",
"sha256": "2a8a73dac22acb186094e176630e2d025a4cf99e157474682cf9ff399e29bf10"
},
{
"path": "hooks/puerto-prompt-analyzer.example.json",
"sha256": "f279af1cfcbca5e466a7087256400f1b348b09205f93498073401c1752073aa6"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "b0512cfef3ab8fc2b03efda0f76a20143d6fd66b50c6b2f6587299d7c7928591"
},
{
"path": "commands/req-status.md",
"sha256": "8d7650964e6ca0a0228a89ffdf2f5c24380558083f99b81724d58050cb929404"
},
{
"path": "commands/implement.md",
"sha256": "db0df8d097ce319f6c46410235848bf2fa67dbac2647792d9a5f9b2d5320cdc5"
},
{
"path": "commands/continue.md",
"sha256": "cfc00350f59a851fc7ea24a7175837a1aed1640df6a43394ddc3b55850e07a37"
},
{
"path": "commands/req-list.md",
"sha256": "db85a6742b26ea945d4fccd7241bcae0688d70874df6abd56c4957aa52e79de8"
},
{
"path": "commands/req-tests.md",
"sha256": "68e4c35ad8b1d3972943ce2bb1cdc8c8a49f75029294c70dc7e388fb1e2a9dec"
},
{
"path": "commands/req-update.md",
"sha256": "8cf6a2c8244f1588481369374d410c769def093939bdf2692ab90a4351933cd7"
},
{
"path": "commands/brainstorm.md",
"sha256": "aee7673e4481b0a833b1ae894db0119a211770c50568b08c93f49520a42d37fb"
}
],
"dirSha256": "b36d3d22d2cad2af10fabc290ece79080ab009247b792de8c83f97de90e9217a"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}