commit b727790a9e70d7dcb1d5e6e4484f8abc0e83082c Author: Zhongwei Li Date: Sat Nov 29 18:20:28 2025 +0800 Initial commit diff --git a/.claude-plugin/plugin.json b/.claude-plugin/plugin.json new file mode 100644 index 0000000..adaad10 --- /dev/null +++ b/.claude-plugin/plugin.json @@ -0,0 +1,19 @@ +{ + "name": "marketplace-validator-plugin", + "description": "Comprehensive validation for Claude Code marketplaces and plugins with quality scoring, security scanning, and automated checks", + "version": "1.0.0", + "author": { + "name": "Daniel Hofheinz", + "email": "daniel@danielhofheinz.com", + "url": "https://github.com/dhofheinz/open-plugins" + }, + "agents": [ + "./agents" + ], + "commands": [ + "./commands" + ], + "hooks": [ + "./hooks" + ] +} \ No newline at end of file diff --git a/README.md b/README.md new file mode 100644 index 0000000..dfe73ac --- /dev/null +++ b/README.md @@ -0,0 +1,3 @@ +# marketplace-validator-plugin + +Comprehensive validation for Claude Code marketplaces and plugins with quality scoring, security scanning, and automated checks diff --git a/agents/marketplace-validator.md b/agents/marketplace-validator.md new file mode 100644 index 0000000..9a77ab9 --- /dev/null +++ b/agents/marketplace-validator.md @@ -0,0 +1,299 @@ +--- +name: marketplace-validator +description: Proactive validation expert for Claude Code marketplaces and plugins. Use immediately when users mention validating, checking, or reviewing marketplaces or plugins, or when preparing for submission. +capabilities: [schema-validation, quality-assessment, security-scanning, best-practices-enforcement, automated-recommendations] +tools: Bash, Read, Glob, Grep +model: inherit +--- + +You are a marketplace validation expert specializing in Claude Code plugin ecosystems. Your mission is to ensure marketplaces and plugins meet quality standards before publication. + +## Core Responsibilities + +### 1. 
Automatic Validation Detection + +Proactively initiate validation when users: +- Mention "validate", "check", "review", or "verify" +- Reference marketplace.json or plugin.json files +- Ask about "quality", "standards", or "readiness" +- Prepare for "submission", "publication", or "release" +- Question whether something is "ready" or "correct" + +### 2. Intelligent Target Detection + +Automatically determine validation target: +- **Marketplace**: If `.claude-plugin/marketplace.json` exists +- **Plugin**: If `plugin.json` exists at plugin root +- **Both**: If user has both in different directories +- **Ask**: If target is ambiguous + +### 3. Validation Orchestration + +Execute appropriate validation: + +**For Marketplaces**: +```bash +/validate-marketplace [path] +``` + +**For Plugins**: +```bash +/validate-plugin [path] +``` + +**For Quick Checks**: +```bash +/validate-quick [path] +``` + +### 4. Comprehensive Analysis + +Analyze validation results and provide: + +**Critical Issues** (must fix before publication): +- Invalid JSON syntax +- Missing required fields +- Security vulnerabilities +- Invalid format violations + +**Important Warnings** (should fix for quality): +- Missing recommended fields +- Format inconsistencies +- Incomplete documentation +- Suboptimal descriptions + +**Recommendations** (improve discoverability): +- Add keywords for search +- Expand documentation +- Include CHANGELOG +- Add examples + +### 5. 
Educational Guidance + +For each issue, explain: +- **What's wrong**: Clear, specific description +- **Why it matters**: Impact on functionality, security, or user experience +- **How to fix**: Step-by-step remediation +- **Examples**: Show correct format + +## Validation Standards + +### Marketplace Standards + +**Required Fields**: +- `name`: Lowercase-hyphen format +- `owner.name`: Owner identification +- `owner.email`: Contact information +- `description`: 50-500 characters +- `plugins`: Array of plugin entries + +**Plugin Entry Requirements**: +- `name`: Lowercase-hyphen format +- `version`: Semantic versioning (X.Y.Z) +- `description`: 50-200 characters +- `author`: String or object with name +- `source`: Valid format (github:, URL, or path) +- `license`: SPDX identifier + +**Quality Criteria**: +- Keywords: 3-7 per plugin +- Category: One of 10 standard categories +- Complete metadata +- No exposed secrets + +### Plugin Standards + +**Required Fields**: +- `name`: Lowercase-hyphen format +- `version`: Semantic versioning (X.Y.Z) +- `description`: 50-200 characters +- `author`: String or object with name +- `license`: Valid identifier + +**Required Files**: +- `plugin.json`: Manifest at plugin root +- `README.md`: Comprehensive documentation +- `LICENSE`: License file + +**Component Validation**: +- Commands: Valid frontmatter with description +- Agents: Required fields (name, description) +- Hooks: Valid JSON, executable scripts +- MCP: Valid configuration + +**Security Requirements**: +- No exposed secrets or API keys +- No `.env` files with credentials +- HTTPS for external URLs +- Safe file permissions + +## Quality Scoring System + +Calculate and communicate quality scores: + +**90-100 (Excellent ⭐⭐⭐⭐⭐)**: +- Publication-ready +- All required fields present +- All recommended fields included +- No critical issues +- Minimal warnings + +**75-89 (Good ⭐⭐⭐⭐)**: +- Ready with minor improvements +- All required fields present +- Some recommended fields 
missing +- No critical issues +- Few warnings + +**60-74 (Fair ⭐⭐⭐)**: +- Needs work before publication +- All required fields present +- Many recommended fields missing +- Multiple warnings + +**40-59 (Needs Improvement ⭐⭐)**: +- Significant issues present +- Some required fields missing +- Many quality issues +- Not ready for publication + +**0-39 (Poor ⭐)**: +- Critical issues present +- Multiple required fields missing +- Security concerns +- Substantial work needed + +## Response Patterns + +### Pattern 1: Proactive Validation + +User: "I'm working on a plugin for the marketplace" + +Agent: +``` +I see you're working on a marketplace plugin! Let me validate it to ensure it meets quality standards. + +[Runs /validate-plugin automatically] + +[Provides detailed analysis of results] +[Lists critical issues, warnings, recommendations] +[Creates prioritized action plan] +``` + +### Pattern 2: Troubleshooting Guidance + +User: "My plugin.json has an error" + +Agent: +``` +Let me validate your plugin.json to identify the issue. + +[Runs /validate-plugin] + +I found the problem: +❌ Invalid JSON syntax on line 15 + Issue: Missing comma after "description" field + +To fix: +1. Open plugin.json +2. Add comma after line 15: "description": "...", +3. Validate: python3 -m json.tool plugin.json + +[Shows corrected example] +``` + +### Pattern 3: Pre-Submission Review + +User: "Is my plugin ready to submit to OpenPlugins?" + +Agent: +``` +Let me run a comprehensive validation to check publication readiness. + +[Runs /validate-plugin with detailed analysis] + +Quality Assessment: 85/100 - Good ⭐⭐⭐⭐ + +Your plugin is nearly ready! Address these items: + +Critical (must fix): +- None ✅ + +Recommended (should fix): +1. Add CHANGELOG.md for version tracking +2. Expand README with more examples +3. Add 2 more keywords for discoverability + +After fixing, your plugin will be excellent and ready for submission! 
+``` + +### Pattern 4: Comparative Analysis + +User: "Can you validate both my marketplace and plugin?" + +Agent: +``` +I'll validate both for you! + +[Validates marketplace] +Marketplace: 92/100 - Excellent ⭐⭐⭐⭐⭐ +✅ Ready for use + +[Validates plugin] +Plugin: 78/100 - Good ⭐⭐⭐⭐ +⚠️ Minor improvements recommended + +Summary: Your marketplace is excellent! The plugin is good but would benefit from adding keywords and expanding documentation. +``` + +## Best Practices You Enforce + +1. **Standards Compliance**: Always reference official documentation +2. **Security First**: Flag any potential security issues immediately +3. **User-Friendly**: Explain technical issues in accessible language +4. **Actionable**: Provide specific steps, not vague suggestions +5. **Encouraging**: Balance critique with positive feedback +6. **Educational**: Help users understand why standards exist + +## Documentation References + +Guide users to relevant documentation: +- Plugin reference: `https://raw.githubusercontent.com/dhofheinz/open-plugins/refs/heads/main/docs/plugins/plugins-reference.md` +- Marketplace guide: `https://raw.githubusercontent.com/dhofheinz/open-plugins/refs/heads/main/docs/plugins/plugin-marketplaces.md` +- OpenPlugins standards: `https://github.com/dhofheinz/open-plugins/blob/main/CONTRIBUTING.md` +- Best practices: `CLAUDE.md` in project root + +## Error Recovery + +When validation fails: +1. Clearly identify the error +2. Explain the impact +3. Provide remediation steps +4. Show correct examples +5. Offer to re-validate after fixes + +## Integration with Hooks + +Inform users about automatic validation: +``` +Tip: This plugin includes automatic validation hooks! +Whenever you edit marketplace.json or plugin.json, +quick validation runs automatically. You'll see +immediate feedback on critical issues. 
+
+For comprehensive quality assessment, use:
+- /validate-marketplace - Full marketplace analysis
+- /validate-plugin - Complete plugin review
+- /validate-quick - Fast essential checks
+```
+
+## Success Criteria
+
+Consider validation successful when:
+- No critical errors present
+- All required fields complete
+- Security checks pass
+- Quality score ≥ 75/100
+- User understands any remaining issues
+
+Always conclude with next steps and encouragement!
diff --git a/commands/best-practices/.scripts/category-validator.sh b/commands/best-practices/.scripts/category-validator.sh
new file mode 100755
index 0000000..a3b944b
--- /dev/null
+++ b/commands/best-practices/.scripts/category-validator.sh
@@ -0,0 +1,254 @@
+#!/usr/bin/env bash
+
+# ============================================================================
+# Category Validator
+# ============================================================================
+# Purpose: Validate category against OpenPlugins approved category list
+# Version: 1.0.0
+# Usage: ./category-validator.sh <category> [--suggest]
+# Returns: 0=valid, 1=invalid, 2=missing params
+# ============================================================================
+
+set -euo pipefail
+
+# OpenPlugins approved categories (exactly 10)
+APPROVED_CATEGORIES=(
+    "development"
+    "testing"
+    "deployment"
+    "documentation"
+    "security"
+    "database"
+    "monitoring"
+    "productivity"
+    "quality"
+    "collaboration"
+)
+
+# Category descriptions
+declare -A CATEGORY_DESCRIPTIONS=(
+    ["development"]="Code generation, scaffolding, refactoring"
+    ["testing"]="Test generation, coverage, quality assurance"
+    ["deployment"]="CI/CD, infrastructure, release automation"
+    ["documentation"]="Docs generation, API documentation"
+    ["security"]="Vulnerability scanning, secret detection"
+    ["database"]="Schema design, migrations, queries"
+    ["monitoring"]="Performance analysis, logging"
+    ["productivity"]="Workflow automation, task management"
+    ["quality"]="Linting, formatting, 
code review"
+    ["collaboration"]="Team tools, communication"
+)
+
+# ============================================================================
+# Functions
+# ============================================================================
+
+usage() {
+    cat <<EOF
+Usage: $0 <category> [--suggest]
+
+Validate category against OpenPlugins approved category list.
+
+Arguments:
+  category     Category name to validate (required)
+  --suggest    Show similar categories if invalid
+
+Approved Categories (exactly 10):
+   1. development    - Code generation, scaffolding
+   2. testing        - Test generation, coverage
+   3. deployment     - CI/CD, infrastructure
+   4. documentation  - Docs generation, API docs
+   5. security       - Vulnerability scanning
+   6. database       - Schema design, migrations
+   7. monitoring     - Performance analysis
+   8. productivity   - Workflow automation
+   9. quality        - Linting, formatting
+  10. collaboration  - Team tools, communication
+
+Exit codes:
+  0 - Valid category
+  1 - Invalid category
+  2 - Missing required parameters
+EOF
+    exit 2
+}
+
+# Approximate edit distance for similarity (not a full Levenshtein implementation)
+levenshtein_distance() {
+    local s1="$1"
+    local s2="$2"
+    local len1=${#s1}
+    local len2=${#s2}
+
+    # Simple implementation
+    if [ "$s1" = "$s2" ]; then
+        echo 0
+        return
+    fi
+
+    # Rough approximation: count different characters
+    local diff=0
+    local max_len=$((len1 > len2 ? 
len1 : len2))
+
+    for ((i=0; i<max_len; i++)); do
+        if [ "${s1:$i:1}" != "${s2:$i:1}" ]; then
+            diff=$((diff + 1))
+        fi
+    done
+
+    echo "$diff"
+}
diff --git a/commands/best-practices/.scripts/keyword-analyzer.py b/commands/best-practices/.scripts/keyword-analyzer.py
new file mode 100755
--- /dev/null
+++ b/commands/best-practices/.scripts/keyword-analyzer.py
+#!/usr/bin/env python3
+
+"""
+============================================================================
+Keyword Analyzer
+============================================================================
+Purpose: Analyze keyword quality and relevance for OpenPlugins standards
+Version: 1.0.0
+Usage: ./keyword-analyzer.py <keywords> [--min N] [--max N]
+Returns: 0=valid, 1=count violation, 2=quality issues, 3=missing params
+============================================================================
+"""
+
+import sys
+import re
+from typing import List, Tuple, Dict
+
+# Default constraints
+DEFAULT_MIN_KEYWORDS = 3
+DEFAULT_MAX_KEYWORDS = 7
+
+# Generic terms to avoid
+GENERIC_BLOCKLIST = [
+    'plugin', 'tool', 'utility', 'helper', 'app',
+    'code', 'software', 'program', 'system',
+    'awesome', 'best', 'perfect', 'great', 'super',
+    'amazing', 'cool', 'nice', 'good', 'excellent'
+]
+
+# OpenPlugins categories (should not be duplicated as keywords)
+CATEGORIES = [
+    'development', 'testing', 'deployment', 'documentation',
+    'security', 'database', 'monitoring', 'productivity',
+    'quality', 'collaboration'
+]
+
+# Common keyword types for balance checking
+FUNCTIONALITY_KEYWORDS = [
+    'testing', 'deployment', 'formatting', 'linting', 'migration',
+    'generation', 'automation', 'analysis', 'monitoring', 'scanning',
+    'refactoring', 'debugging', 'profiling', 'optimization'
+]
+
+TECHNOLOGY_KEYWORDS = [
+    'python', 'javascript', 'typescript', 'docker', 'kubernetes',
+    'react', 'vue', 'angular', 'node', 'bash', 'terraform',
+    'postgresql', 'mysql', 'redis', 'aws', 'azure', 'gcp'
+]
+
+
+def usage():
+    """Print usage information"""
+    print("""Usage: keyword-analyzer.py <keywords> [--min N] [--max N]
+
+Analyze keyword quality and relevance for OpenPlugins standards. 
+ +Arguments: + keywords Comma-separated list of keywords (required) + --min N Minimum keyword count (default: 3) + --max N Maximum keyword count (default: 7) + +Requirements: + - Count: 3-7 keywords (optimal: 5-6) + - No generic terms (plugin, tool, awesome) + - No marketing fluff (best, perfect, amazing) + - Mix of functionality and technology + - No redundant variations + +Good examples: + "testing,pytest,automation,tdd,python" + "deployment,kubernetes,ci-cd,docker" + "linting,javascript,code-quality" + +Bad examples: + "plugin,tool,awesome" (generic) + "test,testing,tests" (redundant) + "development" (only one, too generic) + +Exit codes: + 0 - Valid keyword set + 1 - Count violation (too few or too many) + 2 - Quality issues (generic terms, duplicates) + 3 - Missing required parameters +""") + sys.exit(3) + + +def parse_keywords(keyword_string: str) -> List[str]: + """Parse and normalize keyword string""" + if not keyword_string: + return [] + + # Split by comma, strip whitespace, lowercase + keywords = [k.strip().lower() for k in keyword_string.split(',')] + + # Remove empty strings + keywords = [k for k in keywords if k] + + # Remove duplicates while preserving order + seen = set() + unique_keywords = [] + for k in keywords: + if k not in seen: + seen.add(k) + unique_keywords.append(k) + + return unique_keywords + + +def check_generic_terms(keywords: List[str]) -> Tuple[List[str], List[str]]: + """ + Check for generic and marketing terms + + Returns: + (generic_terms, marketing_terms) + """ + generic_terms = [] + marketing_terms = [] + + for keyword in keywords: + if keyword in GENERIC_BLOCKLIST: + if keyword in ['awesome', 'best', 'perfect', 'great', 'super', 'amazing', 'cool', 'nice', 'good', 'excellent']: + marketing_terms.append(keyword) + else: + generic_terms.append(keyword) + + return generic_terms, marketing_terms + + +def check_redundant_variations(keywords: List[str]) -> List[Tuple[str, str]]: + """ + Find redundant keyword variations + + Returns: 
+ List of (keyword1, keyword2) pairs that are redundant + """ + redundant = [] + + for i, kw1 in enumerate(keywords): + for kw2 in keywords[i+1:]: + # Check if one is a substring of the other + if kw1 in kw2 or kw2 in kw1: + redundant.append((kw1, kw2)) + # Check for plural variations + elif kw1.rstrip('s') == kw2 or kw2.rstrip('s') == kw1: + redundant.append((kw1, kw2)) + + return redundant + + +def check_category_duplication(keywords: List[str]) -> List[str]: + """Check if any keywords exactly match category names""" + duplicates = [] + for keyword in keywords: + if keyword in CATEGORIES: + duplicates.append(keyword) + return duplicates + + +def analyze_balance(keywords: List[str]) -> Dict[str, int]: + """ + Analyze keyword balance across types + + Returns: + Dict with counts for each type + """ + balance = { + 'functionality': 0, + 'technology': 0, + 'other': 0 + } + + for keyword in keywords: + if keyword in FUNCTIONALITY_KEYWORDS: + balance['functionality'] += 1 + elif keyword in TECHNOLOGY_KEYWORDS: + balance['technology'] += 1 + else: + balance['other'] += 1 + + return balance + + +def calculate_quality_score( + keywords: List[str], + generic_terms: List[str], + marketing_terms: List[str], + redundant: List[Tuple[str, str]], + category_dups: List[str], + min_count: int, + max_count: int +) -> Tuple[int, List[str]]: + """ + Calculate quality score and list issues + + Returns: + (score out of 10, list of issues) + """ + score = 10 + issues = [] + + # Count violations + count = len(keywords) + if count < min_count: + score -= 5 + issues.append(f"Too few keywords ({count} < {min_count} minimum)") + elif count > max_count: + score -= 3 + issues.append(f"Too many keywords ({count} > {max_count} maximum)") + + # Generic terms + if generic_terms: + score -= len(generic_terms) * 2 + issues.append(f"Generic terms detected: {', '.join(generic_terms)}") + + # Marketing terms + if marketing_terms: + score -= len(marketing_terms) * 2 + issues.append(f"Marketing terms 
detected: {', '.join(marketing_terms)}") + + # Redundant variations + if redundant: + score -= len(redundant) * 2 + redundant_str = ', '.join([f"{a}/{b}" for a, b in redundant]) + issues.append(f"Redundant variations: {redundant_str}") + + # Category duplication + if category_dups: + score -= len(category_dups) * 1 + issues.append(f"Category name duplication: {', '.join(category_dups)}") + + # Single-character keywords + single_char = [k for k in keywords if len(k) == 1] + if single_char: + score -= len(single_char) * 2 + issues.append(f"Single-character keywords: {', '.join(single_char)}") + + # Balance check + balance = analyze_balance(keywords) + if balance['functionality'] == 0 and balance['technology'] == 0: + score -= 2 + issues.append("No functional or technical keywords") + + return max(0, score), issues + + +def suggest_improvements( + keywords: List[str], + generic_terms: List[str], + marketing_terms: List[str], + redundant: List[Tuple[str, str]], + min_count: int, + max_count: int +) -> List[str]: + """Generate improvement suggestions""" + suggestions = [] + + # Remove generic/marketing terms + if generic_terms or marketing_terms: + suggestions.append("Remove generic/marketing terms") + suggestions.append(" Replace with specific functionality (e.g., testing, deployment, formatting)") + + # Consolidate redundant variations + if redundant: + suggestions.append("Consolidate redundant variations") + for kw1, kw2 in redundant: + suggestions.append(f" Keep one of: {kw1}, {kw2}") + + # Add more keywords if too few + count = len(keywords) + if count < min_count: + needed = min_count - count + suggestions.append(f"Add {needed} more relevant keyword(s)") + suggestions.append(" Consider: specific technologies, use-cases, or functionalities") + + # Remove keywords if too many + elif count > max_count: + excess = count - max_count + suggestions.append(f"Remove {excess} least relevant keyword(s)") + + # Balance suggestions + balance = analyze_balance(keywords) + if 
balance['functionality'] == 0: + suggestions.append("Add functionality keywords (e.g., testing, automation, deployment)") + if balance['technology'] == 0: + suggestions.append("Add technology keywords (e.g., python, docker, kubernetes)") + + return suggestions + + +def main(): + """Main entry point""" + if len(sys.argv) < 2 or sys.argv[1] in ['-h', '--help']: + usage() + + keyword_string = sys.argv[1] + + # Parse optional arguments + min_count = DEFAULT_MIN_KEYWORDS + max_count = DEFAULT_MAX_KEYWORDS + + for i, arg in enumerate(sys.argv[2:], start=2): + if arg == '--min' and i + 1 < len(sys.argv): + min_count = int(sys.argv[i + 1]) + elif arg == '--max' and i + 1 < len(sys.argv): + max_count = int(sys.argv[i + 1]) + + # Parse keywords + keywords = parse_keywords(keyword_string) + + if not keywords: + print("ERROR: Keywords cannot be empty\n") + print("Provide 3-7 relevant keywords describing your plugin.\n") + print("Examples:") + print(' "testing,pytest,automation"') + print(' "deployment,kubernetes,ci-cd"') + sys.exit(3) + + # Analyze keywords + count = len(keywords) + generic_terms, marketing_terms = check_generic_terms(keywords) + redundant = check_redundant_variations(keywords) + category_dups = check_category_duplication(keywords) + balance = analyze_balance(keywords) + + # Calculate quality score + score, issues = calculate_quality_score( + keywords, generic_terms, marketing_terms, + redundant, category_dups, min_count, max_count + ) + + # Determine status + if score >= 9 and min_count <= count <= max_count: + status = "✅ PASS" + exit_code = 0 + elif count < min_count or count > max_count: + status = "❌ FAIL" + exit_code = 1 + elif score < 7: + status = "❌ FAIL" + exit_code = 2 + else: + status = "⚠️ WARNING" + exit_code = 0 + + # Print results + print(f"{status}: Keyword validation\n") + print(f"Keywords: {', '.join(keywords)}") + print(f"Count: {count} (valid range: {min_count}-{max_count})") + print(f"Quality Score: {score}/10\n") + + if issues: + 
print("Issues Found:")
+        for issue in issues:
+            print(f"  - {issue}")
+        print()
+
+    # Balance breakdown
+    print("Breakdown:")
+    print(f"  - Functionality: {balance['functionality']} keywords")
+    print(f"  - Technology: {balance['technology']} keywords")
+    print(f"  - Other: {balance['other']} keywords")
+    print()
+
+    # Score impact
+    if score >= 9:
+        print("Quality Score Impact: +10 points (excellent)\n")
+        if exit_code == 0:
+            print("Excellent keyword selection for discoverability!")
+    elif score >= 7:
+        print("Quality Score Impact: +7 points (good)\n")
+        print("Good keywords, but could be improved.")
+    else:
+        print("Quality Score Impact: 0 points (fix to gain +10)\n")
+        print("Keywords need significant improvement.")
+
+    # Suggestions
+    if issues:
+        suggestions = suggest_improvements(
+            keywords, generic_terms, marketing_terms,
+            redundant, min_count, max_count
+        )
+        if suggestions:
+            print("\nSuggestions:")
+            for suggestion in suggestions:
+                print(f"  {suggestion}")
+
+    sys.exit(exit_code)
+
+
+if __name__ == '__main__':
+    main()
diff --git a/commands/best-practices/.scripts/naming-validator.sh b/commands/best-practices/.scripts/naming-validator.sh
new file mode 100755
index 0000000..73bc95e
--- /dev/null
+++ b/commands/best-practices/.scripts/naming-validator.sh
@@ -0,0 +1,224 @@
+#!/usr/bin/env bash
+
+# ============================================================================
+# Naming Convention Validator
+# ============================================================================
+# Purpose: Validate plugin names against OpenPlugins lowercase-hyphen convention
+# Version: 1.0.0
+# Usage: ./naming-validator.sh <name> [--suggest]
+# Returns: 0=valid, 1=invalid, 2=missing params
+# ============================================================================
+
+set -euo pipefail
+
+# OpenPlugins naming pattern
+NAMING_PATTERN='^[a-z0-9]+(-[a-z0-9]+)*$'
+
+# Generic terms to avoid
+GENERIC_TERMS=("plugin" "tool" "utility" "helper" "app" "code" "software")
+
+# 
============================================================================
+# Functions
+# ============================================================================
+
+usage() {
+    cat <<EOF
+Usage: $0 <name> [--suggest]
+
+Validate plugin name against OpenPlugins naming convention.
+
+Arguments:
+  name         Plugin name to validate (required)
+  --suggest    Auto-suggest corrected name if invalid
+
+Pattern: ^[a-z0-9]+(-[a-z0-9]+)*$
+
+Valid examples:
+  - code-formatter
+  - test-runner
+  - api-client
+
+Invalid examples:
+  - Code-Formatter (uppercase)
+  - test_runner (underscore)
+  - -helper (leading hyphen)
+
+Exit codes:
+  0 - Valid naming convention
+  1 - Invalid naming convention
+  2 - Missing required parameters
+EOF
+    exit 2
+}
+
+# Convert to lowercase-hyphen format
+suggest_correction() {
+    local name="$1"
+    local corrected="$name"
+
+    # Convert to lowercase
+    corrected="${corrected,,}"
+
+    # Replace underscores with hyphens
+    corrected="${corrected//_/-}"
+
+    # Replace spaces with hyphens
+    corrected="${corrected// /-}"
+
+    # Remove non-alphanumeric except hyphens
+    corrected="$(echo "$corrected" | sed 's/[^a-z0-9-]//g')"
+
+    # Remove leading/trailing hyphens
+    corrected="$(echo "$corrected" | sed 's/^-*//;s/-*$//')"
+
+    # Replace multiple consecutive hyphens with single
+    corrected="$(echo "$corrected" | sed 's/-\+/-/g')"
+
+    echo "$corrected"
+}
+
+# Check for generic terms
+check_generic_terms() {
+    local name="$1"
+    local found_generic=()
+
+    for term in "${GENERIC_TERMS[@]}"; do
+        if [[ "$name" == "$term" ]] || [[ "$name" == *"-$term" ]] || [[ "$name" == "$term-"* ]] || [[ "$name" == *"-$term-"* ]]; then
+            found_generic+=("$term")
+        fi
+    done
+
+    if [ ${#found_generic[@]} -gt 0 ]; then
+        echo "Warning: Contains generic term(s): ${found_generic[*]}"
+        return 1
+    fi
+    return 0
+}
+
+# Find specific issues in the name
+find_issues() {
+    local name="$1"
+    local issues=()
+
+    # Check for uppercase
+    if [[ "$name" =~ [A-Z] ]]; then
+        local uppercase=$(echo "$name" | grep -o '[A-Z]' | 
tr '\n' ',' | sed 's/,$//') + issues+=("Contains uppercase characters: $uppercase") + fi + + # Check for underscores + if [[ "$name" =~ _ ]]; then + issues+=("Contains underscores instead of hyphens") + fi + + # Check for spaces + if [[ "$name" =~ \ ]]; then + issues+=("Contains spaces") + fi + + # Check for leading hyphen + if [[ "$name" =~ ^- ]]; then + issues+=("Starts with hyphen") + fi + + # Check for trailing hyphen + if [[ "$name" =~ -$ ]]; then + issues+=("Ends with hyphen") + fi + + # Check for consecutive hyphens + if [[ "$name" =~ -- ]]; then + issues+=("Contains consecutive hyphens") + fi + + # Check for special characters + if [[ "$name" =~ [^a-zA-Z0-9_\ -] ]]; then + issues+=("Contains special characters") + fi + + # Check for empty or too short + if [ ${#name} -eq 0 ]; then + issues+=("Name is empty") + elif [ ${#name} -eq 1 ]; then + issues+=("Name is too short (single character)") + fi + + # Print issues + if [ ${#issues[@]} -gt 0 ]; then + for issue in "${issues[@]}"; do + echo " - $issue" + done + return 1 + fi + return 0 +} + +# ============================================================================ +# Main +# ============================================================================ + +main() { + # Check for help flag + if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then + usage + fi + + local name="$1" + local suggest=false + + if [ $# -gt 1 ] && [ "$2" = "--suggest" ]; then + suggest=true + fi + + # Check if name is provided + if [ -z "$name" ]; then + echo "ERROR: Name cannot be empty" + exit 2 + fi + + # Validate against pattern + if [[ "$name" =~ $NAMING_PATTERN ]]; then + echo "✅ PASS: Valid naming convention" + echo "Name: $name" + echo "Format: lowercase-hyphen" + + # Check for generic terms (warning only) + if ! 
check_generic_terms "$name"; then
+            echo ""
+            echo "Recommendation: Use more descriptive, functionality-specific names"
+        fi
+
+        exit 0
+    else
+        echo "❌ FAIL: Invalid naming convention"
+        echo "Name: $name"
+        echo ""
+        echo "Issues Found:"
+        find_issues "$name"
+
+        if [ "$suggest" = true ]; then
+            local correction=$(suggest_correction "$name")
+            echo ""
+            echo "Suggested Correction: $correction"
+
+            # Validate the suggestion
+            if [[ "$correction" =~ $NAMING_PATTERN ]]; then
+                echo "✓ Suggestion is valid"
+            else
+                echo "⚠ Manual correction may be needed"
+            fi
+        fi
+
+        echo ""
+        echo "Required Pattern: ^[a-z0-9]+(-[a-z0-9]+)*$"
+        echo ""
+        echo "Valid Examples:"
+        echo "  - code-formatter"
+        echo "  - test-runner"
+        echo "  - api-client"
+
+        exit 1
+    fi
+}
+
+main "$@"
diff --git a/commands/best-practices/.scripts/semver-checker.py b/commands/best-practices/.scripts/semver-checker.py
new file mode 100755
index 0000000..f2793bb
--- /dev/null
+++ b/commands/best-practices/.scripts/semver-checker.py
@@ -0,0 +1,234 @@
+#!/usr/bin/env python3
+
+"""
+============================================================================
+Semantic Version Validator
+============================================================================
+Purpose: Validate version strings against Semantic Versioning 2.0.0
+Version: 1.0.0
+Usage: ./semver-checker.py <version> [--strict]
+Returns: 0=valid, 1=invalid, 2=missing params, 3=strict mode violation
+============================================================================
+"""
+
+import re
+import sys
+from typing import Tuple, Optional, Dict, List
+
+# Semantic versioning patterns
+STRICT_SEMVER_PATTERN = r'^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)$'
+FULL_SEMVER_PATTERN = r'^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$'
+
+
+def usage():
+    """Print usage information"""
+    print("""Usage: semver-checker.py <version> [--strict]
+ +Validate version string against Semantic Versioning 2.0.0 specification. + +Arguments: + version Version string to validate (required) + --strict Enforce strict MAJOR.MINOR.PATCH format (no pre-release/build) + +Pattern (strict): MAJOR.MINOR.PATCH (e.g., 1.2.3) +Pattern (full): MAJOR.MINOR.PATCH[-PRERELEASE][+BUILD] + +Valid examples: + - 1.0.0 (strict) + - 1.2.3 (strict) + - 1.0.0-alpha.1 (full) + - 1.2.3+build.20241013 (full) + +Invalid examples: + - 1.0 (missing PATCH) + - v1.0.0 (has prefix) + - 1.2.x (placeholder) + +Exit codes: + 0 - Valid semantic version + 1 - Invalid format + 2 - Missing required parameters + 3 - Strict mode violation (valid semver, but has pre-release/build) + +Reference: https://semver.org/ +""") + sys.exit(2) + + +def parse_semver(version: str) -> Optional[Dict[str, any]]: + """ + Parse semantic version string into components + + Returns: + Dict with major, minor, patch, prerelease, build + None if invalid format + """ + match = re.match(FULL_SEMVER_PATTERN, version) + if not match: + return None + + major, minor, patch, prerelease, build = match.groups() + + return { + 'major': int(major), + 'minor': int(minor), + 'patch': int(patch), + 'prerelease': prerelease or None, + 'build': build or None, + 'is_strict': prerelease is None and build is None + } + + +def find_issues(version: str) -> List[str]: + """Find specific issues with version format""" + issues = [] + + # Check for common mistakes + if version.startswith('v') or version.startswith('V'): + issues.append("Starts with 'v' prefix (remove it)") + + # Check for missing components + parts = version.split('.') + if len(parts) < 3: + issues.append(f"Missing components (has {len(parts)}, needs 3: MAJOR.MINOR.PATCH)") + elif len(parts) > 3: + # Check if extra parts are pre-release or build + if '-' not in version and '+' not in version: + issues.append(f"Too many components (has {len(parts)}, expected 3)") + + # Check for placeholders + if 'x' in version.lower() or '*' in version: + 
issues.append("Contains placeholder values (x or *)") + + # Check for non-numeric base version + base_version = version.split('-')[0].split('+')[0] + base_parts = base_version.split('.') + for i, part in enumerate(base_parts): + if not part.isdigit(): + component = ['MAJOR', 'MINOR', 'PATCH'][i] if i < 3 else 'component' + issues.append(f"{component} is not numeric: '{part}'") + + # Check for leading zeros + for i, part in enumerate(base_parts[:3]): + if len(part) > 1 and part.startswith('0'): + component = ['MAJOR', 'MINOR', 'PATCH'][i] + issues.append(f"{component} has leading zero: '{part}'") + + # Check for non-standard identifiers + if version in ['latest', 'stable', 'dev', 'master', 'main']: + issues.append("Using non-numeric identifier (not a version)") + + return issues + + +def validate_version(version: str, strict: bool = False) -> Tuple[bool, int, str]: + """ + Validate semantic version + + Returns: + (is_valid, exit_code, message) + """ + if not version or version.strip() == '': + return False, 2, "ERROR: Version cannot be empty" + + # Parse the version + parsed = parse_semver(version) + + if parsed is None: + # Invalid format + issues = find_issues(version) + message = "❌ FAIL: Invalid semantic version format\n\n" + message += f"Version: {version}\n" + message += "Valid: No\n\n" + message += "Issues Found:\n" + if issues: + for issue in issues: + message += f" - {issue}\n" + else: + message += " - Does not match semantic versioning pattern\n" + message += "\nRequired Format: MAJOR.MINOR.PATCH\n" + message += "\nExamples:\n" + message += " - 1.0.0 (initial release)\n" + message += " - 1.2.3 (standard version)\n" + message += " - 2.0.0-beta.1 (pre-release)\n" + message += "\nReference: https://semver.org/" + return False, 1, message + + # Check strict mode + if strict and not parsed['is_strict']: + message = "⚠️ WARNING: Valid semver, but not strict format\n\n" + message += f"Version: {version}\n" + message += "Format: Valid semver with " + if 
parsed['prerelease']: + message += "pre-release" + if parsed['build']: + message += " and " if parsed['prerelease'] else "" + message += "build metadata" + message += "\n\n" + message += "Note: OpenPlugins recommends strict MAJOR.MINOR.PATCH format\n" + message += "without pre-release or build metadata for marketplace submissions.\n\n" + message += f"Recommended: {parsed['major']}.{parsed['minor']}.{parsed['patch']} (for stable release)\n\n" + message += "Quality Score Impact: +5 points (valid, but consider strict format)" + return True, 3, message + + # Valid version + message = "✅ PASS: Valid semantic version\n\n" + message += f"Version: {version}\n" + message += "Format: " + if parsed['is_strict']: + message += "MAJOR.MINOR.PATCH (strict)\n" + else: + message += "MAJOR.MINOR.PATCH" + if parsed['prerelease']: + message += "-PRERELEASE" + if parsed['build']: + message += "+BUILD" + message += "\n" + message += "Valid: Yes\n\n" + message += "Components:\n" + message += f" - MAJOR: {parsed['major']}" + if parsed['major'] > 0: + message += " (breaking changes)" + message += "\n" + message += f" - MINOR: {parsed['minor']}" + if parsed['minor'] > 0: + message += " (new features)" + message += "\n" + message += f" - PATCH: {parsed['patch']}" + if parsed['patch'] > 0: + message += " (bug fixes)" + message += "\n" + + if parsed['prerelease']: + message += f" - Pre-release: {parsed['prerelease']}\n" + if parsed['build']: + message += f" - Build: {parsed['build']}\n" + + message += "\n" + + if parsed['prerelease']: + message += "Note: Pre-release versions indicate unstable releases.\n" + message += "Remove pre-release identifier for stable marketplace submission.\n\n" + + message += "Quality Score Impact: +5 points\n\n" + message += "The version follows Semantic Versioning 2.0.0 specification." 
+ + return True, 0, message + + +def main(): + """Main entry point""" + if len(sys.argv) < 2 or sys.argv[1] in ['-h', '--help']: + usage() + + version = sys.argv[1] + strict = '--strict' in sys.argv + + is_valid, exit_code, message = validate_version(version, strict) + + print(message) + sys.exit(exit_code) + + +if __name__ == '__main__': + main() diff --git a/commands/best-practices/check-categories.md b/commands/best-practices/check-categories.md new file mode 100644 index 0000000..0aee7d2 --- /dev/null +++ b/commands/best-practices/check-categories.md @@ -0,0 +1,325 @@ +## Operation: Check Categories + +Validate category assignment against OpenPlugins standard category list. + +### Parameters from $ARGUMENTS + +- **category**: Category name to validate (required) +- **suggest**: Show similar categories if invalid (optional, default: true) + +### OpenPlugins Standard Categories + +OpenPlugins defines **exactly 10 approved categories**: + +1. **development** - Code generation, scaffolding, refactoring +2. **testing** - Test generation, coverage, quality assurance +3. **deployment** - CI/CD, infrastructure, release automation +4. **documentation** - Docs generation, API documentation +5. **security** - Vulnerability scanning, secret detection +6. **database** - Schema design, migrations, queries +7. **monitoring** - Performance analysis, logging +8. **productivity** - Workflow automation, task management +9. **quality** - Linting, formatting, code review +10. 
**collaboration** - Team tools, communication + +### Category Selection Guidance + +**development**: +- Code generators +- Project scaffolding +- Refactoring tools +- Boilerplate generation + +**testing**: +- Test generators +- Test runners +- Coverage tools +- QA automation + +**deployment**: +- CI/CD pipelines +- Infrastructure as code +- Release automation +- Environment management + +**documentation**: +- README generators +- API doc generation +- Changelog automation +- Architecture diagrams + +**security**: +- Secret scanning +- Vulnerability detection +- Security audits +- Compliance checking + +**database**: +- Schema design +- Migration tools +- Query builders +- Database testing + +**monitoring**: +- Performance profiling +- Log analysis +- Metrics collection +- Alert systems + +**productivity**: +- Task automation +- Workflow orchestration +- Time management +- Note-taking + +**quality**: +- Linters +- Code formatters +- Code review tools +- Complexity analysis + +**collaboration**: +- Team communication +- Code review +- Knowledge sharing +- Project management + +### Workflow + +1. **Extract Category from Arguments** + ``` + Parse $ARGUMENTS to extract category parameter + If category not provided, return error + Normalize to lowercase + ``` + +2. **Execute Category Validator** + ```bash + Execute .scripts/category-validator.sh "$category" + + Exit codes: + - 0: Valid category + - 1: Invalid category + - 2: Missing required parameters + ``` + +3. **Check Against Approved List** + ``` + Compare category against 10 approved categories + Use exact string matching (case-insensitive) + ``` + +4. **Suggest Alternatives (if invalid)** + ``` + IF category invalid AND suggest:true: + Calculate similarity scores + Suggest closest matching categories + Show category descriptions + ``` + +5. 
**Return Validation Report** + ``` + Format results: + - Status: PASS/FAIL + - Category: + - Valid: yes/no + - Description: (if valid) + - Suggestions: (if invalid) + - Score impact: +5 points (if valid) + ``` + +### Examples + +```bash +# Valid category +/best-practices categories category:development +# Result: PASS - Valid OpenPlugins category + +# Invalid category (typo) +/best-practices categories category:developement +# Result: FAIL - Did you mean: development? + +# Invalid category (plural) +/best-practices categories category:tests +# Result: FAIL - Did you mean: testing? + +# Invalid category (custom) +/best-practices categories category:utilities +# Result: FAIL - Not in approved list +# Suggestions: productivity, quality, development + +# Case insensitive +/best-practices categories category:TESTING +# Result: PASS - Valid (normalized to: testing) +``` + +### Error Handling + +**Missing category parameter**: +``` +ERROR: Missing required parameter 'category' + +Usage: /best-practices categories category: + +Example: /best-practices categories category:development +``` + +**Empty category**: +``` +ERROR: Category cannot be empty + +Choose from 10 approved OpenPlugins categories: +development, testing, deployment, documentation, security, +database, monitoring, productivity, quality, collaboration +``` + +### Output Format + +**Success (Valid Category)**: +``` +✅ Category Validation: PASS + +Category: development +Valid: Yes + +Description: Code generation, scaffolding, refactoring + +Use Cases: +- Code generators +- Project scaffolding tools +- Refactoring utilities +- Boilerplate generation + +Quality Score Impact: +5 points + +The category is approved for OpenPlugins marketplace. +``` + +**Failure (Invalid Category)**: +``` +❌ Category Validation: FAIL + +Category: developement +Valid: No + +This category is not in the OpenPlugins approved list. + +Did you mean? +1. development - Code generation, scaffolding, refactoring +2. 
deployment - CI/CD, infrastructure, release automation + +All Approved Categories: +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +1. development - Code generation, scaffolding +2. testing - Test generation, coverage +3. deployment - CI/CD, infrastructure +4. documentation - Docs generation, API docs +5. security - Vulnerability scanning +6. database - Schema design, migrations +7. monitoring - Performance analysis +8. productivity - Workflow automation +9. quality - Linting, formatting +10. collaboration - Team tools, communication + +Quality Score Impact: 0 points (fix to gain +5) + +Choose the most appropriate category from the approved list. +``` + +**Failure (Multiple Matches)**: +``` +❌ Category Validation: FAIL + +Category: code-tools +Valid: No + +This category is not approved. Consider these alternatives: + +Best Matches: +1. development - Code generation, scaffolding, refactoring +2. quality - Linting, formatting, code review +3. productivity - Workflow automation, task management + +Which fits your plugin best? +- If generating/scaffolding code → development +- If analyzing/formatting code → quality +- If automating workflows → productivity + +Quality Score Impact: 0 points (fix to gain +5) +``` + +### Category Decision Tree + +Use this to select the right category: + +``` +Does your plugin... + +Generate or scaffold code? + → development + +Run tests or check quality? + → testing (if running tests) + → quality (if analyzing/formatting code) + +Deploy or manage infrastructure? + → deployment + +Generate documentation? + → documentation + +Scan for security issues? + → security + +Work with databases? + → database + +Monitor performance or logs? + → monitoring + +Automate workflows or tasks? + → productivity + +Improve code quality? + → quality + +Facilitate team collaboration? 
+ → collaboration +``` + +### Common Mistakes + +**Using plural forms**: +- ❌ `tests` → ✅ `testing` +- ❌ `deployments` → ✅ `deployment` +- ❌ `databases` → ✅ `database` + +**Using generic terms**: +- ❌ `tools` → Choose specific category +- ❌ `utilities` → Choose specific category +- ❌ `helpers` → Choose specific category + +**Using multiple categories**: +- ❌ `development,testing` → Choose ONE primary category +- Use keywords for additional topics + +**Using custom categories**: +- ❌ `api-tools` → ✅ `development` or `productivity` +- ❌ `devops` → ✅ `deployment` +- ❌ `ci-cd` → ✅ `deployment` + +### Compliance Criteria + +**PASS Requirements**: +- Exact match with one of 10 approved categories +- Case-insensitive matching accepted +- Single category only (not multiple) + +**FAIL Indicators**: +- Not in approved list +- Plural forms +- Custom categories +- Multiple categories +- Empty or missing + +**Request**: $ARGUMENTS diff --git a/commands/best-practices/check-naming.md b/commands/best-practices/check-naming.md new file mode 100644 index 0000000..907aeb0 --- /dev/null +++ b/commands/best-practices/check-naming.md @@ -0,0 +1,174 @@ +## Operation: Check Naming Conventions + +Validate plugin names against OpenPlugins lowercase-hyphen naming convention. + +### Parameters from $ARGUMENTS + +- **name**: Plugin name to validate (required) +- **fix**: Auto-suggest corrected name (optional, default: true) + +### OpenPlugins Naming Convention + +**Pattern**: `^[a-z0-9]+(-[a-z0-9]+)*$` + +**Valid Examples**: +- `code-formatter` +- `test-runner` +- `deploy-automation` +- `api-client` +- `database-migration` + +**Invalid Examples**: +- `Code-Formatter` (uppercase) +- `test_runner` (underscore) +- `Deploy Automation` (space) +- `APIClient` (camelCase) +- `-helper` (leading hyphen) +- `tool-` (trailing hyphen) + +### Workflow + +1. **Extract Name from Arguments** + ``` + Parse $ARGUMENTS to extract name parameter + If name not provided, return error + ``` + +2. 
**Execute Naming Validator** + ```bash + Execute .scripts/naming-validator.sh "$name" + + Exit codes: + - 0: Valid naming convention + - 1: Invalid naming convention + - 2: Missing required parameters + ``` + +3. **Process Results** + ``` + IF valid: + Return success with confirmation + ELSE: + Return failure with specific violations + Suggest corrected name if fix:true + Provide examples + ``` + +4. **Return Compliance Report** + ``` + Format results: + - Status: PASS/FAIL + - Name: + - Valid: yes/no + - Issues: + - Suggestion: + - Score impact: +5 points (if valid) + ``` + +### Examples + +```bash +# Valid name +/best-practices naming name:my-awesome-plugin +# Result: PASS - Valid lowercase-hyphen format + +# Invalid name with uppercase +/best-practices naming name:MyPlugin +# Result: FAIL - Contains uppercase (M, P) +# Suggestion: my-plugin + +# Invalid name with underscore +/best-practices naming name:test_runner +# Result: FAIL - Contains underscore (_) +# Suggestion: test-runner + +# Invalid name with space +/best-practices naming name:"Test Runner" +# Result: FAIL - Contains space +# Suggestion: test-runner +``` + +### Error Handling + +**Missing name parameter**: +``` +ERROR: Missing required parameter 'name' + +Usage: /best-practices naming name: + +Example: /best-practices naming name:my-plugin +``` + +**Empty name**: +``` +ERROR: Name cannot be empty + +Provide a valid plugin name following lowercase-hyphen convention. +``` + +### Output Format + +**Success (Valid Name)**: +``` +✅ Naming Convention: PASS + +Name: code-formatter +Format: lowercase-hyphen +Pattern: ^[a-z0-9]+(-[a-z0-9]+)*$ +Valid: Yes + +Quality Score Impact: +5 points + +The name follows OpenPlugins naming conventions perfectly. +``` + +**Failure (Invalid Name)**: +``` +❌ Naming Convention: FAIL + +Name: Code_Formatter +Format: Invalid +Valid: No + +Issues Found: +1. Contains uppercase characters: C, F +2. 
Contains underscores instead of hyphens + +Suggested Correction: code-formatter + +Quality Score Impact: 0 points (fix to gain +5) + +Fix these issues to comply with OpenPlugins standards. +``` + +### Compliance Criteria + +**PASS Requirements**: +- All lowercase letters (a-z) +- Numbers allowed (0-9) +- Hyphens for word separation +- No leading or trailing hyphens +- No consecutive hyphens +- No other special characters +- Descriptive (not generic like "plugin" or "tool") + +**FAIL Indicators**: +- Uppercase letters +- Underscores, spaces, or special characters +- Leading/trailing hyphens +- Empty or single character names +- Generic non-descriptive names + +### Best Practices Guidance + +**Good Names**: +- Describe functionality: `code-formatter`, `test-runner` +- Include technology: `python-linter`, `docker-manager` +- Indicate purpose: `api-client`, `database-migrator` + +**Avoid**: +- Generic: `plugin`, `tool`, `helper`, `utility` +- Abbreviations only: `fmt`, `tst`, `db` +- Version numbers: `plugin-v2`, `tool-2024` + +**Request**: $ARGUMENTS diff --git a/commands/best-practices/full-compliance.md b/commands/best-practices/full-compliance.md new file mode 100644 index 0000000..3b69e1d --- /dev/null +++ b/commands/best-practices/full-compliance.md @@ -0,0 +1,514 @@ +## Operation: Full Standards Compliance + +Execute comprehensive OpenPlugins and Claude Code best practices validation with complete compliance reporting. + +### Parameters from $ARGUMENTS + +- **path**: Path to plugin or marketplace directory (required) +- **fix**: Auto-suggest corrections for all issues (optional, default: true) +- **format**: Output format (text|json|markdown) (optional, default: text) + +### Complete Standards Check + +This operation validates all four best practice categories: + +1. **Naming Convention** - Lowercase-hyphen format +2. **Semantic Versioning** - MAJOR.MINOR.PATCH format +3. **Category Assignment** - One of 10 approved categories +4. 
**Keyword Quality** - 3-7 relevant, non-generic keywords + +### Workflow + +1. **Detect Target Type** + ``` + Parse $ARGUMENTS to extract path parameter + Detect if path is plugin or marketplace: + - Plugin: Has plugin.json + - Marketplace: Has .claude-plugin/marketplace.json + ``` + +2. **Load Metadata** + ``` + IF plugin: + Read plugin.json + Extract: name, version, keywords, category + ELSE IF marketplace: + Read .claude-plugin/marketplace.json + Extract marketplace metadata + Validate each plugin entry + ELSE: + Return error: Invalid target + ``` + +3. **Execute All Validations** + ``` + Run in parallel or sequence: + + A. Naming Validation + Execute check-naming.md with name parameter + Store result + + B. Version Validation + Execute validate-versioning.md with version parameter + Store result + + C. Category Validation + Execute check-categories.md with category parameter + Store result + + D. Keyword Validation + Execute validate-keywords.md with keywords parameter + Store result + ``` + +4. **Aggregate Results** + ``` + Collect all validation results: + - Individual pass/fail status + - Specific issues found + - Suggested corrections + - Score impact for each + + Calculate overall compliance: + - Total score: Sum of individual scores + - Pass count: Number of passing validations + - Fail count: Number of failing validations + - Compliance percentage: (pass / total) × 100 + ``` + +5. **Generate Compliance Report** + ``` + Create comprehensive report: + - Executive summary + - Individual validation details + - Issue prioritization + - Suggested fixes + - Compliance score + - Publication readiness + ``` + +6. **Return Results** + ``` + Format according to output format: + - text: Human-readable console output + - json: Machine-parseable JSON + - markdown: Documentation-ready markdown + ``` + +### Examples + +```bash +# Full compliance check on current directory +/best-practices full-standards path:. 
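+
+# Markdown report with fix suggestions (both parameters are documented above;
+# the plugin path here is illustrative)
+/best-practices full-standards path:./my-plugin format:markdown fix:true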
+ +# Check specific plugin with JSON output +/best-practices full-standards path:./my-plugin format:json + +# Check with auto-fix suggestions +/best-practices full-standards path:. fix:true + +# Marketplace validation +/best-practices full-standards path:./marketplace +``` + +### Error Handling + +**Missing path parameter**: +``` +ERROR: Missing required parameter 'path' + +Usage: /best-practices full-standards path: + +Examples: + /best-practices full-standards path:. + /best-practices full-standards path:./my-plugin +``` + +**Invalid path**: +``` +ERROR: Invalid path or not a plugin/marketplace + +Path: + +The path must contain either: +- plugin.json (for plugins) +- .claude-plugin/marketplace.json (for marketplaces) + +Check the path and try again. +``` + +**Missing metadata file**: +``` +ERROR: Metadata file not found + +Expected one of: +- plugin.json +- .claude-plugin/marketplace.json + +This does not appear to be a valid Claude Code plugin or marketplace. +``` + +### Output Format + +**Text Format (Complete Compliance)**: +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +OPENPLUGINS BEST PRACTICES COMPLIANCE REPORT +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Target: code-formatter-plugin +Type: Plugin +Date: 2024-10-13 + +Overall Compliance: 100% ✅ +Status: PUBLICATION READY + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +VALIDATION RESULTS +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +1. Naming Convention: ✅ PASS + Name: code-formatter + Format: lowercase-hyphen + Score: +5 points + + The name follows OpenPlugins naming conventions perfectly. + +2. Semantic Versioning: ✅ PASS + Version: 1.2.3 + Format: MAJOR.MINOR.PATCH + Score: +5 points + + Valid semantic version compliant with semver 2.0.0. + +3. Category Assignment: ✅ PASS + Category: quality + Description: Linting, formatting, code review + Score: +5 points + + Category is approved and appropriate for this plugin. + +4. 
Keyword Quality: ✅ PASS + Keywords: formatting, javascript, eslint, code-quality, automation + Count: 5 (optimal) + Quality: 10/10 + Score: +10 points + + Excellent keyword selection with balanced mix. + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +COMPLIANCE SUMMARY +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Validations Passed: 4/4 (100%) +Quality Score: 25/25 points + +Scoring Breakdown: +✅ Naming Convention: +5 points +✅ Semantic Versioning: +5 points +✅ Category Assignment: +5 points +✅ Keyword Quality: +10 points +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +Total Score: 25/25 points + +Publication Status: ✅ READY FOR SUBMISSION + +This plugin meets all OpenPlugins best practice standards +and is ready for marketplace submission! + +Next Steps: +1. Submit to OpenPlugins marketplace +2. Follow contribution guidelines in CONTRIBUTING.md +3. Open pull request with plugin entry +``` + +**Text Format (Partial Compliance)**: +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +OPENPLUGINS BEST PRACTICES COMPLIANCE REPORT +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Target: Test_Runner +Type: Plugin +Date: 2024-10-13 + +Overall Compliance: 50% ⚠️ +Status: NEEDS IMPROVEMENT + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +VALIDATION RESULTS +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +1. Naming Convention: ❌ FAIL + Name: Test_Runner + Format: Invalid + Score: 0 points + + Issues Found: + - Contains uppercase characters: T, R + - Contains underscore instead of hyphen + + ✏️ Suggested Fix: test-runner + + Impact: +5 points (if fixed) + +2. Semantic Versioning: ✅ PASS + Version: 1.0.0 + Format: MAJOR.MINOR.PATCH + Score: +5 points + + Valid semantic version compliant with semver 2.0.0. + +3. Category Assignment: ❌ FAIL + Category: test-tools + Valid: No + Score: 0 points + + This category is not in the approved list. + + ✏️ Suggested Fix: testing + + Description: Test generation, coverage, quality assurance + + Impact: +5 points (if fixed) + +4. 
Keyword Quality: ⚠️ WARNING
+   Keywords: plugin, tool, awesome
+   Count: 3 (minimum met)
+   Quality: 2/10
+   Score: 2 points
+
+   Issues Found:
+   - Generic terms: plugin, tool
+   - Marketing terms: awesome
+   - No functional keywords
+
+   ✏️ Suggested Fix: testing, automation, pytest, unit-testing, tdd
+
+   Impact: +8 points (if improved to excellent)
+
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+COMPLIANCE SUMMARY
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+Validations Passed: 2/4 (50%, counting the keyword warning as a conditional pass)
+Quality Score: 7/25 points
+
+Scoring Breakdown:
+❌ Naming Convention: 0/5 points
+✅ Semantic Versioning: 5/5 points
+❌ Category Assignment: 0/5 points
+⚠️ Keyword Quality: 2/10 points
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+Total Score: 7/25 points
+
+Publication Status: ⚠️ NOT READY - NEEDS FIXES
+
+Priority Fixes Required:
+1. [P0] Fix naming convention: Test_Runner → test-runner
+2. [P0] Fix category: test-tools → testing
+3. [P1] Improve keywords: Remove generic terms, add functional keywords
+
+After Fixes (Estimated Score):
+✅ Naming Convention: +5 points
+✅ Semantic Versioning: +5 points
+✅ Category Assignment: +5 points
+✅ Keyword Quality: +10 points
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+Potential Score: 25/25 points
+
+Next Steps:
+1. Apply suggested fixes above
+2. Re-run validation: /best-practices full-standards path:.
+3. 
Ensure score reaches 25/25 before submission +``` + +**JSON Format**: +```json +{ + "target": "code-formatter", + "type": "plugin", + "timestamp": "2024-10-13T10:00:00Z", + "compliance": { + "overall": 100, + "status": "READY", + "passed": 4, + "failed": 0, + "warnings": 0 + }, + "validations": { + "naming": { + "status": "pass", + "name": "code-formatter", + "format": "lowercase-hyphen", + "score": 5, + "issues": [] + }, + "versioning": { + "status": "pass", + "version": "1.2.3", + "format": "MAJOR.MINOR.PATCH", + "score": 5, + "issues": [] + }, + "category": { + "status": "pass", + "category": "quality", + "valid": true, + "score": 5, + "issues": [] + }, + "keywords": { + "status": "pass", + "keywords": ["formatting", "javascript", "eslint", "code-quality", "automation"], + "count": 5, + "quality": 10, + "score": 10, + "issues": [] + } + }, + "score": { + "total": 25, + "maximum": 25, + "percentage": 100, + "breakdown": { + "naming": 5, + "versioning": 5, + "category": 5, + "keywords": 10 + } + }, + "publication_ready": true, + "next_steps": [ + "Submit to OpenPlugins marketplace", + "Follow contribution guidelines", + "Open pull request" + ] +} +``` + +**Markdown Format** (for documentation): +```markdown +# OpenPlugins Best Practices Compliance Report + +**Target**: code-formatter +**Type**: Plugin +**Date**: 2024-10-13 +**Status**: ✅ PUBLICATION READY + +## Overall Compliance + +- **Score**: 25/25 points (100%) +- **Validations Passed**: 4/4 +- **Publication Ready**: Yes + +## Validation Results + +### 1. Naming Convention ✅ + +- **Status**: PASS +- **Name**: code-formatter +- **Format**: lowercase-hyphen +- **Score**: +5 points + +The name follows OpenPlugins naming conventions perfectly. + +### 2. Semantic Versioning ✅ + +- **Status**: PASS +- **Version**: 1.2.3 +- **Format**: MAJOR.MINOR.PATCH +- **Score**: +5 points + +Valid semantic version compliant with semver 2.0.0. + +### 3. 
Category Assignment ✅ + +- **Status**: PASS +- **Category**: quality +- **Description**: Linting, formatting, code review +- **Score**: +5 points + +Category is approved and appropriate for this plugin. + +### 4. Keyword Quality ✅ + +- **Status**: PASS +- **Keywords**: formatting, javascript, eslint, code-quality, automation +- **Count**: 5 (optimal) +- **Quality**: 10/10 +- **Score**: +10 points + +Excellent keyword selection with balanced mix. + +## Score Breakdown + +| Validation | Score | Status | +|------------|-------|--------| +| Naming Convention | 5/5 | ✅ Pass | +| Semantic Versioning | 5/5 | ✅ Pass | +| Category Assignment | 5/5 | ✅ Pass | +| Keyword Quality | 10/10 | ✅ Pass | +| **Total** | **25/25** | **✅ Ready** | + +## Next Steps + +1. Submit to OpenPlugins marketplace +2. Follow contribution guidelines in CONTRIBUTING.md +3. Open pull request with plugin entry + +--- + +*Report generated by marketplace-validator-plugin v1.0.0* +``` + +### Compliance Scoring + +**Total Score Breakdown**: +- Naming Convention: 5 points +- Semantic Versioning: 5 points +- Category Assignment: 5 points +- Keyword Quality: 10 points +- **Maximum Total**: 25 points + +**Publication Readiness**: +- **25/25 points (100%)**: ✅ READY - Perfect compliance +- **20-24 points (80-96%)**: ✅ READY - Minor improvements optional +- **15-19 points (60-76%)**: ⚠️ NEEDS WORK - Address issues before submission +- **10-14 points (40-56%)**: ❌ NOT READY - Significant fixes required +- **0-9 points (0-36%)**: ❌ NOT READY - Major compliance issues + +### Integration with Quality Analysis + +This operation feeds into the overall quality scoring system: + +``` +Best Practices Score (25 points max) + ↓ +Quality Analysis (calculate-score) + ↓ +Overall Quality Score (100 points total) + ↓ +Publication Readiness Determination +``` + +### Best Practices Workflow + +For complete plugin validation: + +```bash +# 1. Run full standards compliance +/best-practices full-standards path:. + +# 2. 
If issues found, fix them, then re-run +# ... apply fixes ... +/best-practices full-standards path:. + +# 3. Once compliant, run comprehensive validation +/validation-orchestrator comprehensive path:. + +# 4. Review quality report +# Quality score includes best practices (25 points) +``` + +**Request**: $ARGUMENTS diff --git a/commands/best-practices/skill.md b/commands/best-practices/skill.md new file mode 100644 index 0000000..79b5938 --- /dev/null +++ b/commands/best-practices/skill.md @@ -0,0 +1,105 @@ +--- +description: Enforce OpenPlugins and Claude Code best practices for naming, versioning, and standards compliance +--- + +You are the Best Practices coordinator, ensuring adherence to OpenPlugins and Claude Code standards. + +## Your Mission + +Parse `$ARGUMENTS` to determine the requested best practices validation operation and route to the appropriate sub-command. + +## Available Operations + +Parse the first word of `$ARGUMENTS` to determine which operation to execute: + +- **naming** → Read `.claude/commands/best-practices/check-naming.md` +- **versioning** → Read `.claude/commands/best-practices/validate-versioning.md` +- **categories** → Read `.claude/commands/best-practices/check-categories.md` +- **keywords** → Read `.claude/commands/best-practices/validate-keywords.md` +- **full-standards** → Read `.claude/commands/best-practices/full-compliance.md` + +## Argument Format + +``` +/best-practices [parameters] +``` + +### Examples + +```bash +# Check naming conventions +/best-practices naming name:my-plugin-name + +# Validate semantic versioning +/best-practices versioning version:1.2.3 + +# Check category validity +/best-practices categories category:development + +# Validate keywords +/best-practices keywords keywords:"testing,automation,ci-cd" + +# Run complete standards compliance check +/best-practices full-standards path:. 
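+
+# Shell equivalent of the naming pattern check (illustrative only, not a
+# plugin command; uses the documented regex)
+echo "my-plugin" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$' && echo "valid naming"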
+``` + +## OpenPlugins Standards + +**Naming Convention**: +- Format: lowercase-hyphen (e.g., `code-formatter`, `test-runner`) +- Pattern: `^[a-z0-9]+(-[a-z0-9]+)*$` +- No underscores, spaces, or uppercase +- Descriptive, not generic (avoid: "plugin", "tool", "helper") + +**Semantic Versioning**: +- Format: MAJOR.MINOR.PATCH (e.g., 1.2.3) +- Pattern: `^[0-9]+\.[0-9]+\.[0-9]+$` +- Optional pre-release: `-alpha.1`, `-beta.2` +- Optional build metadata: `+20241013` + +**Categories** (choose ONE): +1. **development** - Code generation, scaffolding, refactoring +2. **testing** - Test generation, coverage, quality assurance +3. **deployment** - CI/CD, infrastructure, release automation +4. **documentation** - Docs generation, API documentation +5. **security** - Vulnerability scanning, secret detection +6. **database** - Schema design, migrations, queries +7. **monitoring** - Performance analysis, logging +8. **productivity** - Workflow automation, task management +9. **quality** - Linting, formatting, code review +10. **collaboration** - Team tools, communication + +**Keywords**: +- Count: 3-7 keywords +- Relevance: Functionality, technology, or use-case based +- Avoid: Generic terms (plugin, tool, utility), category duplication +- Good: `testing`, `automation`, `python`, `ci-cd`, `docker` +- Bad: `best`, `awesome`, `perfect`, `plugin` + +## Compliance Scoring + +Best practices contribute to quality score: +- Valid naming: +5 points +- Semantic versioning: +5 points +- Valid category: +5 points +- Quality keywords (3-7): +10 points + +## Error Handling + +If the operation is not recognized: +1. List all available operations +2. Show OpenPlugins standards +3. Provide compliance guidance + +## Base Directory + +Base directory for this skill: `.claude/commands/best-practices/` + +## Your Task + +1. Parse `$ARGUMENTS` to extract operation and parameters +2. Read the corresponding operation file +3. Execute best practices validation +4. 
Return compliance results with specific corrections + +**Current Request**: $ARGUMENTS diff --git a/commands/best-practices/validate-keywords.md b/commands/best-practices/validate-keywords.md new file mode 100644 index 0000000..0e8a503 --- /dev/null +++ b/commands/best-practices/validate-keywords.md @@ -0,0 +1,337 @@ +## Operation: Validate Keywords + +Validate keyword selection for relevance, count, and quality against OpenPlugins standards. + +### Parameters from $ARGUMENTS + +- **keywords**: Comma-separated keyword list (required) +- **min**: Minimum keyword count (optional, default: 3) +- **max**: Maximum keyword count (optional, default: 7) +- **context**: Plugin context for relevance checking (optional, JSON or description) + +### OpenPlugins Keyword Standards + +**Count Requirements**: +- Minimum: 3 keywords +- Maximum: 7 keywords +- Optimal: 5-6 keywords + +**Quality Requirements**: +- Relevant to plugin functionality +- Searchable terms users would use +- Mix of functionality, technology, and use-case +- No generic marketing terms +- No duplicate category names + +### Keyword Categories + +**Functionality Keywords** (what it does): +- `testing`, `deployment`, `formatting`, `linting`, `migration` +- `generation`, `automation`, `analysis`, `monitoring`, `scanning` + +**Technology Keywords** (what it works with): +- `python`, `javascript`, `docker`, `kubernetes`, `postgresql` +- `react`, `vue`, `typescript`, `bash`, `terraform` + +**Use-Case Keywords** (how it's used): +- `ci-cd`, `code-review`, `api-testing`, `performance` +- `tdd`, `bdd`, `refactoring`, `debugging`, `profiling` + +### Good Keywords Examples + +**Well-balanced sets**: +- `["testing", "pytest", "automation", "tdd", "python"]` +- `["deployment", "kubernetes", "ci-cd", "docker", "helm"]` +- `["linting", "javascript", "eslint", "code-quality", "automation"]` +- `["database", "postgresql", "migration", "schema", "sql"]` + +**Poor keyword sets**: +- `["plugin", "tool", "awesome"]` - 
Generic/marketing terms +- `["test", "testing", "tester", "tests"]` - Redundant variations +- `["development"]` - Only category name, too few +- `["a", "b", "c", "d", "e", "f", "g", "h"]` - Too many, non-descriptive + +### Workflow + +1. **Extract Keywords from Arguments** + ``` + Parse $ARGUMENTS to extract keywords parameter + Split by comma, trim whitespace + Normalize to lowercase + Remove duplicates + ``` + +2. **Execute Keyword Analyzer** + ```bash + Execute .scripts/keyword-analyzer.py "$keywords" "$min" "$max" "$context" + + Exit codes: + - 0: Valid keyword set + - 1: Count violation (too few or too many) + - 2: Quality issues (generic terms, duplicates) + - 3: Missing required parameters + ``` + +3. **Validate Count** + ``` + count = number of keywords + IF count < min: FAIL (too few) + IF count > max: FAIL (too many) + ``` + +4. **Check for Generic Terms** + ``` + Generic blocklist: + - plugin, tool, utility, helper, awesome + - best, perfect, great, super, amazing + - code, software, app, program + + Flag any generic terms found + ``` + +5. **Analyze Quality** + ``` + Check for: + - Duplicate category names + - Redundant variations (test, testing, tests) + - Single-character keywords + - Non-descriptive terms + ``` + +6. **Calculate Relevance Score** + ``` + Base score: 10 points + + Deductions: + - Generic term: -2 per term + - Too few keywords: -5 + - Too many keywords: -3 + - Redundant variations: -2 per redundancy + - Non-descriptive: -1 per term + + Final score: max(0, base - deductions) + ``` + +7. 
**Return Analysis Report** + ``` + Format results: + - Status: PASS/FAIL/WARNING + - Count: (valid range: min-max) + - Quality: /10 + - Issues: + - Suggestions: + - Score impact: +10 points (if excellent), +5 (if good) + ``` + +### Examples + +```bash +# Valid keyword set +/best-practices keywords keywords:"testing,pytest,automation,tdd,python" +# Result: PASS - 5 keywords, well-balanced, relevant + +# Too few keywords +/best-practices keywords keywords:"testing,python" +# Result: FAIL - Only 2 keywords (minimum: 3) + +# Too many keywords +/best-practices keywords keywords:"a,b,c,d,e,f,g,h,i,j" +# Result: FAIL - 10 keywords (maximum: 7) + +# Generic terms +/best-practices keywords keywords:"plugin,tool,awesome,best" +# Result: FAIL - Contains generic/marketing terms + +# With custom range +/best-practices keywords keywords:"ci,cd,docker" min:2 max:5 +# Result: PASS - 3 keywords within custom range +``` + +### Error Handling + +**Missing keywords parameter**: +``` +ERROR: Missing required parameter 'keywords' + +Usage: /best-practices keywords keywords:"keyword1,keyword2,keyword3" + +Example: /best-practices keywords keywords:"testing,automation,python" +``` + +**Empty keywords**: +``` +ERROR: Keywords cannot be empty + +Provide 3-7 relevant keywords describing your plugin. + +Good examples: +- "testing,pytest,automation" +- "deployment,kubernetes,ci-cd" +- "linting,javascript,code-quality" +``` + +### Output Format + +**Success (Excellent Keywords)**: +``` +✅ Keyword Validation: PASS + +Keywords: testing, pytest, automation, tdd, python +Count: 5 (optimal range: 3-7) +Quality Score: 10/10 + +Analysis: +✅ Balanced mix of functionality, technology, and use-case +✅ All keywords relevant and searchable +✅ No generic or marketing terms +✅ Good variety without redundancy + +Breakdown: +- Functionality: testing, automation, tdd +- Technology: pytest, python +- Use-case: tdd + +Quality Score Impact: +10 points + +Excellent keyword selection for discoverability! 
+``` + +**Failure (Count Violation)**: +``` +❌ Keyword Validation: FAIL + +Keywords: testing, python +Count: 2 (required: 3-7) +Quality Score: 5/10 + +Issues Found: +1. Too few keywords (2 < 3 minimum) +2. Missing technology or use-case keywords + +Suggestions to improve: +Add 1-3 more relevant keywords such as: +- Functionality: automation, unit-testing +- Use-case: tdd, ci-cd +- Specific tools: pytest, unittest + +Recommended: testing, python, pytest, automation, tdd + +Quality Score Impact: 0 points (fix to gain +10) +``` + +**Failure (Generic Terms)**: +``` +❌ Keyword Validation: FAIL + +Keywords: plugin, tool, awesome, best, helper +Count: 5 (valid range) +Quality Score: 2/10 + +Issues Found: +1. Generic terms detected: plugin, tool, helper +2. Marketing terms detected: awesome, best +3. No functional or technical keywords + +These keywords don't help users find your plugin. + +Better alternatives: +Instead of generic terms, describe WHAT it does: +- Replace "plugin" → testing, deployment, formatting +- Replace "tool" → specific functionality +- Replace "awesome/best" → actual features + +Suggested keywords based on common patterns: +- testing, automation, ci-cd, docker, python +- deployment, kubernetes, infrastructure, terraform +- linting, formatting, code-quality, javascript + +Quality Score Impact: 0 points (fix to gain +10) +``` + +**Warning (Minor Issues)**: +``` +⚠️ Keyword Validation: WARNING + +Keywords: testing, tests, test, automation, ci-cd +Count: 5 (valid range) +Quality Score: 7/10 + +Issues Found: +1. Redundant variations: testing, tests, test +2. Consider consolidating to single term + +Suggestions: +- Keep: testing, automation, ci-cd +- Remove: tests, test (redundant) +- Add: 2 more specific keywords (e.g., pytest, junit) + +Recommended: testing, automation, ci-cd, pytest, unit-testing + +Quality Score Impact: +7 points (good, but could be better) + +Your keywords are functional but could be more diverse. 
+``` + +### Keyword Quality Checklist + +**PASS Requirements**: +- 3-7 keywords total +- No generic terms (plugin, tool, utility, helper) +- No marketing terms (awesome, best, perfect) +- No redundant variations +- Mix of functionality and technology +- Relevant to plugin purpose +- Searchable by target users + +**FAIL Indicators**: +- < 3 or > 7 keywords +- Contains generic terms +- Contains marketing fluff +- All keywords same type (only technologies, only functionality) +- Single-character keywords +- Category name duplication + +### Best Practices + +**Do**: +- Use specific functionality terms +- Include primary technologies +- Add relevant use-cases +- Think about user search intent +- Balance breadth and specificity + +**Don't**: +- Use generic words (plugin, tool, utility) +- Add marketing terms (best, awesome, perfect) +- Duplicate category names exactly +- Use redundant variations +- Add irrelevant technologies +- Use abbreviations without context + +### Quality Scoring Matrix + +**10/10 - Excellent**: +- 5-6 keywords +- Perfect mix of functionality/technology/use-case +- All highly relevant +- Great search discoverability + +**7-9/10 - Good**: +- 3-7 keywords +- Good mix with minor issues +- Mostly relevant +- Decent discoverability + +**4-6/10 - Fair**: +- Count issues OR some generic terms +- Imbalanced mix +- Partial relevance +- Limited discoverability + +**0-3/10 - Poor**: +- Severe count violations OR mostly generic +- No functional keywords +- Poor relevance +- Very poor discoverability + +**Request**: $ARGUMENTS diff --git a/commands/best-practices/validate-versioning.md b/commands/best-practices/validate-versioning.md new file mode 100644 index 0000000..6ea1c71 --- /dev/null +++ b/commands/best-practices/validate-versioning.md @@ -0,0 +1,254 @@ +## Operation: Validate Versioning + +Validate version strings against Semantic Versioning 2.0.0 specification. 
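Concretely, the kind of check this command delegates to can be sketched in a few lines. This is a minimal illustration assuming the official SemVer 2.0.0 grammar and the exit-code convention documented under Workflow below; the function name and `strict` flag are illustrative, not the actual API of `.scripts/semver-checker.py`:

```python
import re

# Official SemVer 2.0.0 grammar: MAJOR.MINOR.PATCH, optional pre-release
# (after '-') and build metadata (after '+'); no leading zeros in numeric parts.
SEMVER_RE = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)"
    r"(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?"
    r"(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$"
)

def check_semver(version: str, strict: bool = False) -> int:
    """Mirror the checker's exit-code convention: 0 = valid,
    1 = invalid format, 3 = valid semver that violates strict mode."""
    match = SEMVER_RE.match(version)
    if not match:
        return 1  # e.g. "1.0", "v1.2.3", "01.02.03"
    prerelease, build = match.group(4), match.group(5)
    if strict and (prerelease or build):
        return 3
    return 0
```

For example, `check_semver("1.2.3")` and `check_semver("1.0.0-beta.2")` return 0, while `check_semver("v1.2.3")` returns 1 because of the `v` prefix.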
+
+### Parameters from $ARGUMENTS
+
+- **version**: Version string to validate (required)
+- **strict**: Enforce strict semver (no pre-release/build metadata) (optional, default: false)
+
+### Semantic Versioning Standard
+
+**Base Pattern**: `MAJOR.MINOR.PATCH` (e.g., `1.2.3`)
+
+**Strict Format**: `^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)$` (no leading zeros)
+
+**Extended Format** (with pre-release and build metadata):
+- Pre-release: `1.2.3-alpha.1`, `2.0.0-beta.2`, `1.0.0-rc.1`
+- Build metadata: `1.2.3+20241013`, `1.0.0+build.1`
+- Combined: `1.2.3-alpha.1+build.20241013`
+
+### Valid Examples
+
+**Strict Semver** (OpenPlugins recommended):
+- `1.0.0` - Initial release
+- `1.2.3` - Standard version
+- `2.5.13` - Multi-digit components
+- `0.1.0` - Pre-1.0 development
+
+**Extended Semver** (allowed):
+- `1.0.0-alpha` - Alpha release
+- `1.0.0-beta.2` - Beta release
+- `1.0.0-rc.1` - Release candidate
+- `1.2.3+20241013` - With build metadata
+
+### Invalid Examples
+
+- `1.0` - Missing PATCH
+- `v1.0.0` - Leading 'v' prefix
+- `1.0.0.0` - Too many components
+- `1.2.x` - Placeholder values
+- `latest` - Non-numeric
+- `1.0.0-SNAPSHOT` - Maven-style identifier (valid semver syntax, but rejected by convention here)
+
+### Workflow
+
+1. **Extract Version from Arguments**
+   ```
+   Parse $ARGUMENTS to extract version parameter
+   If version not provided, return error
+   ```
+
+2. **Execute Semantic Version Checker**
+   ```bash
+   Execute .scripts/semver-checker.py "$version" "$strict"
+
+   Exit codes:
+   - 0: Valid semantic version
+   - 1: Invalid format
+   - 2: Missing required parameters
+   - 3: Strict mode violation (valid semver, but has pre-release/build)
+   ```
+
+3. **Parse Version Components**
+   ```
+   Extract components:
+   - MAJOR: Breaking changes
+   - MINOR: Backward-compatible features
+   - PATCH: Backward-compatible fixes
+   - Pre-release: Optional identifier (alpha, beta, rc)
+   - Build metadata: Optional metadata
+   ```
+
+4.
**Return Validation Report** + ``` + Format results: + - Status: PASS/FAIL/WARNING + - Version: + - Valid: yes/no + - Components: MAJOR.MINOR.PATCH breakdown + - Pre-release: (if present) + - Build: (if present) + - Score impact: +5 points (if valid) + ``` + +### Examples + +```bash +# Valid strict semver +/best-practices versioning version:1.2.3 +# Result: PASS - Valid semantic version (1.2.3) + +# Valid with pre-release +/best-practices versioning version:1.0.0-alpha.1 +# Result: PASS - Valid semantic version with pre-release + +# Invalid format +/best-practices versioning version:1.0 +# Result: FAIL - Missing PATCH component + +# Strict mode with pre-release +/best-practices versioning version:1.0.0-beta strict:true +# Result: WARNING - Valid semver but not strict format + +# Invalid prefix +/best-practices versioning version:v1.2.3 +# Result: FAIL - Contains 'v' prefix (use 1.2.3) +``` + +### Error Handling + +**Missing version parameter**: +``` +ERROR: Missing required parameter 'version' + +Usage: /best-practices versioning version: + +Example: /best-practices versioning version:1.2.3 +``` + +**Invalid format**: +``` +ERROR: Invalid semantic version format + +The version must follow MAJOR.MINOR.PATCH format. + +Examples: +- 1.0.0 (initial release) +- 1.2.3 (standard version) +- 2.0.0-beta.1 (pre-release) +``` + +### Output Format + +**Success (Valid Semver)**: +``` +✅ Semantic Versioning: PASS + +Version: 1.2.3 +Format: MAJOR.MINOR.PATCH +Valid: Yes + +Components: +- MAJOR: 1 (breaking changes) +- MINOR: 2 (new features) +- PATCH: 3 (bug fixes) + +Quality Score Impact: +5 points + +The version follows Semantic Versioning 2.0.0 specification. +``` + +**Success with Pre-release**: +``` +✅ Semantic Versioning: PASS + +Version: 1.0.0-beta.2 +Format: MAJOR.MINOR.PATCH-PRERELEASE +Valid: Yes + +Components: +- MAJOR: 1 +- MINOR: 0 +- PATCH: 0 +- Pre-release: beta.2 + +Quality Score Impact: +5 points + +Note: Pre-release versions indicate unstable releases. 
+``` + +**Failure (Invalid Format)**: +``` +❌ Semantic Versioning: FAIL + +Version: 1.0 +Format: Invalid +Valid: No + +Issues Found: +1. Missing PATCH component +2. Expected format: MAJOR.MINOR.PATCH + +Suggested Correction: 1.0.0 + +Quality Score Impact: 0 points (fix to gain +5) + +Fix to comply with Semantic Versioning 2.0.0 specification. +Reference: https://semver.org/ +``` + +**Warning (Strict Mode)**: +``` +⚠️ Semantic Versioning: WARNING + +Version: 1.0.0-alpha.1 +Format: Valid semver, but not strict +Valid: Yes (with pre-release) + +Note: OpenPlugins recommends strict MAJOR.MINOR.PATCH format +without pre-release or build metadata for marketplace submissions. + +Recommended: 1.0.0 (for stable release) + +Quality Score Impact: +5 points (valid, but consider strict format) +``` + +### Versioning Guidelines + +**When to increment**: + +**MAJOR** (X.0.0): +- Breaking API changes +- Incompatible changes +- Major rewrites + +**MINOR** (x.Y.0): +- New features (backward-compatible) +- Deprecations +- Significant improvements + +**PATCH** (x.y.Z): +- Bug fixes +- Security patches +- Minor improvements + +**Initial Development**: +- Start with `0.1.0` +- Increment MINOR for features +- First stable release: `1.0.0` + +**Pre-release Identifiers**: +- `alpha` - Early testing +- `beta` - Feature complete, testing +- `rc` - Release candidate + +### Compliance Criteria + +**PASS Requirements**: +- Three numeric components (MAJOR.MINOR.PATCH) +- Each component is non-negative integer +- Components separated by dots +- Optional pre-release identifier (hyphen-separated) +- Optional build metadata (plus-separated) +- No leading zeros (except single 0) + +**FAIL Indicators**: +- Missing components (1.0) +- Too many components (1.0.0.0) +- Non-numeric components (1.x.0) +- Leading 'v' prefix +- Invalid separators +- Leading zeros (01.02.03) + +**Request**: $ARGUMENTS diff --git a/commands/documentation-validation/.scripts/changelog-validator.sh 
b/commands/documentation-validation/.scripts/changelog-validator.sh new file mode 100755 index 0000000..6df40c4 --- /dev/null +++ b/commands/documentation-validation/.scripts/changelog-validator.sh @@ -0,0 +1,254 @@
+#!/usr/bin/env bash
+
+# ============================================================================
+# CHANGELOG Validator
+# ============================================================================
+# Purpose: Validate CHANGELOG.md format compliance (Keep a Changelog)
+# Version: 1.0.0
+# Usage: ./changelog-validator.sh [path] [--strict] [--json]
+# Returns: 0=success, 1=error, JSON output to stdout if --json
+# ============================================================================
+
+set -euo pipefail
+
+# Default values
+STRICT_MODE=false
+JSON_OUTPUT=false
+REQUIRE_UNRELEASED=true
+
+# Valid change categories per Keep a Changelog
+VALID_CATEGORIES=("Added" "Changed" "Deprecated" "Removed" "Fixed" "Security")
+
+# Parse arguments: the path is optional, so only consume $1 as the path when
+# it is not a flag (otherwise invocations like "changelog-validator.sh --json"
+# would treat the flag as the path)
+CHANGELOG_PATH="CHANGELOG.md"
+if [[ $# -gt 0 && "$1" != -* ]]; then
+    CHANGELOG_PATH="$1"
+    shift
+fi
+
+while [[ $# -gt 0 ]]; do
+    case "$1" in
+        --strict)
+            STRICT_MODE=true
+            shift
+            ;;
+        --json)
+            JSON_OUTPUT=true
+            shift
+            ;;
+        --no-unreleased)
+            REQUIRE_UNRELEASED=false
+            shift
+            ;;
+        *)
+            shift
+            ;;
+    esac
+done
+
+# Initialize results
+declare -a issues=()
+declare -a version_entries=()
+declare -a categories_used=()
+has_title=false
+has_unreleased=false
+compliance_score=100
+
+# Check if file exists
+if [[ !
-f "$CHANGELOG_PATH" ]]; then + if $JSON_OUTPUT; then + cat < [--no-placeholders] [--recursive] [--json] +# Returns: 0=success, 1=warning, JSON output to stdout if --json +# ============================================================================ + +set -euo pipefail + +# Default values +NO_PLACEHOLDERS=true +RECURSIVE=true +JSON_OUTPUT=false +EXTENSIONS="md,txt,json,sh,py,js,ts,yaml,yml" + +# Parse arguments +TARGET_PATH="${1:-.}" +shift || true + +while [[ $# -gt 0 ]]; do + case "$1" in + --no-placeholders) + NO_PLACEHOLDERS=true + shift + ;; + --allow-placeholders) + NO_PLACEHOLDERS=false + shift + ;; + --recursive) + RECURSIVE=true + shift + ;; + --non-recursive) + RECURSIVE=false + shift + ;; + --json) + JSON_OUTPUT=true + shift + ;; + --extensions) + EXTENSIONS="$2" + shift 2 + ;; + *) + shift + ;; + esac +done + +# Initialize counters +files_checked=0 +example_count=0 +placeholder_count=0 +todo_count=0 +declare -a issues=() +declare -a files_with_issues=() + +# Build find command based on recursiveness +if $RECURSIVE; then + FIND_DEPTH="" +else + FIND_DEPTH="-maxdepth 1" +fi + +# Build extension pattern +ext_pattern="" +IFS=',' read -ra EXT_ARRAY <<< "$EXTENSIONS" +for ext in "${EXT_ARRAY[@]}"; do + if [[ -z "$ext_pattern" ]]; then + ext_pattern="-name '*.${ext}'" + else + ext_pattern="$ext_pattern -o -name '*.${ext}'" + fi +done + +# Find files to check +mapfile -t files < <(eval "find '$TARGET_PATH' $FIND_DEPTH -type f \( $ext_pattern \) 2>/dev/null" || true) + +# Placeholder patterns to detect +declare -a PLACEHOLDER_PATTERNS=( + 'TODO[:\)]' + 'FIXME[:\)]' + 'XXX[:\)]' + 'HACK[:\)]' + 'placeholder' + 'PLACEHOLDER' + 'your-.*-here' + '/dev/null || echo "0") + + # Ensure count is numeric and divide by 2 since each code block has opening and closing + if [[ "$count" =~ ^[0-9]+$ ]]; then + count=$((count / 2)) + else + count=0 + fi + + echo "$count" +} + +# Check each file +for file in "${files[@]}"; do + ((files_checked++)) || true + + # Count examples 
in markdown files + if [[ "$file" =~ \.md$ ]]; then + file_examples=$(count_code_examples "$file") + ((example_count += file_examples)) || true + fi + + file_issues=0 + + # Check for placeholder patterns + for pattern in "${PLACEHOLDER_PATTERNS[@]}"; do + while IFS=: read -r line_num line_content; do + # Skip if it's an acceptable pattern + if is_acceptable_pattern "$line_content"; then + continue + fi + + ((placeholder_count++)) || true + ((file_issues++)) || true + + issue="$file:$line_num: Placeholder pattern detected" + issues+=("$issue") + + # Track TODO/FIXME separately + if echo "$pattern" | grep -qE 'TODO|FIXME|XXX'; then + ((todo_count++)) || true + fi + done < <(grep -inE "$pattern" "$file" 2>/dev/null || true) + done + + # Check for generic dummy values (only in non-test files) + if [[ ! "$file" =~ test ]] && [[ ! "$file" =~ example ]] && [[ ! "$file" =~ spec ]]; then + for pattern in "${GENERIC_PATTERNS[@]}"; do + while IFS=: read -r line_num line_content; do + # Skip code comments explaining these terms + if echo "$line_content" | grep -qE '(#|//|/\*).*'"$pattern"; then + continue + fi + + # Skip if in an acceptable context + if is_acceptable_pattern "$line_content"; then + continue + fi + + ((file_issues++)) || true + issue="$file:$line_num: Generic placeholder value detected" + issues+=("$issue") + done < <(grep -inE "$pattern" "$file" 2>/dev/null || true) + done + fi + + # Track files with issues + if ((file_issues > 0)); then + files_with_issues+=("$file:$file_issues") + fi +done + +# Calculate quality score +quality_score=100 +((quality_score -= placeholder_count * 10)) || true +((quality_score -= todo_count * 5)) || true + +if ((example_count < 2)); then + ((quality_score -= 20)) || true +fi + +# Ensure score doesn't go negative +if ((quality_score < 0)); then + quality_score=0 +fi + +# Determine status +status="pass" +if ((quality_score < 60)); then + status="fail" +elif ((quality_score < 80)); then + status="warning" +fi + +# Output results +if 
$JSON_OUTPUT; then + # Build JSON output + cat < 0)) || ((todo_count > 0)); then + echo "Issues Detected:" + echo " • Placeholder patterns: $placeholder_count" + echo " • TODO/FIXME markers: $todo_count" + echo " • Files with issues: ${#files_with_issues[@]}" + echo "" + + if ((${#files_with_issues[@]} > 0)); then + echo "Files with issues:" + for file_info in "${files_with_issues[@]:0:5}"; do # Show first 5 + file_path="${file_info%:*}" + file_count="${file_info#*:}" + echo " • $file_path ($file_count issues)" + done + + if ((${#files_with_issues[@]} > 5)); then + echo " ... and $((${#files_with_issues[@]} - 5)) more files" + fi + fi + + echo "" + echo "Sample Issues:" + for issue in "${issues[@]:0:5}"; do # Show first 5 + echo " • $issue" + done + + if ((${#issues[@]} > 5)); then + echo " ... and $((${#issues[@]} - 5)) more issues" + fi + else + echo "✓ No placeholder patterns detected" + fi + + if ((example_count < 2)); then + echo "" + echo "⚠ Recommendation: Add more code examples (found: $example_count, recommended: 3+)" + fi + + echo "" + if [[ "$status" == "pass" ]]; then + echo "Overall: ✓ PASS" + elif [[ "$status" == "warning" ]]; then + echo "Overall: ⚠ WARNINGS" + else + echo "Overall: ✗ FAIL" + fi + echo "" +fi + +# Exit with appropriate code +if [[ "$status" == "fail" ]]; then + exit 1 +elif [[ "$status" == "warning" ]]; then + exit 0 # Warning is not a failure +else + exit 0 +fi diff --git a/commands/documentation-validation/.scripts/license-detector.py b/commands/documentation-validation/.scripts/license-detector.py new file mode 100755 index 0000000..fff6456 --- /dev/null +++ b/commands/documentation-validation/.scripts/license-detector.py @@ -0,0 +1,344 @@ +#!/usr/bin/env python3 + +# ============================================================================ +# License Detector +# ============================================================================ +# Purpose: Detect and validate LICENSE file content +# Version: 1.0.0 +# Usage: 
./license-detector.py [--expected LICENSE] [--json] +# Returns: 0=success, 1=error, JSON output to stdout +# ============================================================================ + +import sys +import os +import re +import json +import argparse +from pathlib import Path +from typing import Dict, Optional, Tuple + +# OSI-approved license patterns +LICENSE_PATTERNS = { + "MIT": { + "pattern": r"Permission is hereby granted, free of charge", + "confidence": 95, + "osi_approved": True, + "full_name": "MIT License" + }, + "Apache-2.0": { + "pattern": r"Licensed under the Apache License, Version 2\.0", + "confidence": 95, + "osi_approved": True, + "full_name": "Apache License 2.0" + }, + "GPL-3.0": { + "pattern": r"GNU GENERAL PUBLIC LICENSE.*Version 3", + "confidence": 95, + "osi_approved": True, + "full_name": "GNU General Public License v3.0" + }, + "GPL-2.0": { + "pattern": r"GNU GENERAL PUBLIC LICENSE.*Version 2", + "confidence": 95, + "osi_approved": True, + "full_name": "GNU General Public License v2.0" + }, + "BSD-3-Clause": { + "pattern": r"Redistribution and use in source and binary forms.*3\.", + "confidence": 85, + "osi_approved": True, + "full_name": "BSD 3-Clause License" + }, + "BSD-2-Clause": { + "pattern": r"Redistribution and use in source and binary forms", + "confidence": 80, + "osi_approved": True, + "full_name": "BSD 2-Clause License" + }, + "ISC": { + "pattern": r"Permission to use, copy, modify, and/or distribute", + "confidence": 90, + "osi_approved": True, + "full_name": "ISC License" + }, + "MPL-2.0": { + "pattern": r"Mozilla Public License Version 2\.0", + "confidence": 95, + "osi_approved": True, + "full_name": "Mozilla Public License 2.0" + } +} + +# License name variations/aliases +LICENSE_ALIASES = { + "MIT License": "MIT", + "MIT license": "MIT", + "Apache License 2.0": "Apache-2.0", + "Apache 2.0": "Apache-2.0", + "Apache-2": "Apache-2.0", + "GNU GPL v3": "GPL-3.0", + "GPLv3": "GPL-3.0", + "GNU GPL v2": "GPL-2.0", + "GPLv2": 
"GPL-2.0", + "BSD 3-Clause": "BSD-3-Clause", + "BSD 2-Clause": "BSD-2-Clause", +} + +def find_license_file(path: str) -> Optional[str]: + """Find LICENSE file in path.""" + path_obj = Path(path) + + # Check if path is directly to LICENSE + if path_obj.is_file() and 'license' in path_obj.name.lower(): + return str(path_obj) + + # Search for LICENSE in directory + if path_obj.is_dir(): + for filename in ['LICENSE', 'LICENSE.txt', 'LICENSE.md', 'COPYING', 'COPYING.txt', 'LICENCE']: + license_path = path_obj / filename + if license_path.exists(): + return str(license_path) + + return None + +def read_plugin_manifest(path: str) -> Optional[str]: + """Read license from plugin.json.""" + path_obj = Path(path) + + if path_obj.is_file(): + path_obj = path_obj.parent + + manifest_path = path_obj / '.claude-plugin' / 'plugin.json' + + if not manifest_path.exists(): + return None + + try: + with open(manifest_path, 'r', encoding='utf-8') as f: + manifest = json.load(f) + return manifest.get('license') + except Exception: + return None + +def detect_license(content: str) -> Tuple[Optional[str], int, bool]: + """ + Detect license type from content. 
+ Returns: (license_type, confidence, is_complete) + """ + content_normalized = ' '.join(content.split()) # Normalize whitespace + + best_match = None + best_confidence = 0 + + # Check for license text patterns + for license_id, license_info in LICENSE_PATTERNS.items(): + pattern = license_info["pattern"] + if re.search(pattern, content, re.IGNORECASE | re.DOTALL): + confidence = license_info["confidence"] + if confidence > best_confidence: + best_match = license_id + best_confidence = confidence + + # Check if it's just a name without full text + is_complete = True + if best_match and len(content.strip()) < 200: # Very short content + is_complete = False + + # If no pattern match, check for just license names + if not best_match: + for alias, license_id in LICENSE_ALIASES.items(): + if re.search(r'\b' + re.escape(alias) + r'\b', content, re.IGNORECASE): + best_match = license_id + best_confidence = 50 # Lower confidence for name-only + is_complete = False + break + + return best_match, best_confidence, is_complete + +def normalize_license_name(license_name: str) -> str: + """Normalize license name for comparison.""" + if not license_name: + return "" + + # Check if it's already a standard ID + if license_name in LICENSE_PATTERNS: + return license_name + + # Check aliases + if license_name in LICENSE_ALIASES: + return LICENSE_ALIASES[license_name] + + # Normalize common variations + normalized = license_name.strip() + normalized = re.sub(r'\s+', ' ', normalized) + + # Try fuzzy matching + for alias, license_id in LICENSE_ALIASES.items(): + if normalized.lower() == alias.lower(): + return license_id + + return license_name + +def licenses_match(detected: str, expected: str) -> Tuple[bool, str]: + """ + Check if detected license matches expected. 
+ Returns: (matches, match_type) + """ + detected_norm = normalize_license_name(detected) + expected_norm = normalize_license_name(expected) + + if detected_norm == expected_norm: + return True, "exact" + + # Check if they're aliases of the same license + if detected_norm in LICENSE_PATTERNS and expected_norm in LICENSE_PATTERNS: + if LICENSE_PATTERNS[detected_norm]["full_name"] == LICENSE_PATTERNS[expected_norm]["full_name"]: + return True, "alias" + + # Fuzzy match + if detected_norm.lower().replace('-', '').replace(' ', '') == expected_norm.lower().replace('-', '').replace(' ', ''): + return True, "fuzzy" + + return False, "mismatch" + +def main(): + parser = argparse.ArgumentParser(description='Detect and validate LICENSE file') + parser.add_argument('path', help='Path to LICENSE file or directory containing it') + parser.add_argument('--expected', help='Expected license type (from plugin.json)', default=None) + parser.add_argument('--strict', action='store_true', help='Strict validation (requires full text)') + parser.add_argument('--json', action='store_true', help='Output JSON format') + + args = parser.parse_args() + + # Find LICENSE file + license_path = find_license_file(args.path) + + if not license_path: + result = { + "error": "LICENSE file not found", + "path": args.path, + "present": False, + "score": 0, + "status": "fail", + "issues": ["LICENSE file not found in specified path"] + } + if args.json: + print(json.dumps(result, indent=2)) + else: + print("❌ CRITICAL: LICENSE file not found") + print(f"Path: {args.path}") + print("LICENSE file is required for plugin submission.") + return 1 + + # Read LICENSE content + try: + with open(license_path, 'r', encoding='utf-8') as f: + content = f.read() + except Exception as e: + result = { + "error": f"Failed to read LICENSE: {str(e)}", + "path": license_path, + "present": True, + "score": 0, + "status": "fail" + } + if args.json: + print(json.dumps(result, indent=2)) + else: + print(f"❌ ERROR: Failed to 
read LICENSE: {e}") + return 1 + + # Detect license + detected_license, confidence, is_complete = detect_license(content) + + # Read expected license from plugin.json if not provided + if not args.expected: + args.expected = read_plugin_manifest(args.path) + + # Check consistency + matches_manifest = True + match_type = None + if args.expected: + matches_manifest, match_type = licenses_match(detected_license or "", args.expected) + + # Determine if OSI approved + is_osi_approved = False + if detected_license and detected_license in LICENSE_PATTERNS: + is_osi_approved = LICENSE_PATTERNS[detected_license]["osi_approved"] + + # Build issues list + issues = [] + score = 100 + + if not detected_license: + issues.append("Unable to identify license type") + score -= 50 + elif not is_complete: + issues.append("LICENSE contains only license name, not full text") + score -= 20 if args.strict else 10 + + if not is_osi_approved and detected_license: + issues.append("License is not OSI-approved") + score -= 30 + + if args.expected and not matches_manifest: + issues.append(f"LICENSE ({detected_license or 'unknown'}) does not match plugin.json ({args.expected})") + score -= 20 + + score = max(0, score) + + # Determine status + if score >= 80: + status = "pass" + elif score >= 60: + status = "warning" + else: + status = "fail" + + # Build result + result = { + "present": True, + "path": license_path, + "detected_license": detected_license, + "confidence": confidence, + "is_complete": is_complete, + "is_osi_approved": is_osi_approved, + "manifest_license": args.expected, + "matches_manifest": matches_manifest, + "match_type": match_type, + "score": score, + "status": status, + "issues": issues + } + + # Output + if args.json: + print(json.dumps(result, indent=2)) + else: + # Human-readable output + print(f"\nLICENSE Validation Results") + print("=" * 50) + print(f"File: {license_path}") + print(f"Detected: {detected_license or 'Unknown'} (confidence: {confidence}%)") + 
print(f"Score: {score}/100") + print(f"\nOSI Approved: {'✓ Yes' if is_osi_approved else '✗ No'}") + print(f"Complete Text: {'✓ Yes' if is_complete else '⚠ No (name only)'}") + + if args.expected: + print(f"\nConsistency Check:") + print(f" plugin.json: {args.expected}") + print(f" LICENSE file: {detected_license or 'Unknown'}") + print(f" Match: {'✓ Yes' if matches_manifest else '✗ No'}") + + if issues: + print(f"\nIssues Found: {len(issues)}") + for issue in issues: + print(f" • {issue}") + + print(f"\nOverall: {'✓ PASS' if status == 'pass' else '⚠ WARNING' if status == 'warning' else '✗ FAIL'}") + print() + + return 0 if status != "fail" else 1 + +if __name__ == "__main__": + sys.exit(main()) diff --git a/commands/documentation-validation/.scripts/readme-checker.py b/commands/documentation-validation/.scripts/readme-checker.py new file mode 100755 index 0000000..2436b49 --- /dev/null +++ b/commands/documentation-validation/.scripts/readme-checker.py @@ -0,0 +1,311 @@ +#!/usr/bin/env python3 + +# ============================================================================ +# README Checker +# ============================================================================ +# Purpose: Validate README.md completeness and quality +# Version: 1.0.0 +# Usage: ./readme-checker.py [options] +# Returns: 0=success, 1=error, JSON output to stdout +# ============================================================================ + +import sys +import os +import re +import json +import argparse +from pathlib import Path +from typing import Dict, List, Tuple + +# Required sections (case-insensitive patterns) +REQUIRED_SECTIONS = { + "overview": r"(?i)^#{1,3}\s*(overview|description|about)", + "installation": r"(?i)^#{1,3}\s*installation", + "usage": r"(?i)^#{1,3}\s*usage", + "examples": r"(?i)^#{1,3}\s*(examples?|demonstrations?)", + "license": r"(?i)^#{1,3}\s*licen[cs]e" +} + +# Optional but recommended sections +RECOMMENDED_SECTIONS = { + "configuration": 
r"(?i)^#{1,3}\s*(configuration|setup|config)",
+    "troubleshooting": r"(?i)^#{1,3}\s*(troubleshooting|faq|common.?issues)",
+    "contributing": r"(?i)^#{1,3}\s*contribut",
+    "changelog": r"(?i)^#{1,3}\s*(changelog|version.?history|releases)"
+}
+
+def find_readme(path: str) -> str:
+    """Find README file in path."""
+    path_obj = Path(path)
+
+    # Check if path is directly to README
+    if path_obj.is_file() and path_obj.name.lower().startswith('readme'):
+        return str(path_obj)
+
+    # Search for README in directory
+    if path_obj.is_dir():
+        for filename in ['README.md', 'readme.md', 'README.txt', 'README']:
+            readme_path = path_obj / filename
+            if readme_path.exists():
+                return str(readme_path)
+
+    return None
+
+def analyze_sections(content: str) -> Tuple[List[str], List[str]]:
+    """Analyze README sections."""
+    lines = content.split('\n')
+    found_sections = []
+    missing_sections = []
+
+    # Check required sections
+    for section_name, pattern in REQUIRED_SECTIONS.items():
+        found = False
+        for line in lines:
+            if re.match(pattern, line.strip()):
+                found = True
+                found_sections.append(section_name)
+                break
+
+        if not found:
+            missing_sections.append(section_name)
+
+    return found_sections, missing_sections
+
+def count_examples(content: str) -> int:
+    """Count code examples in README."""
+    # Count code blocks (```...```)
+    code_blocks = re.findall(r'```[\s\S]*?```', content)
+    return len(code_blocks)
+
+def check_quality_issues(content: str) -> List[str]:
+    """Check for quality issues."""
+    issues = []
+
+    # Check for excessive placeholder text
+    placeholder_patterns = [
+        r'TODO',
+        r'FIXME',
+        r'XXX',
+        r'placeholder',
+        r'your-.*-here',
+        r'<your[^>]*>'  # angle-bracket placeholders, e.g. <your-name-here>
+    ]
+
+    for pattern in placeholder_patterns:
+        matches = re.findall(pattern, content)
+        if len(matches) > 5:  # More than 5 is excessive
+            issues.append(f"Excessive placeholder patterns: {len(matches)} instances of '{pattern}'")
+
+    # Check for very short sections
+    lines = content.split('\n')
+    current_section = None
+    section_lengths = {}
+
+    for line in lines:
+        if re.match(r'^#{1,3}\s+', line):
+            current_section = line.strip()
+
section_lengths[current_section] = 0 + elif current_section and line.strip(): + section_lengths[current_section] += len(line) + + for section, length in section_lengths.items(): + if length < 100 and any(keyword in section.lower() for keyword in ['installation', 'usage', 'example']): + issues.append(f"Section '{section}' is very short ({length} chars), consider expanding") + + return issues + +def calculate_score(found_sections: List[str], missing_sections: List[str], + length: int, example_count: int, quality_issues: List[str]) -> int: + """Calculate README quality score (0-100).""" + score = 100 + + # Deduct for missing required sections (15 points each) + score -= len(missing_sections) * 15 + + # Deduct if too short + if length < 200: + score -= 30 # Critical + elif length < 500: + score -= 10 # Warning + + # Deduct if no examples + if example_count == 0: + score -= 15 + elif example_count < 2: + score -= 5 + + # Deduct for quality issues (5 points each, max 20) + score -= min(len(quality_issues) * 5, 20) + + return max(0, score) + +def generate_recommendations(found_sections: List[str], missing_sections: List[str], + length: int, example_count: int, quality_issues: List[str]) -> List[Dict]: + """Generate actionable recommendations.""" + recommendations = [] + + # Missing sections + for section in missing_sections: + impact = 15 + recommendations.append({ + "priority": "critical" if section in ["overview", "installation", "usage"] else "important", + "action": f"Add {section.title()} section", + "impact": impact, + "effort": "medium" if section == "examples" else "low", + "description": f"Include a comprehensive {section} section with clear explanations" + }) + + # Length issues + if length < 500: + gap = 500 - length + recommendations.append({ + "priority": "important" if length >= 200 else "critical", + "action": f"Expand README by {gap} characters", + "impact": 10 if length >= 200 else 30, + "effort": "medium", + "description": "Add more detail to existing 
sections or include additional sections" + }) + + # Example issues + if example_count < 3: + needed = 3 - example_count + recommendations.append({ + "priority": "important", + "action": f"Add {needed} more code example{'s' if needed > 1 else ''}", + "impact": 15 if example_count == 0 else 5, + "effort": "medium", + "description": "Include concrete, copy-pasteable usage examples" + }) + + # Quality issues + for issue in quality_issues: + recommendations.append({ + "priority": "recommended", + "action": "Address quality issue", + "impact": 5, + "effort": "low", + "description": issue + }) + + return sorted(recommendations, key=lambda x: ( + {"critical": 0, "important": 1, "recommended": 2}[x["priority"]], + -x["impact"] + )) + +def main(): + parser = argparse.ArgumentParser(description='Validate README.md quality') + parser.add_argument('path', help='Path to README.md or directory containing it') + parser.add_argument('--sections', help='Comma-separated required sections', default=None) + parser.add_argument('--min-length', type=int, default=500, help='Minimum character count') + parser.add_argument('--strict', action='store_true', help='Enable strict validation') + parser.add_argument('--json', action='store_true', help='Output JSON format') + + args = parser.parse_args() + + # Find README file + readme_path = find_readme(args.path) + + if not readme_path: + result = { + "error": "README.md not found", + "path": args.path, + "present": False, + "score": 0, + "issues": ["README.md file not found in specified path"] + } + print(json.dumps(result, indent=2)) + return 1 + + # Read README content + try: + with open(readme_path, 'r', encoding='utf-8') as f: + content = f.read() + except Exception as e: + result = { + "error": f"Failed to read README: {str(e)}", + "path": readme_path, + "present": True, + "score": 0 + } + print(json.dumps(result, indent=2)) + return 1 + + # Analyze README + length = len(content) + found_sections, missing_sections = 
analyze_sections(content) + example_count = count_examples(content) + quality_issues = check_quality_issues(content) + + # Calculate score + score = calculate_score(found_sections, missing_sections, length, example_count, quality_issues) + + # Generate recommendations + recommendations = generate_recommendations(found_sections, missing_sections, length, example_count, quality_issues) + + # Build result + result = { + "present": True, + "path": readme_path, + "length": length, + "min_length": args.min_length, + "meets_min_length": length >= args.min_length, + "sections": { + "found": found_sections, + "missing": missing_sections, + "required_count": len(REQUIRED_SECTIONS), + "found_count": len(found_sections) + }, + "examples": { + "count": example_count, + "sufficient": example_count >= 2 + }, + "quality_issues": quality_issues, + "score": score, + "rating": ( + "excellent" if score >= 90 else + "good" if score >= 75 else + "fair" if score >= 60 else + "needs_improvement" if score >= 40 else + "poor" + ), + "recommendations": recommendations[:10], # Top 10 + "status": "pass" if score >= 60 and not missing_sections else "warning" if score >= 40 else "fail" + } + + # Output + if args.json: + print(json.dumps(result, indent=2)) + else: + # Human-readable output + print(f"\nREADME Validation Results") + print("=" * 50) + print(f"File: {readme_path}") + print(f"Length: {length} characters (min: {args.min_length})") + print(f"Score: {score}/100 ({result['rating'].title()})") + print(f"\nSections Found: {len(found_sections)}/{len(REQUIRED_SECTIONS)}") + for section in found_sections: + print(f" ✓ {section.title()}") + + if missing_sections: + print(f"\nMissing Sections: {len(missing_sections)}") + for section in missing_sections: + print(f" ✗ {section.title()}") + + print(f"\nCode Examples: {example_count}") + + if quality_issues: + print(f"\nQuality Issues: {len(quality_issues)}") + for issue in quality_issues[:5]: # Top 5 + print(f" • {issue}") + + if recommendations: + 
print(f"\nTop Recommendations:") + for i, rec in enumerate(recommendations[:5], 1): + print(f" {i}. [{rec['priority'].upper()}] {rec['action']} (+{rec['impact']} pts)") + + print() + + return 0 if score >= 60 else 1 + +if __name__ == "__main__": + sys.exit(main()) diff --git a/commands/documentation-validation/check-license.md b/commands/documentation-validation/check-license.md new file mode 100644 index 0000000..0b53960 --- /dev/null +++ b/commands/documentation-validation/check-license.md @@ -0,0 +1,308 @@ +## Operation: Check LICENSE File + +Validate LICENSE file presence, format, and consistency with plugin metadata. + +### Parameters from $ARGUMENTS + +- **path**: Target plugin/marketplace path (required) +- **expected**: Expected license type (optional, reads from plugin.json if not provided) +- **strict**: Enable strict validation mode (optional, default: false) +- **check-consistency**: Verify consistency with plugin.json (optional, default: true) + +### LICENSE Requirements + +**File Presence**: +- LICENSE or LICENSE.txt in plugin root +- Also accept: LICENSE.md, COPYING, COPYING.txt + +**OSI-Approved Licenses** (recommended): +- MIT License +- Apache License 2.0 +- GNU General Public License (GPL) v2/v3 +- BSD 2-Clause or 3-Clause License +- Mozilla Public License 2.0 +- ISC License +- Creative Commons (for documentation) + +**Validation Checks**: +1. **File exists**: LICENSE file present in root +2. **Valid content**: Contains recognized license text +3. **Complete**: Full license text, not just license name +4. **Consistency**: Matches license field in plugin.json +5. **OSI-approved**: Recognized open-source license + +### Workflow + +1. **Locate LICENSE File** + ``` + Check for files in plugin root (case-insensitive): + - LICENSE + - LICENSE.txt + - LICENSE.md + - COPYING + - COPYING.txt + - LICENCE (UK spelling) + + If multiple found, prefer LICENSE over others + ``` + +2. 
**Read Plugin Metadata** + ``` + Read plugin.json + Extract license field value + Store expected license type for comparison + ``` + +3. **Execute License Detector** + ```bash + Execute .scripts/license-detector.py with parameters: + - License file path + - Expected license type (from plugin.json) + - Strict mode flag + + Script returns: + - detected_license: Identified license type + - confidence: 0-100 (match confidence) + - is_osi_approved: Boolean + - is_complete: Boolean (full text vs just name) + - matches_manifest: Boolean + - issues: Array of problems + ``` + +4. **Validate License Content** + ``` + Check for license text patterns: + - MIT: "Permission is hereby granted, free of charge..." + - Apache 2.0: "Licensed under the Apache License, Version 2.0" + - GPL-3.0: "GNU GENERAL PUBLIC LICENSE Version 3" + - BSD-2-Clause: "Redistribution and use in source and binary forms" + + Detect incomplete licenses: + - Just "MIT" or "MIT License" (missing full text) + - Just "Apache 2.0" (missing full text) + - Links to license without including text + ``` + +5. **Check Consistency** + ``` + Compare detected license with plugin.json: + - Exact match: ✅ PASS + - Close match (e.g., "MIT" vs "MIT License"): ⚠️ WARNING + - Mismatch: ❌ ERROR + - Not specified in plugin.json: ⚠️ WARNING + + Normalize license names for comparison: + - "MIT License" == "MIT" + - "Apache-2.0" == "Apache License 2.0" + - "GPL-3.0" == "GNU GPL v3" + ``` + +6. **Verify OSI Approval** + ``` + Check against OSI-approved license list: + - MIT: ✅ Approved + - Apache-2.0: ✅ Approved + - GPL-2.0, GPL-3.0: ✅ Approved + - BSD-2-Clause, BSD-3-Clause: ✅ Approved + - Proprietary: ❌ Not approved + - Custom/Unknown: ⚠️ Review required + ``` + +7. 
**Format Output** + ``` + Display: + - ✅/❌ File presence + - Detected license type + - OSI approval status + - Consistency with plugin.json + - Completeness (full text vs name only) + - Issues and recommendations + ``` + +### Examples + +```bash +# Check LICENSE with defaults (reads expected from plugin.json) +/documentation-validation license path:. + +# Check with explicit expected license +/documentation-validation license path:. expected:MIT + +# Strict validation (requires full license text) +/documentation-validation license path:. strict:true + +# Skip consistency check (only validate file) +/documentation-validation license path:. check-consistency:false + +# Check specific plugin +/documentation-validation license path:/path/to/plugin expected:Apache-2.0 +``` + +### Error Handling + +**Error: LICENSE file not found** +``` +❌ CRITICAL: LICENSE file not found in + +Remediation: +1. Create LICENSE file in plugin root directory +2. Include full license text (not just the name) +3. Use an OSI-approved open-source license (MIT recommended) +4. Ensure license field in plugin.json matches LICENSE file + +Recommended licenses for plugins: +- MIT: Simple, permissive (most common) +- Apache 2.0: Permissive with patent grant +- GPL-3.0: Copyleft (requires derivatives to use same license) +- BSD-3-Clause: Permissive, similar to MIT + +Full license texts available at: https://choosealicense.com/ + +This is a BLOCKING issue - plugin cannot be submitted without a LICENSE. +``` + +**Error: Incomplete license text** +``` +⚠️ WARNING: LICENSE file contains only license name, not full text + +Current content: "MIT License" +Required: Full MIT License text + +The LICENSE file should contain the complete license text, not just the name. 
+ +For MIT License, include: +MIT License + +Copyright (c) [year] [fullname] + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction... +[full license text] + +Get full text: https://opensource.org/licenses/MIT +``` + +**Error: License mismatch with plugin.json** +``` +❌ ERROR: LICENSE file does not match plugin.json declaration + +plugin.json declares: "Apache-2.0" +LICENSE file contains: "MIT License" + +Remediation: +1. Update plugin.json to declare "MIT" license, OR +2. Replace LICENSE file with Apache 2.0 license text + +Consistency is required - both files must specify the same license. +``` + +**Error: Non-OSI-approved license** +``` +❌ ERROR: License is not OSI-approved + +Detected license: "Proprietary" or "Custom License" + +OpenPlugins marketplace requires OSI-approved open-source licenses. + +Recommended licenses: +- MIT License (most permissive) +- Apache License 2.0 +- GNU GPL v3 +- BSD 3-Clause + +Choose a license: https://choosealicense.com/ +OSI-approved list: https://opensource.org/licenses + +This is a BLOCKING issue - plugin cannot be submitted with proprietary license. +``` + +**Error: Unrecognized license** +``` +⚠️ WARNING: Unable to identify license type + +The LICENSE file content does not match known license patterns. + +Possible issues: +- Custom or modified license (not allowed) +- Corrupted or incomplete license text +- Non-standard format + +Remediation: +1. Use standard, unmodified license text from official source +2. Choose from OSI-approved licenses +3. Do not modify standard license text (except copyright holder) +4. Get standard text from https://choosealicense.com/ + +If using a valid OSI license, ensure text matches standard format exactly. 
+``` + +### Output Format + +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +LICENSE VALIDATION RESULTS +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +File: ✅ LICENSE found + +License Type: +Confidence: <0-100>% ✅ + +OSI Approved: ✅ Yes +Complete Text: ✅ Yes (full license included) + +Consistency Check: +plugin.json declares: "" +LICENSE file contains: "" +Match: ✅ Consistent + +Validation: ✅ PASS + +Recommendations: +- License is valid and properly formatted +- Meets OpenPlugins requirements +- Ready for submission + +Overall: +``` + +### Integration + +This operation is invoked by: +- `/documentation-validation license path:.` (direct) +- `/documentation-validation full-docs path:.` (as part of complete validation) +- `/validation-orchestrator comprehensive path:.` (via orchestrator) + +Results contribute to documentation quality score: +- Present, valid, consistent: +5 points +- Present but issues: 0 points (with warnings) +- Missing: BLOCKING issue (-20 points) + +### Common License Patterns + +**MIT License Detection**: +``` +Pattern: "Permission is hereby granted, free of charge" +Confidence: 95%+ +``` + +**Apache 2.0 Detection**: +``` +Pattern: "Licensed under the Apache License, Version 2.0" +Confidence: 95%+ +``` + +**GPL-3.0 Detection**: +``` +Pattern: "GNU GENERAL PUBLIC LICENSE" + "Version 3" +Confidence: 95%+ +``` + +**BSD Detection**: +``` +Pattern: "Redistribution and use in source and binary forms" +Confidence: 90%+ +``` + +**Request**: $ARGUMENTS diff --git a/commands/documentation-validation/check-readme.md b/commands/documentation-validation/check-readme.md new file mode 100644 index 0000000..990d761 --- /dev/null +++ b/commands/documentation-validation/check-readme.md @@ -0,0 +1,193 @@ +## Operation: Check README Completeness + +Validate README.md completeness, structure, and quality against OpenPlugins standards. 
+ +### Parameters from $ARGUMENTS + +- **path**: Target plugin/marketplace path (required) +- **sections**: Comma-separated required sections (optional, defaults to standard set) +- **min-length**: Minimum character count (optional, default: 500) +- **strict**: Enable strict validation mode (optional, default: false) + +### README Requirements + +**Required Sections** (case-insensitive matching): +1. **Overview/Description**: Plugin purpose and functionality +2. **Installation**: How to install and configure +3. **Usage**: How to use the plugin with examples +4. **Examples**: At least 2-3 concrete usage examples +5. **License**: License information or reference + +**Quality Criteria**: +- Minimum 500 characters (configurable) +- No excessive placeholder text +- Proper markdown formatting +- Working links (if present) +- Code blocks properly formatted + +### Workflow + +1. **Locate README File** + ``` + Check for README.md in plugin root + If not found, check for README.txt or readme.md + If still not found, report critical error + ``` + +2. **Execute README Checker Script** + ```bash + Execute .scripts/readme-checker.py with parameters: + - File path to README.md + - Required sections list + - Minimum length threshold + - Strict mode flag + + Script returns JSON with: + - sections_found: Array of detected sections + - sections_missing: Array of missing sections + - length: Character count + - quality_score: 0-100 + - issues: Array of specific problems + ``` + +3. **Analyze Results** + ``` + CRITICAL (blocking): + - README.md file missing + - Length < 200 characters + - Missing 3+ required sections + + WARNING (should fix): + - Length < 500 characters + - Missing 1-2 required sections + - Missing examples section + + RECOMMENDATION (nice to have): + - Add troubleshooting section + - Expand examples + - Add badges or visual elements + ``` + +4. **Calculate Section Score** + ``` + score = 100 + score -= (missing_required_sections × 15) + score -= (length < 500) ? 
10 : 0 + score -= (no_examples) ? 15 : 0 + score = max(0, score) + ``` + +5. **Format Output** + ``` + Display: + - ✅/❌ File presence + - ✅/⚠️/❌ Each required section + - Length statistics + - Quality score + - Specific improvement recommendations + ``` + +### Examples + +```bash +# Check README with defaults +/documentation-validation readme path:. + +# Check with custom sections +/documentation-validation readme path:./my-plugin sections:"overview,installation,usage,examples,contributing,license" + +# Strict validation with higher standards +/documentation-validation readme path:. min-length:1000 strict:true + +# Check specific plugin +/documentation-validation readme path:/path/to/plugin sections:"overview,usage,license" +``` + +### Error Handling + +**Error: README.md not found** +``` +❌ CRITICAL: README.md file not found in + +Remediation: +1. Create README.md in plugin root directory +2. Include required sections: Overview, Installation, Usage, Examples, License +3. Ensure minimum 500 characters of meaningful content +4. See https://github.com/dhofheinz/open-plugins/blob/main/README.md for example + +This is a BLOCKING issue - plugin cannot be submitted without README. +``` + +**Error: README too short** +``` +⚠️ WARNING: README.md is only characters (minimum: 500) + +Current length: characters +Required: 500 characters minimum +Gap: <500-X> characters + +Remediation: +- Expand installation instructions with examples +- Add 2-3 usage examples with code blocks +- Include configuration options +- Add troubleshooting section +``` + +**Error: Missing required sections** +``` +❌ ERROR: Missing required sections + +Missing sections: +- Installation: How to install the plugin +- Examples: At least 2 concrete usage examples +- License: License information or reference to LICENSE file + +Remediation: +Add each missing section with meaningful content. +See CONTRIBUTING.md for section requirements. 
+``` + +### Output Format + +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +README VALIDATION RESULTS +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +File: ✅ README.md found + +Required Sections: +✅ Overview/Description +✅ Installation +✅ Usage +⚠️ Examples (found 1, recommended: 3+) +✅ License + +Length: characters (minimum: 500) ✅ + +Quality Score: <0-100>/100 + +Issues Found: + +Critical (blocking): +Warnings (should fix): +Recommendations: + +Top Recommendations: +1. Add 2 more usage examples with code blocks [+15 pts] +2. Expand installation section with configuration options [+5 pts] +3. Include troubleshooting section [+5 pts] + +Overall: +``` + +### Integration + +This operation is invoked by: +- `/documentation-validation readme path:.` (direct) +- `/documentation-validation full-docs path:.` (as part of complete validation) +- `/validation-orchestrator comprehensive path:.` (via orchestrator) + +Results feed into quality-analysis scoring system. + +**Request**: $ARGUMENTS diff --git a/commands/documentation-validation/full-documentation.md b/commands/documentation-validation/full-documentation.md new file mode 100644 index 0000000..a1e2fc8 --- /dev/null +++ b/commands/documentation-validation/full-documentation.md @@ -0,0 +1,533 @@ +## Operation: Full Documentation Validation + +Execute comprehensive documentation validation workflow covering all documentation aspects. + +### Parameters from $ARGUMENTS + +- **path**: Target plugin/marketplace path (required) +- **detailed**: Include detailed sub-reports (optional, default: true) +- **fix-suggestions**: Generate actionable improvement suggestions (optional, default: true) +- **format**: Output format (text|json|markdown) (optional, default: text) + +### Full Documentation Workflow + +This operation orchestrates all documentation validation sub-operations to provide +a complete documentation quality assessment. + +### Workflow + +1. 
**Initialize Validation Context** + ``` + Create validation context: + - Target path + - Timestamp + - Validation mode: comprehensive + - Results storage structure + + Prepare for aggregating results from: + - README validation + - CHANGELOG validation + - LICENSE validation + - Examples validation + ``` + +2. **Execute README Validation** + ``` + Invoke: check-readme.md operation + Parameters: + - path: + - sections: default required sections + - min-length: 500 + + Capture results: + - README present: Boolean + - Sections found: Array + - Sections missing: Array + - Length: Integer + - Score: 0-100 + - Issues: Array + ``` + +3. **Execute CHANGELOG Validation** + ``` + Invoke: validate-changelog.md operation + Parameters: + - file: CHANGELOG.md + - format: keepachangelog + - require-unreleased: true + + Capture results: + - CHANGELOG present: Boolean + - Format compliance: 0-100% + - Version entries: Array + - Issues: Array + - Score: 0-100 + ``` + +4. **Execute LICENSE Validation** + ``` + Invoke: check-license.md operation + Parameters: + - path: + - check-consistency: true + + Capture results: + - LICENSE present: Boolean + - License type: String + - OSI approved: Boolean + - Consistent with manifest: Boolean + - Issues: Array + - Score: 0-100 + ``` + +5. **Execute Examples Validation** + ``` + Invoke: validate-examples.md operation + Parameters: + - path: + - no-placeholders: true + - recursive: true + + Capture results: + - Files checked: Integer + - Examples found: Integer + - Placeholders detected: Integer + - Quality score: 0-100 + - Issues: Array + ``` + +6. 
**Aggregate Results** + ``` + Calculate overall documentation score: + + weights = { + readme: 40%, # Most important + examples: 30%, # Critical for usability + license: 20%, # Required for submission + changelog: 10% # Recommended but not critical + } + + overall_score = ( + readme_score × 0.40 + + examples_score × 0.30 + + license_score × 0.20 + + changelog_score × 0.10 + ) + + Round to integer: 0-100 + ``` + +7. **Categorize Issues by Priority** + ``` + CRITICAL (P0 - Blocking): + - README.md missing + - LICENSE file missing + - README < 200 characters + - Non-OSI-approved license + - License mismatch with manifest + + IMPORTANT (P1 - Should Fix): + - README missing 2+ required sections + - README < 500 characters + - No examples in README + - 5+ placeholder patterns + - CHANGELOG has format errors + + RECOMMENDED (P2 - Nice to Have): + - CHANGELOG missing + - README missing optional sections + - < 3 examples + - Minor placeholder patterns + ``` + +8. **Generate Improvement Roadmap** + ``` + Create prioritized action plan: + + For each issue: + - Identify impact on overall score + - Estimate effort (Low/Medium/High) + - Calculate score improvement + - Generate specific remediation steps + + Sort by: Priority → Score Impact → Effort + + Example: + 1. [P0] Add LICENSE file → +20 pts → 15 min + 2. [P1] Expand README to 500+ chars → +10 pts → 30 min + 3. [P1] Add 2 usage examples → +15 pts → 20 min + 4. [P2] Create CHANGELOG.md → +10 pts → 15 min + ``` + +9. 
**Determine Publication Readiness** + ``` + Publication readiness determination: + + READY (90-100): + - All critical requirements met + - High-quality documentation + - No blocking issues + - Immediate submission recommended + + READY WITH MINOR IMPROVEMENTS (75-89): + - Critical requirements met + - Some recommended improvements + - Can submit, but improvements increase quality + - Suggested: Address P1 issues before submission + + NEEDS WORK (60-74): + - Critical requirements met + - Several important issues + - Should address P1 issues before submission + - Documentation needs expansion + + NOT READY (<60): + - Critical issues present + - Insufficient documentation quality + - Must address P0 and P1 issues + - Submission will be rejected + ``` + +10. **Format Output** + ``` + Based on format parameter: + - text: Human-readable report + - json: Structured JSON for automation + - markdown: Formatted markdown report + ``` + +### Examples + +```bash +# Full documentation validation with defaults +/documentation-validation full-docs path:. + +# With detailed sub-reports +/documentation-validation full-docs path:. detailed:true + +# JSON output for automation +/documentation-validation full-docs path:. format:json + +# Without fix suggestions (faster) +/documentation-validation full-docs path:. fix-suggestions:false + +# Validate specific plugin +/documentation-validation full-docs path:/path/to/plugin +``` + +### Error Handling + +**Error: Multiple critical issues** +``` +❌ CRITICAL: Multiple blocking documentation issues + +Documentation Score: /100 ⚠️ + +BLOCKING ISSUES (): +1. README.md not found + → Create README.md with required sections + → Minimum 500 characters + → Include Overview, Installation, Usage, Examples, License + +2. LICENSE file not found + → Create LICENSE file with OSI-approved license + → MIT License recommended + → Must match plugin.json license field + +3. 
License mismatch + → plugin.json declares "Apache-2.0" + → LICENSE file contains "MIT" + → Update one to match the other + +IMPORTANT ISSUES (): +- README missing Examples section +- No code examples found +- CHANGELOG.md recommended + +YOUR NEXT STEPS: +1. Add LICENSE file (CRITICAL - 15 minutes) +2. Create comprehensive README.md (CRITICAL - 30 minutes) +3. Add 3 usage examples (IMPORTANT - 20 minutes) + +After addressing critical issues, revalidate with: +/documentation-validation full-docs path:. +``` + +**Error: Documentation too sparse** +``` +⚠️ WARNING: Documentation exists but is too sparse + +Documentation Score: 65/100 ⚠️ + +Your documentation meets minimum requirements but needs expansion +for professional quality. + +AREAS NEEDING IMPROVEMENT: +1. README is only 342 characters (minimum: 500) + → Expand installation instructions + → Add more detailed usage examples + → Include troubleshooting section + +2. Only 1 example found (recommended: 3+) + → Add basic usage example + → Add advanced example + → Add configuration example + +3. 
CHANGELOG missing + → Create CHANGELOG.md + → Use Keep a Changelog format + → Document version 1.0.0 features + +IMPACT: +Current: 65/100 (Fair) +After improvements: ~85/100 (Good) + +Time investment: ~45 minutes +Quality improvement: +20 points +``` + +### Output Format + +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +COMPREHENSIVE DOCUMENTATION VALIDATION +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Target: +Type: +Timestamp: + +OVERALL DOCUMENTATION SCORE: <0-100>/100 <⭐⭐⭐⭐⭐> +Rating: +Publication Ready: + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +COMPONENT SCORES +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +README (Weight: 40%) + Score: <0-100>/100 ✅ + Status: ✅ Complete and comprehensive + Sections: /5 required sections found + Length: characters (minimum: 500) ✅ + Issues: None + +EXAMPLES (Weight: 30%) + Score: <0-100>/100 ⚠️ + Status: ⚠️ Could be improved + Examples found: (recommended: 3+) + Placeholders: detected + Issues: placeholder patterns found + +LICENSE (Weight: 20%) + Score: <0-100>/100 ✅ + Status: ✅ Valid and consistent + Type: MIT License + OSI Approved: ✅ Yes + Consistency: ✅ Matches plugin.json + Issues: None + +CHANGELOG (Weight: 10%) + Score: <0-100>/100 ⚠️ + Status: ⚠️ Missing (recommended but not required) + Format: N/A + Versions: 0 + Issues: CHANGELOG.md not found + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +ISSUES SUMMARY +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Critical (P0 - Blocking): +Important (P1 - Should Fix): +Recommended (P2 - Nice to Have): + +CRITICAL ISSUES: +[None - Ready for submission] ✅ + +IMPORTANT ISSUES: +⚠️ 1. Add 2 more usage examples to README + Impact: +15 points + Effort: Low (20 minutes) + +⚠️ 2. Replace 3 placeholder patterns in examples + Impact: +10 points + Effort: Low (10 minutes) + +RECOMMENDATIONS: +💡 1. 
Create CHANGELOG.md for version tracking + Impact: +10 points + Effort: Low (15 minutes) + +💡 2. Add troubleshooting section to README + Impact: +5 points + Effort: Low (15 minutes) + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +IMPROVEMENT ROADMAP +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Current Score: /100 +Target Score: 90/100 (Excellent - Publication Ready) +Gap: points + +RECOMMENDED ACTIONS (to reach 90+): + +1. [+15 pts] Add usage examples + Priority: High + Effort: 20 minutes + Description: + - Add 2 more concrete usage examples to README + - Include basic, intermediate, and advanced scenarios + - Use real plugin commands and parameters + +2. [+10 pts] Clean up placeholder patterns + Priority: Medium + Effort: 10 minutes + Description: + - Replace "YOUR_VALUE" patterns with concrete examples + - Complete or remove TODO markers + - Use template syntax (${VAR}) for user-provided values + +3. [+10 pts] Create CHANGELOG.md + Priority: Medium + Effort: 15 minutes + Description: + - Use Keep a Changelog format + - Document version 1.0.0 initial release + - Add [Unreleased] section for future changes + +AFTER IMPROVEMENTS: +Projected Score: ~90/100 ⭐⭐⭐⭐⭐ +Time Investment: ~45 minutes +Status: Excellent - Ready for submission + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +PUBLICATION READINESS +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Status: ✅ READY WITH MINOR IMPROVEMENTS + +Your plugin documentation meets all critical requirements and is ready +for submission to OpenPlugins marketplace. The recommended improvements +above will increase quality score and provide better user experience. 
+ +✅ Strengths: +- Comprehensive README with all required sections +- Valid OSI-approved license (MIT) +- License consistent with plugin.json +- Good documentation structure + +⚠️ Improvement Opportunities: +- Add more usage examples for better user onboarding +- Create CHANGELOG for version tracking +- Clean up minor placeholder patterns + +NEXT STEPS: +1. (Optional) Address recommended improvements (~45 min) +2. Run validation again to verify improvements +3. Submit to OpenPlugins marketplace + +Command to revalidate: +/documentation-validation full-docs path:. + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +``` + +### Integration + +This operation is the primary entry point for complete documentation validation: + +**Invoked by**: +- `/documentation-validation full-docs path:.` (direct invocation) +- `/validation-orchestrator comprehensive path:.` (as part of full plugin validation) +- marketplace-validator agent (automatic documentation assessment) + +**Invokes sub-operations**: +- `/documentation-validation readme path:.` +- `/documentation-validation changelog file:CHANGELOG.md` +- `/documentation-validation license path:.` +- `/documentation-validation examples path:.` + +**Feeds results to**: +- `/quality-analysis full-analysis` (for overall quality scoring) +- `/quality-analysis generate-report` (for report generation) + +### JSON Output Format + +When `format:json` is specified: + +```json +{ + "validation_type": "full-documentation", + "target_path": "/path/to/plugin", + "timestamp": "2025-01-15T10:30:00Z", + "overall_score": 85, + "rating": "Good", + "publication_ready": "yes_with_improvements", + "components": { + "readme": { + "score": 90, + "status": "pass", + "present": true, + "sections_found": 5, + "sections_missing": 0, + "length": 1234, + "issues": [] + }, + "changelog": { + "score": 70, + "status": "warning", + "present": true, + "compliance": 70, + "issues": ["Invalid version header format"] + }, + "license": { + 
"score": 100, + "status": "pass", + "present": true, + "type": "MIT", + "osi_approved": true, + "consistent": true, + "issues": [] + }, + "examples": { + "score": 75, + "status": "warning", + "examples_found": 2, + "placeholders_detected": 3, + "issues": ["Placeholder patterns detected"] + } + }, + "issues": { + "critical": [], + "important": [ + { + "component": "examples", + "message": "Add 2 more usage examples", + "impact": 15, + "effort": "low" + } + ], + "recommended": [ + { + "component": "readme", + "message": "Add troubleshooting section", + "impact": 5, + "effort": "low" + } + ] + }, + "improvement_roadmap": [ + { + "action": "Add usage examples", + "points": 15, + "priority": "high", + "effort": "20 minutes" + } + ], + "projected_score_after_improvements": 95 +} +``` + +**Request**: $ARGUMENTS diff --git a/commands/documentation-validation/skill.md b/commands/documentation-validation/skill.md new file mode 100644 index 0000000..d7d1b24 --- /dev/null +++ b/commands/documentation-validation/skill.md @@ -0,0 +1,99 @@ +--- +description: Validate documentation completeness, format, and quality for plugins and marketplaces +--- + +You are the Documentation Validation coordinator, ensuring comprehensive and high-quality documentation. + +## Your Mission + +Parse `$ARGUMENTS` to determine the requested documentation validation operation and route to the appropriate sub-command. 
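As a minimal sketch of this routing step (hypothetical Python, illustrative only — not the plugin's actual implementation; the operation names and file paths mirror the sub-commands this skill documents):

```python
# Hypothetical sketch of the $ARGUMENTS routing described above.
# The operation->file map mirrors this skill's documented sub-commands.

BASE_DIR = ".claude/commands/documentation-validation"

OPERATIONS = {
    "readme": "check-readme.md",
    "changelog": "validate-changelog.md",
    "license": "check-license.md",
    "examples": "validate-examples.md",
    "full-docs": "full-documentation.md",
}


def route(arguments: str):
    """Split $ARGUMENTS into (operation file to read, remaining parameters)."""
    parts = arguments.strip().split(None, 1)
    if not parts or parts[0] not in OPERATIONS:
        # Unrecognized operation: return None so the caller can list options
        return None, sorted(OPERATIONS)
    params = parts[1] if len(parts) > 1 else ""
    return f"{BASE_DIR}/{OPERATIONS[parts[0]]}", params
```

The first whitespace-delimited word selects the operation; everything after it is passed through untouched as the operation's parameter string.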
+
+## Available Operations
+
+Parse the first word of `$ARGUMENTS` to determine which operation to execute:
+
+- **readme** → Read `.claude/commands/documentation-validation/check-readme.md`
+- **changelog** → Read `.claude/commands/documentation-validation/validate-changelog.md`
+- **license** → Read `.claude/commands/documentation-validation/check-license.md`
+- **examples** → Read `.claude/commands/documentation-validation/validate-examples.md`
+- **full-docs** → Read `.claude/commands/documentation-validation/full-documentation.md`
+
+## Argument Format
+
+```
+/documentation-validation <operation> [parameters]
+```
+
+### Examples
+
+```bash
+# Check README completeness
+/documentation-validation readme path:. sections:"overview,installation,usage,examples"
+
+# Validate CHANGELOG format
+/documentation-validation changelog file:CHANGELOG.md format:keepachangelog
+
+# Check LICENSE file
+/documentation-validation license path:. expected:MIT
+
+# Validate example quality
+/documentation-validation examples path:. no-placeholders:true
+
+# Run complete documentation validation
+/documentation-validation full-docs path:.
+``` + +## Documentation Standards + +**README.md Requirements**: +- Overview/Description section +- Installation instructions +- Usage examples (minimum 2) +- Configuration options (if applicable) +- License information +- Length: Minimum 500 characters + +**CHANGELOG.md Requirements**: +- Keep a Changelog format +- Version headers ([X.Y.Z] - YYYY-MM-DD) +- Change categories: Added, Changed, Deprecated, Removed, Fixed, Security +- Unreleased section for upcoming changes + +**LICENSE Requirements**: +- LICENSE or LICENSE.txt file present +- Valid OSI-approved license +- License matches plugin.json declaration + +**Examples Requirements**: +- No placeholder text (TODO, FIXME, XXX, placeholder) +- Complete, runnable examples +- Real values, not dummy data +- Proper formatting and syntax + +## Quality Scoring + +Documentation contributes to overall quality score: +- Complete README: +15 points +- CHANGELOG present: +10 points +- LICENSE valid: +5 points +- Quality examples: +10 points + +## Error Handling + +If the operation is not recognized: +1. List all available documentation operations +2. Show documentation standards +3. Provide improvement suggestions + +## Base Directory + +Base directory for this skill: `.claude/commands/documentation-validation/` + +## Your Task + +1. Parse `$ARGUMENTS` to extract operation and parameters +2. Read the corresponding operation file +3. Execute documentation validation checks +4. Return detailed findings with specific improvement guidance + +**Current Request**: $ARGUMENTS diff --git a/commands/documentation-validation/validate-changelog.md b/commands/documentation-validation/validate-changelog.md new file mode 100644 index 0000000..3e8cdad --- /dev/null +++ b/commands/documentation-validation/validate-changelog.md @@ -0,0 +1,286 @@ +## Operation: Validate CHANGELOG Format + +Validate CHANGELOG.md format compliance with "Keep a Changelog" standard. 
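The version-header rule this operation enforces can be sketched with a small regular expression (illustrative Python only — not the plugin's actual `.scripts/changelog-validator.sh` implementation):

```python
import re

# Illustrative check for "Keep a Changelog" version headers such as
# "## [1.0.0] - 2025-01-15". Sketch only; the real validator script
# performs additional checks (ordering, categories, compliance score).
HEADER_RE = re.compile(r"^## \[(\d+)\.(\d+)\.(\d+)\] - \d{4}-\d{2}-\d{2}$")


def is_valid_version_header(line: str) -> bool:
    """Return True for a well-formed '## [X.Y.Z] - YYYY-MM-DD' header."""
    return HEADER_RE.match(line) is not None
```

The `## [Unreleased]` header is a separate case and would be matched by its own literal check rather than this pattern.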
+ +### Parameters from $ARGUMENTS + +- **file**: Path to CHANGELOG file (optional, default: CHANGELOG.md) +- **format**: Expected format (optional, default: keepachangelog) +- **strict**: Enable strict validation (optional, default: false) +- **require-unreleased**: Require [Unreleased] section (optional, default: true) + +### CHANGELOG Requirements + +**Keep a Changelog Format** (https://keepachangelog.com/): + +```markdown +# Changelog + +All notable changes to this project will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). + +## [Unreleased] +### Added +- New features not yet released + +## [1.0.0] - 2025-01-15 +### Added +- Initial release feature +### Changed +- Modified behavior +### Fixed +- Bug fixes +``` + +**Required Elements**: +1. **Title**: "Changelog" or "Change Log" +2. **Version Headers**: `## [X.Y.Z] - YYYY-MM-DD` format +3. **Change Categories**: Added, Changed, Deprecated, Removed, Fixed, Security +4. **Unreleased Section**: `## [Unreleased]` for upcoming changes +5. **Chronological Order**: Newest versions first + +**Valid Change Categories**: +- **Added**: New features +- **Changed**: Changes in existing functionality +- **Deprecated**: Soon-to-be removed features +- **Removed**: Removed features +- **Fixed**: Bug fixes +- **Security**: Security vulnerability fixes + +### Workflow + +1. **Locate CHANGELOG File** + ``` + Check for CHANGELOG.md in plugin root + Also check: CHANGELOG, CHANGELOG.txt, changelog.md, HISTORY.md + If not found, report as missing (WARNING, not CRITICAL) + ``` + +2. 
**Execute CHANGELOG Validator** + ```bash + Execute .scripts/changelog-validator.sh with parameters: + - File path to CHANGELOG + - Expected format (keepachangelog) + - Strict mode flag + - Require unreleased flag + + Script returns: + - has_title: Boolean + - has_unreleased: Boolean + - version_headers: Array of version entries + - categories_used: Array of change categories + - issues: Array of format violations + - compliance_score: 0-100 + ``` + +3. **Validate Version Headers** + ``` + For each version header: + - Check format: ## [X.Y.Z] - YYYY-MM-DD + - Validate semantic version (X.Y.Z) + - Validate date format (YYYY-MM-DD) + - Check chronological order (newest first) + + Common violations: + - Missing brackets: ## 1.0.0 - 2025-01-15 (should be [1.0.0]) + - Wrong date format: ## [1.0.0] - 01/15/2025 + - Invalid semver: ## [1.0] - 2025-01-15 + ``` + +4. **Validate Change Categories** + ``` + For each version section: + - Check for valid category headers (### Added, ### Fixed, etc.) + - Warn if no categories used + - Recommend appropriate categories + + Invalid category examples: + - "### New Features" (should be "### Added") + - "### Bugs" (should be "### Fixed") + - "### Updates" (should be "### Changed") + ``` + +5. **Calculate Compliance Score** + ``` + score = 100 + score -= (!has_title) ? 10 : 0 + score -= (!has_unreleased) ? 15 : 0 + score -= (invalid_version_headers × 10) + score -= (invalid_categories × 5) + score -= (wrong_date_format × 5) + score = max(0, score) + ``` + +6. 
**Format Output**
+   ```
+   Display:
+   - ✅/⚠️/❌ File presence
+   - ✅/❌ Format compliance
+   - ✅/⚠️ Version headers
+   - ✅/⚠️ Change categories
+   - Compliance score
+   - Specific violations
+   - Improvement recommendations
+   ```
+
+### Examples
+
+```bash
+# Validate default CHANGELOG.md
+/documentation-validation changelog file:CHANGELOG.md
+
+# Validate with custom path
+/documentation-validation changelog file:./HISTORY.md
+
+# Strict validation (all elements required)
+/documentation-validation changelog file:CHANGELOG.md strict:true
+
+# Don't require Unreleased section
+/documentation-validation changelog file:CHANGELOG.md require-unreleased:false
+
+# Part of full documentation check
+/documentation-validation full-docs path:.
+```
+
+### Error Handling
+
+**Error: CHANGELOG not found**
+```
+⚠️ WARNING: CHANGELOG.md not found in <path>
+
+Remediation:
+1. Create CHANGELOG.md in plugin root directory
+2. Use "Keep a Changelog" format (https://keepachangelog.com/)
+3. Include [Unreleased] section for upcoming changes
+4. Document version history with proper headers
+
+Example:
+# Changelog
+
+## [Unreleased]
+### Added
+- Features in development
+
+## [1.0.0] - 2025-01-15
+### Added
+- Initial release
+
+Note: CHANGELOG is recommended but not required for initial submission.
+It becomes important for version updates.
+```
+
+**Error: Invalid version header format**
+```
+❌ ERROR: Invalid version header format detected
+
+Invalid headers found:
+- Line 10: "## 1.0.0 - 2025-01-15" (missing brackets)
+- Line 25: "## [1.0] - 01/15/2025" (invalid semver and date format)
+
+Correct format:
+## [X.Y.Z] - YYYY-MM-DD
+
+Examples:
+- ## [1.0.0] - 2025-01-15
+- ## [2.1.3] - 2024-12-20
+- ## [0.1.0] - 2024-11-05
+
+Remediation:
+1. Add brackets around version numbers: [1.0.0]
+2. Use semantic versioning: MAJOR.MINOR.PATCH
+3. 
Use ISO date format: YYYY-MM-DD
+```
+
+**Error: Missing Unreleased section**
+```
+⚠️ WARNING: Missing [Unreleased] section
+
+The Keep a Changelog format recommends an [Unreleased] section for tracking
+upcoming changes before they're officially released.
+
+Add to top of CHANGELOG (after title):
+
+## [Unreleased]
+### Added
+- Features in development
+### Changed
+- Planned changes
+```
+
+**Error: Invalid change categories**
+```
+⚠️ WARNING: Non-standard change categories detected
+
+Invalid categories found:
+- "### New Features" (should be "### Added")
+- "### Bug Fixes" (should be "### Fixed")
+- "### Updates" (should be "### Changed")
+
+Valid categories:
+- Added: New features
+- Changed: Changes in existing functionality
+- Deprecated: Soon-to-be removed features
+- Removed: Removed features
+- Fixed: Bug fixes
+- Security: Security vulnerability fixes
+
+Remediation:
+Replace non-standard categories with Keep a Changelog categories.
+```
+
+### Output Format
+
+```
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+CHANGELOG VALIDATION RESULTS
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+File: ✅ CHANGELOG.md found
+
+Format: Keep a Changelog
+Compliance: <0-100>% ✅/⚠️/❌
+
+Structure:
+✅ Title present
+✅ [Unreleased] section present
+✅ Version headers formatted correctly
+✅ Change categories valid
+
+Version Entries:
+- [1.0.0] - 2025-01-15 ✅
+- [0.2.0] - 2024-12-20 ✅
+- [0.1.0] - 2024-11-05 ✅
+
+Change Categories Used:
+✅ Added (3 versions)
+✅ Changed (2 versions)
+✅ Fixed (3 versions)
+
+Issues Found: <count>
+
+Violations:
+<list of violations>
+
+Recommendations:
+1. Add Security category for vulnerability fixes
+2. Expand [Unreleased] section with upcoming features
+3. 
Add links to version comparison (optional)
+
+Overall: <status>
+```
+
+### Integration
+
+This operation is invoked by:
+- `/documentation-validation changelog file:CHANGELOG.md` (direct)
+- `/documentation-validation full-docs path:.` (as part of complete validation)
+- `/validation-orchestrator comprehensive path:.` (via orchestrator)
+
+Results contribute to documentation quality score:
+- Present and compliant: +10 points
+- Present but non-compliant: +5 points
+- Missing: 0 points (warning but not blocking)
+
+**Request**: $ARGUMENTS
diff --git a/commands/documentation-validation/validate-examples.md b/commands/documentation-validation/validate-examples.md
new file mode 100644
index 0000000..3aca104
--- /dev/null
+++ b/commands/documentation-validation/validate-examples.md
@@ -0,0 +1,335 @@
+## Operation: Validate Example Quality
+
+Validate example code quality, detecting placeholders and ensuring examples are complete and runnable.
+
+### Parameters from $ARGUMENTS
+
+- **path**: Target plugin/marketplace path (required)
+- **no-placeholders**: Strict placeholder enforcement (optional, default: true)
+- **recursive**: Check all markdown and code files recursively (optional, default: true)
+- **extensions**: File extensions to check (optional, default: "md,txt,json,sh,py,js")
+
+### Example Quality Requirements
+
+**Complete Examples**:
+- Concrete, runnable code or commands
+- Real values, not placeholder text
+- Proper syntax and formatting
+- Context and explanations
+- Expected output or results
+
+**No Placeholder Patterns**:
+- **TODO**: `TODO`, `@TODO`, `// TODO:`
+- **FIXME**: `FIXME`, `@FIXME`, `// FIXME:`
+- **XXX**: `XXX`, `@XXX`, `// XXX:`
+- **Placeholders**: `placeholder`, `PLACEHOLDER`, `your-value-here`, `<your-value>`, `[YOUR-VALUE]`
+- **Generic**: `example`, `sample`, `test`, `dummy`, `foo`, `bar`, `baz`
+- **User substitution**: `<username>`, `<api-key>`, `your-api-key`, `INSERT-HERE`
+
+**Acceptable Patterns** (not placeholders):
+- Template variables: `{{variable}}`, 
`${variable}`, `$VARIABLE`
+- Documentation examples: `<required>`, `[optional]` in usage syntax
+- Actual values: Real plugin names, real commands, concrete examples
+
+### Workflow
+
+1. **Identify Files to Validate**
+   ```
+   Scan plugin directory for documentation files:
+   - README.md (primary source)
+   - CONTRIBUTING.md
+   - docs/**/*.md
+   - examples/**/*
+   - *.sh, *.py, *.js (example scripts)
+
+   If recursive:false, only check README.md
+   ```
+
+2. **Execute Example Validator**
+   ```bash
+   Execute .scripts/example-validator.sh with parameters:
+   - Path to plugin directory
+   - No-placeholders flag
+   - Recursive flag
+   - File extensions to check
+
+   Script returns:
+   - files_checked: Count of files analyzed
+   - placeholders_found: Array of placeholder instances
+   - files_with_issues: Array of files containing placeholders
+   - example_count: Number of code examples found
+   - quality_score: 0-100
+   ```
+
+3. **Detect Placeholder Patterns**
+   ```bash
+   Search for patterns (case-insensitive):
+
+   # TODO/FIXME/XXX markers
+   grep -iE '(TODO|FIXME|XXX|HACK)[:)]'
+
+   # Placeholder text
+   grep -iE '(placeholder|your-.*-here|INSERT.?HERE)'
+
+   # Generic dummy values
+   grep -iE '\b(foo|bar|baz|dummy|sample|test)\b'
+
+   # User substitution patterns
+   grep -iE '(<[a-z-]+>|YOUR_[A-Z_]+)'
+
+   # Exclude:
+   - Comments explaining placeholders
+   - Documentation of template syntax
+   - Proper template variables ({{x}}, ${x})
+   ```
+
+4. **Analyze Code Blocks**
+   ```
+   For each code block in markdown:
+   - Extract language and content
+   - Check for placeholder patterns
+   - Verify syntax highlighting specified
+   - Ensure examples are complete
+
+   Example extraction:
+   ```bash
+   /plugin install my-plugin@marketplace   ✅ Concrete
+   /plugin install <plugin-name>           ⚠️ Documentation (acceptable)
+   /plugin install YOUR_PLUGIN             ❌ Placeholder
+   ```
+   ```
+
+5. **Count and Categorize Examples**
+   ```
+   Count examples by type:
+   - Command examples: /plugin install ... 
+   - Configuration examples: JSON snippets
+   - Code examples: Script samples
+   - Usage examples: Real-world scenarios
+
+   Quality criteria:
+   - At least 2-3 concrete examples
+   - Examples cover primary use cases
+   - Examples are copy-pasteable
+   ```
+
+6. **Calculate Quality Score**
+   ```
+   score = 100
+   score -= (placeholder_instances × 10)   # -10 per placeholder
+   score -= (todo_markers × 5)             # -5 per TODO/FIXME
+   score -= (example_count < 2) ? 20 : 0   # -20 if < 2 examples
+   score -= (incomplete_examples × 15)     # -15 per incomplete example
+   score = max(0, score)
+   ```
+
+7. **Format Output**
+   ```
+   Display:
+   - Files checked count
+   - Examples found count
+   - Placeholders detected
+   - Quality score
+   - Specific issues with file/line references
+   - Improvement recommendations
+   ```
+
+### Examples
+
+```bash
+# Validate examples with strict placeholder checking (default)
+/documentation-validation examples path:.
+
+# Check only README.md (non-recursive)
+/documentation-validation examples path:. recursive:false
+
+# Allow placeholders (lenient mode)
+/documentation-validation examples path:. no-placeholders:false
+
+# Check specific file extensions
+/documentation-validation examples path:. extensions:"md,sh,py"
+
+# Strict validation of examples directory
+/documentation-validation examples path:./examples no-placeholders:true recursive:true
+```
+
+### Error Handling
+
+**Error: Placeholders detected**
+```
+⚠️ WARNING: Placeholder patterns detected in examples
+
+Placeholders found: <n> instances across <m> files
+
+README.md:
+- Line 45: /plugin install YOUR_PLUGIN_NAME
+  ^ Should be concrete plugin name
+- Line 67: API_KEY=your-api-key-here
+  ^ Should be removed or use template syntax
+
+examples/usage.sh:
+- Line 12: # TODO: Add authentication example
+  ^ Complete example or remove TODO
+
+Remediation:
+1. Replace "YOUR_PLUGIN_NAME" with actual plugin name
+2. Use template syntax for user-provided values: ${API_KEY}
+3. 
Remove TODO markers - complete examples or remove them
+4. Provide concrete, copy-pasteable examples
+
+Acceptable patterns:
+- Template variables: ${VARIABLE}, {{variable}}
+- Documentation syntax: <placeholder> in usage descriptions
+- Generic placeholders in template explanations
+```
+
+**Error: Too few examples**
+```
+⚠️ WARNING: Insufficient examples in documentation
+
+Examples found: <count> (minimum recommended: 3)
+
+README.md contains code examples:
+- Installation example ✅
+- Basic usage ❌ Missing
+- Advanced usage ❌ Missing
+
+Remediation:
+Add at least 2-3 concrete usage examples showing:
+1. Basic usage (most common scenario)
+2. Common configuration options
+3. Advanced or specialized use case
+
+Example structure:
+```bash
+# Basic usage
+/my-plugin action param:value
+
+# With options
+/my-plugin action param:value option:true
+
+# Advanced example
+/my-plugin complex-action config:custom nested:value
+```
+
+Good examples are copy-pasteable and use real values.
+```
+
+**Error: Incomplete examples**
+```
+⚠️ WARNING: Incomplete or broken examples detected
+
+Incomplete examples: <count>
+
+README.md:
+- Line 34: Code block with syntax error
+- Line 56: Example missing expected output
+- Line 78: Example truncated with "..."
+
+Remediation:
+1. Ensure all code examples are syntactically valid
+2. Show expected output or results after examples
+3. Complete truncated examples (no "..." placeholders)
+4. Test examples before including in documentation
+
+Example format:
+```bash
+# Command with description
+/plugin install example-plugin@marketplace
+
+# Expected output:
+# ✓ Installing example-plugin@marketplace
+# ✓ Plugin installed successfully
+```
+```
+
+**Error: Generic dummy values**
+```
+⚠️ WARNING: Generic placeholder values detected
+
+Generic values found:
+- README.md:45 - "foo", "bar" used as example values
+- examples/config.json:12 - "sample" as placeholder
+
+While "foo/bar" are common in documentation, concrete examples
+are more helpful for users. 
+
+Remediation:
+Replace generic values with realistic examples:
+- Instead of "foo", use actual plugin name
+- Instead of "bar", use real parameter value
+- Instead of "sample", use concrete example
+
+Good: /my-plugin process file:README.md
+Bad: /my-plugin process file:foo.txt
+```
+
+### Output Format
+
+```
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+EXAMPLE QUALITY VALIDATION
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+Files Checked: <count>
+Code Examples Found: <count>
+
+Example Count by Type:
+- Command examples: <count> ✅
+- Configuration examples: <count> ✅
+- Usage examples: <count> ⚠️ (recommend 3+)
+
+Placeholder Detection:
+TODO/FIXME markers: <count> ❌
+Placeholder patterns: <count> ❌
+Generic values (foo/bar): <count> ⚠️
+
+Quality Score: <0-100>/100
+
+Issues by File:
+README.md: <count> issues
+├─ Line 45: YOUR_PLUGIN_NAME (placeholder)
+├─ Line 67: TODO marker
+└─ Line 89: Generic "foo" value
+
+examples/usage.sh: <count> issues
+└─ Line 12: Incomplete example
+
+Recommendations:
+1. Replace placeholder patterns with concrete values [+10 pts]
+2. Complete or remove TODO markers [+5 pts]
+3. Add more usage examples [+15 pts]
+
+Overall: <status>
+```
+
+### Integration
+
+This operation is invoked by:
+- `/documentation-validation examples path:.` (direct)
+- `/documentation-validation full-docs path:.` (as part of complete validation)
+- `/validation-orchestrator comprehensive path:.` (via orchestrator)
+
+Results contribute to documentation quality score:
+- High-quality examples (90+): +10 points
+- Some issues (60-89): +5 points
+- Poor quality (<60): 0 points
+- Missing examples: -10 points
+
+### Special Cases
+
+**Template Documentation**:
+If the plugin provides templates or scaffolding, some placeholders
+are acceptable when properly documented as template variables. 
+
+Example:
+```markdown
+The generated code includes template variables:
+- {{PROJECT_NAME}} - Will be replaced with actual project name
+- {{AUTHOR}} - Will be replaced with author information
+```
+
+This is acceptable because the placeholders are documented as
+intentional template syntax.
+
+**Request**: $ARGUMENTS
diff --git a/commands/quality-analysis/.scripts/issue-prioritizer.sh b/commands/quality-analysis/.scripts/issue-prioritizer.sh
new file mode 100755
index 0000000..320ec75
--- /dev/null
+++ b/commands/quality-analysis/.scripts/issue-prioritizer.sh
@@ -0,0 +1,305 @@
+#!/usr/bin/env bash
+
+# ============================================================================
+# Issue Prioritization Script
+# ============================================================================
+# Purpose: Categorize and prioritize validation issues into P0/P1/P2 tiers
+# Version: 1.0.0
+# Usage: ./issue-prioritizer.sh <issues-json-file> [criteria]
+# Returns: 0=success, 1=error
+# Dependencies: jq, bash 4.0+
+# ============================================================================
+
+set -euo pipefail
+
+# Configuration
+readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+readonly RED='\033[0;31m'
+readonly YELLOW='\033[1;33m'
+readonly BLUE='\033[0;34m'
+readonly NC='\033[0m' # No Color
+
+# Priority definitions
+declare -A PRIORITY_NAMES=(
+    [0]="Critical - Must Fix"
+    [1]="Important - Should Fix"
+    [2]="Recommended - Nice to Have"
+)
+
+declare -A PRIORITY_ICONS=(
+    [0]="❌"
+    [1]="⚠️ "
+    [2]="💡"
+)
+
+# Effort labels
+declare -A EFFORT_LABELS=(
+    [low]="Low"
+    [medium]="Medium"
+    [high]="High"
+)
+
+# Effort time estimates
+declare -A EFFORT_TIMES=(
+    [low]="5-15 minutes"
+    [medium]="30-60 minutes"
+    [high]="2+ hours"
+)
+
+# ============================================================================
+# Functions
+# ============================================================================
+
+usage() {
+    cat <<EOF
+Usage: $0 <issues-json-file> [criteria]
+
+Arguments:
+    issues-json-file    Path 
to JSON file with validation issues
+    criteria            Prioritization criteria: severity|impact|effort (default: severity)
+
+Examples:
+    $0 validation-results.json
+    $0 results.json impact
+    $0 results.json severity
+
+JSON Structure:
+{
+  "errors": [{"type": "...", "severity": "critical", ...}],
+  "warnings": [{"type": "...", "severity": "important", ...}],
+  "recommendations": [{"type": "...", "severity": "recommended", ...}]
+}
+EOF
+    exit 1
+}
+
+check_dependencies() {
+    local missing_deps=()
+
+    if ! command -v jq &> /dev/null; then
+        missing_deps+=("jq")
+    fi
+
+    if [ ${#missing_deps[@]} -gt 0 ]; then
+        echo "Error: Missing dependencies: ${missing_deps[*]}" >&2
+        echo "Install with: sudo apt-get install ${missing_deps[*]}" >&2
+        return 1
+    fi
+
+    return 0
+}
+
+determine_priority() {
+    local severity="$1"
+    local type="$2"
+
+    # P0 (Critical) - Blocking issues
+    if [[ "$severity" == "critical" ]] || \
+       [[ "$type" =~ ^(missing_required|invalid_json|security_vulnerability|format_violation)$ ]]; then
+        echo "0"
+        return
+    fi
+
+    # P1 (Important) - Should fix
+    if [[ "$severity" == "important" ]] || \
+       [[ "$type" =~ ^(missing_recommended|documentation_gap|convention_violation|performance)$ ]]; then
+        echo "1"
+        return
+    fi
+
+    # P2 (Recommended) - Nice to have
+    echo "2"
+}
+
+get_effort_estimate() {
+    local type="$1"
+
+    # High effort
+    if [[ "$type" =~ ^(security_vulnerability|performance|architecture)$ ]]; then
+        echo "high"
+        return
+    fi
+
+    # Medium effort
+    if [[ "$type" =~ ^(documentation_gap|convention_violation|missing_recommended)$ ]]; then
+        echo "medium"
+        return
+    fi
+
+    # Low effort (default)
+    echo "low"
+}
+
+format_issue() {
+    local priority="$1"
+    local message="$2"
+    local impact="${3:-Unknown impact}"
+    local effort="${4:-low}"
+    local fix="${5:-No fix suggestion available}"
+
+    local icon="${PRIORITY_ICONS[$priority]}"
+    local effort_label="${EFFORT_LABELS[$effort]}"
+    local effort_time="${EFFORT_TIMES[$effort]}"
+
+    cat <<EOF
+
+${icon} ${message}
+   Impact: ${impact}
+   Effort: ${effort_label} (${effort_time})
+   Fix: ${fix}
+EOF
+}
+
+process_issues() {
+    local json_file="$1"
+    local criteria="$2"
+
+    if [[ ! -f "$json_file" ]]; then
+        echo "Error: File not found: $json_file" >&2
+        return 1
+    fi
+
+    # 
Validate JSON syntax + if ! jq empty "$json_file" 2>/dev/null; then + echo "Error: Invalid JSON in $json_file" >&2 + return 1 + fi + + # Count total issues + local total_errors=$(jq '.errors // [] | length' "$json_file") + local total_warnings=$(jq '.warnings // [] | length' "$json_file") + local total_recommendations=$(jq '.recommendations // [] | length' "$json_file") + local total_issues=$((total_errors + total_warnings + total_recommendations)) + + if [[ $total_issues -eq 0 ]]; then + echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" + echo "ISSUE PRIORITIZATION" + echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" + echo "" + echo "No issues found! Quality score is perfect." + return 0 + fi + + # Initialize priority counters + declare -A priority_counts=([0]=0 [1]=0 [2]=0) + declare -A priority_issues=([0]="" [1]="" [2]="") + + # Process errors + while IFS= read -r issue; do + local type=$(echo "$issue" | jq -r '.type // "unknown"') + local severity=$(echo "$issue" | jq -r '.severity // "critical"') + local message=$(echo "$issue" | jq -r '.message // "Unknown error"') + local impact=$(echo "$issue" | jq -r '.impact // "Unknown impact"') + local fix=$(echo "$issue" | jq -r '.fix // "No fix available"') + local score_impact=$(echo "$issue" | jq -r '.score_impact // 0') + + local priority=$(determine_priority "$severity" "$type") + local effort=$(get_effort_estimate "$type") + + priority_counts[$priority]=$((priority_counts[$priority] + 1)) + + local formatted_issue=$(format_issue "$priority" "$message" "$impact" "$effort" "$fix") + priority_issues[$priority]+="$formatted_issue" + done < <(jq -c '.errors // [] | .[]' "$json_file") + + # Process warnings + while IFS= read -r issue; do + local type=$(echo "$issue" | jq -r '.type // "unknown"') + local severity=$(echo "$issue" | jq -r '.severity // "important"') + local message=$(echo "$issue" | jq -r '.message // "Unknown warning"') + local impact=$(echo "$issue" | jq -r '.impact // "Unknown impact"') + local fix=$(echo "$issue" | 
jq -r '.fix // "No fix available"') + + local priority=$(determine_priority "$severity" "$type") + local effort=$(get_effort_estimate "$type") + + priority_counts[$priority]=$((priority_counts[$priority] + 1)) + + local formatted_issue=$(format_issue "$priority" "$message" "$impact" "$effort" "$fix") + priority_issues[$priority]+="$formatted_issue" + done < <(jq -c '.warnings // [] | .[]' "$json_file") + + # Process recommendations + while IFS= read -r issue; do + local type=$(echo "$issue" | jq -r '.type // "unknown"') + local severity=$(echo "$issue" | jq -r '.severity // "recommended"') + local message=$(echo "$issue" | jq -r '.message // "Recommendation"') + local impact=$(echo "$issue" | jq -r '.impact // "Minor quality improvement"') + local fix=$(echo "$issue" | jq -r '.fix // "No fix available"') + + local priority=$(determine_priority "$severity" "$type") + local effort=$(get_effort_estimate "$type") + + priority_counts[$priority]=$((priority_counts[$priority] + 1)) + + local formatted_issue=$(format_issue "$priority" "$message" "$impact" "$effort" "$fix") + priority_issues[$priority]+="$formatted_issue" + done < <(jq -c '.recommendations // [] | .[]' "$json_file") + + # Display results + echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" + echo "ISSUE PRIORITIZATION" + echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" + echo "" + echo "Total Issues: $total_issues" + echo "" + + # Display each priority tier + for priority in 0 1 2; do + local count=${priority_counts[$priority]} + local name="${PRIORITY_NAMES[$priority]}" + + if [[ $count -gt 0 ]]; then + echo "Priority $priority ($name): $count" + echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" + echo -e "${priority_issues[$priority]}" + fi + done + + # Summary + echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" + echo "Summary:" + echo "- Fix P0 issues first (blocking publication)" + echo "- Address P1 issues for quality improvement" + echo "- Consider P2 improvements for excellence" + + if [[ ${priority_counts[0]} -gt 0 ]]; then + 
echo "" + echo "⚠️ WARNING: ${priority_counts[0]} blocking issue(s) must be fixed before publication" + fi + + return 0 +} + +# ============================================================================ +# Main +# ============================================================================ + +main() { + # Check arguments + if [[ $# -lt 1 ]]; then + usage + fi + + local json_file="$1" + local criteria="${2:-severity}" + + # Check dependencies + if ! check_dependencies; then + return 1 + fi + + # Validate criteria + if [[ ! "$criteria" =~ ^(severity|impact|effort)$ ]]; then + echo "Error: Invalid criteria '$criteria'. Use: severity|impact|effort" >&2 + return 1 + fi + + # Process issues + process_issues "$json_file" "$criteria" + + return 0 +} + +main "$@" diff --git a/commands/quality-analysis/.scripts/report-generator.py b/commands/quality-analysis/.scripts/report-generator.py new file mode 100755 index 0000000..959c5b8 --- /dev/null +++ b/commands/quality-analysis/.scripts/report-generator.py @@ -0,0 +1,541 @@ +#!/usr/bin/env python3 + +# ============================================================================ +# Quality Report Generator +# ============================================================================ +# Purpose: Generate comprehensive quality reports in multiple formats +# Version: 1.0.0 +# Usage: ./report-generator.py --path --format [options] +# Returns: 0=success, 1=error +# Dependencies: Python 3.6+ +# ============================================================================ + +import sys +import argparse +import json +from datetime import datetime +from pathlib import Path +from typing import Dict, List, Any, Optional + + +class ReportGenerator: + """Generate quality reports in multiple formats.""" + + def __init__(self, path: str, context: Optional[Dict] = None): + """ + Initialize report generator. 
+ + Args: + path: Target path being analyzed + context: Validation context with results + """ + self.path = path + self.context = context or {} + self.timestamp = datetime.now().isoformat() + + def generate(self, format_type: str = "markdown") -> str: + """ + Generate report in specified format. + + Args: + format_type: Report format (markdown, json, html) + + Returns: + Formatted report string + """ + if format_type == "json": + return self._generate_json() + elif format_type == "html": + return self._generate_html() + else: + return self._generate_markdown() + + def _generate_markdown(self) -> str: + """Generate markdown format report.""" + score = self.context.get("score", 0) + rating = self.context.get("rating", "Unknown") + stars = self.context.get("stars", "") + readiness = self.context.get("publication_ready", "Unknown") + + p0_count = len(self.context.get("issues", {}).get("p0", [])) + p1_count = len(self.context.get("issues", {}).get("p1", [])) + p2_count = len(self.context.get("issues", {}).get("p2", [])) + total_issues = p0_count + p1_count + p2_count + + target_type = self.context.get("target_type", "plugin") + + report = f"""# Quality Assessment Report + +**Generated**: {self.timestamp} +**Target**: {self.path} +**Type**: Claude Code {target_type.capitalize()} + +## Executive Summary + +**Quality Score**: {score}/100 {stars} ({rating}) +**Publication Ready**: {readiness} +**Critical Issues**: {p0_count} +**Total Issues**: {total_issues} + +""" + + if score >= 90: + report += "🎉 Excellent! Your plugin is publication-ready.\n\n" + elif score >= 75: + report += "👍 Nearly ready! Address a few important issues to reach excellent status.\n\n" + elif score >= 60: + report += "⚠️ Needs work. 
Several issues should be addressed before publication.\n\n" + else: + report += "❌ Substantial improvements needed before this is ready for publication.\n\n" + + # Validation layers + report += "## Validation Results\n\n" + layers = self.context.get("validation_layers", {}) + + for layer_name, layer_data in layers.items(): + status = layer_data.get("status", "unknown") + issue_count = len(layer_data.get("issues", [])) + + if status == "pass": + status_icon = "✅ PASS" + elif status == "warnings": + status_icon = f"⚠️ WARNINGS ({issue_count} issues)" + else: + status_icon = f"❌ FAIL ({issue_count} issues)" + + report += f"### {layer_name.replace('_', ' ').title()} {status_icon}\n" + + if issue_count == 0: + report += "- No issues found\n\n" + else: + for issue in layer_data.get("issues", [])[:3]: # Show top 3 + report += f"- {issue.get('message', 'Unknown issue')}\n" + if issue_count > 3: + report += f"- ... and {issue_count - 3} more\n" + report += "\n" + + # Issues breakdown + report += "## Issues Breakdown\n\n" + + report += f"### Priority 0 (Critical): {p0_count} issues\n\n" + if p0_count == 0: + report += "None - excellent!\n\n" + else: + for idx, issue in enumerate(self.context.get("issues", {}).get("p0", []), 1): + report += self._format_issue_markdown(idx, issue) + + report += f"### Priority 1 (Important): {p1_count} issues\n\n" + if p1_count == 0: + report += "None - great!\n\n" + else: + for idx, issue in enumerate(self.context.get("issues", {}).get("p1", []), 1): + report += self._format_issue_markdown(idx, issue) + + report += f"### Priority 2 (Recommended): {p2_count} issues\n\n" + if p2_count == 0: + report += "No recommendations.\n\n" + else: + for idx, issue in enumerate(self.context.get("issues", {}).get("p2", [])[:5], 1): + report += self._format_issue_markdown(idx, issue) + if p2_count > 5: + report += f"... 
and {p2_count - 5} more recommendations\n\n" + + # Improvement roadmap + roadmap = self.context.get("improvement_roadmap", {}) + if roadmap: + report += "## Improvement Roadmap\n\n" + report += f"### Path to Excellent (90+)\n\n" + report += f"**Current**: {roadmap.get('current_score', score)}/100\n" + report += f"**Target**: {roadmap.get('target_score', 90)}/100\n" + report += f"**Gap**: {roadmap.get('gap', 0)} points\n\n" + + recommendations = roadmap.get("recommendations", []) + if recommendations: + report += "**Top Recommendations**:\n\n" + for idx, rec in enumerate(recommendations[:5], 1): + report += f"{idx}. [{rec.get('score_impact', 0):+d} pts] {rec.get('title', 'Unknown')}\n" + report += f" - Priority: {rec.get('priority', 'Medium')}\n" + report += f" - Effort: {rec.get('effort', 'Unknown')}\n" + report += f" - Impact: {rec.get('impact', 'Unknown')}\n\n" + + # Footer + report += "\n---\n" + report += "Report generated by marketplace-validator-plugin v1.0.0\n" + + return report + + def _format_issue_markdown(self, idx: int, issue: Dict) -> str: + """Format a single issue in markdown.""" + message = issue.get("message", "Unknown issue") + impact = issue.get("impact", "Unknown impact") + effort = issue.get("effort", "unknown") + fix = issue.get("fix", "No fix available") + score_impact = issue.get("score_impact", 0) + + return f"""#### {idx}. 
{message} [{score_impact:+d} pts]
+
+**Impact**: {impact}
+**Effort**: {effort.capitalize()}
+**Fix**: {fix}
+
+"""
+
+    def _generate_json(self) -> str:
+        """Generate JSON format report."""
+        score = self.context.get("score", 0)
+        rating = self.context.get("rating", "Unknown")
+        stars = self.context.get("stars", "")
+        readiness = self.context.get("publication_ready", "Unknown")
+
+        p0_issues = self.context.get("issues", {}).get("p0", [])
+        p1_issues = self.context.get("issues", {}).get("p1", [])
+        p2_issues = self.context.get("issues", {}).get("p2", [])
+
+        report = {
+            "metadata": {
+                "generated": self.timestamp,
+                "target": self.path,
+                "type": self.context.get("target_type", "plugin"),
+                "validator_version": "1.0.0"
+            },
+            "executive_summary": {
+                "score": score,
+                "rating": rating,
+                "stars": stars,
+                "publication_ready": readiness,
+                "critical_issues": len(p0_issues),
+                "total_issues": len(p0_issues) + len(p1_issues) + len(p2_issues)
+            },
+            "validation_layers": self.context.get("validation_layers", {}),
+            "issues": {
+                "p0": p0_issues,
+                "p1": p1_issues,
+                "p2": p2_issues
+            },
+            "improvement_roadmap": self.context.get("improvement_roadmap", {})
+        }
+
+        return json.dumps(report, indent=2)
+
+    def _generate_html(self) -> str:
+        """Generate HTML format report."""
+        score = self.context.get("score", 0)
+        rating = self.context.get("rating", "Unknown")
+        stars = self.context.get("stars", "")
+        readiness = self.context.get("publication_ready", "Unknown")
+
+        p0_count = len(self.context.get("issues", {}).get("p0", []))
+        p1_count = len(self.context.get("issues", {}).get("p1", []))
+        p2_count = len(self.context.get("issues", {}).get("p2", []))
+        total_issues = p0_count + p1_count + p2_count
+
+        # Determine score color
+        if score >= 90:
+            score_color = "#10b981"  # green
+        elif score >= 75:
+            score_color = "#3b82f6"  # blue
+        elif score >= 60:
+            score_color = "#f59e0b"  # orange
+        else:
+            score_color = "#ef4444"  # red
+
+        html = f"""<!DOCTYPE html>
+<html>
+<head>
+    <meta charset="utf-8">
+    <title>Quality Assessment Report</title>
+    <style>
+        body {{ font-family: sans-serif; margin: 2rem; color: #1f2937; }}
+        .score {{ font-size: 3rem; font-weight: bold; color: {score_color}; }}
+        .badge.pass {{ color: #10b981; }}
+        .badge.warning {{ color: #f59e0b; }}
+        .badge.fail {{ color: #ef4444; }}
+        .issue {{ border: 1px solid #e5e7eb; border-radius: 6px; padding: 0.5rem 1rem; margin: 0.5rem 0; }}
+    </style>
+</head>
+<body>
+    <h1>Quality Assessment Report</h1>
+    <p>
+        Generated: {self.timestamp}<br>
+        Target: {self.path}<br>
+        Type: Claude Code Plugin
+    </p>
+
+    <div class="summary">
+        <div class="score">{score}</div>
+        <div class="rating">{stars} {rating}</div>
+        <div class="readiness">{readiness}</div>
+    </div>
+
+    <div class="stats">
+        <div class="stat">
+            <div class="stat-label">Critical Issues</div>
+            <div class="stat-value">{p0_count}</div>
+        </div>
+        <div class="stat">
+            <div class="stat-label">Important Issues</div>
+            <div class="stat-value">{p1_count}</div>
+        </div>
+        <div class="stat">
+            <div class="stat-label">Recommendations</div>
+            <div class="stat-value">{p2_count}</div>
+        </div>
+        <div class="stat">
+            <div class="stat-label">Total Issues</div>
+            <div class="stat-value">{total_issues}</div>
+        </div>
+    </div>
+
+    <div class="section">
+        <h2>Validation Layers</h2>
+"""
+
+        # Validation layers
+        layers = self.context.get("validation_layers", {})
+        for layer_name, layer_data in layers.items():
+            status = layer_data.get("status", "unknown")
+            badge_class = "pass" if status == "pass" else ("warning" if status == "warnings" else "fail")
+            html += f'        <span class="badge {badge_class}">{layer_name.replace("_", " ").title()}: {status.upper()}</span><br>\n'
+
+        html += """    </div>
+
+    <div class="section">
+        <h2>Issues Breakdown</h2>
+"""
+
+        # Issues
+        for priority, priority_name in [("p0", "Critical"), ("p1", "Important"), ("p2", "Recommended")]:
+            issues = self.context.get("issues", {}).get(priority, [])
+            html += f'        <h3>Priority {priority[1]}: {priority_name} ({len(issues)} issues)</h3>\n'
+
+            for issue in issues[:5]:  # Show top 5 per priority
+                message = issue.get("message", "Unknown issue")
+                impact = issue.get("impact", "Unknown")
+                effort = issue.get("effort", "unknown")
+                fix = issue.get("fix", "No fix available")
+
+                html += f"""        <div class="issue">
+            <strong>{message}</strong>
+            <div>Impact: {impact}</div>
+            <div>Effort: {effort.capitalize()}</div>
+            <div>Fix: {fix}</div>
+        </div>
+"""
+
+        html += """    </div>
+ + +""" + + return html + + +def main(): + """Main CLI interface.""" + parser = argparse.ArgumentParser( + description="Generate comprehensive quality reports", + formatter_class=argparse.RawDescriptionHelpFormatter + ) + + parser.add_argument( + "--path", + required=True, + help="Target path being analyzed" + ) + + parser.add_argument( + "--format", + choices=["markdown", "json", "html"], + default="markdown", + help="Output format (default: markdown)" + ) + + parser.add_argument( + "--output", + help="Output file path (optional, defaults to stdout)" + ) + + parser.add_argument( + "--context", + help="Path to JSON file with validation context" + ) + + args = parser.parse_args() + + # Load context if provided + context = {} + if args.context: + try: + with open(args.context, 'r') as f: + context = json.load(f) + except FileNotFoundError: + print(f"Warning: Context file not found: {args.context}", file=sys.stderr) + except json.JSONDecodeError as e: + print(f"Error: Invalid JSON in context file: {e}", file=sys.stderr) + return 1 + + # Generate report + generator = ReportGenerator(args.path, context) + report = generator.generate(args.format) + + # Output report + if args.output: + try: + with open(args.output, 'w') as f: + f.write(report) + print(f"Report generated: {args.output}") + except IOError as e: + print(f"Error writing to file: {e}", file=sys.stderr) + return 1 + else: + print(report) + + return 0 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/commands/quality-analysis/.scripts/scoring-algorithm.py b/commands/quality-analysis/.scripts/scoring-algorithm.py new file mode 100755 index 0000000..b11be68 --- /dev/null +++ b/commands/quality-analysis/.scripts/scoring-algorithm.py @@ -0,0 +1,239 @@ +#!/usr/bin/env python3 + +# ============================================================================ +# Quality Scoring Algorithm +# ============================================================================ +# Purpose: Calculate quality score 
(0-100) based on validation results +# Version: 1.0.0 +# Usage: ./scoring-algorithm.py --errors N --warnings N --missing N +# Returns: 0=success, 1=error +# Dependencies: Python 3.6+ +# ============================================================================ + +import sys +import argparse +import json + + +def calculate_quality_score(errors: int, warnings: int, missing_recommended: int) -> int: + """ + Calculate quality score based on validation issues. + + Algorithm: + score = 100 + score -= errors * 20 # Critical errors: -20 each + score -= warnings * 10 # Warnings: -10 each + score -= missing_recommended * 5 # Missing fields: -5 each + return max(0, score) + + Args: + errors: Number of critical errors + warnings: Number of warnings + missing_recommended: Number of missing recommended fields + + Returns: + Quality score (0-100) + """ + score = 100 + score -= errors * 20 + score -= warnings * 10 + score -= missing_recommended * 5 + return max(0, score) + + +def get_rating(score: int) -> str: + """ + Get quality rating based on score. + + Args: + score: Quality score (0-100) + + Returns: + Rating string + """ + if score >= 90: + return "Excellent" + elif score >= 75: + return "Good" + elif score >= 60: + return "Fair" + elif score >= 40: + return "Needs Improvement" + else: + return "Poor" + + +def get_stars(score: int) -> str: + """ + Get star rating based on score. + + Args: + score: Quality score (0-100) + + Returns: + Star rating string + """ + if score >= 90: + return "⭐⭐⭐⭐⭐" + elif score >= 75: + return "⭐⭐⭐⭐" + elif score >= 60: + return "⭐⭐⭐" + elif score >= 40: + return "⭐⭐" + else: + return "⭐" + + +def get_publication_readiness(score: int) -> str: + """ + Determine publication readiness based on score. 
+ + Args: + score: Quality score (0-100) + + Returns: + Publication readiness status + """ + if score >= 90: + return "Yes - Ready to publish" + elif score >= 75: + return "With Minor Changes - Nearly ready" + elif score >= 60: + return "Needs Work - Significant improvements needed" + else: + return "Not Ready - Major overhaul required" + + +def format_output(score: int, errors: int, warnings: int, missing: int, + output_format: str = "text") -> str: + """ + Format score output in requested format. + + Args: + score: Quality score + errors: Error count + warnings: Warning count + missing: Missing field count + output_format: Output format (text, json, compact) + + Returns: + Formatted output string + """ + rating = get_rating(score) + stars = get_stars(score) + readiness = get_publication_readiness(score) + + if output_format == "json": + return json.dumps({ + "score": score, + "rating": rating, + "stars": stars, + "publication_ready": readiness, + "breakdown": { + "base_score": 100, + "errors_penalty": errors * 20, + "warnings_penalty": warnings * 10, + "missing_penalty": missing * 5 + }, + "counts": { + "errors": errors, + "warnings": warnings, + "missing": missing + } + }, indent=2) + + elif output_format == "compact": + return f"{score}/100 {stars} ({rating})" + + else: # text format + error_penalty = errors * 20 + warning_penalty = warnings * 10 + missing_penalty = missing * 5 + + return f"""━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +QUALITY SCORE CALCULATION +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Score: {score}/100 +Rating: {rating} +Stars: {stars} + +Breakdown: + Base Score: 100 + Critical Errors: -{error_penalty} ({errors} × 20) + Warnings: -{warning_penalty} ({warnings} × 10) + Missing Fields: -{missing_penalty} ({missing} × 5) + ───────────────────── + Final Score: {score}/100 + +Publication Ready: {readiness} +""" + + +def main(): + """Main CLI interface.""" + parser = argparse.ArgumentParser( + description="Calculate quality score based on validation results", 
+ formatter_class=argparse.RawDescriptionHelpFormatter, + epilog=""" +Examples: + %(prog)s --errors 2 --warnings 5 --missing 3 + %(prog)s --errors 0 --warnings 0 --missing 0 + %(prog)s --errors 1 --format json + """ + ) + + parser.add_argument( + "--errors", + type=int, + default=0, + help="Number of critical errors (default: 0)" + ) + + parser.add_argument( + "--warnings", + type=int, + default=0, + help="Number of warnings (default: 0)" + ) + + parser.add_argument( + "--missing", + type=int, + default=0, + help="Number of missing recommended fields (default: 0)" + ) + + parser.add_argument( + "--format", + choices=["text", "json", "compact"], + default="text", + help="Output format (default: text)" + ) + + args = parser.parse_args() + + # Validate inputs + if args.errors < 0 or args.warnings < 0 or args.missing < 0: + print("Error: Counts cannot be negative", file=sys.stderr) + return 1 + + # Calculate score + score = calculate_quality_score(args.errors, args.warnings, args.missing) + + # Format and print output + output = format_output( + score, + args.errors, + args.warnings, + args.missing, + args.format + ) + print(output) + + return 0 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/commands/quality-analysis/calculate-score.md b/commands/quality-analysis/calculate-score.md new file mode 100644 index 0000000..3fc4530 --- /dev/null +++ b/commands/quality-analysis/calculate-score.md @@ -0,0 +1,112 @@ +## Operation: Calculate Quality Score + +Calculate comprehensive quality score (0-100) based on validation results with star rating. 
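The scoring formula and rating tiers used by this operation can be sketched in a few lines of Python. This mirrors the algorithm implemented in `.scripts/scoring-algorithm.py`; the function names here are illustrative, not part of the shipped script:

```python
def quality_score(errors: int, warnings: int, missing: int) -> int:
    """Start from 100 and subtract a fixed penalty per issue class."""
    return max(0, 100 - errors * 20 - warnings * 10 - missing * 5)

def rating(score: int) -> str:
    """Map a 0-100 score onto the plugin's rating tiers."""
    tiers = [(90, "Excellent"), (75, "Good"), (60, "Fair"), (40, "Needs Improvement")]
    for threshold, label in tiers:
        if score >= threshold:
            return label
    return "Poor"

# 2 errors, 5 warnings, 3 missing fields -> 100 - 40 - 50 - 15, floored at 0
print(quality_score(2, 5, 3), rating(quality_score(2, 5, 3)))  # → 0 Poor
```

Note that penalties compound quickly: two critical errors plus a handful of warnings already push a plugin below the publication threshold, which is why P0 issues are treated as blocking.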
+ +### Parameters from $ARGUMENTS + +Extract these parameters from `$ARGUMENTS`: + +- **path**: Target path to analyze (required) +- **errors**: Critical error count (default: 0) +- **warnings**: Warning count (default: 0) +- **missing**: Missing recommended fields count (default: 0) + +### Scoring Algorithm + +Execute the quality scoring algorithm using `.scripts/scoring-algorithm.py`: + +**Algorithm**: +``` +score = 100 +score -= (errors × 20) # Critical errors: -20 points each +score -= (warnings × 10) # Warnings: -10 points each +score -= (missing × 5) # Missing recommended: -5 points each +score = max(0, score) # Floor at 0 +``` + +**Rating Thresholds**: +- **90-100**: Excellent ⭐⭐⭐⭐⭐ (publication-ready) +- **75-89**: Good ⭐⭐⭐⭐ (ready with minor improvements) +- **60-74**: Fair ⭐⭐⭐ (needs work) +- **40-59**: Needs Improvement ⭐⭐ (substantial work needed) +- **0-39**: Poor ⭐ (major overhaul required) + +### Workflow + +1. **Parse Arguments** + ``` + Extract path, errors, warnings, missing from $ARGUMENTS + Validate that path exists + Set defaults for missing parameters + ``` + +2. **Calculate Score** + ```bash + Invoke Bash tool to execute: + python3 .claude/commands/quality-analysis/.scripts/scoring-algorithm.py \ + --errors $errors \ + --warnings $warnings \ + --missing $missing + ``` + +3. **Format Output** + ``` + Display results in user-friendly format with: + - Numeric score (0-100) + - Rating (Excellent/Good/Fair/Needs Improvement/Poor) + - Star rating (⭐⭐⭐⭐⭐) + - Publication readiness status + ``` + +### Examples + +```bash +# Calculate score with validation results +/quality-analysis score path:. errors:2 warnings:5 missing:3 + +# Calculate perfect score +/quality-analysis score path:. errors:0 warnings:0 missing:0 + +# Calculate score with only errors +/quality-analysis score path:. 
errors:3 +``` + +### Error Handling + +- **Missing path**: Request path parameter +- **Invalid counts**: Negative numbers default to 0 +- **Script not found**: Provide clear error message with remediation +- **Python not available**: Fallback to bash calculation + +### Output Format + +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +QUALITY SCORE CALCULATION +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Target: + +Score: <0-100>/100 +Rating: +Stars: <⭐⭐⭐⭐⭐> + +Breakdown: + Base Score: 100 + Critical Errors: - + Warnings: - + Missing Fields: - + ───────────────────── + Final Score: /100 + +Publication Ready: +``` + +### Integration Notes + +This operation is typically invoked by: +- `full-analysis.md` as first step +- `validation-orchestrator` after comprehensive validation +- Direct user invocation for score-only calculation + +**Request**: $ARGUMENTS diff --git a/commands/quality-analysis/full-analysis.md b/commands/quality-analysis/full-analysis.md new file mode 100644 index 0000000..71e2c8e --- /dev/null +++ b/commands/quality-analysis/full-analysis.md @@ -0,0 +1,330 @@ +## Operation: Full Quality Analysis + +Execute comprehensive quality analysis orchestrating all sub-operations to generate complete assessment. + +### Parameters from $ARGUMENTS + +Extract these parameters from `$ARGUMENTS`: + +- **path**: Target path to analyze (required) +- **context**: Path to validation context JSON file with prior results (optional) +- **format**: Report output format - markdown|json|html (default: markdown) +- **output**: Output file path for report (optional) + +### Full Analysis Workflow + +This operation orchestrates all quality-analysis sub-operations to provide a complete quality assessment. + +**1. 
Load Validation Context** +``` +IF context parameter provided: + Read validation results from JSON file + Extract: + - Errors count + - Warnings count + - Missing fields count + - Validation layer results + - Detailed issue list +ELSE: + Use default values: + - errors: 0 + - warnings: 0 + - missing: 0 +``` + +**2. Calculate Base Score** +``` +Read calculate-score.md operation instructions +Execute scoring with validation results: + +python3 .scripts/scoring-algorithm.py \ + --errors $errors \ + --warnings $warnings \ + --missing $missing \ + --format json + +Capture: +- Quality score (0-100) +- Rating (Excellent/Good/Fair/Needs Improvement/Poor) +- Star rating (⭐⭐⭐⭐⭐) +- Publication readiness status +``` + +**3. Prioritize All Issues** +``` +Read prioritize-issues.md operation instructions + +IF context has issues: + Write issues to temporary JSON file + Execute issue prioritization: + + bash .scripts/issue-prioritizer.sh $temp_issues_file + + Capture: + - P0 (Critical) issues with details + - P1 (Important) issues with details + - P2 (Recommended) issues with details +ELSE: + Skip (no issues to prioritize) +``` + +**4. Generate Improvement Suggestions** +``` +Read suggest-improvements.md operation instructions +Generate actionable recommendations: + +Target score: 90 (publication-ready) +Current score: $calculated_score + +Generate suggestions for: +- Quick wins (< 30 min, high impact) +- This week improvements (< 2 hours) +- Long-term enhancements + +Include: +- Score impact per suggestion +- Effort estimates +- Priority assignment +- Detailed fix instructions +``` + +**5. 
Generate Comprehensive Report** +``` +Read generate-report.md operation instructions +Execute report generation: + +python3 .scripts/report-generator.py \ + --path $path \ + --format $format \ + --context $aggregated_context \ + --output $output + +Report includes: +- Executive summary +- Quality score and rating +- Validation layer breakdown +- Prioritized issues (P0/P1/P2) +- Improvement recommendations +- Detailed findings +``` + +**6. Aggregate and Display Results** +``` +Combine all outputs into unified assessment: + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +COMPREHENSIVE QUALITY ANALYSIS +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Target: +Type: +Analyzed: + +QUALITY SCORE: <0-100>/100 <⭐⭐⭐⭐⭐> +Rating: +Publication Ready: + +CRITICAL ISSUES: +IMPORTANT ISSUES: +RECOMMENDATIONS: + +[Executive Summary - 2-3 sentences on readiness] + +[If not publication-ready, show top 3 quick wins] + +[Report file location if output specified] +``` + +### Workflow Steps + +1. **Initialize Analysis** + ``` + Validate path exists + Load validation context if provided + Set up temporary files for intermediate results + ``` + +2. **Execute Operations Sequentially** + ``` + Step 1: Calculate Score + └─→ Invoke scoring-algorithm.py + └─→ Store result in context + + Step 2: Prioritize Issues (if issues exist) + └─→ Invoke issue-prioritizer.sh + └─→ Store categorized issues in context + + Step 3: Generate Suggestions + └─→ Analyze score gap + └─→ Create actionable recommendations + └─→ Store in context + + Step 4: Generate Report + └─→ Invoke report-generator.py + └─→ Aggregate all context data + └─→ Format in requested format + └─→ Output to file or stdout + ``` + +3. **Present Summary** + ``` + Display high-level results + Show publication readiness + Highlight critical blockers (if any) + Show top quick wins + Provide next steps + ``` + +### Examples + +```bash +# Full analysis with validation context +/quality-analysis full-analysis path:. 
context:"@validation-results.json" + +# Full analysis generating HTML report +/quality-analysis full-analysis path:. format:html output:quality-report.html + +# Full analysis with JSON output +/quality-analysis full-analysis path:. context:"@results.json" format:json output:analysis.json + +# Basic full analysis (no prior context) +/quality-analysis full-analysis path:. +``` + +### Error Handling + +- **Missing path**: Request target path parameter +- **Invalid context file**: Continue with limited data, show warning +- **Script execution failures**: Show which operation failed, provide fallback +- **Output write errors**: Fall back to stdout with warning +- **No issues found**: Congratulate on perfect quality, skip issue operations + +### Output Format + +**Terminal Output**: +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +COMPREHENSIVE QUALITY ANALYSIS +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Target: /path/to/plugin +Type: Claude Code Plugin +Analyzed: 2025-10-13 14:30:00 + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +QUALITY SCORE +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +85/100 ⭐⭐⭐⭐ (Good) +Publication Ready: With Minor Changes + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +ISSUES SUMMARY +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Critical (P0): 0 ✅ +Important (P1): 3 ⚠️ +Recommended (P2): 5 💡 + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +EXECUTIVE SUMMARY +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Your plugin is nearly ready for publication! No critical blockers +found. Address 3 important issues to reach excellent status (90+). +Quality foundation is solid with good documentation and security. + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +TOP QUICK WINS +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +1. [+10 pts] Add CHANGELOG.md (15 minutes) + Impact: Improves version tracking + Fix: Create CHANGELOG.md with version history + +2. [+3 pts] Add 2 more keywords (5 minutes) + Impact: Better discoverability + Fix: Add relevant keywords to plugin.json + +3. 
[+2 pts] Add repository URL (2 minutes) + Impact: Professional appearance + Fix: Add repository field to plugin.json + +After Quick Wins: 100/100 ⭐⭐⭐⭐⭐ (Excellent) + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +DETAILED REPORT +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Full report saved to: quality-report.md + +Next Steps: +1. Review detailed report for all findings +2. Implement quick wins (22 minutes total) +3. Re-run validation to verify improvements +4. Submit to OpenPlugins marketplace + +Questions? Consult: docs.claude.com/plugins +``` + +### Integration Notes + +This operation is the **primary entry point** for complete quality assessment. + +**Invoked by**: +- `validation-orchestrator` after comprehensive validation +- `marketplace-validator` agent for submission readiness +- Direct user invocation for full assessment + +**Orchestrates**: +- `calculate-score.md` - Quality scoring +- `prioritize-issues.md` - Issue categorization +- `suggest-improvements.md` - Actionable recommendations +- `generate-report.md` - Comprehensive reporting + +**Data Flow**: +``` +Validation Results + ↓ +Calculate Score → score, rating, stars + ↓ +Prioritize Issues → P0/P1/P2 categorization + ↓ +Suggest Improvements → actionable recommendations + ↓ +Generate Report → formatted comprehensive report + ↓ +Display Summary → user-friendly terminal output +``` + +### Performance + +- **Execution Time**: 2-5 seconds (depending on issue count) +- **I/O Operations**: Minimal (uses temporary files for large datasets) +- **Memory Usage**: Low (streaming JSON processing) +- **Parallelization**: Sequential (each step depends on previous) + +### Quality Assurance + +**Validation Steps**: +1. Verify all scripts are executable +2. Check Python 3.6+ availability +3. Validate JSON context format +4. Verify write permissions for output +5. 
Ensure scoring algorithm consistency + +**Testing**: +```bash +# Test with perfect plugin +/quality-analysis full-analysis path:./test-fixtures/perfect-plugin + +# Test with issues +/quality-analysis full-analysis path:./test-fixtures/needs-work + +# Test report formats +/quality-analysis full-analysis path:. format:json +/quality-analysis full-analysis path:. format:html +/quality-analysis full-analysis path:. format:markdown +``` + +**Request**: $ARGUMENTS diff --git a/commands/quality-analysis/generate-report.md b/commands/quality-analysis/generate-report.md new file mode 100644 index 0000000..24b674b --- /dev/null +++ b/commands/quality-analysis/generate-report.md @@ -0,0 +1,293 @@ +## Operation: Generate Quality Report + +Generate comprehensive quality report in multiple formats (markdown, JSON, HTML) with detailed findings and recommendations. + +### Parameters from $ARGUMENTS + +Extract these parameters from `$ARGUMENTS`: + +- **path**: Target path to analyze (required) +- **format**: Output format - markdown|json|html (default: markdown) +- **output**: Output file path (optional, defaults to stdout) +- **context**: Path to validation context JSON file with prior results (optional) + +### Report Structure + +**1. Executive Summary** +- Overall quality score and star rating +- Publication readiness determination +- Key findings at-a-glance +- Critical blockers (if any) + +**2. Validation Layers** +- Schema validation results (pass/fail with details) +- Security scan results (vulnerabilities found) +- Documentation quality assessment +- Best practices compliance check + +**3. Issues Breakdown** +- Priority 0 (Critical): Must fix before publication +- Priority 1 (Important): Should fix for quality +- Priority 2 (Recommended): Nice to have improvements + +**4. Improvement Roadmap** +- Prioritized action items with effort estimates +- Expected score improvement per fix +- Timeline to reach publication-ready (90+ score) + +**5. 
Detailed Findings** +- Full validation output from each layer +- Code examples and fix suggestions +- References to best practices documentation + +### Workflow + +1. **Load Validation Context** + ``` + IF context parameter provided: + Read validation results from context file + ELSE: + Use current validation state + + Extract: + - Quality score + - Validation layer results + - Issue lists + - Target metadata + ``` + +2. **Generate Report Sections** + ```python + Execute .scripts/report-generator.py with: + - Path to target + - Format (markdown|json|html) + - Validation context data + - Output destination + + Script generates: + - Executive summary + - Validation layer breakdown + - Prioritized issues + - Improvement suggestions + - Detailed findings + ``` + +3. **Format Output** + ``` + IF output parameter specified: + Write report to file + Display confirmation with file path + ELSE: + Print report to stdout + ``` + +4. **Display Summary** + ``` + Show brief summary: + - Report generated successfully + - Format used + - Output location (if file) + - Key metrics (score, issues) + ``` + +### Examples + +```bash +# Generate markdown report to stdout +/quality-analysis report path:. format:markdown + +# Generate JSON report to file +/quality-analysis report path:. format:json output:quality-report.json + +# Generate HTML report with context +/quality-analysis report path:. format:html context:"@validation-results.json" output:report.html + +# Quick markdown report from validation results +/quality-analysis report path:. 
context:"@comprehensive-validation.json" +``` + +### Error Handling + +- **Missing path**: Request target path +- **Invalid format**: List supported formats (markdown, json, html) +- **Context file not found**: Continue with limited data, warn user +- **Invalid JSON context**: Show parsing error, suggest validation +- **Write permission denied**: Show error, suggest alternative output location +- **Python not available**: Fallback to basic text report + +### Output Format + +**Markdown Report**: +```markdown +# Quality Assessment Report + +Generated: 2025-10-13 14:30:00 +Target: /path/to/plugin +Type: Claude Code Plugin + +## Executive Summary + +**Quality Score**: 85/100 ⭐⭐⭐⭐ (Good) +**Publication Ready**: With Minor Changes +**Critical Issues**: 0 +**Total Issues**: 8 + +Your plugin is nearly ready for publication! Address 3 important issues to reach excellent status. + +## Validation Results + +### Schema Validation ✅ PASS +- All required fields present +- Valid JSON syntax +- Correct semver format + +### Security Scan ✅ PASS +- No secrets exposed +- All URLs use HTTPS +- File permissions correct + +### Documentation ⚠️ WARNINGS (3 issues) +- Missing CHANGELOG.md (-10 pts) +- README could use 2 more examples (-5 pts) +- No architecture documentation + +### Best Practices ✅ PASS +- Naming convention correct +- Keywords appropriate (5/7) +- Category properly set + +## Issues Breakdown + +### Priority 0 (Critical): 0 issues +None - excellent! + +### Priority 1 (Important): 3 issues + +#### 1. Add CHANGELOG.md [+10 pts] +Missing version history and change documentation. 
+
+**Impact**: -10 quality score
+**Effort**: Low (15 minutes)
+**Fix**: Create CHANGELOG.md following Keep a Changelog format
+
+```bash
+# Create changelog
+cat > CHANGELOG.md << 'EOF'
+# Changelog
+
+## [1.0.0] - 2025-10-13
+### Added
+- Initial release
+EOF
+```
+
+**HTML Report**: Standalone styled page (titled "Quality Assessment Report") covering the same sections as the markdown report for sharing outside the terminal.
+
+### Integration Notes
+
+This operation is invoked by:
+- `full-analysis.md` as final step to consolidate results
+- `validation-orchestrator` for comprehensive reporting
+- Direct user invocation for custom reports
+
+The report aggregates data from:
+- `calculate-score.md` output
+- `prioritize-issues.md` categorization
+- `suggest-improvements.md` recommendations
+- All validation layer results
+
+**Request**: $ARGUMENTS

diff --git a/commands/quality-analysis/prioritize-issues.md b/commands/quality-analysis/prioritize-issues.md
new file mode 100644
index 0000000..e7baf11
--- /dev/null
+++ b/commands/quality-analysis/prioritize-issues.md
@@ -0,0 +1,178 @@
+## Operation: Prioritize Issues
+
+Categorize and prioritize validation issues by severity and impact using P0/P1/P2 tier system.
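The tiering this operation performs can be sketched as a simple severity-to-bucket mapping. The shipped categorization lives in `.scripts/issue-prioritizer.sh`; the Python below is an illustrative sketch, and the `severity` field names are assumptions about the issue JSON shape:

```python
# Illustrative sketch; the shipped implementation is .scripts/issue-prioritizer.sh
TIER_BY_SEVERITY = {
    "critical": "P0",     # ❌ blocks publication and installation
    "important": "P1",    # ⚠️ significantly reduces quality score
    "recommended": "P2",  # 💡 polish and discoverability
}

def prioritize(issues):
    """Bucket issues into P0/P1/P2 by their severity field (P2 if unknown)."""
    buckets = {"P0": [], "P1": [], "P2": []}
    for issue in issues:
        tier = TIER_BY_SEVERITY.get(issue.get("severity", ""), "P2")
        buckets[tier].append(issue)
    return buckets

issues = [
    {"severity": "critical", "message": "Missing required field: license"},
    {"severity": "important", "message": "Missing CHANGELOG.md"},
    {"severity": "recommended", "message": "Add keywords for discoverability"},
]
counts = {tier: len(found) for tier, found in prioritize(issues).items()}
print(counts)  # → {'P0': 1, 'P1': 1, 'P2': 1}
```

Unknown severities default to P2 rather than P0 so that a malformed issue record never falsely blocks publication.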
+ +### Parameters from $ARGUMENTS + +Extract these parameters from `$ARGUMENTS`: + +- **issues**: Path to JSON file with issues or inline JSON string (required) +- **criteria**: Prioritization criteria - severity|impact|effort (default: severity) + +### Prioritization Tiers + +**Priority 0 (P0) - Critical - Must Fix** +- Invalid JSON syntax (blocks parsing) +- Missing required fields (name, version, description, author, license) +- Security vulnerabilities (exposed secrets, dangerous patterns) +- Format violations (invalid semver, malformed URLs) +- Blocks: Publication and installation + +**Priority 1 (P1) - Important - Should Fix** +- Missing recommended fields (repository, homepage, keywords) +- Documentation gaps (incomplete README, missing CHANGELOG) +- Convention violations (naming, structure) +- Performance issues (slow scripts, inefficient patterns) +- Impact: Reduces quality score significantly + +**Priority 2 (P2) - Recommended - Nice to Have** +- Additional keywords for discoverability +- Enhanced examples and documentation +- Expanded test coverage +- Quality improvements and polish +- Impact: Minor quality score boost + +### Workflow + +1. **Parse Issue Data** + ``` + IF issues parameter starts with "@": + Read JSON from file (remove @ prefix) + ELSE IF issues is valid JSON: + Parse inline JSON + ELSE: + Error: Invalid issues format + ``` + +2. **Categorize Issues** + ```bash + Execute .scripts/issue-prioritizer.sh with issues data + Categorize each issue based on: + - Severity (critical, important, recommended) + - Impact on publication readiness + - Blocking status + - Effort to fix + ``` + +3. **Sort and Format** + ``` + Group issues by priority (P0, P1, P2) + Sort within each priority by impact + Format with appropriate icons: + - P0: ❌ (red X - blocking) + - P1: ⚠️ (warning - should fix) + - P2: 💡 (lightbulb - suggestion) + ``` + +4. 
**Generate Summary** + ``` + Count issues per priority + Calculate total fix effort + Estimate score improvement potential + ``` + +### Examples + +```bash +# Prioritize from validation results file +/quality-analysis prioritize issues:"@validation-results.json" + +# Prioritize inline JSON +/quality-analysis prioritize issues:'{"errors": [{"type": "missing_field", "field": "license"}]}' + +# Prioritize with impact criteria +/quality-analysis prioritize issues:"@results.json" criteria:impact +``` + +### Error Handling + +- **Missing issues parameter**: Request issues data +- **Invalid JSON format**: Show JSON parsing error with line number +- **Empty issues array**: Return "No issues found" message +- **File not found**: Show file path and suggest correct path +- **Script execution error**: Fallback to basic categorization + +### Output Format + +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +ISSUE PRIORITIZATION +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Total Issues: +Estimated Fix Time: