Initial commit

commands/analyze-skills.md (new file, 427 lines)

---
description: Analyze AI conversation exports to generate reusable Custom Skills
---

# Analyze Skills Command

You are a Claude Skills Architect analyzing a user's complete AI conversation history to identify, prioritize, and automatically generate custom Claude Skills. Custom Skills are reusable instruction sets with proper YAML frontmatter, supporting documentation, and templates that help Claude consistently produce high-quality outputs for recurring tasks.

**ultrathink**: Use extended thinking capabilities when you encounter:
- Large conversation datasets (>50 conversations) requiring deep pattern analysis
- Complex cross-platform deduplication decisions
- Ambiguous skill boundary determinations
- Statistical validation of pattern significance
- Strategic tradeoffs in skill consolidation

You decide when extended reasoning will improve analysis quality. Trust your judgment.

## Your Mission
Perform comprehensive analysis of conversation exports to:
1. Identify all potential custom skill opportunities
2. Eliminate redundancies and optimize skill boundaries
3. Generate complete, ready-to-use skill packages
4. Provide implementation roadmap and maintenance guidance
5. **Enable incremental processing** - skip previously analyzed conversations and build on prior work

**Analysis Approach:**
- Use extended reasoning to identify non-obvious patterns across conversations
- Think deeply about skill boundaries and overlap resolution
- Consider temporal patterns and user expertise evolution
- Validate pattern significance statistically before recommending skills
- Reason through cross-platform deduplication decisions carefully

## Input Format
The user should have their conversation export files in the `data-exports/` directory structure. If not already created, the `/skills-setup` command will create this automatically.

Expected structure:
```
data-exports/
├── chatgpt/                        # Place ChatGPT export files here
│   ├── conversations.json
│   ├── user.json
│   ├── shared_conversations.json
│   └── message_feedback.json (optional)
└── claude/                         # Place Claude export files here
    ├── conversations.json
    ├── projects.json
    └── users.json
```

**Note**: If you haven't run `/skills-setup` yet, use it first to create the necessary directory structure and get detailed export instructions.

### Claude Export Format (data-exports/claude/):
1. **conversations.json** - Complete conversation history with messages, timestamps, and metadata
2. **projects.json** - Project information including descriptions, documentation, and workflows
3. **users.json** - User account information (for privacy considerations and expertise assessment)

### ChatGPT Export Format (data-exports/chatgpt/):
1. **conversations.json** - Conversation history with mapping structure and message objects
2. **user.json** - User profile information with account details
3. **shared_conversations.json** - Shared conversation metadata with titles and IDs
4. **message_feedback.json** - User feedback on AI responses (if available)
5. **shopping.json** - Transaction and purchase data (if available)
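
For reference, a minimal sketch of flattening a ChatGPT conversation's `mapping` tree into an ordered message list. The field names (`mapping`, `parent`, `children`, `message.author.role`, `message.content.parts`) reflect commonly observed ChatGPT exports and should be treated as assumptions that can change between export versions:

```python
# Hypothetical helper: flatten one ChatGPT conversation's mapping tree into
# ordered messages. Field names are assumptions based on commonly observed
# ChatGPT exports and may differ between export versions.
import json

def flatten_conversation(conv: dict) -> list[dict]:
    mapping = conv.get("mapping", {})
    roots = [nid for nid, node in mapping.items() if not node.get("parent")]
    ordered, stack = [], list(reversed(roots))
    while stack:  # depth-first walk from the root node(s)
        node = mapping.get(stack.pop(), {})
        msg = node.get("message")
        if msg and msg.get("content", {}).get("parts"):
            ordered.append({
                "role": msg.get("author", {}).get("role"),
                "text": " ".join(p for p in msg["content"]["parts"] if isinstance(p, str)),
            })
        stack.extend(reversed(node.get("children", [])))
    return ordered

with open("data-exports/chatgpt/conversations.json") as f:
    conversations = json.load(f)
print(sum(len(flatten_conversation(c)) for c in conversations), "messages total")
```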

### Platform Detection:
Automatically detect available platforms by scanning both data-exports/ directories and adapt processing accordingly.
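
A minimal detection sketch, assuming the directory layout above; it returns a value matching the `platform_detected` field used later in the analysis log ("none" is just an illustrative fallback for an empty directory):

```python
# Hypothetical sketch: detect which platforms have export data available by
# checking for conversations.json under each data-exports/ subdirectory.
from pathlib import Path

def detect_platforms(base: Path = Path("data-exports")) -> str:
    found = [name for name in ("claude", "chatgpt")
             if (base / name / "conversations.json").is_file()]
    if len(found) == 2:
        return "mixed"
    return found[0] if found else "none"  # "none" is an illustrative fallback

print(detect_platforms())  # "claude", "chatgpt", "mixed", or "none"
```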
## Analysis Framework

This command uses the **[shared analysis methodology](../shared/analysis-methodology.md)** with export-specific enhancements.

### Phase 0: Analysis Scope Determination (Export-Specific)

1. **Check for Previous Analysis Log**:
   - If the user provides a previous analysis log (from prior runs), parse it to identify:
     - Previously analyzed conversation IDs and their analysis dates
     - Generated skills and their source conversations
     - File modification dates or content hashes of processed files
     - Analysis metadata (dates, conversation counts, skill counts)

2. **Determine Analysis Scope**:
   - Compare current conversation files with the previous analysis log
   - Identify new conversations (not in the previous log)
   - Identify potentially modified conversations (based on message counts, dates, or user indication)
   - Flag conversations that need analysis vs. those to skip for efficiency

3. **Output Analysis Plan**:
   - List conversations to be analyzed (new + potentially modified)
   - List conversations being skipped (unchanged from the previous run)
   - Estimated processing scope and rationale
   - Expected time and complexity of analysis
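
As a rough illustration of this phase, a sketch that diffs current conversation IDs against a prior `skills-analysis-log.json` (whose structure is shown later in this document); the conversation id/uuid field name is an assumption and varies by platform export:

```python
# Illustrative Phase 0 scoping: compare current conversation IDs with the
# previous skills-analysis-log.json (structure shown later in this document).
# The id/uuid field name is an assumption and varies by platform export.
import json
from pathlib import Path

def determine_scope(conversations: list[dict],
                    log_path: Path = Path("skills-analysis-log.json")) -> dict:
    previously_seen: set[str] = set()
    if log_path.is_file():
        log = json.loads(log_path.read_text())
        previously_seen = {c["id"] for c in log.get("conversations_analyzed", [])}
    current = {str(c.get("uuid") or c.get("id")) for c in conversations}
    return {
        "analyze": sorted(current - previously_seen),
        "skip": sorted(current & previously_seen),
    }
```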

### Phase 1: Data Processing & Pattern Discovery

**Use extended reasoning to identify subtle patterns across large conversation sets.**

1. **Platform Detection and Data Parsing** (Export-Specific):
   - Auto-detect export format (Claude vs ChatGPT)
   - Parse conversations, projects, user data based on platform
   - Extract expertise indicators and usage patterns

2. **Apply [Shared Pattern Discovery](../shared/analysis-methodology.md#phase-1-pattern-discovery--classification)**:
   - **Data-driven domain discovery** (let actual topics emerge - DO NOT force into predefined categories)
   - Task types (creation, transformation, analysis, troubleshooting, curation)
   - Explicit and implicit pattern markers
   - Niche & specialized pattern detection (hobbyist domains, creative work, prompt engineering, etc.)
   - Temporal pattern detection
   - User expertise evolution over time

3. **Export-Specific Enhancements**:
   - Cross-reference with project data (Claude exports):
     - How many projects demonstrate similar patterns?
     - Do project descriptions reinforce conversation patterns?
     - Project success indicators and user satisfaction
   - Message feedback analysis (ChatGPT exports):
     - User feedback patterns on AI responses
     - Quality improvement opportunities

**Think deeply about:**
- Are these truly distinct patterns or variations of the same workflow?
- What makes this pattern recurring vs. one-off requests?
- How do patterns evolve across the user's conversation timeline?

**Terminal Output - Domain Diversity Visualization:**

After completing pattern discovery, display an ASCII chart showing domain distribution to validate data-driven discovery:

```
📊 Domain Distribution Analysis

Business & Strategy    ████████████░░░░░░░░ 12 patterns (32%)
Creative & Writing     ██████████░░░░░░░░░░ 10 patterns (27%)
Image Prompting        ████████░░░░░░░░░░░░  8 patterns (22%)
Learning & Education   ████░░░░░░░░░░░░░░░░  4 patterns (11%)
Recipe & Cooking       ██░░░░░░░░░░░░░░░░░░  2 patterns (5%)
Gaming & Design        █░░░░░░░░░░░░░░░░░░░  1 pattern (3%)

✅ Domain Diversity: 6 distinct topic areas detected
✅ No predefined categorization - domains emerged from your data
```

This validates that the analysis discovered diverse patterns beyond traditional business/coding domains.

### Phase 2-4: Core Analysis

Apply the **[shared analysis methodology](../shared/analysis-methodology.md)** phases:

- **Phase 2**: Frequency & Temporal Analysis with project data cross-referencing
- **Phase 3**: Skill-Worthiness Scoring (0-50 composite scale)
- **Phase 4**: Relationship Mapping & Overlap Analysis

See [shared methodology](../shared/analysis-methodology.md) for complete details.

### Phase 5: Cross-Platform Pattern Deduplication (Export-Specific)

When processing mixed datasets (both ChatGPT and Claude exports), perform comprehensive deduplication before skill generation.

See **[shared methodology - Cross-Platform Deduplication](../shared/analysis-methodology.md#cross-platform-deduplication-export-analysis-only)** for:
- Content similarity detection
- Deduplication classification rules
- Pattern frequency recalculation
- Unified skill design preparation
- Deduplication validation

**Export-Specific Advantages:**
- Access to complete conversation history (not just recent/accessible)
- Project metadata integration (Claude)
- Message feedback data (ChatGPT)
- Temporal analysis across months/years
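
The concrete similarity rules live in the shared methodology; purely as an illustration, here is a rough sketch of flagging likely cross-platform duplicates by comparing normalized opening messages (the 0.9 threshold and the `first_user_message` field are assumptions, not part of this project):

```python
# Illustrative only: flag likely cross-platform duplicates by fuzzy-matching the
# normalized first user message of each conversation. The real classification
# rules are in the shared methodology; the threshold and field name are assumptions.
import difflib
import re

def normalize(text: str) -> str:
    return re.sub(r"\s+", " ", text.lower()).strip()

def likely_duplicates(claude_convs: list[dict], chatgpt_convs: list[dict],
                      threshold: float = 0.9) -> list[tuple]:
    pairs = []
    for a in claude_convs:
        for b in chatgpt_convs:
            ratio = difflib.SequenceMatcher(
                None,
                normalize(a.get("first_user_message", "")),
                normalize(b.get("first_user_message", "")),
            ).ratio()
            if ratio >= threshold:
                pairs.append((a.get("id"), b.get("id"), round(ratio, 2)))
    return pairs
```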

### Phase 6: Skill Generation & Optimization

**Use extended reasoning to optimize skill boundaries and maximize user value.**

Apply **[shared methodology - Prioritization Matrix](../shared/analysis-methodology.md#phase-5-prioritization-matrix)** and boundary optimization strategies.

**Export-Specific Enhancements:**
- Leverage project success data for impact validation
- Use message feedback for quality improvement insights
- Apply historical trend analysis for strategic pattern identification

## Output Generation Options

Ask user to choose:

**Option A: Analysis Report Only**
- Comprehensive analysis with recommendations
- No file generation
- Implementation guidance only

**Option B: Complete Implementation Package** (Recommended)
- Full analysis plus ready-to-use skills
- Proper folder structure with all supporting files
- Testing and validation guidance

**Option C: Incremental Implementation**
- Start with top 3-5 skills
- Provide complete package for priority skills
- Expansion roadmap for additional skills

**Option D: Custom Specification**
- User-defined subset of skills
- Specific modifications or requirements
- Tailored to particular use cases

## File Generation (Option B/C)

**Note**: If these directories don't exist, they will be automatically created by the analysis process.

### Create Analysis Reports
Generate timestamped reports in `reports/{TIMESTAMP}/`:

1. **`skills-analysis-log.json`** (Root directory) - Machine-readable incremental processing data

**Example structure:**
```json
{
  "analysis_date": "YYYY-MM-DDTHH:MM:SSZ",
  "platform_detected": "claude|chatgpt|mixed",
  "total_conversations": 150,
  "report_directory": "reports/2025-01-23_22-40-00",
  "conversations_analyzed": [
    {
      "id": "conv_123",
      "platform": "chatgpt|claude",
      "file": "data-exports/chatgpt/conversations.json",
      "message_count": 45,
      "first_message_date": "2024-01-01T10:00:00Z",
      "last_message_date": "2024-01-10T14:20:00Z",
      "analysis_hash": "sha256:abc123...",
      "topics_identified": ["coding", "documentation"],
      "patterns_found": 3
    }
  ],
  "deduplication_summary": {
    "cross_platform_duplicates_removed": 45,
    "workflow_instances_merged": 12,
    "frequency_adjustments": {
      "newsletter_critique": {"before": 1225, "after": 987},
      "business_communication": {"before": 709, "after": 643}
    }
  },
  "skills_generated": [
    {
      "skill_name": "newsletter-critique-specialist",
      "source_conversations": ["conv_123", "conv_789"],
      "frequency_score": 8,
      "impact_score": 9,
      "platform_coverage": "both",
      "generated_files": [
        "generated-skills/newsletter-critique-specialist/SKILL.md",
        "generated-skills/newsletter-critique-specialist/reference.md"
      ]
    }
  ],
  "analysis_metadata": {
    "total_patterns_identified": 25,
    "patterns_consolidated": 8,
    "patterns_deduplicated": 6,
    "final_skill_count": 5,
    "processing_time_minutes": 45
  }
}
```

2. **`comprehensive-skills-analysis.md`** - Complete pattern analysis with skill recommendations and prioritization visualization
3. **`implementation-guide.md`** - Actionable deployment roadmap

**Report Visualization Requirements:**

Include a Mermaid quadrant chart in `comprehensive-skills-analysis.md` showing the prioritization matrix:

````markdown
## 📊 Skill Prioritization Matrix

```mermaid
%%{init: {'theme':'base'}}%%
quadrantChart
title Skill Prioritization: Frequency vs Impact
x-axis Low Frequency --> High Frequency
y-axis Low Impact --> High Impact
quadrant-1 Quick Wins
quadrant-2 Strategic
quadrant-3 Defer
quadrant-4 Automate
[Skill Name 1]: [freq_score/10, impact_score/10]
[Skill Name 2]: [freq_score/10, impact_score/10]
[Skill Name 3]: [freq_score/10, impact_score/10]
```

**Legend:**
- **Quick Wins** (top-right): High frequency, high impact - implement first
- **Strategic** (top-left): Lower frequency but high value - critical capabilities
- **Automate** (bottom-right): High frequency, simpler - nice efficiency gains
- **Defer** (bottom-left): Low priority - consider simple prompts instead

**Calculations:**
- X-axis (Frequency): Use frequency score (0-10) from skill-worthiness evaluation
- Y-axis (Impact): Average of complexity + time savings + error reduction scores (0-10)
````
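
A small sketch of how those chart points could be computed; the `complexity`, `time_savings`, and `error_reduction` sub-scores are assumptions taken from the Calculations note above (only `frequency_score` and `impact_score` appear in the example log):

```python
# Hypothetical helper: turn skill scores into Mermaid quadrantChart points (0-1 range).
# complexity / time_savings / error_reduction are assumed sub-scores per the
# Calculations note; they are not fields of the example skills-analysis-log.json.
def to_chart_point(skill: dict) -> str:
    x = skill["frequency_score"] / 10
    y = (skill["complexity"] + skill["time_savings"] + skill["error_reduction"]) / 3 / 10
    return f'{skill["skill_name"]}: [{x:.2f}, {y:.2f}]'

print(to_chart_point({
    "skill_name": "newsletter-critique-specialist",
    "frequency_score": 8, "complexity": 7, "time_savings": 9, "error_reduction": 8,
}))  # -> newsletter-critique-specialist: [0.80, 0.80]
```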
### Generate Skill Packages
For each approved skill, create complete folder structure in `generated-skills/`:

```
skill-name/
├── SKILL.md (required - main skill with YAML frontmatter)
├── reference.md (detailed methodology and frameworks)
├── examples.md (additional examples and use cases)
├── templates/ (reusable templates for outputs)
│   ├── template-1.md
│   └── template-2.md
└── scripts/ (utility scripts if applicable)
    └── helper-script.py
```

**Auto-creation**: The `generated-skills/` directory will be created automatically when you select Option B or C.

### SKILL.md Generation Template
````markdown
---
name: [skill-name] # Only lowercase letters, numbers, and hyphens
description: [CRITICAL: Must include BOTH what the skill does AND when to use it. Written in third person. Include key trigger terms.]
---

# [Skill Name]

## Instructions
[Clear, step-by-step guidance - KEEP UNDER 500 LINES TOTAL]

1. **[Phase 1 Name]**
   - [Specific instruction 1]
   - [Specific instruction 2]

2. **[Apply Framework/Method]** from [reference.md](reference.md):
   - [Framework element 1]
   - [Framework element 2]

3. **[Use Templates]** from [templates/](templates/):
   - [Template 1 description and usage]
   - [Template 2 description and usage]

4. **[Quality Standards]**:
   - [Standard 1]
   - [Standard 2]

## Examples

### [Example Scenario 1]
**User Request**: "[Realistic user request]"

**Response using methodology**:
```
[Complete example showing proper skill usage]
```

For more examples, see [examples.md](examples.md).
For detailed methodology, see [reference.md](reference.md).
````

## Quality Standards

All quality standards follow the **[shared analysis methodology](../shared/analysis-methodology.md#quality-standards)**:

- Pattern validation requirements (frequency, consistency, evidence)
- Skill consolidation rules (max 8-12 skills, clear boundaries)
- Skill package generation standards
- Anti-patterns to avoid

**Export-Specific Enhancements:**
- Minimum frequency: 50+ occurrences OR high strategic value (with complete history available)
- Cross-platform evidence: Include examples from both platforms when available
- Project data validation: Cross-reference patterns with project success metrics

## Instructions for Execution

1. **Initialize Timestamp**: Create `TIMESTAMP=$(date +%Y-%m-%d_%H-%M-%S)`
2. **Create Reports Directory**: `mkdir -p reports/${TIMESTAMP}`
3. **Check for Previous Analysis Log**: Look for existing `skills-analysis-log.json` in root directory
4. **Scan Data Directories**: Check `data-exports/chatgpt/` and `data-exports/claude/` for available platforms
5. **Determine analysis scope** using Phase 0 if previous log exists
6. **Start with user choice** of output option (A/B/C/D)
7. **Perform complete analysis** following all phases for determined scope
8. **Execute cross-platform deduplication** if both ChatGPT and Claude data detected (Phase 5)
9. **Generate output files**:
   - Update/create `skills-analysis-log.json` in root directory
   - Create `reports/{TIMESTAMP}/comprehensive-skills-analysis.md`
   - Create `reports/{TIMESTAMP}/implementation-guide.md`
10. **Generate skill packages** in `generated-skills/` if requested (Option B/C)
11. **Validate all content** using quality framework and analysis standards
12. **Cleanup Phase**:
    - Remove temporary analysis scripts from `scripts/` directory
    - Delete intermediate data processing files (*.tmp, *.cache, etc.)
    - Remove empty directories created during processing
    - Clean up any Python virtual environments or temporary dependencies
    - Remove duplicate or staging files from skill generation process
13. **Archive Organization** (Optional):
    - Compress older reports directories (keep last 3-5 runs)
    - Move temporary logs to archive subdirectory
    - Consolidate debug output into single log file
14. **Cleanup Validation**:
    - Verify all essential outputs remain intact:
      - `skills-analysis-log.json` (root)
      - `reports/{TIMESTAMP}/` directory with analysis reports
      - `generated-skills/` directory with skill packages
    - Confirm no critical files were accidentally removed
    - Display cleanup summary showing what was removed vs. retained
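
Steps 1-4 can also be scripted; a compact illustrative bootstrap (directory names follow this document, everything else is a sketch):

```python
# Illustrative bootstrap for execution steps 1-4: timestamp, reports directory,
# previous-log check, and platform scan. Paths follow this document's layout.
import json
from datetime import datetime
from pathlib import Path

timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
report_dir = Path("reports") / timestamp
report_dir.mkdir(parents=True, exist_ok=True)

log_path = Path("skills-analysis-log.json")
previous_log = json.loads(log_path.read_text()) if log_path.is_file() else None

platforms = [p for p in ("chatgpt", "claude")
             if (Path("data-exports") / p / "conversations.json").is_file()]
print(f"report dir: {report_dir} | platforms: {platforms} | previous log: {previous_log is not None}")
```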

### Quality Focus Requirements

Apply **[shared methodology quality standards](../shared/analysis-methodology.md#quality-standards)** with export-specific validation:
- Eliminate generic patterns and focus on specific workflows
- Consolidate overlapping skills (max 8-12, recommend top 5-8)
- Validate frequency claims post-deduplication
- Prioritize by genuine impact (>30 min/week time savings)
- Platform-agnostic design for all generated skills

### For Incremental Processing
If the user provides a previous analysis log:
- Parse the log to understand what was previously analyzed
- Skip unchanged conversations (based on IDs and metadata)
- Focus on new or modified conversations only
- Re-run deduplication if new platform data is added
- Integrate new findings with previous skill recommendations
- Update the analysis log with new data

**Data Location**: JSON files are located in `data-exports/chatgpt/` and `data-exports/claude/` subdirectories. The system will automatically detect available platform(s) and process files accordingly.
commands/extract-exports.md (new file, 267 lines)

---
description: Automatically extract and organize AI conversation export zip files into proper directory structure
---

# Extract Exports Command

Automatically extract AI conversation export zip files (Claude and/or ChatGPT) and organize them into the proper directory structure for skills analysis.

## What This Command Does

1. **Detects export zip files** in your current directory
2. **Identifies platform type** (Claude vs ChatGPT) by examining contents
3. **Creates directory structure** if it doesn't exist
4. **Extracts and organizes files** into correct locations
5. **Validates file placement** and reports results
6. **Cleans up** temporary files and optionally removes original zips

## Expected Directory Structure After Extraction

```
your-project/
├── data-exports/
│   ├── claude/                     # Claude export files
│   │   ├── conversations.json
│   │   ├── projects.json
│   │   └── users.json
│   └── chatgpt/                    # ChatGPT export files
│       ├── conversations.json
│       ├── user.json
│       ├── shared_conversations.json
│       └── message_feedback.json
├── reports/                        # Analysis reports (created when needed)
└── generated-skills/               # Generated skills (created when needed)
```

## How to Use

1. **Download your export zip files** from Claude and/or ChatGPT
2. **Place zip files in your current directory** (where you want the analysis to happen)
3. **Run this command**: The system will handle the rest automatically

## Supported Export Formats

### Claude Exports
Expected zip contents:
- `conversations.json` (required)
- `projects.json` (optional)
- `users.json` (optional)

### ChatGPT Exports
Expected zip contents:
- `conversations.json` (required)
- `user.json` (optional)
- `shared_conversations.json` (optional)
- `message_feedback.json` (optional)
- `shopping.json` (optional - will be ignored)

## Instructions for Execution

1. **Scan Current Directory**:
   - Look for `*.zip` files in current directory
   - Report found zip files to user for confirmation

2. **Create Directory Structure**:
   - Create `data-exports/` directory if it doesn't exist
   - Create `data-exports/claude/` subdirectory
   - Create `data-exports/chatgpt/` subdirectory
   - Create `reports/` and `generated-skills/` directories for future use

3. **Process Each Zip File**:
   - Extract to temporary directory (`temp_extract_TIMESTAMP/`)
   - Examine contents to identify platform type
   - Look for key indicator files:
     - Claude: `conversations.json` + `projects.json` present
     - ChatGPT: `conversations.json` + `user.json` present
     - Mixed/Unknown: Ask user to specify platform

4. **Platform Detection Logic** (a runnable sketch follows this list):
   ```
   If contains "projects.json" → Claude export
   Else if contains "user.json" → ChatGPT export
   Else if only "conversations.json" → Ask user to specify
   Else → Invalid export format
   ```

5. **File Organization**:
   - **For Claude exports**: Move all JSON files to `data-exports/claude/`
   - **For ChatGPT exports**: Move all JSON files to `data-exports/chatgpt/`
   - Skip non-JSON files (README, etc.)
   - Handle file conflicts by asking user preference (overwrite/skip/backup)

6. **Validation**:
   - Verify required files (`conversations.json`) are present
   - Check file sizes are reasonable (not empty, not suspiciously large)
   - Validate JSON format of critical files
   - Report any missing optional files

7. **Cleanup Options**:
   - Remove temporary extraction directories
   - Ask user about original zip files:
     - **Keep**: Leave original zips in place
     - **Archive**: Move to `archives/` subdirectory
     - **Delete**: Remove original zips completely

8. **Final Report**:
   - Summary of extracted files by platform
   - Location of organized files
   - Any warnings or issues encountered
   - Next steps recommendation (run `/analyze-skills`)
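
A runnable version of the step 4 detection rule, applied to an already-extracted temporary directory:

```python
# Sketch of the platform detection rule from step 4, applied to an extracted
# temp directory. Matches the indicator-file logic above; "unknown" means the
# user should be asked to specify the platform.
from pathlib import Path

def detect_platform(extract_dir: Path) -> str:
    names = {p.name for p in extract_dir.rglob("*.json")}
    if "projects.json" in names:
        return "claude"
    if "user.json" in names:
        return "chatgpt"
    if "conversations.json" in names:
        return "unknown"   # ask the user to specify
    return "invalid"       # not a recognized export format
```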
## User Interaction Flow

```
Found the following zip files:
• claude_export_2024_01_20.zip (2.3 MB)
• chatgpt_export_jan_2024.zip (4.1 MB)

Proceed with extraction and organization? [Y/n]

✅ Creating directory structure...
✅ Extracting claude_export_2024_01_20.zip...
   → Detected: Claude export (found projects.json)
   → Moving files to data-exports/claude/
   → Files: conversations.json, projects.json, users.json

✅ Extracting chatgpt_export_jan_2024.zip...
   → Detected: ChatGPT export (found user.json)
   → Moving files to data-exports/chatgpt/
   → Files: conversations.json, user.json, shared_conversations.json

✅ Validation complete:
• Claude: 1,247 conversations in conversations.json
• ChatGPT: 892 conversations in conversations.json

What should I do with the original zip files?
[K]eep them / [A]rchive them / [D]elete them: A

✅ Original zips moved to archives/

🎉 Export organization complete!

Your files are now ready for analysis:
• Claude exports: data-exports/claude/
• ChatGPT exports: data-exports/chatgpt/

Next step: Run `/analyze-skills` to identify skill opportunities
```

## Error Handling

### Common Issues & Solutions

**No zip files found**:
- Check current directory for `*.zip` files
- Verify zip files are conversation exports (not other types)
- Provide guidance on downloading exports if needed

**Corrupted or invalid zip files**:
- Report which zip file has issues
- Suggest re-downloading from the platform
- Continue with other valid zip files

**Missing required files**:
- Report missing `conversations.json`
- Explain impact on analysis capability
- Suggest contacting platform support

**JSON parsing errors**:
- Report which file has JSON format issues
- Attempt to continue with other files
- Suggest platform support if file appears corrupted

**Directory permission issues**:
- Check write permissions for current directory
- Provide clear error message and resolution steps
- Suggest alternative directory if needed

**Disk space issues**:
- Check available disk space before extraction
- Estimate space needed based on zip file sizes
- Provide cleanup recommendations if space is low

## File Conflict Resolution

When files already exist in target directories:

**conversations.json exists**:
```
Found existing conversations.json in data-exports/claude/
• Existing: 1,156 conversations (last modified: 2024-01-15)
• New: 1,247 conversations (from zip file)

Choose action:
[O]verwrite with new file
[B]ackup existing and use new
[S]kip extraction (keep existing)
[C]ompare and merge (advanced)

Your choice: B

✅ Existing file backed up as conversations_backup_2024-01-20.json
✅ New file extracted as conversations.json
```

## Integration with Other Commands

**Seamless workflow with existing commands**:
- After successful extraction → Suggest running `/analyze-skills`
- If directory structure missing → Automatically create (no need for `/skills-setup`)
- Before analysis → Check for proper file organization

**State validation**:
- Check if exports are already extracted before running
- Detect incremental updates (new exports vs existing data)
- Provide smart recommendations based on current state

## Quality Standards

**File Validation Requirements**:
- Verify JSON files are valid JSON format
- Check that conversations.json contains actual conversation data
- Validate file sizes are within reasonable ranges (not empty, not huge)
- Ensure required fields exist in JSON structure

**Security Considerations**:
- Only extract to controlled subdirectories
- Sanitize file names to prevent directory traversal
- Validate zip contents before extraction
- Limit extraction to reasonable file sizes
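
For the path-sanitization point above, a minimal sketch of a traversal-safe extraction loop (the 500 MB cap is an arbitrary illustrative limit, not a project requirement):

```python
# Sketch of traversal-safe zip extraction: reject members that would land
# outside the target directory and skip implausibly large files.
# The 500 MB cap is an arbitrary illustrative limit, not a project requirement.
import zipfile
from pathlib import Path

MAX_MEMBER_BYTES = 500 * 1024 * 1024

def safe_extract(zip_path: Path, target: Path) -> None:
    target = target.resolve()
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.infolist():
            dest = (target / member.filename).resolve()
            if not dest.is_relative_to(target):
                raise ValueError(f"Blocked path traversal attempt: {member.filename}")
            if member.file_size > MAX_MEMBER_BYTES:
                print(f"Skipping oversized member: {member.filename}")
                continue
            zf.extract(member, target)
```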

**User Experience Standards**:
- Clear progress indicators during extraction
- Descriptive error messages with resolution steps
- Confirmation prompts for destructive actions
- Helpful next-step recommendations

## Advanced Features

**Incremental Processing Support**:
- Detect if exports contain new conversations vs existing data
- Smart merge options for updated exports
- Preserve existing analysis logs and reports

**Batch Processing**:
- Handle multiple zip files from same platform
- Merge multiple ChatGPT exports if user has multiple accounts
- Consolidate multiple Claude exports from different time periods

**Cross-Platform Intelligence**:
- Detect potential duplicate conversations across platforms
- Flag cross-platform analysis opportunities
- Prepare data for unified analysis workflow

## Commands Integration

This command works seamlessly with:
- **`/skills-setup`**: No longer needed if this command is run first
- **`/analyze-skills`**: Ready to run immediately after extraction
- **`/skills-troubleshoot`**: Can diagnose extraction-related issues

**Recommended workflow**:
1. Download exports from Claude/ChatGPT
2. Run `/extract-exports` (this command)
3. Run `/analyze-skills` for pattern analysis
4. Implement recommended skills

Ready to automatically organize your conversation exports!
commands/skills-setup.md (new file, 226 lines)

---
description: Guide you through setting up AI conversation exports for skills analysis
---

# Skills Analysis Setup Guide

I'll walk you through setting up your AI conversation exports for analysis and help you get the most value from the Claude Skills Analyzer.

## 📋 What You'll Need

To analyze your AI usage patterns and generate Custom Skills, you need:

1. **AI conversation exports** (Claude and/or ChatGPT)
2. **Organized directory structure** for your data
3. **At least 20+ conversations** for basic analysis (50+ recommended for robust patterns)

## 📥 How to Export Your Conversations

### For Claude Conversations:

**Step 1: Request Export**
1. Go to [claude.ai/settings](https://claude.ai/settings)
2. Click on **"Privacy"** in the left sidebar
3. Look for **"Request data export"** or similar option
4. Click the button to request your export

**Step 2: Wait for Email**
- Claude will send you an email when your export is ready
- This typically takes 24 hours (sometimes sooner)
- The email will contain a download link

**Step 3: Extract and Save**
1. Download the ZIP file from the email
2. **Option A - Automatic** (Recommended): Place the ZIP file in your project directory and run `/extract-exports` - it will automatically extract and organize everything for you
3. **Option B - Manual**: Extract all files (you'll have `conversations.json`, `projects.json`, etc.) and save them to your `data-exports/claude/` folder (we'll create this next)

**Files you should see:**
- `conversations.json` (required) - Your conversation history
- `projects.json` (optional) - Project information and metadata
- `users.json` (optional) - Account information

**⚠️ Important Note:**
Claude exports from claude.ai **DO NOT include Claude Code conversations**. Only conversations from the web interface (claude.ai) are included in the export. If you primarily use Claude Code, your export may have fewer conversations than expected.

### For ChatGPT Conversations:

**Step 1: Request Export**
1. Go to [chatgpt.com/settings/general](https://chatgpt.com/settings/general)
2. Scroll down to **"Data controls"** section
3. Click **"Export data"**
4. Select what you want to export (conversations are usually pre-selected)
5. Confirm the export request

**Step 2: Wait for Email**
- ChatGPT will send you an email when your export is ready
- This usually takes 2-4 hours
- The email will contain a download link

**Step 3: Extract and Save**
1. Download the ZIP file from the email
2. **Option A - Automatic** (Recommended): Place the ZIP file in your project directory and run `/extract-exports` - it will automatically extract and organize everything for you
3. **Option B - Manual**: Extract all files (you'll have `conversations.json`, `user.json`, etc.) and save them to your `data-exports/chatgpt/` folder (we'll create this next)

**Files you should see:**
- `conversations.json` (required) - Your conversation history
- `user.json` (optional) - Account information
- `shared_conversations.json` (optional) - Shared conversations metadata
- `message_feedback.json` (optional) - Your feedback on responses

### Troubleshooting Export:

**Can't find Settings?**
- Claude: Look for your account icon (bottom left) → Settings → Privacy & Data
- ChatGPT: Click your account name (bottom left) → Settings → scroll to Data controls

**Export not arriving?**
- Check spam/junk email folder
- Wait a bit longer (can take up to 24 hours)
- Try requesting again if it's been >24 hours

**Files look different?**
- Export formats can vary slightly between updates
- As long as you have `conversations.json`, the analysis will work
- Missing optional files is fine

---

## 🗂️ Directory Structure Setup

I can create the proper directory structure for you automatically. Here's what will be created:

```
your-project/
├── data-exports/
│   ├── claude/            # Place Claude exports here
│   └── chatgpt/           # Place ChatGPT exports here
├── reports/               # Analysis reports will appear here
└── generated-skills/      # Generated skills will appear here
```

**Ready to create these directories?**

Just confirm and I'll set up:
- ✅ `data-exports/claude/` - for your Claude conversation exports
- ✅ `data-exports/chatgpt/` - for your ChatGPT conversation exports
- ✅ `reports/` - for timestamped analysis reports
- ✅ `generated-skills/` - for your generated Custom Skills

Confirm when you're ready!
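
If you'd rather create the folders yourself, a minimal standard-library sketch using the same directory names as above:

```python
# Optional manual alternative: create the same directory structure yourself.
from pathlib import Path

for d in ("data-exports/claude", "data-exports/chatgpt", "reports", "generated-skills"):
    Path(d).mkdir(parents=True, exist_ok=True)
    print(f"created {d}/")
```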

**💡 Quick Tip**: Once you have the ZIP files from Claude/ChatGPT, you can use the `/extract-exports` command to automatically extract and organize everything for you! Just place the ZIP files in your project directory and run the command.

## ⚡ Quick Start Checklist

**Setup Phase:**
- [ ] Directories created (`data-exports/`, `reports/`, `generated-skills/`)
- [ ] At least one platform export requested (Claude and/or ChatGPT)
- [ ] Waiting for export email to arrive (24 hours or less)

**Data Phase:**
- [ ] Export files downloaded and extracted
- [ ] JSON files placed in correct `data-exports/` subdirectories
- [ ] You have at least 20+ conversations (more is better for patterns)

**Analysis Phase:**
- [ ] Ready to run `/analyze-skills` when exports are ready
- [ ] Know which output option you want (A, B, C, or D)
- [ ] Have time for analysis to complete (2-5 minutes typically)

**Completion:**
- [ ] Analysis finished and reports generated
- [ ] Review skills recommendations
- [ ] Generate or implement skills as desired

## 🎯 What the Analysis Will Find

The plugin will identify patterns like:

### Business & Communication
- Email drafting templates
- Proposal writing workflows
- Client communication patterns
- Meeting preparation structures

### Development & Code
- Code review methodologies
- Documentation standards
- Debugging approaches
- Architecture decision patterns

### Content & Writing
- Blog post structures
- Newsletter formats
- Social media workflows
- Research methodologies

### Personal Productivity
- Task planning approaches
- Decision-making frameworks
- Learning note-taking systems
- Goal-setting patterns

## 📊 Analysis Options

When you run `/analyze-skills`, you'll choose from:

- **Option A**: Analysis report only (insights and recommendations)
- **Option B**: Complete implementation package (ready-to-use skills)
- **Option C**: Incremental implementation (top 3-5 skills)
- **Option D**: Custom specification (your defined requirements)

## 🔒 Privacy & Security

- **Local processing**: All analysis happens on your machine
- **No data upload**: Your conversations never leave your system
- **Anonymized output**: Generated skills remove personal information
- **Git protection**: Export files are automatically ignored by version control

## 🚀 Expected Results

After analysis, you'll get:

### Analysis Reports (`reports/timestamp/`)
- **Comprehensive analysis** with pattern evidence
- **Implementation guide** with deployment roadmap
- **Processing log** for incremental future runs

### Generated Skills (`generated-skills/skill-name/`)
- **SKILL.md** - Main skill with YAML frontmatter
- **reference.md** - Detailed methodology
- **examples.md** - Usage examples
- **templates/** - Reusable output templates

## ❓ Common Questions

**Q: How many conversations do I need?**
A: Minimum 20-30 for basic analysis, 50+ for meaningful patterns, 100+ for comprehensive insights.

**Q: Are Claude Code conversations included in exports?**
A: No. Claude exports from claude.ai only include web interface conversations. Claude Code conversations are stored separately and are not included in data exports. If you primarily use Claude Code, consider using ChatGPT exports or the web-based workflow-pattern-analyzer skill instead.

**Q: Can I analyze both Claude and ChatGPT together?**
A: Yes! The plugin performs smart cross-platform deduplication and creates unified skills.

**Q: What if I don't have many conversations?**
A: Start with what you have. The plugin supports incremental processing, so you can re-run analysis as you accumulate more conversations.

**Q: How long does analysis take?**
A: Typically 2-5 minutes for 100 conversations, longer for larger datasets or complex patterns.

## 🛠️ Next Steps

1. **Set up your directories**: Create the folder structure above (or let `/extract-exports` do it automatically)
2. **Export your data**: Follow the export guides for your platforms
3. **Organize your exports**: Run `/extract-exports` to automatically extract and organize ZIP files, or manually place JSON files in the appropriate directories
4. **Run the analysis**: Use `/analyze-skills` when ready
5. **Implement skills**: Start with the highest-impact recommendations

## 💡 Pro Tips

- **Export regularly**: Update your analysis monthly as you accumulate more conversations
- **Start small**: Begin with Option C (incremental) to test with your top skills
- **Customize skills**: Edit generated skills to match your specific needs
- **Share patterns**: Generated skills are great for team standardization

Ready to get started? Let me know if you'd like help creating the directory structure or have questions about the export process!
commands/skills-troubleshoot.md (new file, 216 lines)

---
description: Troubleshoot common issues with skills analysis setup and execution
---

# Skills Analysis Troubleshooting

I'll help you diagnose and fix common issues with the Claude Skills Analyzer.

## 🔍 Quick Diagnostics

Let me check your current setup and identify any issues:

### Directory Structure Check
First, let me verify your project has the required directories:

```
Expected structure:
├── data-exports/
│   ├── claude/          # Claude export files
│   └── chatgpt/         # ChatGPT export files
├── reports/             # Analysis outputs
└── generated-skills/    # Generated skill packages
```

### Data Files Check
Looking for these required files:

**Claude exports** (`data-exports/claude/`):
- [ ] `conversations.json` - Required
- [ ] `projects.json` - Optional but helpful
- [ ] `users.json` - Optional

**ChatGPT exports** (`data-exports/chatgpt/`):
- [ ] `conversations.json` - Required
- [ ] `user.json` - Optional
- [ ] `shared_conversations.json` - Optional

## 🚨 Common Issues & Solutions

### Issue: "No conversation files detected"

**Causes:**
- Files in wrong directories
- Incorrect file names
- Empty or corrupted JSON files

**Solutions:**
1. **Check file locations**: Ensure JSON files are in correct `data-exports/` subdirectories
2. **Verify file names**: Must match exactly (case-sensitive)
3. **Validate JSON**: Open files in text editor to check they're valid JSON
4. **Check file sizes**: Empty files (0 bytes) won't work

### Issue: "Analysis produces no patterns"

**Causes:**
- Too few conversations (need 20+ minimum)
- Conversations too short or simple
- No recurring patterns in usage

**Solutions:**
1. **Accumulate more data**: Export again after more AI usage
2. **Lower thresholds**: Adjust frequency requirements in analysis
3. **Check conversation quality**: Need substantial back-and-forth interactions
4. **Try different timeframes**: Use older exports if available

### Issue: "Plugin command not found"

**Causes:**
- Plugin not properly installed
- Claude Code needs restart
- Marketplace not added correctly

**Solutions:**
1. **Verify installation**:
   ```shell
   /plugin list
   ```
2. **Restart Claude Code**: Close and reopen
3. **Reinstall plugin**:
   ```shell
   /plugin uninstall claude-skills-analyzer@hirefrank
   /plugin install claude-skills-analyzer@hirefrank
   ```

### Issue: "JSON parsing errors"

**Causes:**
- Incomplete export downloads
- File corruption during transfer
- Unsupported export format versions

**Solutions:**
1. **Re-download exports**: Get fresh copies from AI platforms
2. **Check file integrity**: Verify files open properly in text editor
3. **Try smaller batches**: Export smaller date ranges if available

### Issue: "Skills generation fails"

**Causes:**
- Insufficient write permissions
- Conflicting files in output directories
- Pattern analysis errors

**Solutions:**
1. **Check permissions**: Ensure you can write to project directory
2. **Clear output directories**: Remove old `reports/` and `generated-skills/` content
3. **Try incremental analysis**: Start with Option A (report only)

## 🔧 Manual Diagnostics

### Check Your Export Files
```shell
# Navigate to your data directory
cd data-exports

# Check file sizes (should be >1KB)
ls -la claude/
ls -la chatgpt/

# Preview file contents (first few lines)
head -5 claude/conversations.json
head -5 chatgpt/conversations.json
```

### Validate JSON Structure
```shell
# Check if files are valid JSON (on systems with jq)
jq . claude/conversations.json > /dev/null && echo "Claude JSON valid"
jq . chatgpt/conversations.json > /dev/null && echo "ChatGPT JSON valid"
```
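
To sanity-check how much data you have, a small count script; it assumes each export's `conversations.json` is a top-level JSON array of conversation objects, which matches current export formats but may change between versions:

```python
# Quick sanity check: count conversations in each export, if present.
# Assumes conversations.json is a top-level JSON array of conversation
# objects; this matches current exports but may change between versions.
import json
from pathlib import Path

for platform in ("claude", "chatgpt"):
    path = Path("data-exports") / platform / "conversations.json"
    if path.is_file():
        data = json.loads(path.read_text())
        print(f"{platform}: {len(data)} conversations")
    else:
        print(f"{platform}: no conversations.json found")
```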

## 📊 Data Requirements

### Minimum Requirements
- **20+ conversations** total across platforms
- **Average 3+ exchanges** per conversation
- **Variety of topics/tasks** represented
- **JSON files >1KB** in size

### Optimal Requirements
- **50+ conversations** for meaningful patterns
- **Mix of short and long conversations**
- **Regular usage patterns** over time
- **Both platforms** represented (if you use both)

## 🎯 Quick Fixes

### Create Missing Directories
```shell
mkdir -p data-exports/claude data-exports/chatgpt reports generated-skills
```

### Test Plugin Installation
```shell
# Check if plugin is available
/help | grep analyze-skills

# List installed plugins
/plugin list | grep claude-skills-analyzer
```

### Reset and Retry
```shell
# Clear any partial outputs
rm -rf reports/* generated-skills/*

# Re-run analysis with fresh start
/analyze-skills
```

## 📞 Getting Additional Help

### For Setup Issues
- Review `/skills-setup` for complete setup guide
- Check plugin README at `plugins/claude-skills-analyzer/README.md`
- Verify you have the latest plugin version

### For Analysis Issues
- Try Option A (Analysis Report Only) first
- Review conversation export quality
- Consider smaller dataset for initial testing

### For Technical Issues
- Check [GitHub Issues](https://github.com/hirefrank/hirefrank-marketplace/issues)
- Report bugs with error messages and setup details
- Join discussions for community support

## 💡 Pro Troubleshooting Tips

1. **Start small**: Test with a small, known-good dataset first
2. **Check permissions**: Ensure you can read/write in the project directory
3. **Update regularly**: Keep the plugin updated to latest version
4. **Document patterns**: Note what types of conversations work best
5. **Incremental approach**: Use existing analysis logs for efficiency

## 🔄 Still Having Issues?

If problems persist:

1. **Gather diagnostics**:
   - Plugin version (`/plugin list`)
   - Error messages (exact text)
   - File sizes and locations
   - Claude Code version

2. **Try minimal test**:
   - Create new directory
   - Add just 1-2 export files
   - Run analysis with Option A

3. **Get support**:
   - Open GitHub issue with full details
   - Include anonymized error logs
   - Describe expected vs actual behavior

Let me know what specific issue you're experiencing and I'll provide targeted help!