---
name: discover-project-skills
description: Analyze codebase to discover technologies and project patterns, then generate contextual skills for both
---
# Discover Project Skills

## Overview
This skill analyzes your codebase to automatically discover:
- Technologies used: Frameworks, databases, messaging systems, API protocols
- Project-specific patterns: Architecture flows, module organization, design patterns
- Contextual skills: Generates reusable skill files that chain together
When to use:
- Starting work on a new codebase
- After major architectural changes
- When onboarding to a project
- To document project patterns as skills
Output:
- Technology skills (e.g., `kafka-patterns.md`, `grpc-best-practices.md`)
- Project skills (e.g., `api-layer-patterns.md`, `service-layer-patterns.md`)
- Saved to `.claude/skills/` for automatic discovery
## Phase 0: Pre-check Existing Skills
Before generating skills, check for existing generated skills:
- Check if the `.claude/skills/` directory exists in the current working directory
- If it exists, scan for files containing the marker `<!-- Generated by discover-project-skills -->` (see the sketch after this list)
- Count existing generated skills
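A minimal sketch of this pre-check, assuming a Node.js environment; the helper name and structure are illustrative, not part of the skill:

```javascript
// Hypothetical helper: list generated skill files under .claude/skills/
// by looking for the generator marker in each Markdown file.
const fs = require("fs");
const path = require("path");

const MARKER = "<!-- Generated by discover-project-skills";

function findGeneratedSkills(skillsDir = ".claude/skills") {
  if (!fs.existsSync(skillsDir)) return [];
  return fs
    .readdirSync(skillsDir)
    .filter((f) => f.endsWith(".md"))
    .filter((f) => fs.readFileSync(path.join(skillsDir, f), "utf8").includes(MARKER));
}

console.log(findGeneratedSkills()); // e.g. ["kafka-patterns.md", "api-layer-patterns.md"]
```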
If existing skills found:
- Present to user: "Found N existing skills: [list names]"
- Ask using the AskUserQuestion tool:
  - Question: "How should I proceed with existing skills?"
  - Options:
    - Regenerate All: Delete all existing generated skills and create fresh ones
    - Update Specific: Ask which areas to regenerate, keep others
    - Cancel: Exit without making changes
If .claude/skills/ doesn't exist:
- Create the directory: `mkdir -p .claude/skills`
- Inform user: "Created .claude/skills/ directory for generated skills"
Error handling:
- Cannot create directory → Suggest alternative location or check permissions
- Cannot read existing skills → Warn user, offer to continue without checking
## Phase 1: Quick Technology Scan
Goal: Quickly identify major technologies without deep analysis.
Hybrid Scanning Strategy
Estimate project size first:
```bash
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.java" -o -name "*.kt" -o -name "*.py" -o -name "*.go" \) | wc -l
```
Choose scanning method:
- < 50 files: Use Read/Glob/Grep tools directly
- >= 50 files: Use Task tool with Explore subagent
Package Files to Scan
Node.js:
- Read `package.json` if it exists
- Check dependencies for (a minimal detection sketch follows this list):
  - TypeScript: `typescript` in devDependencies, or `tsconfig.json` exists
  - Express: `express` in dependencies
  - NestJS: `@nestjs/core` in dependencies
  - MongoDB: `mongodb`, `mongoose` in dependencies
  - PostgreSQL: `pg`, `typeorm`, `prisma` in dependencies
  - gRPC: `@grpc/grpc-js`, `@grpc/proto-loader` in dependencies
  - GraphQL: `graphql`, `apollo-server`, `@nestjs/graphql` in dependencies
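A minimal sketch of this check, assuming Node.js is available; the helper name and exact mapping are illustrative:

```javascript
// Hypothetical helper: read package.json and map known dependencies to technologies.
const fs = require("fs");

function detectNodeTech(dir = ".") {
  const pkgPath = `${dir}/package.json`;
  if (!fs.existsSync(pkgPath)) return [];
  const pkg = JSON.parse(fs.readFileSync(pkgPath, "utf8"));
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  const detected = [];
  if (deps.typescript || fs.existsSync(`${dir}/tsconfig.json`)) detected.push("TypeScript");
  if (deps.express) detected.push("Express");
  if (deps["@nestjs/core"]) detected.push("NestJS");
  if (deps.mongodb || deps.mongoose) detected.push("MongoDB");
  if (deps.pg || deps.typeorm || deps.prisma) detected.push("PostgreSQL");
  if (deps["@grpc/grpc-js"] || deps["@grpc/proto-loader"]) detected.push("gRPC");
  if (deps.graphql || deps["apollo-server"] || deps["@nestjs/graphql"]) detected.push("GraphQL");
  return detected;
}

console.log(detectNodeTech()); // e.g. ["TypeScript", "NestJS", "PostgreSQL"]
```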
Java/Kotlin:
- Read `pom.xml`, `build.gradle`, or `build.gradle.kts` if it exists
- Check for:
  - Kotlin: `build.gradle.kts` file or `kotlin-stdlib` dependency
  - Ktor: `io.ktor:ktor-server-*` dependencies
  - Spring Boot: `spring-boot-starter-*` dependencies
  - Kafka: `kafka-clients`, `spring-kafka` dependencies
  - PostgreSQL: `postgresql` JDBC driver
  - MongoDB: `mongodb-driver`, `spring-data-mongodb`
  - gRPC: `grpc-*`, `protobuf-java` dependencies
  - GraphQL: `graphql-java`, `spring-boot-starter-graphql`
Python:
- Read `requirements.txt` or `pyproject.toml` if it exists
- Check for:
  - FastAPI: `fastapi`
  - PostgreSQL: `psycopg2`, `asyncpg`
  - MongoDB: `pymongo`, `motor`
  - GraphQL: `graphene`, `strawberry-graphql`, `ariadne`
Go:
- Read `go.mod` if it exists
- Check for:
  - gRPC: `google.golang.org/grpc`
  - Kafka: `github.com/segmentio/kafka-go`, `github.com/Shopify/sarama`
  - PostgreSQL: `github.com/lib/pq`, `gorm.io/driver/postgres`
  - MongoDB: `go.mongodb.org/mongo-driver`
REST API Detection
Look for HTTP method annotations/decorators in code:
- Node: `app.get()`, `app.post()`, `@Get()`, `@Post()` decorators
- Java/Kotlin: `@RestController`, `@GetMapping`, `@PostMapping`, `@PutMapping`, `@DeleteMapping`
- Python: `@app.get`, `@app.post`, `@route` decorators
Directory Structure Scan
Scan top 2 directory levels for these patterns:
- `/api`, `/controllers`, `/handlers`, `/routes` → API layer
- `/service`, `/domain`, `/business` → Service layer
- `/repository`, `/dao`, `/data` → Database access layer
- `/messaging`, `/events`, `/kafka` → Messaging patterns
- `/grpc`, `/proto` → gRPC usage
- `/graphql`, `/schema` → GraphQL usage
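A small mapping sketch for this step; the lookup table mirrors the list above, and the helper name is an assumption:

```javascript
// Translate detected top-level directory names into architectural layers.
const LAYER_HINTS = {
  api: "API layer", controllers: "API layer", handlers: "API layer", routes: "API layer",
  service: "Service layer", domain: "Service layer", business: "Service layer",
  repository: "Database access layer", dao: "Database access layer", data: "Database access layer",
  messaging: "Messaging patterns", events: "Messaging patterns", kafka: "Messaging patterns",
  grpc: "gRPC usage", proto: "gRPC usage",
  graphql: "GraphQL usage", schema: "GraphQL usage",
};

function classifyDirs(dirNames) {
  const result = {};
  for (const name of dirNames) {
    const key = name.toLowerCase();
    // Also try a crude singular form so "services" matches "service".
    const layer = LAYER_HINTS[key] || LAYER_HINTS[key.replace(/s$/, "")];
    if (layer) result[name] = layer;
  }
  return result;
}

// classifyDirs(["api", "services", "repository"])
// → { api: "API layer", services: "Service layer", repository: "Database access layer" }
```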
Present Findings
Group discoveries by category:
**Detected Technologies:**
- Languages: TypeScript, Java
- Frameworks: Express, Spring Boot
- Databases: PostgreSQL, MongoDB
- Messaging: Kafka
- API Protocols: REST, gRPC
Error handling:
- No package files found → Ask user to point to key files manually
- File read errors → Skip that file, continue with others
- No technologies detected → Present findings anyway, ask user for hints
## Phase 2: Focus Selection (MANDATORY)
Goal: Let user choose which areas to analyze deeply.
CRITICAL: This phase MUST NOT be skipped. Always call AskUserQuestion.
Step 1: Group findings from Phase 1
Organize detected technologies into categories:
- API Protocols: REST (if HTTP methods detected), gRPC (if proto files), GraphQL
- Messaging Systems: Kafka, RabbitMQ, SQS (with producer/consumer distinction)
- Database Access: PostgreSQL, MongoDB, Redis
- Framework Patterns: Ktor, Spring Boot, NestJS, Express, FastAPI
- Language Patterns: Kotlin (coroutines, flows), TypeScript (decorators), Python (async)
- Project Architecture: Service layer, event processing, repository pattern
Step 2: Build multi-select question
IMPORTANT: AskUserQuestion supports 2-4 options per question. If more than 4 categories detected, either:
- Group related categories (e.g., "Backend Technologies" combining Framework + Database)
- Prioritize most important categories
- Use multiple questions (not recommended for this use case)
Create AskUserQuestion with all detected areas:
Example for Kotlin/Ktor/Kafka project:
```json
{
"question": "I detected the following technologies and patterns. Which areas should I analyze deeply for skill generation?",
"header": "Select Areas",
"multiSelect": true,
"options": [
{
"label": "REST API Endpoints",
"description": "Extract endpoint inventory, routing patterns, handlers (detected in src/api/)"
},
{
"label": "Ktor Framework Patterns",
"description": "Routing, middleware, serialization, error handling, dependency injection"
},
{
"label": "Kotlin Language Patterns",
"description": "Coroutines, flows, sealed classes, extension functions"
},
{
"label": "Kafka Messaging",
"description": "Producer/consumer patterns, error handling, retry logic (detected in src/messaging/)"
},
{
"label": "PostgreSQL Database",
"description": "Query patterns, transaction management, repository pattern (detected in src/repository/)"
},
{
"label": "Service Layer Architecture",
"description": "Business logic organization, service patterns (detected in src/services/)"
},
{
"label": "Testing Patterns",
"description": "Test structure, mocking, integration tests"
}
]
}
```
Step 3: Validate user selection
```
IF selections is empty THEN:
  Ask: "No areas selected. Do you want to cancel skill generation?"
  IF yes THEN: Exit gracefully
  IF no THEN: Re-prompt with options
```

Store selections for Phase 3:

```
selected_areas = selections
depth_strategy = "deep" IF selection count <= 2, ELSE "medium"
```
Step 4: Report selection to user
✓ Selected for deep analysis:
- REST API Endpoints
- Ktor Framework Patterns
- Kafka Messaging
Analysis depth: Deep (2-3 areas selected)
Proceeding to Phase 3...
Error handling:
- No technologies detected in Phase 1 → Ask user to provide hints or cancel
- User selects all options → Set depth to "medium", warn about breadth vs. depth
## Phase 3: Deep Analysis
Goal: Extract patterns, flows, and examples from selected areas.
For each selected area:
Module/System Analysis
1. **Identify entry points:**
   - Controllers (REST), handlers (events), main packages
   - Use Glob to find: `**/*Controller.ts`, `**/*Handler.java`, etc.

2. **Find key abstractions:**
   - Interfaces, base classes, common utilities
   - Use Grep to search for: `interface`, `abstract class`, `extends`

3. **Map directory structure:**
   - Note naming conventions (PascalCase, camelCase, snake_case)
   - Identify module boundaries

4. **Detect dependencies:**
   - Which modules depend on which (based on directory structure)
Deep Tech Analysis (Framework & Language Patterns)
When user selects framework or language patterns (e.g., "Ktor Framework", "Kotlin Language Patterns"), spawn focused Task agents.
Step 1: Determine tech-specific analysis needs
Map selection to analysis mandate:
| Selection | Analysis Focus | Output Skills |
|---|---|---|
| Ktor Framework | Routing config, middleware, serialization, error handling, DI | ktor-routing-patterns, ktor-middleware-patterns |
| Spring Boot | Controllers, services, repos, config, AOP, security | spring-boot-patterns, spring-security-patterns |
| Kotlin Language | Coroutines, flows, sealed classes, extensions, delegation | kotlin-coroutines-patterns, kotlin-idioms |
| Kafka Messaging | Producer configs, consumer groups, error/retry, serialization | kafka-producer-patterns, kafka-consumer-patterns |
| Express Framework | Routing, middleware, error handling, async patterns | express-routing-patterns, express-middleware |
Step 2: Spawn Task agent with focused mandate
Example for Ktor Framework:
Task agent prompt:
"Extract Ktor framework patterns from this Kotlin codebase:
**Routing Patterns:**
- Route organization (file structure, grouping)
- Route definitions (path parameters, query params)
- Route nesting and modularization
- Provide 3+ file examples with line numbers
**Middleware Patterns:**
- Authentication/authorization setup
- Logging and monitoring middleware
- CORS configuration
- Custom middleware examples
- Provide file references for each
**Serialization Patterns:**
- Content negotiation setup
- JSON serialization config (kotlinx.serialization, Jackson, Gson)
- Request/response body handling
- Provide file examples
**Error Handling:**
- Status pages configuration
- Exception handling approach
- Error response format
- Provide file examples
**Dependency Injection:**
- How dependencies are provided to routes
- DI framework used (Koin, manual, etc.)
- Provide setup file references
For each pattern:
1. Describe what it does (1-2 sentences)
2. Provide file path with line number
3. Note any gotchas or best practices
Return findings as structured data for skill generation."
**Invoke the Task tool:**
```javascript
Task({
subagent_type: "Explore",
description: "Extract Ktor framework patterns",
prompt: `[Insert the full prompt text from above, starting with "Extract Ktor framework patterns from this Kotlin codebase:"]`,
thoroughness: "very thorough"
})
```
The Task agent will analyze the codebase and report back with findings.
**Step 3: Process agent findings**
Agent returns structured data:
```javascript
ktor_findings = {
routing_patterns: [
{
name: "Route grouping by feature",
description: "Routes organized by domain feature in separate files",
examples: [
"src/api/routes/UserRoutes.kt:12 - User management routes",
"src/api/routes/OrderRoutes.kt:8 - Order management routes"
]
},
// ... more patterns
],
middleware_patterns: [...],
serialization_patterns: [...],
error_handling_patterns: [...],
di_patterns: [...]
}
```
Step 4: Determine skill split strategy
Based on findings volume:
```
if routing_patterns.length >= 3 AND middleware_patterns.length >= 3:
    // Generate separate skills
    skills_to_generate = ["ktor-routing-patterns", "ktor-middleware-patterns"]
else if total_patterns <= 5:
    // Consolidate into single skill
    skills_to_generate = ["ktor-patterns"]
else:
    // Medium split
    skills_to_generate = ["ktor-framework-patterns", "ktor-advanced-patterns"]
```
Step 5: Store findings for Phase 4
```
tech_analysis = {
  "ktor": ktor_findings,
  "kotlin-coroutines": coroutines_findings,
  // ... other tech analyses
}

// These will be used in Phase 4 for skill generation
```
Error handling:
- Agent times out → Fall back to shallow analysis with Grep
- Agent finds no patterns → Notify user, ask to skip or provide hints
- Agent returns incomplete data → Use what's available, warn user
Flow Analysis
For synchronous flows (REST APIs):
- Pick 2-3 representative endpoints
- Note the pattern: Controller → Service → Repository → Database
- Document error handling approach (try/catch, Result types, exceptions)
- Note validation patterns
For asynchronous flows (Events):
- Find event producers (where events are published)
- Find event consumers (where events are processed)
- Document message structure
- Note retry/failure handling
REST Endpoint Extraction (When "REST API Endpoints" Selected)
Goal: Build comprehensive endpoint inventory for rest-endpoints.md skill.
Step 1: Detect route definition patterns
Based on detected framework, search for route definitions:
Ktor:
```bash
# Find routing blocks
grep -rn "routing {" src/ --include="*.kt"
grep -rn "route(" src/ --include="*.kt"
grep -rn "get(" src/ --include="*.kt"
grep -rn "post(" src/ --include="*.kt"
grep -rn "put(" src/ --include="*.kt"
grep -rn "delete(" src/ --include="*.kt"
grep -rn "patch(" src/ --include="*.kt"
```
Spring Boot:
grep -rn "@GetMapping" src/ --include="*.java" --include="*.kt"
grep -rn "@PostMapping" src/ --include="*.java" --include="*.kt"
grep -rn "@PutMapping" src/ --include="*.java" --include="*.kt"
grep -rn "@DeleteMapping" src/ --include="*.java" --include="*.kt"
grep -rn "@PatchMapping" src/ --include="*.java" --include="*.kt"
grep -rn "@RestController" src/ --include="*.java" --include="*.kt"
Express/NestJS:
grep -rn "app.get(" src/ --include="*.ts" --include="*.js"
grep -rn "router.post(" src/ --include="*.ts" --include="*.js"
grep -rn "@Get(" src/ --include="*.ts"
grep -rn "@Post(" src/ --include="*.ts"
grep -rn "@Patch(" src/ --include="*.ts"
Error handling:
- If no routes found: Skip to next framework or ask user for routing file location
- If 100+ routes found: Sample representative endpoints (e.g., one from each controller), group by module/feature
Step 2: Extract endpoint metadata
Handler identification by framework:
- Ktor: Handler is the code block inside route definition (same file, inline)
- Spring Boot: Handler is the method name with the mapping annotation
- Express/NestJS: Handler is the decorated method or callback function
For each route definition found:
- Extract HTTP method (GET, POST, PUT, DELETE, PATCH)
- Extract path pattern (e.g., `/api/v1/users/{id}`)
- Identify handler function/class
- Record file path and line number
Example extraction:
```
File: src/api/routes/UserRoutes.kt:23
Method: GET
Path: /api/v1/users
Handler: UserController.listUsers()

File: src/api/routes/UserRoutes.kt:45
Method: POST
Path: /api/v1/users
Handler: UserController.createUser()
```
Step 3: Group endpoints by resource
Organize by path prefix:
```
User Management (/api/v1/users):
- GET /api/v1/users → UserController.kt:23
- GET /api/v1/users/{id} → UserController.kt:45
- POST /api/v1/users → UserController.kt:67
- PUT /api/v1/users/{id} → UserController.kt:89
- DELETE /api/v1/users/{id} → UserController.kt:112

Order Management (/api/v1/orders):
- GET /api/v1/orders → OrderController.kt:18
- POST /api/v1/orders → OrderController.kt:34
```
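A possible grouping sketch, assuming endpoints were extracted as `{method, path, handler}` records; the function and the version-prefix heuristic are illustrative:

```javascript
// Group extracted endpoints by their resource segment, e.g. "/api/v1/users"
// and "/api/v1/users/{id}" both land under the "users" group.
function groupByResource(endpoints) {
  const groups = new Map();
  for (const ep of endpoints) {
    // First path segment after the version prefix is treated as the resource.
    const match = ep.path.match(/^\/api\/v\d+\/([^/]+)/) || ep.path.match(/^\/([^/]+)/);
    const resource = match ? match[1] : "root";
    if (!groups.has(resource)) groups.set(resource, []);
    groups.get(resource).push(ep);
  }
  return groups;
}

// Example:
// groupByResource([
//   { method: "GET", path: "/api/v1/users", handler: "UserController.kt:23" },
//   { method: "POST", path: "/api/v1/users", handler: "UserController.kt:67" },
//   { method: "GET", path: "/api/v1/orders", handler: "OrderController.kt:18" },
// ]) → Map { "users" → [2 endpoints], "orders" → [1 endpoint] }
```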
Step 4: Detect path structure patterns
Analyze extracted paths:
- Base path prefix (e.g., `/api/v1/`)
- Versioning strategy (v1, v2 in path vs. header)
- Resource naming (plural nouns: `/users`, `/orders`)
- ID parameter patterns (`/{id}`, `/{uuid}`)
- Nested resources (`/users/{id}/orders`)
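A rough sketch of how these patterns could be derived from the extracted paths; the heuristics (longest common prefix, `/vN/` check) are assumptions, not prescribed by the skill:

```javascript
// Derive base path, versioning strategy, and ID parameter style from raw paths.
function detectPathPatterns(paths) {
  const versioned = paths.filter((p) => /\/v\d+\//.test(p));
  const versioning = versioned.length > 0 ? "version segment in path (e.g. /v1/)" : "none detected in path";

  // Longest common prefix ending at a "/" is treated as the base path.
  let base = paths[0] || "";
  for (const p of paths) {
    while (base && !p.startsWith(base)) {
      base = base.slice(0, base.lastIndexOf("/", base.length - 2) + 1);
    }
  }

  // Collect all "{param}" placeholders that appear in paths.
  const idStyles = new Set(paths.flatMap((p) => p.match(/\{[^}]+\}/g) || []));

  return { basePath: base, versioning, idParams: [...idStyles] };
}

// detectPathPatterns(["/api/v1/users", "/api/v1/users/{id}", "/api/v1/orders"])
// → { basePath: "/api/v1/", versioning: "version segment in path (e.g. /v1/)", idParams: ["{id}"] }
```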
Step 5: Store findings for Phase 4
```
rest_analysis = {
  endpoints: grouped_endpoints,  // List of {method, path, handler, file:line}
  path_patterns: detected_patterns,
  base_path: base_path,
  versioning: versioning_strategy,
  file_references: unique_controller_files
}
```
Pattern Extraction
Look for:
- Design patterns: Repository pattern, Service layer, Factory, Strategy
- Architectural style: Layered, hexagonal, event-driven, CQRS
- Testing approaches: Unit tests, integration tests, test utilities
- Common utilities: Logging, metrics, error handling helpers
Chain Detection (Directory-Based Proxy)
Build skill relationship graph based on directories:
If directories exist:
- `/api` + `/service` + `/repository` → Chain: `api-patterns` → `service-layer-patterns` → `database-access-patterns`
- `/api` + `/messaging` → Chain: `api-patterns` → `messaging-patterns`
- `/events/producers` + `/events/consumers` → Chain: `event-producer-patterns` → `event-consumer-patterns`
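A sketch of this directory-based chain heuristic; directory substrings and skill names mirror the list above, while the helper itself is illustrative:

```javascript
// Build skill chains from which architectural directories are present.
function detectChains(dirs) {
  const has = (d) => dirs.some((x) => x.includes(d));
  const chains = [];
  if (has("api") && has("service") && has("repository")) {
    chains.push(["api-patterns", "service-layer-patterns", "database-access-patterns"]);
  }
  if (has("api") && has("messaging")) {
    chains.push(["api-patterns", "messaging-patterns"]);
  }
  if (has("events/producers") && has("events/consumers")) {
    chains.push(["event-producer-patterns", "event-consumer-patterns"]);
  }
  return chains;
}

// detectChains(["src/api/", "src/service/", "src/repository/"])
// → [["api-patterns", "service-layer-patterns", "database-access-patterns"]]
```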
Store findings for Phase 4:
- File paths with line numbers for examples
- Pattern names and descriptions
- Skill chain relationships
Error handling:
- Selected area has no patterns → Notify user, ask to skip or provide hints
- File references invalid → Validate before including, skip if invalid
- Analysis timeout → Fall back to shallow analysis, notify user
## Phase 4: Skill Generation
Goal: Generate skill files from analysis findings.
When to Generate Skills
Technology Skills:
Generate when technology is detected AND has 2+ file examples.
Examples: postgresql-patterns.md, kafka-patterns.md, grpc-patterns.md
Project Skills:
Generate when architectural layer is detected AND has clear patterns.
Examples: api-layer-patterns.md, service-layer-patterns.md, event-processing-patterns.md
Technology Skill Template
```markdown
<!-- Generated by discover-project-skills on YYYY-MM-DD -->
<!-- Invocation metadata:
paths: src/dir1/, src/dir2/
keywords: keyword1, keyword2, keyword3
tech: tech1, tech2
-->
---
name: <technology>-patterns
description: <Technology> <what-it-does>. Use when <trigger-keywords>, especially in <file-patterns>, or <scenarios>.
---
# <Technology> Patterns
## Overview
[How this technology is used in this codebase - 2-3 sentences]
## Key Patterns
- **Pattern 1**: [Description]
- Example: `path/to/file.ext:123`
- **Pattern 2**: [Description]
- Example: `path/to/another.ext:456`
## Common Gotchas
- [What to watch for - based on code inspection]
- [Common mistake to avoid]
## Related Skills
When working with <technology>, also consider:
- `related-skill-1` - [When to use]
- `related-skill-2` - [When to use]
## Key Files
- `path/to/config:line` - [Configuration]
- `path/to/example:line` - [Reference implementation]
```
Project Skill Template
```markdown
<!-- Generated by discover-project-skills on YYYY-MM-DD -->
<!-- Invocation metadata:
paths: src/dir1/, src/dir2/
keywords: keyword1, keyword2, keyword3
tech: tech1, tech2
-->
---
name: <subsystem>-<aspect>
description: <Subsystem> <aspect> <what-it-does>. Use when <trigger-keywords>, especially in <file-patterns>, or <scenarios>.
---
# <Subsystem> <Aspect>
## Architecture
[Text description of architecture - 3-4 sentences]
## Key Components
- **Component 1** (`path/to/component`): [Purpose and responsibility]
- **Component 2** (`path/to/component`): [Purpose and responsibility]
## Flows
### Synchronous Flow
1. [Step 1 - what happens]
2. [Step 2 - what happens]
3. [Step 3 - what happens]
### Asynchronous Flow
1. [Event trigger]
2. [Processing steps]
3. [Side effects]
## Error Handling
[How errors are handled in this subsystem]
## Testing Strategy
[How this subsystem is tested]
## Related Skills
This subsystem typically interacts with:
- `related-skill-1` - [For X functionality]
- `related-skill-2` - [For Y functionality]
## Key Files
- `path/to/file:line` - [Purpose]
- `path/to/file:line` - [Purpose]
```
Description Template Requirements
Every description MUST include all five elements:
- Technology/Pattern name: "PostgreSQL database access patterns"
- What it does: "database queries and ORM usage"
- Trigger keywords: "database, SQL, queries, repository pattern"
- File patterns: "src/repository/, src/dao/, *.repository.ts"
- Scenarios: "implementing CRUD operations, optimizing queries"
Example:
description: PostgreSQL database access patterns. Use when working with database queries, SQL operations, repository pattern, especially in src/repository/ or src/dao/ files, or implementing CRUD operations and query optimization.
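A tiny composition sketch, assuming the five elements have been collected as strings; the function and field names are purely illustrative:

```javascript
// Compose a skill description from the five required elements.
function buildDescription({ name, whatItDoes, keywords, filePatterns, scenarios }) {
  return (
    `${name} ${whatItDoes}. ` +
    `Use when working with ${keywords.join(", ")}, ` +
    `especially in ${filePatterns.join(" or ")}, ` +
    `or ${scenarios.join(" and ")}.`
  );
}

// buildDescription({
//   name: "PostgreSQL",
//   whatItDoes: "database access patterns",
//   keywords: ["database queries", "SQL operations", "repository pattern"],
//   filePatterns: ["src/repository/", "src/dao/ files"],
//   scenarios: ["implementing CRUD operations", "query optimization"],
// })
// → roughly the example description shown above
```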
Skill Generation Process
For each skill to generate:
1. **Fill template with findings:**
   - Use data from Phase 3 analysis
   - Include actual file paths with line numbers
   - Add 2-4 concrete examples per pattern

2. **Build Related Skills section:**
   - Use chain detection from Phase 3
   - Link to skills that are typically used together
   - Add "when to use" context for each

3. **Token budget validation** (a runnable sketch follows this list):

   ```
   character_count = len(skill_content)
   estimated_tokens = character_count / 4

   if estimated_tokens > 500:
       # Optimization steps:
       - Remove mermaid diagram if present
       - Keep only top 3-4 patterns
       - Consolidate Key Files section
       - Remove redundant descriptions
       if still > 500:
           # Warn user and ask to proceed or simplify further
   ```

4. **Add marker comment:**
   - First line: `<!-- Generated by discover-project-skills on YYYY-MM-DD -->`
   - Helps Phase 0 detect generated skills

5. **Write to .claude/skills/:**

   ```
   filename = f".claude/skills/{skill_name}.md"
   Write skill content to filename
   ```

6. **Validate file was created:**
   - Read back the file to confirm
   - Log skill name for final summary
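A runnable sketch of the token-budget check from step 3; the 4-characters-per-token estimate and the 500-token budget come from this document, while the function name is an assumption:

```javascript
// Rough token-budget check used before writing a skill file.
function validateTokenBudget(skillContent, budget = 500) {
  const estimatedTokens = Math.ceil(skillContent.length / 4);
  return {
    estimatedTokens,
    withinBudget: estimatedTokens <= budget,
    overBy: Math.max(0, estimatedTokens - budget),
  };
}

// Example:
// const check = validateTokenBudget(skillMarkdown);
// if (!check.withinBudget) { /* trim patterns, drop diagrams, re-check */ }
```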
Metadata Generation Logic
For each skill being generated:
Step 1: Extract path patterns
```
paths = []
for file_ref in skill_findings.file_references:
// Extract directory pattern from file path
// src/api/routes/UserRoutes.kt → src/api/, src/api/routes/
dir = extract_directory(file_ref)
// IMPORTANT: Always include trailing slash to indicate directory pattern
// Example: src/api/routes/UserRoutes.kt → src/api/routes/
paths.add(dir)
// Deduplicate and generalize
paths = unique(paths)
paths = generalize_patterns(paths)
generalize_patterns(paths) {
// Generalize path patterns when multiple sibling directories detected
// Rules:
// 1. Keep specific if only one path: ["src/api/"] → ["src/api/"]
// 2. Generalize siblings: ["src/api/routes/", "src/api/controllers/"] → ["src/api/"]
// 3. Keep both if different parents: ["src/api/", "src/services/"] → both kept
// 4. Don't over-generalize to project root unless all paths share no common prefix
// Algorithm:
// - Group paths by parent directory
// - If 2+ paths share same parent, use parent instead
// - Return deduplicated list
}
```
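One possible concrete implementation of the generalization rules above (illustrative, not the skill's canonical code):

```javascript
// Collapse sibling directories into their shared parent, keep unrelated
// parents separate, and avoid over-generalizing to a single top-level dir.
function generalizePatterns(paths) {
  const parentOf = (p) => {
    const parts = p.replace(/\/$/, "").split("/");
    return parts.length > 1 ? parts.slice(0, -1).join("/") + "/" : null;
  };
  const tooGeneral = (p) => p === "./" || p.replace(/\/$/, "").split("/").length < 2;

  const byParent = new Map();
  for (const p of new Set(paths)) {
    const key = parentOf(p) ?? p; // top-level paths group under themselves
    if (!byParent.has(key)) byParent.set(key, []);
    byParent.get(key).push(p);
  }

  const result = [];
  for (const [parent, children] of byParent) {
    // Rule 2: two or more siblings → use the parent.
    // Rules 1, 3, 4: otherwise keep the specific paths.
    if (children.length >= 2 && !tooGeneral(parent)) result.push(parent);
    else result.push(...children);
  }
  return [...new Set(result)];
}

// generalizePatterns(["src/api/routes/", "src/api/controllers/"]) → ["src/api/"]
// generalizePatterns(["src/api/", "src/services/"]) → ["src/api/", "src/services/"]
```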
Step 2: Build keyword list
```
keywords = []
// Add technology name
keywords.add(tech_name) // "ktor", "kafka", "postgresql"
// Add pattern types from findings
for pattern in skill_findings.patterns:
keywords.add(pattern.type) // "routing", "middleware", "consumer"
// Add common terms from description
keywords.add_all(extract_terms(description))
extract_terms(description) {
// Extract meaningful technical terms from description
// 1. Split on spaces, punctuation
// 2. Lowercase all terms
// 3. Filter out stopwords: (the, a, an, when, use, with, for, working, especially)
// 4. Keep technical terms: HTTP, REST, API, SQL, etc.
// 5. Keep domain nouns: endpoint, route, consumer, query, database
// 6. Return unique list
}
// Add scenario keywords
keywords.add_all(scenario_terms) // "request handling", "serialization"
keywords = unique(keywords)
```
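An illustrative implementation of the `extract_terms` helper described above; the stopword list is an assumption based on the rules in its comments:

```javascript
const STOPWORDS = new Set([
  "the", "a", "an", "when", "use", "with", "for", "working", "especially",
  "or", "and", "in",
]);

function extractTerms(description) {
  return [...new Set(
    description
      .split(/[\s,.;:()/]+/)             // 1. split on spaces and punctuation
      .map((t) => t.toLowerCase())       // 2. lowercase
      .filter((t) => t.length > 1)       // drop empty and single-character fragments
      .filter((t) => !STOPWORDS.has(t))  // 3. drop stopwords
  )];
}

// extractTerms("PostgreSQL database access patterns. Use when working with database queries")
// → ["postgresql", "database", "access", "patterns", "queries"]
```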
Step 3: Determine tech tags
```
tech_tags = []
// Primary technology
tech_tags.add(primary_tech) // "ktor"
// Related technologies detected
if language:
tech_tags.add(language) // "kotlin"
if additional_tech:
tech_tags.add_all(additional_tech) // ["kafka", "postgresql"]
tech_tags = unique(tech_tags)
```
Step 3.5: Validate metadata
Before formatting, validate that required data exists:
```
// Validation checks
if (paths.length === 0) {
paths = ["./"] // Default to project root
}
if (keywords.length === 0) {
console.warn(`No keywords generated for skill ${skill_name}`)
keywords = [primary_tech] // Fallback to tech name only
}
if (tech_tags.length === 0) {
throw new Error("Cannot generate metadata without tech tags")
}
```
Step 4: Format metadata comment
```
metadata_comment = `<!-- Invocation metadata:
paths: ${paths.join(", ")}
keywords: ${keywords.join(", ")}
tech: ${tech_tags.join(", ")}
-->`
```
Example for Ktor routing skill:
```
<!-- Invocation metadata:
paths: src/api/, src/routes/, src/api/routes/
keywords: ktor, routing, route, endpoint, HTTP, REST, path, handler
tech: ktor, kotlin
-->
```
Example for Kafka consumer skill:
```
<!-- Invocation metadata:
paths: src/messaging/kafka/, src/events/consumers/
keywords: kafka, consumer, event, message, consume, deserialize, retry
tech: kafka, kotlin
-->
```
Example Skill Chains
Standard REST API → Database:
```markdown
# In rest-api-patterns.md
## Related Skills
- `service-layer-patterns` - For business logic implementation
- `postgresql-access` - For database operations
```
Event-Driven Flow:
```markdown
# In kafka-producer-patterns.md
## Related Skills
- `service-layer-patterns` - For business logic before publishing
- `kafka-consumer-patterns` - For understanding message consumers
```
REST Endpoints Skill Template (Special Case)
When REST endpoint extraction was performed in Phase 3, generate dedicated rest-endpoints.md skill:
```markdown
<!-- Generated by discover-project-skills on YYYY-MM-DD -->
<!-- Invocation metadata:
paths: <api paths from extraction>
keywords: REST, API, endpoint, HTTP, route, <methods>
tech: <framework>
-->
---
name: rest-endpoints
description: REST API endpoint inventory for this project. Use when working with API endpoints, HTTP routes, REST controllers, especially in <api-paths>, or implementing new endpoints or modifying existing ones.
---
# REST API Endpoints
## Endpoint Inventory
### <Resource Group 1>
- `<METHOD> <path>` → <handler-file>:<line>
- Description: [If available from comments/docs]
- `<METHOD> <path>` → <handler-file>:<line>
### <Resource Group 2>
- `<METHOD> <path>` → <handler-file>:<line>
- `<METHOD> <path>` → <handler-file>:<line>
## Path Structure Patterns
- **Base path**: `<base-path>`
- **Versioning**: <strategy description>
- **Resource naming**: <pattern description>
- **ID parameters**: <pattern description>
## Request/Response Patterns
[Common patterns for request bodies, response formats, status codes]
## Authentication
[Authentication approach detected: JWT, OAuth, API keys, etc.]
## Related Skills
- `<framework>-routing-patterns` - For routing implementation details
- `service-layer-patterns` - For business logic called by endpoints
- `<database>-patterns` - For data access from endpoints
## Key Files
- `<main-routes-file>` - Route definitions
- `<controller-dir>` - Handler implementations
```
REST Endpoints Skill Generation Trigger
In Phase 4, after generating other skills:
```
if rest_analysis exists AND rest_analysis.endpoints.length > 0:
// Build endpoint inventory section
endpoint_inventory = group_by_resource(rest_analysis.endpoints)
// Build path patterns section
path_patterns = {
base_path: rest_analysis.base_path,
versioning: rest_analysis.versioning,
resource_naming: analyze_naming(rest_analysis.endpoints),
id_params: analyze_params(rest_analysis.endpoints)
}
// Generate skill
skill_content = fill_rest_endpoints_template(
endpoints: endpoint_inventory,
path_patterns: path_patterns,
file_references: rest_analysis.file_references
)
// Generate metadata
metadata = generate_metadata(
paths: extract_unique_dirs(rest_analysis.file_references),
keywords: ["REST", "API", "endpoint", "HTTP", "route", ...detected_methods],
tech: [framework_name]
)
// Write skill file
write_skill(".claude/skills/rest-endpoints.md", metadata + skill_content)
```
Error handling:
- Skill exceeds 500 tokens after optimization → Warn user, ask to proceed or simplify
- Cannot write skill file → Report error with path, suggest alternative location
- Skill generation interrupted → Save progress so far, allow resume
## Error Handling
Phase 0 Errors
- Cannot create .claude/skills/: Suggest alternative location or check permissions
- Cannot read existing skills: Warn user, offer to continue without checking
Phase 1 Errors
- No package files found: Ask user to point to key files manually
- Cannot determine project size: Default to Task/Explore agents (safer for large projects)
- File read errors: Skip that file, continue with others
- No technologies detected: Present findings anyway, ask user for hints
Phase 2 Errors
- User selects invalid option: Re-prompt with valid options
- No areas selected: Confirm if user wants to cancel
Phase 3 Errors
- Selected area has no patterns: Notify user, ask to skip or provide hints
- File references invalid: Validate before including, skip if invalid
- Deep analysis timeout: Fall back to shallow analysis, notify user
Phase 4 Errors
- Skill exceeds 500 tokens after optimization: Warn user, ask to proceed or simplify
- Cannot write skill file: Report error with path, suggest alternative location
- Skill generation interrupted: Save progress, allow resume
General Error Strategy
- Never fail silently - always inform user
- Provide actionable suggestions for resolution
- Allow graceful degradation (partial results better than none)
- Continue with other skills if one fails
## After Generation
Once all skills are generated:
1. **Commit generated skills:**

   ```bash
   git add .claude/skills/
   git commit -m "docs: add generated project skills

   Generated by discover-project-skills skill:
   - [list skill names]

   These skills document project patterns and can be imported/shared across team members."
   ```

2. **Present summary to user:**

   ```
   ✅ Generated N skills in .claude/skills/:

   **Technology Skills:**
   - postgresql-patterns.md
   - kafka-patterns.md

   **Project Skills:**
   - api-layer-patterns.md
   - service-layer-patterns.md

   Skills are committed and ready to use. Claude will automatically discover them when working on related features.
   ```

3. **Remind about maintenance:**

   ```
   💡 Re-run /cc-meta-skills:discover-project-skills when:
   - Major architectural changes are made
   - New technologies are adopted
   - Patterns evolve significantly

   The skill will detect existing generated skills and offer to update them.
   ```
## Key Principles
Scanning Strategy:
- Use Task/Explore agents for large codebases (>50 files)
- Use direct tools (Read/Glob/Grep) for small projects (<50 files)
- Progressive disclosure: quick scan → focused deep dive
Token Budget:
- Keep skills concise (<500 tokens each)
- Use file references with line numbers, not code snippets
- Validate token count after generation
- Optimize if over budget
Skill Chaining:
- Always include "Related Skills" sections
- Use directory structure as proxy for architectural layers
- Build chains based on typical flow patterns
- Double reinforcement: explicit chains + context matching
Descriptions:
- Make descriptions rich with trigger keywords
- Include file patterns for context matching
- Add scenarios that trigger skill usage
- Follow template: name + what-it-does + keywords + patterns + scenarios
Quality:
- Each skill includes at least 3 file references
- Each skill includes 2+ concrete examples/patterns
- Validate findings with user before generating
- Handle errors gracefully with actionable feedback