# Interview Pack Reference
This document lists all available interview packs for tacit knowledge extraction. Each pack targets a specific knowledge area and guides context-aware question generation.
## Available Packs
### 1. Codebase Topology & Ownership
**Pack ID:** `codebase-topology-ownership`
**Goal:** Clarify how the repo is organized, what "ownership" means, and when to reuse vs. create new code.
**Topics Covered:**
- Shared types, schemas, and business logic organization
- "Do-not-touch" zones and boundaries
- When to create new modules vs. reuse existing
- Ownership decisions for packages/folders
- Workspace dependencies and versioning
**Topic Questions (for dynamic question generation):**
- Where do shared types, schemas, and business logic live?
- What files or directories are considered "do-not-touch" zones?
- When is it acceptable to add a new module vs. reuse an existing one? (See the sketch after this list.)
- How do you decide ownership of a package or folder?
- What are the boundaries for editing shared libraries or common utilities?
- How do versioning and `workspace:*` dependencies affect this structure?
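To make the reuse-vs-create question concrete, here is a minimal sketch of the pattern a code-ownership steering typically records, assuming a pnpm workspace where shared contracts live in a dedicated package. The package name `@acme/shared` and the `User` type are invented for illustration:

```typescript
// packages/shared/src/user.ts: single source of truth for the shared type.
export interface User {
  id: string;
  email: string;
  role: "admin" | "member";
}

// apps/api/src/get-user.ts: reuse the shared type via the workspace package
// (declared as `"@acme/shared": "workspace:*"` in apps/api/package.json)...
import type { User } from "@acme/shared";

export function toPublicUser(user: User): Pick<User, "id" | "email"> {
  return { id: user.id, email: user.email };
}

// ...rather than redefining a near-duplicate `User` locally, which is exactly
// the kind of drift this pack is meant to surface.
```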
**When to Use:**
- Starting a new project and defining structure
- Onboarding new team members to codebase organization
- Refactoring or reorganizing existing codebases
- Documenting navigation conventions for agents
**Generated Steering:** `steerings/code-ownership.md`
---
### 2. Architecture & Design Invariants
**Pack ID:** `architecture-design-invariants`
**Goal:** Surface the invisible structure that keeps the system stable.
**Topics Covered:**
- Architectural layering patterns (controller/service/repo, hexagonal, CQRS)
- Dependency direction rules
- Naming and import alias conventions
- Design patterns (events vs. sync calls, DTOs, decorators)
- Caching, concurrency, idempotency
- Performance budgets
- API conventions and versioning
**Topic Questions (for dynamic question generation):**
- What architectural layering patterns are enforced (controller/service/repo, hexagonal, CQRS, etc.)?
- Which dependency directions are allowed or forbidden? (See the sketch after this list.)
- What naming or import alias rules exist (`@core/*`, `@features/*`, etc.)?
- What design patterns are mandatory or discouraged (events vs. sync calls, DTOs, decorators)?
- What are your caching, concurrency, or idempotency expectations?
- How do you define acceptable performance budgets (latency, memory, N+1 queries)?
- What are the conventions for APIs (REST, GraphQL, gRPC) and versioning/compatibility?
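As one way to picture the dependency-direction and alias questions, here is a minimal sketch assuming a `@core/*` / `@features/*` alias setup; the module and class names are invented for illustration:

```typescript
// src/core/money.ts: core knows nothing about features.
export class Money {
  constructor(readonly cents: number) {}
  multiply(factor: number): Money {
    return new Money(Math.round(this.cents * factor));
  }
}

// src/features/billing/invoice.ts: allowed direction, feature -> core.
import { Money } from "@core/money";

export function addTax(amount: Money, rate: number): Money {
  return amount.multiply(1 + rate);
}

// The reverse import (core pulling from "@features/billing") would invert the
// layering and is the kind of rule this steering makes explicit.
```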
**When to Use:**
- Establishing architecture guidelines for new projects
- Aligning team on design patterns
- Creating steerings for how agents should structure features
- Preventing architectural drift
**Generated Steering:** `steerings/architecture-invariants.md`
---
### 3. Business Domain Contracts
**Pack ID:** `business-domain-contracts`
**Goal:** Capture business logic invariants that constrain code behavior.
**Topics Covered:**
- Key domain entities and their invariants
- Permissions, roles, and entitlements
- Licensing tiers and feature flags
- Money-handling and compliance rules (audit logs, consent, retention)
- Region-specific or client-specific quirks
- Domain model naming and event rules
**Topic Questions (for dynamic question generation):**
- What are the key domain entities and their invariants?
- How are permissions, roles, and entitlements enforced?
- Are there licensing tiers or feature flags that affect code paths?
- What money-handling or compliance rules shape the design (audit logs, consent, retention)?
- Are there region-specific or client-specific quirks?
- What are the naming or event rules for domain models?
**When to Use:**
- Documenting business rules that must be preserved
- Onboarding to domain-specific constraints
- Creating steerings for business logic implementation
- Ensuring compliance requirements are explicit
**Generated Steering:** `steerings/domain-invariants.md`
---
### 4. Quality & Style Assurance
**Pack ID:** `quality-style-assurance`
**Goal:** Uncover what "high-quality" and "mergeable" mean in this context.
**Topics Covered:**
- Naming, formatting, and readability standards
- Definition of Done (before merging PR)
- Dependency discipline (adding/removing libraries)
- Logging, error handling, and observability rules
- Quality check steps (lint, type-check, tests)
- Examples of unacceptable commits/PRs
**Topic Questions (for dynamic question generation):**
- What naming, formatting, and readability standards are non-negotiable?
- What must be true before merging a PR ("Definition of Done")?
- How do you enforce dependency discipline (adding/removing libs)?
- What are the rules for logging, error handling, and observability? (See the sketch after this list.)
- What steps are included in a quality check (`pnpm qc`, lint, type-check, tests)?
- What are examples of commits or PRs that would be rejected outright?
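To illustrate the kind of answer the logging and error-handling question tends to produce, here is a minimal sketch of one such convention: wrap failures with context and log structured fields. The logger shape and `paymentGateway` are stand-ins, not part of this plugin:

```typescript
type Logger = { error: (event: string, fields: Record<string, unknown>) => void };
const logger: Logger = { error: (event, fields) => console.error(event, fields) };

// Stand-in for an external dependency; assumed for the example only.
declare const paymentGateway: { charge(customerId: string, cents: number): Promise<void> };

export async function chargeCustomer(customerId: string, cents: number): Promise<void> {
  try {
    await paymentGateway.charge(customerId, cents);
  } catch (err) {
    // Structured fields keep logs queryable; `cause` preserves the original error.
    logger.error("payment.charge_failed", { customerId, cents, err });
    throw new Error(`failed to charge customer ${customerId}`, { cause: err });
  }
}
```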
**When to Use:**
- Establishing code quality standards
- Defining "done" criteria for agents
- Creating pre-commit or CI check guidelines
- Documenting what makes code mergeable
**Generated Steering:** `steerings/quality-and-style.md`
---
### 5. Testing & Verification Strategy
**Pack ID:** `testing-verification-strategy`
**Goal:** Understand how the team proves code correctness and stability.
**Topics Covered:**
- Required test types per layer (unit, integration, e2e, contract)
- Test folder and fixture structure
- Coverage targets and philosophy
- Handling flaky or brittle tests
- Known fragile modules requiring extra attention
- Test data and mock strategies (real DB, Docker, fake API)
**Topic Questions (for dynamic question generation):**
- What test types are required for each layer (unit, integration, e2e, contract)?
- How are test folders and fixtures structured? (See the sketch after this list.)
- What is your coverage target or philosophy (e.g., 70% meaningful coverage vs. 100%)?
- How do you treat flaky or brittle tests?
- What are "known fragile" modules or flows that require extra attention?
- How are test data and mocks handled (real DB, Docker, fake API, etc.)?
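One possible shape for the test-structure answers is a unit test colocated with its source file, as in this sketch; Jest is assumed (matching the example later in this document), and the module is invented:

```typescript
// src/pricing/discount.ts
export function applyDiscount(priceCents: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new Error("percent out of range");
  return Math.round(priceCents * (1 - percent / 100));
}

// src/pricing/discount.test.ts (colocated, rather than under src/__tests__/)
import { describe, expect, it } from "@jest/globals";
import { applyDiscount } from "./discount";

describe("applyDiscount", () => {
  it("rounds to whole cents", () => {
    expect(applyDiscount(999, 10)).toBe(899);
  });

  it("rejects out-of-range percentages", () => {
    expect(() => applyDiscount(999, 150)).toThrow();
  });
});
```

A steering produced from this pack would state which of these layouts the team actually wants.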
**When to Use:**
- Defining testing requirements for new projects
- Creating steerings for test generation
- Documenting test structure conventions
- Setting coverage expectations
**Generated Steering:** `steerings/testing-strategy.md`
---
### 6. Risk & Historical Landmines
**Pack ID:** `risk-historical-landmines`
**Goal:** Map the organization's collective memory of past failures.
**Topics Covered:**
- Risky or easy-to-break modules/flows
- Intentional "looks wrong" code
- Legacy shims and vendor workarounds
- Design debts everyone should know
- Incidents that led to "never again" rules
- Changes requiring special review
**Topic Questions (for dynamic question generation):**
- What modules or flows are risky or easy to break?
- What parts of the system "look wrong but are intentional"?
- Are there legacy shims, vendor workarounds, or design debts everyone should know?
- Which incidents led to new "never again" rules?
- What kinds of changes are considered dangerous or need special review?
**When to Use:**
- Preserving institutional knowledge
- Warning agents about dangerous areas
- Documenting known issues and workarounds
- Creating "be careful here" steerings
**Generated Steering:** `steerings/risk-registry.md`
---
### 7. Security, Data & Compliance
**Pack ID:** `security-data-compliance`
**Goal:** Surface hidden security expectations that engineers internalize but rarely document.
**Topics Covered:**
- Authentication/authorization boundaries
- Secrets and token storage/access
- Unsafe patterns (direct DB access, unvalidated inputs, inline secrets)
- PII/PHI classification and protection
- Redaction and anonymization rules for logs/metrics
- Allowed inbound/outbound service communications
- Incident, breach, and audit event handling
**Topic Questions (for dynamic question generation):**
- What are the authentication/authorization boundaries?
- How should secrets and tokens be stored and accessed?
- What patterns are unsafe (direct DB access, unvalidated inputs, inline secrets)? (See the sketch after this list.)
- How is PII or PHI classified and protected?
- What redaction or anonymization rules apply to logs and metrics?
- What are allowed inbound/outbound service communications?
- How are incidents, breaches, or audit events handled?
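To make two of these expectations concrete, here is a minimal sketch of input validation and log redaction, assuming Zod as the validation library; the field names are illustrative:

```typescript
import { z } from "zod";

const CreateUserInput = z.object({
  email: z.string().email(),
  ssn: z.string().regex(/^\d{3}-\d{2}-\d{4}$/),
});
type CreateUserInput = z.infer<typeof CreateUserInput>;

export function parseCreateUser(body: unknown): CreateUserInput {
  // Raw request bodies are validated before they reach queries or services.
  return CreateUserInput.parse(body);
}

export function redactForLogs(input: CreateUserInput) {
  // Only redacted fields may reach logs and metrics.
  return { email: input.email.replace(/(.).*(@.*)/, "$1***$2"), ssn: "[redacted]" };
}
```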
**When to Use:**
- Documenting security requirements
- Creating steerings for secure coding
- Ensuring compliance with data protection rules
- Defining authentication/authorization patterns
**Generated Steering:** `steerings/security-and-compliance.md`
---
### 8. Delivery Lifecycle & Change Flow
**Pack ID:** `delivery-lifecycle-change-flow`
**Goal:** Make implicit shipping rules explicit.
**Topics Covered:**
- Branching model and commit message format
- PR templates and changelog requirements
- Required artifacts (migration plan, rollout doc, flag plan)
- Rollout → canary → rollback flow
- Configuration management (`.env` → validated config → injected)
- Required approvals and sign-offs
**Topic Questions (for dynamic question generation):**
- What is the preferred branching model and commit message format?
- How should PR templates and changelogs be filled out?
- What artifacts are required for a feature (migration plan, rollout doc, flag plan)?
- What is the usual rollout → canary → rollback flow?
- How is configuration managed (`.env` → validated config → injected)? (See the sketch after this list.)
- What approvals or sign-offs are required before deploy?
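One common reading of the `.env` → validated config → injected chain, sketched here with Zod; the variable names and defaults are assumptions for illustration:

```typescript
import { z } from "zod";

const EnvSchema = z.object({
  DATABASE_URL: z.string().url(),
  PORT: z.coerce.number().int().default(3000),
  LOG_LEVEL: z.enum(["debug", "info", "warn", "error"]).default("info"),
});

export type AppConfig = z.infer<typeof EnvSchema>;

// Fail fast at startup if the environment is misconfigured...
export function loadConfig(env: NodeJS.ProcessEnv = process.env): AppConfig {
  return EnvSchema.parse(env);
}

// ...then inject the typed config instead of reading process.env ad hoc.
export function createServer(config: AppConfig) {
  return { listen: () => console.log(`listening on :${config.PORT}`) };
}
```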
**When to Use:**
- Defining deployment workflows
- Creating steerings for release processes
- Documenting change management procedures
- Setting PR and commit standards
**Generated Steering:** `steerings/delivery-lifecycle.md`
---
## How to Choose Packs
### By Development Phase
**Starting a New Project:**
- Run: 1 (Topology), 2 (Architecture), 4 (Quality), 5 (Testing)
- Establishes foundational structure and standards
**Scaling an Existing Project:**
- Run: All available packs
- Captures current practices before they turn into undocumented tribal knowledge
**Onboarding Phase:**
- Run: 1 (Topology), 2 (Architecture), 6 (Risk), 7 (Security)
- Documents "how we do things here" and "what to watch out for"
**Production Readiness:**
- Run: 5 (Testing), 7 (Security), 8 (Delivery)
- Ensures shipping and operational standards are defined
### By Knowledge Gap
**"Where should this code go?"**
- Run: 1 (Codebase Topology)
**"What patterns should we follow?"**
- Run: 2 (Architecture Invariants), 4 (Quality & Style)
**"How should we test?"**
- Run: 5 (Testing & Verification)
**"What are the risks?"**
- Run: 6 (Risk & Historical Landmines)
**"How do we ship safely?"**
- Run: 7 (Security), 8 (Delivery Lifecycle)
**"What business rules matter?"**
- Run: 3 (Business Domain Contracts)
---
## Pack Usage Model
Packs serve as **topic guidance** for generating context-aware questions. They do NOT contain pre-written questions.
**How packs are used:**
1. **User selects pack** (e.g., "Architecture & Design Invariants")
2. **Explore agent analyzes codebase** (finds existing patterns)
3. **Questions are generated dynamically** using:
- Findings from Explore (what exists)
- Topics from pack guidance (what to ask about, including future plans)
4. **Interview conducted** with generated questions
5. **Steering generated** in simplified format
**Example:**
For "Testing & Verification Strategy":
- Explore finds: A few test files in `src/__tests__/`, jest.config.js, no coverage config
- Pack guidance: "What test types should be required? How should test folders be structured? What coverage target?"
- Generated questions combine both:
- "I see Jest tests in `src/__tests__/`. Should this pattern continue, or move tests closer to source files?"
- "What test types should be required for each layer (unit, integration, e2e)?"
- "What coverage target makes sense for this project?"
---
## Extensibility
To add a new interview pack:
1. **Add to interview-packs.md:** New section with goal and topic bullets
2. **Add to this file:** Pack ID, goal, topics, when to use, generated steering name
3. **Update SKILL.md:** Add Explore prompt guidance for the new pack
4. **Add to outcome mapping:** Steering filename for the new pack
The tacit-knowledge skill will automatically support new packs when they're added to these reference files.
---
## Outcome Mapping
Each interview pack generates a specific steering document:
| Pack | Generated Steering |
| ----------------------------------- | -------------------------------------- |
| 1. Codebase Topology & Ownership | `steerings/code-ownership.md` |
| 2. Architecture & Design Invariants | `steerings/architecture-invariants.md` |
| 3. Business Domain Contracts | `steerings/domain-invariants.md` |
| 4. Quality & Style Assurance | `steerings/quality-and-style.md` |
| 5. Testing & Verification Strategy | `steerings/testing-strategy.md` |
| 6. Risk & Historical Landmines | `steerings/risk-registry.md` |
| 7. Security, Data & Compliance | `steerings/security-and-compliance.md` |
| 8. Delivery Lifecycle & Change Flow | `steerings/delivery-lifecycle.md` |