🧠 Interview Questions Pack
Purpose: structured interviews that elicit the tacit development rules which are later captured as steering documents.
Audience: senior engineers, leads, or architects who already maintain part of the repo and its delivery flow.
1. Codebase Topology & Ownership
Goal: clarify how the repo is organized, what "ownership" means, and when to reuse vs. create new code.
- Where do shared types, schemas, and business logic live?
- What files or directories are considered "do-not-touch" zones?
- When is it acceptable to add a new module vs. reuse an existing one?
- How do you decide ownership of a package or folder?
- What are the boundaries for editing shared libraries or common utilities?
- How do versioning and `workspace:*` dependencies affect this structure? (see the sketch after this list)
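For context on the `workspace:*` question: in pnpm workspaces this protocol pins an internal dependency to the sibling package in the same repo rather than to a registry version. A minimal sketch of such a declaration, shown as a TypeScript object literal purely for illustration; the `@acme/*` names are assumptions:

```ts
// Hypothetical packages/billing/package.json, expressed as an object literal.
export const billingPackageJson = {
  name: "@acme/billing",
  version: "0.1.0",
  dependencies: {
    // workspace:* resolves to the sibling package in this repo, so shared
    // types and schemas are always consumed from source, never from the registry.
    "@acme/core": "workspace:*",
  },
};
```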
2. Architecture & Design Invariants
Goal: surface the invisible structure that keeps the system stable.
- What architectural layering patterns are enforced (controller/service/repo, hexagonal, CQRS, etc.)?
- Which dependency directions are allowed or forbidden?
- What naming or import alias rules exist (`@core/*`, `@features/*`, etc.)? (a sketch follows this list)
- What design patterns are mandatory or discouraged (events vs. sync calls, DTOs, decorators)?
- What are your caching, concurrency, or idempotency expectations?
- How do you define acceptable performance budgets (latency, memory, N+1 queries)?
- What are the conventions for APIs (REST, GraphQL, gRPC) and versioning/compatibility?
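Answers to the alias and dependency-direction questions often crystallize fastest as concrete allowed/forbidden imports. A minimal self-contained sketch of one possible layering answer, collapsed into a single file for illustration; the `@core/*` and `@features/*` aliases come from the question above, while the layer contents are assumptions:

```ts
// Assumed rule for this sketch: outer layers (features) may depend on core,
// but nothing under @core/* may ever import from @features/*.

// --- @core/domain (innermost layer: no imports from outer layers) ---
export interface Money {
  amount: number;
  currency: string;
}

// --- @core/ports (interfaces that outer layers implement or consume) ---
export interface OrderRepository {
  save(order: { total: Money }): Promise<{ id: string }>;
}

// --- @features/checkout (outer layer: depends on core, never vice versa) ---
export async function createOrder(repo: OrderRepository, total: Money) {
  // Allowed: feature code uses core types and ports.
  // Forbidden: any file under @core/* importing from @features/*.
  return repo.save({ total });
}
```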
3. Business Domain Contracts
Goal: capture business logic invariants that constrain code behavior.
- What are the key domain entities and their invariants? (see the example after this list)
- How are permissions, roles, and entitlements enforced?
- Are there licensing tiers or feature flags that affect code paths?
- What money-handling or compliance rules shape the design (audit logs, consent, retention)?
- Are there region-specific or client-specific quirks?
- What are the naming or event rules for domain models?
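It can help to show interviewees what a codified invariant looks like, so answers come back as rules rather than anecdotes. A minimal sketch; the invoice-amount rule is purely an assumption:

```ts
// Hypothetical value object: an invoice amount can never be negative,
// and the constraint lives in exactly one place.
export class InvoiceAmount {
  private constructor(readonly cents: number, readonly currency: string) {}

  static of(cents: number, currency: string): InvoiceAmount {
    if (!Number.isInteger(cents) || cents < 0) {
      throw new RangeError("Invoice amounts must be a non-negative integer number of cents");
    }
    return new InvoiceAmount(cents, currency);
  }
}
```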
4. Quality & Style Assurance
Goal: uncover what "high-quality" and "mergeable" mean in this context.
- What naming, formatting, and readability standards are non-negotiable?
- What must be true before merging a PR ("Definition of Done")?
- How do you enforce dependency discipline (adding/removing libs)?
- What are the rules for logging, error handling, and observability?
- What steps are included in a quality check (`pnpm qc`, lint, type-check, tests)? (see the sketch after this list)
- What are examples of commits or PRs that would be rejected outright?
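Because `pnpm qc` is team-specific shorthand, it is worth capturing exactly what the gate chains together. A hypothetical sketch, expressed as a package.json scripts fragment in object-literal form; the tool choices (ESLint, tsc, Vitest) are assumptions to be replaced with whatever the interviewee names:

```ts
// Hypothetical package.json "scripts" block, shown as an object literal.
export const scripts = {
  lint: "eslint .",
  typecheck: "tsc --noEmit",
  test: "vitest run",
  // Single entry point for "is this mergeable?" checks:
  qc: "pnpm lint && pnpm typecheck && pnpm test",
};
```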
5. Testing & Verification Strategy
Goal: understand how the team proves code correctness and stability.
- What test types are required for each layer (unit, integration, e2e, contract)?
- How are test folders and fixtures structured?
- What is your coverage target or philosophy (e.g., 70% meaningful coverage vs. 100%)? (see the config sketch after this list)
- How do you treat flaky or brittle tests?
- What are "known fragile" modules or flows that require extra attention?
- How are test data and mocks handled (real DB, Docker, fake API, etc.)?
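When the coverage philosophy is a hard number, it usually ends up encoded in the test runner's config. A minimal sketch assuming Jest (substitute the team's actual runner); the 70% figure simply mirrors the example in the question above and is not a recommendation:

```ts
// Hypothetical jest.config.ts enforcing the coverage target surfaced in the interview.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 70, branches: 70, functions: 70, statements: 70 },
  },
};

export default config;
```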
6. Risk & Historical Landmines
Goal: map the organization's collective memory of past failures.
- What modules or flows are risky or easy to break?
- What parts of the system "look wrong but are intentional"?
- Are there legacy shims, vendor workarounds, or design debts everyone should know?
- Which incidents led to new "never again" rules?
- What kind of changes are considered dangerous or need special review?
7. Security, Data & Compliance
Goal: surface hidden security expectations that engineers internalize but rarely document.
- What are the authentication/authorization boundaries?
- How should secrets and tokens be stored and accessed?
- What patterns are unsafe (direct DB access, unvalidated inputs, inline secrets)?
- How is PII or PHI classified and protected?
- What redaction or anonymization rules apply to logs and metrics? (see the sketch after this list)
- What are allowed inbound/outbound service communications?
- How are incidents, breaches, or audit events handled?
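Redaction questions tend to get sharper answers when framed as "which fields must never reach a log line." A minimal sketch of such a rule; the field list is an assumption, to be replaced by whatever the interviewee actually names:

```ts
// Hypothetical log-redaction helper: listed fields are masked before the record
// reaches the logger. Which fields count as sensitive is exactly the tacit rule
// this interview section should surface.
const REDACTED_FIELDS = ["password", "token", "ssn", "email"] as const;

export function redact(record: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = { ...record };
  for (const field of REDACTED_FIELDS) {
    if (field in clean) {
      clean[field] = "[REDACTED]";
    }
  }
  return clean;
}
```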
8. Delivery Lifecycle & Change Flow
Goal: make implicit shipping rules explicit.
- What is the preferred branching model and commit message format?
- How should PR templates and changelogs be filled out?
- What artifacts are required for a feature (migration plan, rollout doc, flag plan)?
- What is the usual rollout → canary → rollback flow?
- How is configuration managed (`.env` → validated config → injected)? (a sketch follows this list)
- What approvals or sign-offs are required before deploy?
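The `.env` → validated config → injected flow above is often the first delivery rule worth writing down verbatim. A minimal sketch assuming zod for validation; the variable names are illustrative:

```ts
// Hypothetical config module: raw process.env is validated once at startup,
// and the rest of the codebase only ever imports the typed result.
import { z } from "zod";

const EnvSchema = z.object({
  DATABASE_URL: z.string().url(),
  PORT: z.coerce.number().default(3000),
  NODE_ENV: z.enum(["development", "test", "production"]),
});

// Throws with a readable error if a required variable is missing or malformed.
export const config = EnvSchema.parse(process.env);
```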
📘 Outcome Mapping
Each interview section contributes to a steering document:
| Interview Section | Derived Steering |
|---|---|
| Codebase Topology & Ownership | steerings/code-ownership.md |
| Architecture & Design Invariants | steerings/architecture-invariants.md |
| Business Domain Contracts | steerings/domain-invariants.md |
| Quality & Style Assurance | steerings/quality-and-style.md |
| Testing & Verification Strategy | steerings/testing-strategy.md |
| Risk & Historical Landmines | steerings/risk-registry.md |
| Security, Data & Compliance | steerings/security-and-compliance.md |
| Delivery Lifecycle & Change Flow | steerings/delivery-lifecycle.md |