Initial commit

Zhongwei Li
2025-11-29 18:01:27 +08:00
commit 585e3d35c2
11 changed files with 532 additions and 0 deletions

commands/initDoc.md (new file, 47 lines)
---
description: Generate a well-structured documentation system for this project
---
# /initDoc
## Actions
0. **Step 0: Gather Project Context**
   - Obtain the current project structure.
   - Read key files, such as `README.md`, `package.json`, `go.mod`, `pyproject.toml`, etc.
1. **Step 1: Global Investigation (using `scout`)**
- Launch concurrent `scout` agents to explore the codebase and produce reports.
2. **Step 2: Propose Core Concepts & Get User Selection**
- After scouting is complete, perform a synthesis step: Read all scout reports and generate a list of _candidate_ core concepts (e.g., "Authentication", "Billing Engine", "API Gateway").
- Use the `AskUserQuestion` tool to present this list to the user as a multiple-choice question: "I've analyzed the project and found these potential core concepts. Please select the ones you want to document now:".
3. **Step 3: Generate Concise Foundational Documents**
- In parallel, launch dedicated `recorder` agents to create essential, project-wide documents.
- **Task for Recorder A (Project Overview):** "Create `overview/project-overview.md`. Analyze all scout reports to define the project's purpose, primary function, and tech stack."
- **Task for Recorder B (Coding Conventions):** "Create a *concise* `reference/coding-conventions.md`. Analyze project config files (`.eslintrc`, `.prettierrc`) and extract only the most important, high-level rules."
- **Mode:** These recorders MUST operate in `content-only` mode.
4. **Step 4: Document User-Selected Concepts**
- Based on the user's selection from Step 2, for each _selected_ concept, concurrently invoke a `recorder` agent.
- The prompt for this `recorder` will be highly specific to control scope and detail:
"**Task:** Holistically document the **`<selected_concept_name>`**.
**1. Read all relevant scout reports and source code...**
**2. Generate a small, hierarchical set of documents:**
- **Optionally, create ONE `overview` document** if the concept is large enough to require its own high-level summary (e.g., `overview/authentication-overview.md`).
- **Create 1-2 primary `architecture` documents.** This is mandatory and should be the core 'LLM Retrieval Map'.
- **Create 1-2 primary `guide` documents** that explain the most common workflow for this concept (e.g., `how-to-authenticate-a-user.md`).
- **Optionally, create 1-2 concise `reference` documents** ONLY if there are critical, well-defined data structures or API specs. Do not create reference docs for minor details.
**3. Operate in `content-only` mode.**"
5. **Step 5: Final Indexing**
- After all `recorder` agents from both Step 3 and Step 4 have completed, invoke a single `recorder` in `full` mode to build the final `index.md` from scratch.
6. **Step 6: Cleanup**
- Delete the temporary scout reports in `/llmdoc/agent/`.
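The six steps above can be sketched as a single orchestration function. This is a hypothetical illustration, not a real SDK: the callables (`launch_scouts`, `ask_user`, `launch_recorder`, `cleanup`) and the report/document shapes are assumptions standing in for the actual `scout`, `AskUserQuestion`, and `recorder` tools.

```python
def run_init_doc(project_root, launch_scouts, ask_user, launch_recorder, cleanup):
    # Steps 0-1: explore the codebase with concurrent scout agents.
    reports = launch_scouts(project_root)

    # Step 2: synthesize candidate core concepts and let the user choose.
    candidates = sorted({c for r in reports for c in r["concepts"]})
    selected = ask_user(
        "I've analyzed the project and found these potential core "
        "concepts. Please select the ones you want to document now:",
        options=candidates,
    )

    # Step 3: foundational documents, always generated, content-only mode.
    docs = [
        launch_recorder("overview/project-overview.md", mode="content-only"),
        launch_recorder("reference/coding-conventions.md", mode="content-only"),
    ]

    # Step 4: one dedicated recorder per user-selected concept.
    for concept in selected:
        docs.append(launch_recorder(f"architecture/{concept}.md", mode="content-only"))

    # Step 5: a single recorder in full mode rebuilds index.md from scratch.
    docs.append(launch_recorder("index.md", mode="full"))

    # Step 6: remove the temporary scout reports.
    cleanup(f"{project_root}/llmdoc/agent/")
    return docs
```

The key design point the sketch captures is the ordering constraint: indexing (Step 5) runs only after every content recorder from Steps 3 and 4 has completed.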

commands/review.md (new file, 26 lines)
---
description: "Triggers a code review for specific files or the current context."
argument-hint: "[file paths or description of changes]"
---
# /c2:review
This command initiates a code review process using the `reviewer` agent.
## When to use
- **Use when:** You have completed a coding task and want to verify the quality and security of your changes.
- **Use when:** You want to audit specific files for potential issues.
## Actions
1. **Step 1: Identify Scope**
- Determine which files need to be reviewed based on the user's input or recent activity.
2. **Step 2: Launch Reviewer**
- Invoke the `reviewer` agent with the identified scope.
- Pass any specific focus areas mentioned by the user (e.g., "check for security issues").
3. **Step 3: Report Findings**
- The `reviewer` agent will output a prioritized list of issues and suggestions.
- Present this report to the user and ask if they want to proceed with fixing the issues (which would typically be handled by a `worker` or `scout` -> `worker` flow).
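The three review steps can be summarized in a short sketch. The helper callables (`invoke_reviewer`, `ask_user`) are hypothetical stand-ins for the `reviewer` agent and the `AskUserQuestion` tool, under the assumption that scope falls back to recent activity when no files are named.

```python
def run_review(user_input, recent_files, invoke_reviewer, ask_user):
    # Step 1: scope is the explicitly named files, else recent activity.
    scope = user_input.get("files") or recent_files

    # Step 2: launch the reviewer with the scope and any focus areas.
    findings = invoke_reviewer(scope, focus=user_input.get("focus", []))

    # Step 3: present the prioritized findings, then ask whether to
    # proceed with fixes (handled elsewhere by a worker flow).
    proceed = ask_user(
        "The reviewer found these issues. Fix them now?", options=["yes", "no"]
    )
    return findings, proceed == "yes"
```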

commands/what.md (new file, 33 lines)
---
description: "Clarifies a vague user request by asking clarifying questions."
argument-hint: ""
---
# /what
This command is used internally when a user's request is too vague to be acted upon. It reads the project documentation to understand the context and then asks the user targeted, option-based questions to clarify their intent.
## When to use
- **Use when:** The user's prompt is ambiguous (e.g., "fix it", "add a thing"). This command is typically invoked by the main assistant AI, not directly by the user.
- **Goal:** To turn a vague request into a concrete, actionable plan.
## Actions
1. **Step 1: Gather Context**
- Read the documentation index at `<projectRootPath>/llmdoc/index.md` and other high-level documents to understand the project's purpose, architecture, and features.
2. **Step 2: Formulate Clarifying Questions**
- Based on the documentation and the user's vague request, formulate a set of clarifying questions.
- The questions should be option-based whenever possible to guide the user toward a specific outcome. For example, instead of "What do you want to do?", ask "Are you trying to: (a) Add a new API endpoint, (b) Modify an existing feature, or (c) Fix a bug?".
3. **Step 3: Ask the User**
- Use the `AskUserQuestion` tool to present the questions to the user.
4. **Step 4: Formulate Investigation Task**
- Based on the user's clarified response, your goal is to formulate a set of concrete **investigation questions**.
- **Do NOT jump to a solution.** The purpose of this command is to clarify "what the user wants to know", not "how to implement it".
- Invoke the `/scout` command with the clear, factual questions you have formulated. For example, if the user now wants to "add a user endpoint", the next step is to ask `/scout` to investigate "What is the current API routing structure?" and "What conventions are used for defining data models?".
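The clarification flow above can be sketched as follows. This is a hypothetical outline: the helpers (`read_docs`, `ask_user`, `invoke_scout`) and the fixed option list are illustrative assumptions, not the real tool interfaces.

```python
def run_what(vague_request, read_docs, ask_user, invoke_scout):
    # Step 1: ground the questions in the project documentation.
    context = read_docs("llmdoc/index.md")

    # Steps 2-3: ask option-based clarifying questions rather than
    # an open-ended "What do you want to do?".
    choice = ask_user(
        f"Your request '{vague_request}' is ambiguous. Are you trying to:",
        options=["Add a new API endpoint", "Modify an existing feature", "Fix a bug"],
    )

    # Step 4: turn the answer into factual investigation questions,
    # then hand them to /scout; never jump straight to a solution.
    questions = [
        f"What existing code relates to: {choice}?",
        "What conventions does the project use for this area?",
    ]
    return invoke_scout(questions, context=context)
```

Note that the sketch ends by dispatching questions, not answers: /what's contract is to produce an investigation task for `/scout`, not an implementation plan.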