---
description: Generate a high-quality GitHub issue optimized for agent implementation
argument-hint:
---
# GHI - Generate High-Quality Issue
Creates a GitHub issue optimized for the agent implementation framework. Works for any type of issue: features, bugs, refactoring, database changes, API endpoints, UI components, etc.
Focuses on WHAT (requirements and acceptance criteria) while letting agents determine HOW (implementation details).
## Input

**Feature/Bug Description:**
$ARGUMENTS
## Philosophy: What vs How (WITH Operational Context)
### ✅ ALWAYS SPECIFY (What + Where + How to Deploy)
- Requirements: Exact components, properties, types, behavior, constraints
- End state: What the system should look like when done
- Execution context: Which repo, environment, how to access resources
- Deployment steps: How to deploy this change to production/staging
- Verification commands: How to prove it worked
- Acceptance criteria: Code written AND deployed AND verified
- Context: Why this matters, who needs it, business requirements
- Technical details: Names, types, validation rules, error messages
- Constraints: Performance limits, security requirements, edge cases
### ❌ DON'T SPECIFY (Implementation Details)
- Complete function/component implementations
- Large blocks of working code (> 10-15 lines)
- Specific algorithms or data structures
- Order of internal code operations
### ✅ OK TO INCLUDE (Examples & Operations)
- Short code snippets showing shape/format (< 10 lines)
- Type definitions: `{ email: string, category?: string }`
- Example API responses: `{ id: 5, status: 'active' }`
- Error message formats: `"Invalid email format"`
- Small syntax examples for clarity
- Full deployment commands (copy-paste ready)
- Full verification queries (SQL, curl, AWS CLI)
- Database connection methods (SSM, port forwarding)
- Service restart commands (systemd, PM2)
### 💡 GUIDE (What to Figure Out)
- "You will need to figure out how to implement this" (algorithm/code structure)
- "Determine the correct approach for this problem" (design patterns)
- "Structure the code following existing patterns" (code organization)
- "Decide how to test this functionality" (test structure)
- "Figure out how to handle edge cases" (error handling logic)
### ⚠️ CRITICAL: Deployment is Part of Implementation
Issues are NOT complete when code is written. Issues are complete when:
- Code written and tested locally
- Code deployed to appropriate environment (production, staging, database)
- Deployment verified with commands (queries, health checks, AWS verification)
- Service restarted if applicable
- Changes confirmed working in target environment
## Steps
### 1. Research Repository Context

```bash
# Examine repository structure
gh repo view

# Check existing patterns (migrations, tests, etc.)
# Use the Read tool to examine similar files

# Review recent issues for style
gh issue list --limit 10
```
### 2. Analyze Requirements
- Extract core requirements from description
- Identify affected components (tables, APIs, UI)
- Determine scope boundaries
- List what needs to change vs what stays the same
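The analysis above can be grounded in the repository itself. Here is a minimal sketch; the search term and directory layout are illustrative stand-ins (built as a throwaway fixture so the commands have something to find) rather than the real repo:

```shell
# Sketch: enumerate components likely affected by a change. SEARCH_TERM and
# the fixture layout below are placeholders for a real checkout.
set -eu
demo=$(mktemp -d)
mkdir -p "$demo/backend/sql"
printf 'user_account = None\n' > "$demo/backend/models.py"
touch "$demo/backend/sql/05_prev_migration.sql"

SEARCH_TERM='user_account'
# Files that mention the term -> candidates for the Affected Components list
grep -rl "$SEARCH_TERM" "$demo"
# Highest-numbered migration -> the next migration file should be 06_*
ls "$demo/backend/sql" | sort | tail -1
```

In a real checkout, run the same `grep`/`ls` against the repo root instead of the fixture directory.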
### 3. Structure the Issue
Create a GitHub issue with these sections:
#### **Summary** (2-3 sentences)
High-level overview of what's being built/fixed and why.
#### **Context & Background**
- What's the current limitation?
- Why does this matter?
- Who needs this feature?
- What problem does it solve?
#### **Execution Context** (REQUIRED - Where & How to Deploy)
⚠️ CRITICAL: This section is MANDATORY. Without it, implementers won't know where to work or how to deploy.
Specify the operational context for this issue:

**Repository:**
- Which repo: Backend (tvl-voice-agent-poc-be) or Frontend (tvl-voice-agent-poc-fe)
- GitHub URL: https://github.com/Mygentic-AI/[repo-name]
- Branch strategy: Create a feature branch from main, PR back to main

**Environment:**
- Target environment: Production, Staging, Development, or Database
- If database: Instance ID, IP address, database name
- If service: Service name, instance, port
- Access method: SSM, port forwarding, or direct connection

**Resource Access (if applicable):**
- Database connection: How to connect (SSM port forwarding, send-command)
- Service access: How to restart (systemd, PM2)
- Credentials: Where they're stored (SSM Parameter Store, .env)
- Files/directories: Where code should be written
**Example for Database Migration:**
```markdown
**Repository:** Backend (https://github.com/Mygentic-AI/tvl-voice-agent-poc-be)
**Environment:** Production PostgreSQL 15 (EC2 i-0db0779332d396cf9)
- Private IP: 10.1.10.55:5432 (NO public IP)
- Database: `tlv_voice_agent_poc`
- Access: SSM port forwarding or send-command
**Files to Create/Modify:**
- `backend/sql/06_migration_name.sql` - Write migration here
- `backend/src/database/models.py` - Update SQLAlchemy models
```

**Example for API Endpoint:**
```markdown
**Repository:** Backend (https://github.com/Mygentic-AI/tvl-voice-agent-poc-be)
**Environment:** Production backend API (EC2 i-0edca1a07573abcd0)
- Service: `tlv-backend-api.service` (systemd)
- Working directory: `/root/tvl-voice-agent-poc-be`
- Port: 8000
**Files to Create/Modify:**
- `backend/src/api/endpoint_name.py` - Create new router
- `backend/src/api/main.py` - Register new router
```

**Example for Frontend Component:**
```markdown
**Repository:** Frontend (https://github.com/Mygentic-AI/tvl-voice-agent-poc-fe)
**Environment:** Production frontend (EC2 i-025975283037f7e3a)
- Service: `tvl-frontend` (PM2)
- Working directory: `/var/www/tvl-voice-agent-poc-fe`
- Port: 3000
**Files to Create/Modify:**
- `frontend/src/components/ComponentName.tsx` - New component
- `frontend/src/app/page.tsx` - Import and use component
```
#### **Requirements** (Detailed WHAT)
Specify the end state in detail. What should exist when this is complete? Be specific about:
- Components to create or modify
- Exact names, types, parameters, properties
- Behavior, validation rules, constraints
- Relationships and dependencies
- Default values and edge cases
- Error handling expectations
Adapt to the type of issue:

**For Database Changes:**
```markdown
**New Table: `table_name`**
- Columns: `column_name` TYPE, constraints
- Primary key, foreign keys, indexes
- Unique constraints

**Modify Table: `existing_table`**
- Add column: `new_column` TYPE
- Change: existing_column constraints
- Default values for existing rows
```

**For API Endpoints:**
```markdown
**Endpoint:** `POST /api/resource`
- Parameters: `param_name` (type, required/optional, validation rules)
- Request body schema: exact shape
- Response codes: 200, 400, 404, 500 with response shapes
- Authentication: required/optional
- Rate limiting: X requests per minute
- Error messages: specific text for each error case
```

**For UI Components:**
```markdown
**Component:** ComponentName
- Props: `propName` (type, required/optional, default value)
- State: what state it manages
- Behavior: user interactions and outcomes
- Validation: input rules and error messages
- Accessibility: ARIA labels, keyboard navigation
- Responsive: breakpoints and mobile behavior
```

**For Bug Fixes:**
```markdown
**Current Behavior:**
[Exact description of the bug]

**Expected Behavior:**
[Exact description of correct behavior]

**Affected Components:**
- Component/function/file names
- Conditions under which the bug occurs
- Data that triggers the issue
```

**For AWS/Infrastructure Operations:**
```markdown
**Resources to Create/Modify:**
- Resource type: Lambda Layer / Lambda Function / S3 Bucket / DynamoDB Table
- Name: exact resource name
- Region: me-central-1
- Configuration: detailed settings
```
**Verification Steps** (CRITICAL)

After implementing AWS operations, you MUST verify they succeeded:

**For Lambda Layers:**
```bash
# Verify layer exists
aws lambda list-layer-versions \
  --layer-name LAYER_NAME \
  --region REGION \
  --max-items 1
# Expected: Shows version 1 with expected description
```

**For Lambda Functions:**
```bash
# Verify function updated
aws lambda get-function-configuration \
  --function-name FUNCTION_NAME \
  --region REGION
# Expected: Shows layer ARN in Layers array (if attaching a layer)
# Expected: Shows updated environment variables (if updating config)
```

**For S3/DynamoDB/Other Resources:**
```bash
# Use the appropriate describe/get commands
aws s3 ls s3://bucket-name
aws dynamodb describe-table --table-name table-name
```

**Why verification matters:**
- AWS CLI commands may show errors even though the resource was actually created
- Background operations may time out but still succeed
- Exit codes alone are unreliable
- Always verify actual AWS state before assuming failure

**Acceptance Criteria additions for AWS:**
- Resource verified to exist: `aws [service] [describe-command]` shows the resource
- Configuration verified: resource has expected settings
- No manual verification needed - the script includes verification checks
#### **Implementation Guidance** (What to Figure Out)
List the problems the agent needs to solve (without prescribing solutions):
**For Database Changes:**
```markdown
To implement this, you will need to:
- Figure out how to write the SQL statements for schema changes
- Determine the correct order to execute statements in
- Structure the migration file following existing patterns
- Decide how to handle existing data during migration
- Test that the schema changes work correctly
```

**For API Endpoints:**
```markdown
To implement this, you will need to:
- Determine where to add the route handler in the existing structure
- Figure out how to structure input validation
- Decide how to handle database transactions for this operation
- Implement proper error handling and logging
- Write tests that cover success and error cases
```

**For UI Components:**
```markdown
To implement this, you will need to:
- Determine the appropriate component structure (functional vs class)
- Figure out the state management approach (local state, context, Redux)
- Decide how to handle form validation
- Implement responsive styling following existing patterns
- Write component tests and integration tests
```

**For Bug Fixes:**
```markdown
To implement this, you will need to:
- Investigate the root cause of the issue
- Determine the minimal change required to fix it
- Figure out how to prevent regression
- Write tests that reproduce the bug before fixing it
- Verify the fix doesn't break existing functionality
```

**For AWS/Infrastructure Operations:**
```markdown
To implement this, you will need to:
- Determine the correct AWS CLI commands to create/modify resources
- Figure out the proper order of operations (create layer before attaching, etc.)
- Structure scripts with error handling and verification
- Decide how to handle failures and rollback
- Verify resources exist after operations complete
```
**⚠️ IMPORTANT - Error Messages Don't Always Mean Failure:**
If you see error messages during AWS operations (timeouts, connection closed, socket errors, etc.), **verify the actual AWS state** before assuming failure. AWS CLI commands sometimes show errors even when the operation succeeded.
**Always verify:**
1. Run the verification command (see Verification Steps in Requirements section)
2. Check if the resource actually exists and is configured correctly
3. Only retry if verification confirms the resource doesn't exist or is misconfigured
**Example:**
- Error message: "Connection closed before receiving valid response"
- Verification: `aws lambda list-layer-versions --layer-name X` shows the layer exists with version 1
- Conclusion: Operation succeeded despite error message - continue workflow
**Do NOT:**
- Assume failure based solely on error output
- Retry operations without checking if they already succeeded
- Report failure to user without verifying actual state
#### **Deployment & Verification** (REQUIRED - How to Deploy)
⚠️ CRITICAL: This section is MANDATORY. Implementation includes deployment - not just writing code.
This issue is NOT complete until ALL of the following are done:
- ✅ Code written and tested locally
- ✅ Code committed to feature branch in the correct repository
- ✅ Changes deployed to appropriate environment (Production, Staging, or Database)
- ✅ Deployment verified with commands (queries, health checks, AWS CLI)
- ✅ Service restarted if needed (systemd, PM2)
- ✅ Changes confirmed working in target environment
Provide explicit deployment steps adapted to the issue type:
**For Database Migrations:**

**Step 1: Test Locally (Port Forwarding)**
```bash
# Terminal 1: Start port forwarding
aws ssm start-session \
  --target i-0db0779332d396cf9 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["5432"],"localPortNumber":["5433"]}' \
  --region me-central-1

# Terminal 2: Test migration
psql -h localhost -p 5433 -U tlv_admin -d tlv_voice_agent_poc -f backend/sql/06_migration_name.sql
```

**Step 2: Deploy to Production (SSM Send-Command)**
```bash
# Copy SQL file to server and execute
aws ssm send-command \
  --instance-ids "i-0db0779332d396cf9" \
  --document-name "AWS-RunShellScript" \
  --region me-central-1 \
  --parameters "commands=[\"sudo -i -u postgres psql -d tlv_voice_agent_poc -f /tmp/migration.sql\"]" \
  --query 'Command.CommandId' \
  --output text
```

**Step 3: Verify Deployment**
```sql
-- Verify schema changes
SELECT column_name, data_type FROM information_schema.columns
WHERE table_name = 'table_name' AND column_name = 'new_column';
-- Expected: 1 row showing the new column
```

**Step 4: Update Models (if SQLAlchemy)**
```bash
# On the backend server
cd /root/tvl-voice-agent-poc-be
# Edit models.py with the new columns
sudo systemctl restart tlv-backend-api.service
curl http://localhost:8000/health  # Verify service health
```
**For Backend API Endpoints:**

**Step 1: Write and Test Code Locally**
```bash
# Run tests
pytest tests/api/test_new_endpoint.py
python -m black .   # Format
python -m mypy .    # Type check
```

**Step 2: Commit and Push**
```bash
git add backend/src/api/new_endpoint.py
git commit -m "Add new API endpoint for X feature"
git push origin feature/new-endpoint
```

**Step 3: Deploy to Production**
```bash
# SSH to the backend server
aws ssm start-session --target i-0edca1a07573abcd0 --region me-central-1

# On the server
cd /root/tvl-voice-agent-poc-be
git pull origin main
source venv/bin/activate
pip install -r requirements.txt  # if dependencies changed
sudo systemctl restart tlv-backend-api.service
```

**Step 4: Verify Deployment**
```bash
# Check service status
sudo systemctl status tlv-backend-api.service

# Test endpoint
curl -X POST http://localhost:8000/api/new-endpoint \
  -H "Authorization: Bearer TOKEN" \
  -d '{"param": "value"}'
# Expected: 200 OK with response data
```
**For Frontend Components:**

**Step 1: Build and Test Locally**
```bash
npm run lint       # ESLint
npm run typecheck  # TypeScript
npm run build      # Production build
npm run start      # Test the build locally
```

**Step 2: Commit and Push**
```bash
git add frontend/src/components/NewComponent.tsx
git commit -m "Add NewComponent for X feature"
git push origin feature/new-component
```

**Step 3: Deploy to Production**
```bash
# SSH to the frontend server
aws ssm start-session --target i-025975283037f7e3a --region me-central-1

# On the server
cd /var/www/tvl-voice-agent-poc-fe
git pull origin main
npm ci  # if package.json changed
npm run build
pm2 restart tvl-frontend
```

**Step 4: Verify Deployment**
```bash
# Check PM2 status
pm2 status

# Check application logs
pm2 logs tvl-frontend --lines 50

# Test frontend
curl http://localhost:3000
# Expected: 200 OK
```

**Step 5: Test in Browser**
- Navigate to https://your-domain.com
- Verify the new component renders correctly
- Test that user interactions work as expected
**For AWS Infrastructure:**

**Step 1: Create/Modify Resource**
```bash
aws [service] [create-command] \
  --name resource-name \
  --region me-central-1 \
  [other parameters]
```

**Step 2: ALWAYS Verify Resource Exists**
```bash
aws [service] [describe-command] \
  --name resource-name \
  --region me-central-1
# Expected: Shows the resource with the expected configuration
```

**Step 3: Update Dependent Resources (if needed)**
```bash
# Example: Attach a layer to a Lambda function
aws lambda update-function-configuration \
  --function-name FUNCTION_NAME \
  --layers LAYER_ARN
```

**Step 4: Verify End-to-End**
```bash
# Test that the complete workflow works
aws lambda invoke --function-name FUNCTION_NAME output.json
cat output.json  # Verify the expected response
```
#### **Acceptance Criteria** (What Done Looks Like)
**⚠️ CRITICAL:** Acceptance criteria must include BOTH code quality AND deployment verification.
**Format:** Use "AND verified" pattern for every criterion:
**Code Quality Criteria:**
- [ ] All tests pass AND verified: `pytest` exits with code 0, all tests green
- [ ] Linting passes AND verified: `npm run lint` or `python -m black .` exits with code 0
- [ ] TypeScript compiles AND verified: `tsc --noEmit` exits with code 0 (if TypeScript project)
- [ ] Build succeeds AND verified: `npm run build` exits with code 0
**Deployment Criteria (REQUIRED):**
- [ ] Code committed to feature branch AND verified: `git log` shows commit
- [ ] **Changes deployed to [environment] AND verified**: [specific verification command]
- [ ] **Service restarted (if needed) AND verified**: `systemctl status` or `pm2 status` shows running
- [ ] **Deployment health check passes AND verified**: `curl [health endpoint]` returns 200 OK
**Functional Criteria** (specific to your feature):
- [ ] [Feature-specific outcome] AND verified: [command/query that proves it]
- [ ] [Feature-specific outcome] AND verified: [command/query that proves it]
**Examples:**
**For Database Migration:**
```markdown
- [ ] All tests pass AND verified: `pytest` exits with code 0
- [ ] Migration file created AND verified: `ls backend/sql/06_migration.sql` shows the file
- [ ] **Migration deployed to production database AND verified**: the following query returns 1 row:
  `SELECT column_name FROM information_schema.columns WHERE table_name = 'users' AND column_name = 'new_column';`
- [ ] Models updated AND verified: `grep "new_column" backend/src/database/models.py` shows the column definition
- [ ] Backend service restarted AND verified: `sudo systemctl status tlv-backend-api.service` shows active (running)
- [ ] Can insert data into the new column AND verified: `INSERT INTO users (new_column) VALUES ('test')` succeeds
```
**For API Endpoint:**
```markdown
- [ ] All tests pass AND verified: `pytest tests/api/test_endpoint.py` exits with code 0
- [ ] Linting passes AND verified: `python -m black .` exits with code 0
- [ ] Code committed AND verified: `git log --oneline -1` shows the commit message
- [ ] **Code deployed to production backend AND verified**: SSH to the server, `git log` shows the latest commit
- [ ] **Backend service restarted AND verified**: `sudo systemctl status tlv-backend-api.service` shows active
- [ ] **Endpoint responds correctly AND verified**: `curl -X POST http://localhost:8000/api/endpoint -H "Authorization: Bearer TOKEN" -d '{"test": "data"}'` returns 200 OK with the expected response
- [ ] Returns 400 for invalid input AND verified: `curl` with bad data returns 400
- [ ] Requires authentication AND verified: `curl` without a token returns 401
```
**For Frontend Component:**
```markdown
- [ ] All tests pass AND verified: `npm test` exits with code 0
- [ ] Linting passes AND verified: `npm run lint` exits with code 0
- [ ] TypeScript compiles AND verified: `npm run typecheck` exits with code 0
- [ ] Build succeeds AND verified: `npm run build` exits with code 0
- [ ] Code committed AND verified: `git log --oneline -1` shows the commit
- [ ] **Code deployed to production frontend AND verified**: SSH to the server, `git log` shows the commit
- [ ] **Frontend rebuilt AND verified**: `ls .next/` shows fresh build files with recent timestamps
- [ ] **PM2 restarted AND verified**: `pm2 status` shows tvl-frontend online
- [ ] **Component renders AND verified**: navigate to https://domain.com/page, component visible in browser
- [ ] User interaction works AND verified: click the button, expected behavior occurs
```
**For AWS Infrastructure:**
```markdown
- [ ] **Resource created AND verified**: `aws lambda list-layer-versions --layer-name LAYER_NAME --region me-central-1` shows version 1
- [ ] Configuration correct AND verified: `aws lambda get-function-configuration --function-name FUNCTION_NAME` shows the layer ARN in the Layers array
- [ ] Function works end-to-end AND verified: `aws lambda invoke --function-name FUNCTION_NAME output.json` then `cat output.json` shows the expected result
```
**⚠️ Remember:** An issue is NOT done until deployment is verified. "Code written" ≠ "Issue complete"
**Writing Acceptance Criteria - Use "AND verified" Pattern:**
Format each criterion with a verification command that proves it worked:
**Examples:**
- [ ] Branch merged AND verified: `gh pr list --search 120 --state merged` shows PR as MERGED
- [ ] Database table created AND verified: `SELECT COUNT(*) FROM table_name` returns data
- [ ] File created AND verified: `ls path/to/file.py` shows file exists
- [ ] Function works AND verified: `pytest test_function.py::test_success` passes
- [ ] API returns 200 AND verified: `curl -X POST /api/endpoint` returns status 200
**Don't use vague criteria:**
❌ "Works correctly" - not measurable
❌ "Function created" - process, not outcome
❌ "Documentation updated" - too vague
**Do use specific, verifiable criteria:**
✅ "Returns user data with email AND verified: response includes 'email' field"
✅ "Handles rate limit AND verified: 429 response returns retry-after header"
✅ "All tests pass AND verified: `pytest` exit code 0"
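The "AND verified" pattern is mechanical enough to script. Here is a sketch; the criteria shown are placeholders, and real usage would pass verification commands like `pytest`, `curl`, or `psql` instead:

```shell
# Sketch: turn an acceptance criterion into a pass/fail line by running its
# verification command and checking the exit code.
verified() {
  desc="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "[x] $desc AND verified"
  else
    echo "[ ] $desc NOT verified"
    return 1
  fi
}

# Placeholder criteria; substitute real verification commands:
verified "shell available" true
verified "config file exists" test -f /definitely/missing/file || true
```

Because each criterion is backed by an exit code, an agent (or CI) can run the whole checklist and fail fast on the first unverified item.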
#### **Technical Constraints** (Boundaries)
- Don't break existing single-account users
- Must support up to 10 accounts per user
- Must maintain referential integrity
- Performance: Queries must remain under 100ms
#### **Out of Scope** (What NOT to Do)
- UI for managing multiple accounts (separate issue)
- OAuth flow changes (separate issue)
- Account switching functionality (separate issue)
### 4. Create the GitHub Issue

```bash
# Create the issue with appropriate labels
gh issue create \
  --title "Support multiple Google accounts per user" \
  --body "$(cat issue-body.md)" \
  --label "enhancement" \
  --label "database"

# Display the issue URL
echo "✓ Issue created: [URL]"
```
## Output Format Template
## Summary
[2-3 sentence overview]
## Context & Background
**Current Limitation:**
[What doesn't work today]
**Why This Matters:**
[Business value, user impact]
**Who Needs This:**
[User personas or use cases]
## Requirements
[Adapt this section to your issue type. Specify the end state in detail.]
### [For Database Issues]
**New Table: `table_name`**
- Columns with types and constraints
- Primary key, foreign keys, indexes
### [For API Issues]
**Endpoint:** `POST /api/resource`
- Parameters: name, type, validation rules
- Response codes and shapes
- Authentication requirements
### [For UI Issues]
**Component:** ComponentName
- Props: name, type, required/optional
- Behavior: user interactions
- Styling: responsive breakpoints
- Accessibility: ARIA labels, keyboard nav
### [For Bug Fixes]
**Current Behavior:** [exact bug description]
**Expected Behavior:** [correct behavior]
**Affected Components:** [list of files/functions]
### [For Refactoring]
**Current State:** [duplication/issues]
**Target State:** [consolidated/improved]
**Files Affected:** [list to create/modify]
## Implementation Guidance
To implement this, you will need to:
- [Problem agent needs to solve 1]
- [Problem agent needs to solve 2]
- [Problem agent needs to solve 3]
## Testing Expectations
**What tests to write** (be specific with test names and behaviors):
- [ ] **Unit test**: `test_function_name_success()` - Verify function returns expected output with valid input
- [ ] **Unit test**: `test_function_name_handles_error()` - Verify function handles error gracefully (e.g., rate limit 429, invalid input)
- [ ] **Unit test**: `test_function_name_edge_case()` - Verify function handles edge cases (empty input, very long string, special characters)
- [ ] **Integration test**: `test_complete_workflow()` - Test end-to-end flow from input to final output
- [ ] **Edge case tests**: Empty values, null, very large inputs (1000+ chars), special characters, Unicode
**What must pass:**
- [ ] All pytest tests pass (Python projects) - `pytest` exits with code 0
- [ ] All npm test passes (JavaScript projects) - `npm test` exits with code 0
- [ ] ESLint passes with no errors - `npm run lint` or `eslint .` exits with code 0
- [ ] TypeScript compilation succeeds - `tsc --noEmit` exits with code 0
- [ ] Build succeeds - `npm run build` or equivalent exits with code 0
- [ ] Test coverage ≥ 80% for new code (if project uses coverage)
- [ ] No test warnings or errors in output
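The "must pass" gates above can be chained so a single command answers "does everything pass?". A sketch with placeholder gates (`true` stands in for each project's real command):

```shell
# Sketch: run each "must pass" gate in order; the first nonzero exit code
# stops the script. `true` is a placeholder for the real command per gate.
set -e
run_gate() {
  name="$1"; shift
  echo "gate: $name"
  "$@"
}
run_gate "unit tests" true   # e.g. pytest -q        or npm test
run_gate "lint"       true   # e.g. npm run lint     or python -m black --check .
run_gate "types"      true   # e.g. tsc --noEmit     or mypy .
run_gate "build"      true   # e.g. npm run build
echo "all gates passed"
```

With `set -e`, any failing gate aborts the script with that gate's exit code, which is exactly the "exits with code 0" criterion the checklist asks for.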
**Examples of specific test requirements:**
**For API endpoint:**
- [ ] `test_post_endpoint_valid_data()` - Returns 201 with created resource
- [ ] `test_post_endpoint_invalid_email()` - Returns 400 with error message
- [ ] `test_post_endpoint_duplicate()` - Returns 409 conflict
- [ ] `test_post_endpoint_unauthorized()` - Returns 401 without auth token
**For UI component:**
- [ ] `test_component_renders()` - Component renders without errors
- [ ] `test_component_handles_click()` - Clicking button calls onClick handler
- [ ] `test_component_shows_error()` - Error message displays when validation fails
- [ ] `test_component_keyboard_nav()` - Tab/Enter keyboard navigation works
**For database operation:**
- [ ] `test_insert_record()` - Successfully inserts valid record
- [ ] `test_unique_constraint()` - Raises error on duplicate key
- [ ] `test_foreign_key()` - Prevents orphaned records
- [ ] `test_query_performance()` - Query completes in < 100ms
## Security Requirements
[If handling authentication, sensitive data, or user input]
- **Authentication:** [Required/optional, what level]
- **Authorization:** [What permissions needed]
- **Input Validation:** [What inputs must be validated and how]
- **Data Privacy:** [What data needs encryption/protection]
- **Vulnerabilities to Avoid:** [SQL injection, XSS, etc.]
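Input-validation requirements are easiest for agents to satisfy when stated as concrete accept/reject cases. A sketch of one such rule; the regex is illustrative only, and a real implementation should use the framework's own validator:

```shell
# Sketch: an email-format validation rule expressed as an executable check.
# The pattern is illustrative, not a complete RFC 5322 validator.
is_valid_email() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'
}

is_valid_email "user@example.com" && echo "accept"
is_valid_email "not-an-email"     || echo "reject: Invalid email format"
```

Stating the rule this way pins down both the accepted shape and the exact error message ("Invalid email format") the issue expects.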
## Performance Requirements
[OPTIONAL - Only if there's a specific known performance concern. Most issues don't need this. Build first, measure, then optimize separately if needed.]
**Include this section ONLY if:**
- There's a known performance bottleneck being addressed
- The feature handles large-scale data (millions of records)
- There's a user-facing latency requirement
**Examples when needed:**
- **Response Time:** "< 100ms for search queries on 1M emails"
- **Scale:** "Support up to 10 Google accounts per user"
- **Load:** "Handle 1000 concurrent OAuth refreshes"
**If no specific performance concern:** Omit this section entirely. Performance optimization can be a separate issue after measuring actual performance.
## Error Handling
**Error Messages:**
- [Specific user-facing error text for each error case]
**Error Codes:**
- [HTTP status codes and when to return them]
**Fallback Behavior:**
- [What happens when operation fails]
## Documentation Requirements
**Writing Documentation Requirements - Be Hyper-Specific:**
❌ **Vague (causes review failures):**
- "Update documentation"
- "Add to README"
- "Document the changes"
- "Update API docs"
✅ **Specific (agent knows exactly what to do):**
- "README.md - Add `search_web` to 'Available Functions' list in section 6 (around line 115)"
- "Check if `FEATURE-LIST.md` exists in `/features/` - if yes, add feature entry; if no, skip this step"
- "Add docstring to `search_web()` function with Args, Returns, and Example sections"
- "Update `/wiki/api-reference.md` - add new endpoint under 'Search Endpoints' section"
**Template for Documentation Requirements:**
**Must update:**
- [ ] **README.md** - Specific file, section, and content to add:
- Section: [Exact section name, e.g., "Available Functions", "API Endpoints"]
- Location: [Approximate line number or "after X section"]
- Content to add: [Exact text or format, e.g., "`- 🔍 **Search Functions**: `search_web` - Perplexity AI web search with model selection`"]
- Example: "README.md - Add `search_web` to Available Functions list (section 6, around line 115) with format: `🔍 Search: \`search_web\` - Web search via Perplexity AI`"
- [ ] **Check conditional files** - Files that may or may not exist:
- `FEATURE-LIST.md` in `/features/` - If exists, add feature entry; if not, skip
- `API.md` in `/docs/` - If exists, document new endpoints; if not, skip
- Format: "Check if [filename] exists in [path] - if yes, [action]; if no, skip this step"
- [ ] **Inline code comments** - Specific functions/classes needing documentation:
- Function: `function_name` in `path/to/file.py` - Add docstring with:
- Description: [What it does in 1-2 sentences]
- Args section: Document each parameter with type and description
- Returns section: Document return type and what it represents
- Example section: Show sample usage (optional but recommended)
- Complex logic: Lines X-Y in `file.py` - Add inline comments explaining [specific algorithm/logic]
- [ ] **API/Wiki documentation** - External documentation to update:
- File: `/wiki/api-reference.md` or `/docs/api/endpoints.md`
- Section: [Specific section name]
- Content: Document the following:
- Endpoint/Function: `POST /api/endpoint` or `function_name()`
- Parameters: List each with type, required/optional, validation rules
- Response format: Exact JSON/object shape
- Error codes: List possible errors with messages
- Example: Show request/response example
**Examples:**

For a new function:
```markdown
**Must update:**
- [ ] README.md - Add `search_web` to Available Functions list (section 6, around line 115)
  - Format: `- 🔍 **Search Functions**: \`search_web\` - Perplexity AI web search with model selection (✅ Implemented)`
- [ ] Check if `docs/FEATURE-LIST.md` exists - if yes, add an entry for the web search feature; if no, skip
- [ ] Add docstring to `search_web()` in `voice-agent/src/functions/perplexity.py`:
  - Description: "Search the web using Perplexity AI"
  - Args: `raw_arguments` (dict), `context` (RunContext)
  - Returns: `str` (search results)
  - Example: User asks "What's the weather?", agent calls search_web
```

For an API endpoint:
```markdown
**Must update:**
- [ ] README.md - Add `/api/accounts` endpoint to API Endpoints section (after line 89)
  - Format: `POST /api/accounts - Create new account`
- [ ] `/docs/api/accounts.md` - Create a new file documenting:
  - Endpoint: `POST /api/accounts`
  - Parameters: `email` (string, required), `category` (string, optional)
  - Response: 201 with `{ id, email, category, created_at }`
  - Errors: 400 (invalid email), 409 (duplicate)
- [ ] Add inline comments to `src/api/accounts.ts`:
  - Lines 45-67: Explain the validation logic for the email uniqueness check
```

For database changes:
```markdown
**Must update:**
- [ ] `/wiki/database-schema.md` - Add `user_accounts` table to the Tables section
  - Include: column names, types, constraints, indexes
  - Include: relationships to other tables (foreign keys)
- [ ] Check if `/docs/MIGRATIONS.md` exists - if yes, document the migration process; if no, skip
- [ ] Add comments to the migration file explaining each step
```
## Common Pitfalls to Avoid
**Purpose:** Help agents anticipate and avoid common mistakes for this type of issue.

**For AWS/Infrastructure Operations:**
- ❌ Don't assume failure based on error messages - verify actual AWS state
- ❌ Don't forget to update IAM permissions if creating new resources
- ❌ Don't hardcode region - use environment variable or config
- ❌ Don't retry operations without checking if they already succeeded
- ✅ Do verify resources exist after creation (see Verification Steps)
- ✅ Do include rollback procedures in scripts
- ✅ Do test with AWS CLI before putting in scripts
**For API Endpoints:**
- ❌ Don't forget input validation before database operations
- ❌ Don't return stack traces to users - log them server-side
- ❌ Don't forget rate limiting for public endpoints
- ❌ Don't skip authentication checks
- ✅ Do validate all inputs against schema
- ✅ Do return consistent error format
- ✅ Do log all errors with context (user ID, timestamp, parameters)
**For Database Changes:**
- ❌ Don't forget migration rollback script
- ❌ Don't assume column order - use explicit column names
- ❌ Don't forget indexes for foreign keys
- ❌ Don't forget to handle existing data
- ✅ Do test migration on production data copy
- ✅ Do include data validation after migration
- ✅ Do document breaking changes
**For UI Components:**
- ❌ Don't forget mobile breakpoints (320px, 768px, 1024px)
- ❌ Don't forget ARIA labels for accessibility
- ❌ Don't forget loading states and error boundaries
- ❌ Don't forget keyboard navigation (Tab, Enter, Escape)
- ✅ Do test on actual mobile devices
- ✅ Do include proper focus management
- ✅ Do follow existing component patterns
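The keyboard-navigation pitfall above is easiest to test when the key handling is a pure function, independent of any UI framework. A framework-agnostic sketch (state shape and function name are assumptions): Tab is left to the browser, Escape closes, and arrow keys move the highlight.

```typescript
interface DropdownState {
  open: boolean;
  highlighted: number; // index into the options list
}

// Pure key handler: returns the next state, never mutates the old one.
function onKey(state: DropdownState, key: string, count: number): DropdownState {
  if (!state.open) return state; // closed dropdown ignores keys
  switch (key) {
    case "Escape":
      return { ...state, open: false };
    case "ArrowDown":
      return { ...state, highlighted: Math.min(state.highlighted + 1, count - 1) };
    case "ArrowUp":
      return { ...state, highlighted: Math.max(state.highlighted - 1, 0) };
    default:
      return state;
  }
}
```

The component then only wires DOM key events to this function, which keeps keyboard behavior unit-testable without a browser.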
**For Function/Feature Implementation:**
- ❌ Don't forget edge cases (empty input, null, very long strings)
- ❌ Don't forget error handling for all failure paths
- ❌ Don't forget to clean up temporary files/resources
- ❌ Don't forget to update related documentation
- ✅ Do write tests before implementation (TDD when possible)
- ✅ Do follow existing code patterns in the project
- ✅ Do verify changes work end-to-end
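The edge cases named above (empty input, null, very long strings) are cheapest to handle up front, before the main logic runs. A minimal sketch; the function name and the specific length cap are illustrative:

```typescript
// Illustrative cap; 998 is e.g. the RFC 5322 line-length ceiling.
const MAX_SUBJECT_LENGTH = 998;

function normalizeSubject(input: string | null | undefined): string {
  if (input == null) return "";                 // null / undefined
  const trimmed = input.trim();
  if (trimmed.length === 0) return "";          // empty or whitespace-only
  return trimmed.slice(0, MAX_SUBJECT_LENGTH);  // very long strings
}
```

Writing the edge-case tests first (empty, null, oversized) turns the checklist above into executable TDD.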
**For Bug Fixes:**
- ❌ Don't fix symptoms without finding root cause
- ❌ Don't skip writing regression test
- ❌ Don't make changes without understanding why bug occurred
- ✅ Do reproduce bug in test first
- ✅ Do verify fix doesn't break related functionality
- ✅ Do document why bug occurred (inline comment)
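The "reproduce in a test first" rule above looks like this in miniature. Everything here is hypothetical (`clearSessionCache`, the `cache` map): the point is that the assertion encodes the bug report, fails on the old behavior, and passes once the fix is in.

```typescript
const cache = new Map<string, string>();

function clearSessionCache(): void {
  // Fix: previously only the session token was removed, leaving cached
  // email data behind (root cause: cache service not called on logout).
  cache.clear();
}

// Regression test written from the bug report, before the fix.
function testLogoutClearsCache(): void {
  cache.set("emails", "stale data from previous session");
  clearSessionCache();
  if (cache.size !== 0) {
    throw new Error("Regression: cached data survived logout");
  }
}
```

The inline comment documents *why* the bug occurred, satisfying the last checklist item.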
## Acceptance Criteria
- [Specific functional outcome 1]
- [Specific functional outcome 2]
- [Specific functional outcome 3]
- All tests pass (pytest/npm test)
- All linting passes (ESLint/Flake8/mypy)
- TypeScript compilation succeeds (if TypeScript)
- Build succeeds (if applicable)
- Security requirements satisfied (if specified above)
- Documentation updated (as specified above)
- Performance benchmarks met (only if performance section included above)
## Technical Constraints
- [Performance constraint: e.g., "< 100ms response time"]
- [Compatibility constraint: e.g., "Must work with Python 3.11+"]
- [Security constraint: e.g., "Must use encrypted connections"]
- [Scale constraint: e.g., "Support up to 10 accounts per user"]
## Out of Scope
- [What NOT to include in this issue]
- [Features deferred to future issues]
- [Work that's related but separate]
## Related Issues
- Depends on: #X
- Blocked by: #Y
- Related to: #Z
## Examples
### Example 1: API Endpoint (Good - What)
```markdown
## Requirements
**Endpoint:** `POST /api/accounts`
- Parameter: `email` (string, required, valid email format)
- Parameter: `category` (string, optional, enum: ['personal', 'work'])
- Response: `201 Created` with JSON: `{ id, email, category, created_at }`
- Response: `400 Bad Request` if email invalid
- Response: `409 Conflict` if email already exists
- Validation: Email must be unique per user
- Authentication: Required (Bearer token)
```

### Example 1: API Endpoint (Bad - How)

```markdown
## Implementation
Create a route handler in `src/api/accounts.ts`:
\`\`\`typescript
router.post('/accounts', async (req, res) => {
  const { email, category } = req.body;
  // validate email
  // insert into database
  // return response
});
\`\`\`
```

### Example 2: UI Component (Good - What)

```markdown
## Requirements
**Component:** AccountSelector dropdown
- Props: `accounts` (array of account objects), `onSelect` (callback function), `selectedId` (string or null)
- Behavior: Clicking opens dropdown, selecting calls `onSelect` with account ID, clicking outside closes
- Display: Shows account email and category badge
- Styling: Matches existing dropdown components in settings panel
- Empty state: Shows "No accounts connected" message
- Accessibility: Keyboard navigable, ARIA label "Select account"
```

### Example 2: UI Component (Bad - How)

```markdown
## Implementation
Create a React component using useState for open/closed state. Use useEffect to handle click-outside. Map over the accounts array and render each one. Add onClick handlers...
```

### Example 3: Bug Fix (Good - What)

```markdown
## Requirements
**Current Behavior:**
When user logs out, cached email data persists in localStorage and appears briefly on next login before being cleared.
**Expected Behavior:**
All cached data should be cleared immediately on logout. No data from previous session should be visible.
**Affected Components:**
- Logout function (clears session but not cache)
- Email cache service (not being called on logout)
- Login flow (shows stale data before refresh)
**Edge Cases to Handle:**
- User logged out due to token expiration
- User force-quit browser during session
- Multiple tabs open when logging out
```

### Example 3: Bug Fix (Bad - How)

```markdown
## Implementation
Add `localStorage.clear()` to the logout function in `auth.ts` on line 45.
```

### Example 4: Refactoring (Good - What)

```markdown
## Requirements
**Current State:**
Email parsing logic is duplicated across 5 files: `inbox.ts`, `search.ts`, `filters.ts`, `notifications.ts`, `archive.ts`. Each implementation has slight variations causing inconsistent behavior.
**Target State:**
- Single source of truth: `EmailParser` utility class in `src/utils/`
- Methods: `parseEmailAddress()`, `extractDomain()`, `validateFormat()`, `normalizeEmail()`
- Consistent behavior: All 5 files use the same parser
- Error handling: Throws `InvalidEmailError` with descriptive messages
- Input: Accepts strings, returns parsed email object `{ local, domain, original }`
- Edge cases: Handles quoted strings, unicode, plus addressing, case sensitivity
**Files to Refactor:**
- Create: `src/utils/EmailParser.ts`
- Modify: `inbox.ts`, `search.ts`, `filters.ts`, `notifications.ts`, `archive.ts`
- Update imports and replace inline parsing logic with parser calls
```

### Example 4: Refactoring (Bad - How)

```markdown
## Implementation
Create a class with a constructor. Add methods for parsing. Use regex to match email patterns. Export the class. Import it in the 5 files and replace the code...
```
## Integration with Agent Framework
This template ensures all agents have what they need:
**issue-implementer needs:**
- ✅ Requirements section → Clear end state, technical details
- ✅ Affected Components → Which files/modules to modify
- ✅ Implementation Guidance → Problems to solve (not solutions)
- ✅ Error Handling → What error behavior to implement
**quality-checker needs:**
- ✅ Testing Expectations → What tests to write/pass
- ✅ Acceptance Criteria → References pytest, ESLint, TypeScript, build
- ⚠️ Performance Requirements → Optional, only if specific concern exists
**security-checker needs:**
- ✅ Security Requirements → Authentication, validation, vulnerabilities
- ✅ Affected Components → What handles sensitive data
**doc-checker needs:**
- ✅ Documentation Requirements → README, wiki, API docs, FEATURE-LIST.md
- ✅ Acceptance Criteria → "Documentation updated" criterion
**review-orchestrator needs:**
- ✅ Acceptance Criteria → Clear pass/fail conditions
- ✅ Technical Constraints → Non-negotiable requirements
- ✅ Out of Scope → What NOT to review
**issue-merger needs:**
- ✅ Acceptance Criteria → Unambiguous "done" definition
- ✅ Related Issues → Dependencies to check
Every section maps to an agent's needs - nothing is optional unless marked [If applicable].
## Notes
- Always specify what in detail (components, properties, behavior, constraints)
- Never prescribe how with complete implementations
- Short code examples are OK (< 10 lines): Type defs, response shapes, error formats
- Large code blocks are NOT OK (> 15 lines): Complete functions, full solutions
- Guide thinking with "you will need to figure out..."
- Make acceptance criteria specific and testable
- Reference existing patterns for the agent to follow
- Include performance, security, and accessibility requirements when relevant
- Adapt the issue structure to the type of work (database, API, UI, bug fix, refactoring)
- For database: Specify tables, columns, types, constraints
- For APIs: Specify endpoints, parameters, responses, error codes
- For UI: Specify props, behavior, styling, accessibility
- For bugs: Specify current vs expected behavior, affected components
- For refactoring: Specify current state, target state, files affected
## The Code Snippet Rule
✅ **Good:** `Request: { email: string, category?: 'personal' | 'work' }`
- Shows shape/format
- Clarifies requirements
- Helps agent understand structure
❌ **Bad:** 30-line function implementation
- Provides complete solution
- Agent doesn't need to think
- Defeats purpose of agent framework
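Rendered as actual TypeScript, the "good" snippet above stays well under the ten-line limit while fully pinning down the request and response shapes (the interface names and the `created_at` format are illustrative assumptions):

```typescript
// Shape only: enough for an agent to implement against, with every
// implementation decision still left to the agent.
interface CreateAccountRequest {
  email: string;
  category?: "personal" | "work";
}

interface CreateAccountResponse {
  id: number;
  email: string;
  category: string | null;
  created_at: string; // ISO 8601 timestamp
}
```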