# Execute Tasks
Execute the next task from an artifact.
# Task Execution Rules
## Overview
Execute a specific task and all of its sub-tasks systematically, following a test-driven development (TDD) workflow.
- IMPORTANT: For any step that specifies a subagent in the subagent="" XML attribute, you MUST use that subagent to perform the step's instructions.
- Process XML blocks sequentially
- Read and execute every numbered step in the process_flow EXACTLY as the instructions specify.
- If you need clarification on any details of your current task, stop and ask the user specific, numbered questions, then continue once you have all of the information you need.
- Use exact templates as provided
### Step 1: Task Understanding
Read and analyze the given parent task and all its sub-tasks from tasks.md to gain complete understanding of what needs to be built.
- Parent task description
- All sub-task descriptions
- Task dependencies
- Expected outcomes
ACTION: Read the specific parent task and all its sub-tasks
ANALYZE: Full scope of implementation required
UNDERSTAND: Dependencies and expected deliverables
NOTE: Test requirements for each sub-task
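The parent-task read above can be sketched as a small parser. This is a minimal sketch that assumes tasks.md uses standard markdown checkboxes (`- [ ]` / `- [x]`) with numbered parent tasks and two-space-indented sub-tasks; the exact layout and numbering scheme are assumptions, not specified by this document:

```python
import re

def read_parent_task(tasks_md: str, parent_number: int):
    """Return a parent task line and its indented sub-task lines from tasks.md."""
    lines = tasks_md.splitlines()
    parent, subtasks = None, []
    top_level = re.compile(r"^- \[[ x]\] (\d+)\. ")
    for i, line in enumerate(lines):
        m = top_level.match(line)
        if m and int(m.group(1)) == parent_number:
            parent = line
            # Collect indented sub-tasks until the next top-level item
            for sub in lines[i + 1:]:
                if sub.startswith("  - ["):
                    subtasks.append(sub.strip())
                elif top_level.match(sub):
                    break
            break
    return parent, subtasks

tasks = """\
- [ ] 1. Build login form
  - [ ] 1.1 Write tests for login form
  - [ ] 1.2 Implement form component
  - [ ] 1.3 Verify all tests pass
- [ ] 2. Add session handling
"""
parent, subs = read_parent_task(tasks, 1)
```

Reading the parent and sub-tasks together, rather than one sub-task at a time, is what gives the full scope, dependencies, and test requirements in a single pass.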
### Step 2: Technical Specification Review
Search and extract relevant sections from technical-spec.md to understand the technical implementation approach for this task.
FIND sections in technical-spec.md related to:
- Current task functionality
- Implementation approach for this feature
- Integration requirements
- Performance criteria
ACTION: Search technical-spec.md for task-relevant sections
EXTRACT: Only implementation details for current task
SKIP: Unrelated technical specifications
FOCUS: Technical approach for this specific feature
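The extract-and-skip behavior above can be sketched as a heading filter. This sketch assumes technical-spec.md is organized under `##` headings whose titles mention the feature; the heading convention and the sample section names are assumptions for illustration:

```python
def extract_sections(spec_md: str, keywords: list[str]) -> str:
    """Keep only the ## sections whose heading mentions one of the keywords."""
    kept, keep = [], False
    for line in spec_md.splitlines():
        if line.startswith("## "):
            # A new section starts: decide whether to keep or skip it
            keep = any(k.lower() in line.lower() for k in keywords)
        if keep:
            kept.append(line)
    return "\n".join(kept)

spec = """\
## Login Form
Use controlled inputs.
## Billing
Stripe integration.
"""
relevant = extract_sections(spec, ["login"])
```

Unrelated sections ("Billing" here) never enter the working context, which is the point of the SKIP directive.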
### Step 3: Standards Review
Use the context-fetcher subagent to retrieve relevant sections from ~/.research-os/standards/ that apply to the current task, including conventions.md, error-handling.md, validation.md, and testing standards.
FIND sections relevant to:
- Task's technology stack
- Feature type being implemented
- Testing approaches (from coverage.md, unit-tests.md)
- Code organization patterns (from conventions.md)
- Error handling requirements (from error-handling.md)
ACTION: Use context-fetcher subagent
REQUEST: "Find standards sections relevant to:
- Task's technology stack: [CURRENT_TECH]
- Feature type: [CURRENT_FEATURE_TYPE]
- Testing approaches needed
- Code organization patterns"
PROCESS: Returned standards
APPLY: Relevant patterns to implementation
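Before delegating, the REQUEST template's placeholders must be filled with the current task's values. A sketch of that substitution (the function name and parameter names are illustrative, not part of any defined API):

```python
def build_standards_request(tech: str, feature_type: str) -> str:
    """Fill the request template's placeholders before sending it to the subagent."""
    return (
        "Find standards sections relevant to:\n"
        f"- Task's technology stack: {tech}\n"
        f"- Feature type: {feature_type}\n"
        "- Testing approaches needed\n"
        "- Code organization patterns"
    )

request = build_standards_request("Python/FastAPI", "REST endpoint")
```

Passing concrete values rather than the literal `[CURRENT_TECH]` placeholders is what lets the context-fetcher return targeted sections instead of the whole standards directory.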
### Step 4: Code Style Review
Use the context-fetcher subagent to retrieve relevant code style rules from ~/.research-os/standards/global/coding-style.md for the languages and file types being used in this task.
FIND style rules for:
- Languages used in this task
- File types being modified
- Component patterns being implemented
- Testing style guidelines
ACTION: Use context-fetcher subagent
REQUEST: "Find code style rules for:
- Languages: [LANGUAGES_IN_TASK]
- File types: [FILE_TYPES_BEING_MODIFIED]
- Component patterns: [PATTERNS_BEING_IMPLEMENTED]
- Testing style guidelines"
PROCESS: Returned style rules
APPLY: Relevant formatting and patterns
### Step 5: Task and Sub-task Execution
Execute the parent task and all sub-tasks in order using a test-driven development (TDD) approach.
Typical sub-task structure:
1. Write tests for [feature]
2. Implementation steps
3. Verify all tests pass
IF sub-task 1 is "Write tests for [feature]":
- Write all tests for the parent feature
- Include unit tests, integration tests, edge cases
- Run tests to ensure they fail appropriately
- Mark sub-task 1 complete
FOR each implementation sub-task (2 through n-1):
- Implement the specific functionality
- Make relevant tests pass
- Update any adjacent/related tests if needed
- Refactor while keeping tests green
- Mark sub-task complete
IF final sub-task is "Verify all tests pass":
- Run entire test suite
- Fix any remaining failures
- Ensure no regressions
- Mark final sub-task complete
Test management:
- New tests:
  - Written in first sub-task
  - Cover all aspects of parent feature
  - Include edge cases and error handling
- Test updates:
  - Made during implementation sub-tasks
  - Update expectations for changed behavior
  - Maintain backward compatibility
ACTION: Execute sub-tasks in their defined order
RECOGNIZE: First sub-task typically writes all tests
IMPLEMENT: Middle sub-tasks build functionality
VERIFY: Final sub-task ensures all tests pass
UPDATE: Mark each sub-task complete as finished
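The TDD loop above — tests first, implementation until green, then a full check — can be sketched in miniature. The `slugify` feature is purely illustrative:

```python
import re
import unittest

# Sub-task 1: write all tests first (they fail until slugify exists and is correct)
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("C'est la vie!"), "cest-la-vie")

# Sub-tasks 2..n-1: implement the functionality to make the tests pass
def slugify(text: str) -> str:
    text = re.sub(r"[^\w\s-]", "", text.lower())
    return re.sub(r"\s+", "-", text).strip("-")

# Final sub-task: run this task's tests and confirm everything is green
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The order matters: seeing the tests fail first confirms they actually exercise the feature, so a later green run is meaningful.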
### Step 6: Task-Specific Test Verification
Use the test-runner subagent to run and verify only the tests specific to this parent task (not the full test suite) to ensure the feature is working correctly.
Run:
- All new tests written for this parent task
- All tests updated during this task
- Tests directly related to this feature
Skip:
- Full test suite (done later in execute-tasks.md)
- Unrelated test files
IF any test failures:
- Debug and fix the specific issue
- Re-run only the failed tests
ELSE:
- Confirm all task tests passing
- Ready to proceed
ACTION: Use test-runner subagent
REQUEST: "Run tests for [this parent task's test files]"
WAIT: For test-runner analysis
PROCESS: Returned failure information
VERIFY: 100% pass rate for task-specific tests
CONFIRM: This feature's tests are complete
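Scoping the run to the task's own test modules, rather than the full suite, can be sketched as follows. This assumes the project's tests run under Python's built-in unittest runner; swap in the project's real runner (and the real module names) as needed:

```python
import subprocess
import sys

def run_task_tests(test_modules: list[str]) -> bool:
    """Run only this parent task's tests, not the full suite.

    test_modules: dotted module names, e.g. ["tests.test_login_form"]
    (illustrative names, not defined by this document).
    """
    result = subprocess.run(
        [sys.executable, "-m", "unittest", *test_modules],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Surface failure output so the specific issue can be debugged and re-run
        print(result.stderr)
    return result.returncode == 0

ok = run_task_tests(["nonexistent_module_for_demo_xyz"])
```

A failing or missing module yields a nonzero exit code, which maps to the "debug, fix, re-run only the failed tests" branch above.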
### Step 7: Mark this task and sub-tasks complete
IMPORTANT: In the tasks.md file, mark this task and its sub-tasks complete by updating each task checkbox to [x].
Example checkbox states in tasks.md:
- [x] Task description
- [ ] Task description
- [ ] Task description
For a blocked task, append a note beneath it:
⚠️ Blocking issue: [DESCRIPTION]
Attempt a maximum of 3 different approaches; if still stuck, document the blocking issue with the ⚠️ emoji.
ACTION: Update tasks.md after each task completion
MARK: [x] for completed items immediately
DOCUMENT: Blocking issues with ⚠️ emoji
LIMIT: 3 attempts before marking as blocked
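The checkbox update can be sketched as a one-line substitution per completed item. This assumes the `- [ ]` / `- [x]` markdown convention shown above; the sample task labels are illustrative:

```python
def mark_complete(tasks_md: str, task_label: str) -> str:
    """Flip '- [ ]' to '- [x]' on the line containing task_label."""
    lines = []
    for line in tasks_md.splitlines():
        if task_label in line:
            # Replace only the first checkbox on the matching line
            line = line.replace("- [ ]", "- [x]", 1)
        lines.append(line)
    return "\n".join(lines)

tasks = "- [ ] 1. Build login form\n  - [ ] 1.1 Write tests"
updated = mark_complete(tasks, "1.1 Write tests")
```

Matching on the task's label keeps the other checkboxes untouched, which is why updates should happen immediately after each sub-task rather than in one bulk pass at the end.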
After completing all steps in a process_flow, always review your work and verify:
- Every numbered step was read, executed, and delivered according to its instructions.
- Every step that specified a subagent actually delegated its work to that subagent. IF it did not, determine why the subagent was not used and report your findings to the user.
- IF you notice a step was not executed according to its instructions, report your findings and explain which part of the instructions was misread or skipped, and why.