Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:38:18 +08:00
commit f39a68f422
10 changed files with 1819 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,14 @@
{
"name": "codex",
"description": "Invoke Codex CLI for complex coding tasks requiring high reasoning capabilities. Supports intelligent model selection, session continuation, and safe defaults.",
"version": "1.5.0",
"author": {
"name": "0xasun"
},
"skills": [
"./skills"
],
"agents": [
"./agents"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# codex
Invoke Codex CLI for complex coding tasks requiring high reasoning capabilities. Supports intelligent model selection, session continuation, and safe defaults.

agents/codex-agent.md Normal file

@@ -0,0 +1,12 @@
---
name: codex-agent
description: Invoke Codex AI for complex coding tasks, architecture design, and code reviews
when-to-use: Use when user requests Codex, needs high-reasoning coding help, or asks for design review
model: inherit
---
You are a routing agent for the Codex skill.
When invoked, use the Skill tool to invoke the "codex" skill.
Pass the user's request directly to the skill without modification.
Let the skill handle all task execution.

plugin.lock.json Normal file

@@ -0,0 +1,69 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:Lucklyric/cc-dev-tools:plugins/codex",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "f49bd429d9b6c6855db2c17ea750721ab3128cd2",
"treeHash": "beb7fc40c91745eb2d3b8cfadf06c34f042fd52076e3e8894c1ee736f604a6b4",
"generatedAt": "2025-11-28T10:12:02.806918Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "codex",
"description": "Invoke Codex CLI for complex coding tasks requiring high reasoning capabilities. Supports intelligent model selection, session continuation, and safe defaults.",
"version": "1.5.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "d0020c3e8c1d258e36048903d8f2a32b04fa8590439be341f2282defe008ce48"
},
{
"path": "agents/codex-agent.md",
"sha256": "321b86c51f392649afc080d94ecfa03005d745fff82e3b27ca33e2f819981575"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "8505f53e12539d83f2fb70613a5aba5edaa9895b3ce59253765a03c7e3d8978c"
},
{
"path": "skills/codex/SKILL.md",
"sha256": "a40e6718e94dc7682eef8399e0c75fd1bbda81d4244eb1808a382813266f979c"
},
{
"path": "skills/codex/references/advanced-patterns.md",
"sha256": "5b25c62a7db86b3ae47ea28600a19389d0ff9f90391287bbf9c62c7a0731e769"
},
{
"path": "skills/codex/references/session-workflows.md",
"sha256": "cfd58ff4aeb03316c0ac046a0f9803b036dd33b2b170a57af5b14c0edde2b95e"
},
{
"path": "skills/codex/references/codex-config.md",
"sha256": "1dce7296b48556544056e3daa67f7a79bacf7082f60c4c9b492ef468370dac48"
},
{
"path": "skills/codex/references/codex-help.md",
"sha256": "e3534217f08a0ce75fe67ba8d6ac7b1b5045d5cd172a8f196f7453dc47577022"
},
{
"path": "skills/codex/references/command-patterns.md",
"sha256": "522ea1355017e2587f05ce499942eebf952e58cc075c1e0e01e32cd9fd4086e3"
}
],
"dirSha256": "beb7fc40c91745eb2d3b8cfadf06c34f042fd52076e3e8894c1ee736f604a6b4"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

skills/codex/SKILL.md Normal file

@@ -0,0 +1,537 @@
---
name: codex
version: 1.4.0
description: Invoke Codex CLI for complex coding tasks requiring high reasoning capabilities. This skill should be invoked when users explicitly mention "Codex", request complex implementation challenges, advanced reasoning, or need high-reasoning model assistance. Automatically triggers on codex-related requests and supports session continuation for iterative development.
---
# Codex: High-Reasoning AI Assistant for Claude Code
---
## CRITICAL: Always Use `codex exec`
**MUST USE**: `codex exec` for ALL Codex CLI invocations in Claude Code.
**NEVER USE**: `codex` (interactive mode) - will fail with "stdout is not a terminal"
**ALWAYS USE**: `codex exec` (non-interactive mode)
**Examples:**
- `codex exec -m gpt-5.1 "prompt"` (CORRECT)
- `codex -m gpt-5.1 "prompt"` (WRONG - will fail)
- `codex exec resume --last` (CORRECT)
- `codex resume --last` (WRONG - will fail)
**Why?** Claude Code's bash environment is non-terminal/non-interactive. Only `codex exec` works in this environment.
---
## When to Use This Skill
This skill should be invoked when:
- User explicitly mentions "Codex" or requests Codex assistance
- User needs help with complex coding tasks, algorithms, or architecture
- User requests "high reasoning" or "advanced implementation" help
- User needs complex problem-solving or architectural design
- User wants to continue a previous Codex conversation
## How It Works
### Detecting New Codex Requests
When a user makes a request that falls into one of the above categories, determine the task type:
**General Tasks** (architecture, design, reviews, explanations):
- Use model: `gpt-5.1` (high-reasoning general model)
- Example requests: "Design a queue data structure", "Review this architecture", "Explain this algorithm"
**Code Editing Tasks** (file modifications, implementation):
- Use model: `gpt-5.1-codex-max` (maximum capability for code editing - 27-42% faster)
- Example requests: "Edit this file to add feature X", "Implement the function", "Refactor this code"
### Bash CLI Command Structure
**IMPORTANT**: Always use `codex exec` for non-interactive execution. Claude Code's bash environment is non-terminal, so the interactive `codex` command will fail with "stdout is not a terminal" error.
#### For General Reasoning Tasks (Default)
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
--enable web_search_request \
"<user's prompt>"
```
#### For Code Editing Tasks
```bash
codex exec -m gpt-5.1-codex-max -s workspace-write \
-c model_reasoning_effort=high \
--enable web_search_request \
"<user's prompt>"
```
**Why `codex exec`?**
- Non-interactive mode required for automation and Claude Code integration
- Produces clean output suitable for parsing
- Works in non-TTY environments (like Claude Code's bash)
### Model Selection Logic
**Use `gpt-5.1` (default) when:**
- Designing architecture or data structures
- Reviewing code for quality, security, or performance
- Explaining concepts or algorithms
- Planning implementation strategies
- General problem-solving and reasoning
**Use `gpt-5.1-codex-max` when:**
- Editing or modifying existing code files
- Implementing specific functions or features
- Refactoring code
- Writing new code with file I/O
- Any task requiring `workspace-write` sandbox
- Complex code editing requiring maximum reasoning capability
**Note**: For backward compatibility, `gpt-5.1-codex` (standard model) is still available and works identically. Use `gpt-5.1-codex-max` as the default for better performance (27-42% faster, 30% fewer thinking tokens).
### Default Configuration
All Codex invocations use these defaults unless user specifies otherwise:
| Parameter | Default Value | CLI Flag | Notes |
|-----------|---------------|----------|-------|
| Model | `gpt-5.1` | `-m gpt-5.1` | General reasoning tasks |
| Model (code editing) | `gpt-5.1-codex-max` | `-m gpt-5.1-codex-max` | Code editing tasks (27-42% faster) |
| Sandbox | `read-only` | `-s read-only` | Safe default (general tasks) |
| Sandbox (code editing) | `workspace-write` | `-s workspace-write` | Allows file modifications |
| Reasoning Effort | `high` | `-c model_reasoning_effort=high` | Maximum reasoning capability |
| Verbosity | `medium` | `-c model_verbosity=medium` | Balanced output detail |
| Web Search | `enabled` | `--enable web_search_request` | Access to up-to-date information |
### CLI Flags Reference
**Codex CLI Version**: 0.59.0 or later (required for `gpt-5.1-codex-max` support)
| Flag | Values | Description |
|------|--------|-------------|
| `-m, --model` | `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-max` | Model selection |
| `-s, --sandbox` | `read-only`, `workspace-write`, `danger-full-access` | Sandbox mode |
| `-c, --config` | `key=value` | Config overrides (e.g., `model_reasoning_effort=high`) |
| `-C, --cd` | directory path | Working directory |
| `-p, --profile` | profile name | Use config profile |
| `--enable` | feature name | Enable a feature (e.g., `web_search_request`) |
| `--disable` | feature name | Disable a feature |
| `-i, --image` | file path(s) | Attach image(s) to initial prompt |
| `--full-auto` | flag | Convenience for workspace-write sandbox with on-failure approval |
| `--oss` | flag | Use local open source model provider |
| `--skip-git-repo-check` | flag | Allow running outside Git repository |
| `--output-schema` | file path | JSON Schema file for response shape |
| `--color` | `always`, `never`, `auto` | Color settings for output |
| `--json` | flag | Print events as JSONL |
| `-o, --output-last-message` | file path | Save last message to file |
| `--dangerously-bypass-approvals-and-sandbox` | flag | Skip confirmations (DANGEROUS) |
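For scripting or CI use, the output-oriented flags can be combined. A minimal sketch (the file path and prompt are illustrative):
```bash
# Stream events as JSONL, disable color, and capture the final message to a file
codex exec -m gpt-5.1 -s read-only \
  --json --color never \
  -o /tmp/codex-last-message.txt \
  "Summarize the architecture of this repository"
```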
### Configuration Parameters
Pass these as `-c key=value`:
- `model_reasoning_effort`: `minimal`, `low`, `medium`, `high`, `xhigh` (default: `high`)
- **`xhigh`**: Extra-high reasoning for maximum capability (gpt-5.1-codex-max only)
- Use `xhigh` for complex architectural refactoring, long-horizon tasks, or when quality is more important than speed
- `model_verbosity`: `low`, `medium`, `high` (default: `medium`)
- `model_reasoning_summary`: `auto`, `concise`, `detailed`, `none` (default: `auto`)
- `sandbox_workspace_write.writable_roots`: JSON array of additional writable directories (e.g., `["/path1","/path2"]`)
**Note**: To specify additional writable directories beyond the workspace, use:
```bash
-c 'sandbox_workspace_write.writable_roots=["/path1","/path2"]'
```
This replaces the removed `--add-dir` flag from earlier versions.
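For example, a complete invocation granting one extra writable directory might look like this (the directory path is illustrative):
```bash
# Hypothetical extra writable root alongside the workspace
codex exec -m gpt-5.1-codex-max -s workspace-write \
  -c model_reasoning_effort=high \
  -c 'sandbox_workspace_write.writable_roots=["/home/user/shared-assets"]' \
  "Update the asset pipeline configuration"
```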
### Model Selection Guide
**Default Models (Codex CLI v0.59.0+)**
This skill defaults to the GPT-5.1 model family:
- `gpt-5.1` - General reasoning, architecture, reviews (default)
- `gpt-5.1-codex-max` - Code editing and implementation (default for code tasks)
- `gpt-5.1-codex` - Standard code editing (available for backward compatibility)
**Performance Characteristics**:
- `gpt-5.1-codex-max` is 27-42% faster than `gpt-5.1-codex`
- Uses ~30% fewer thinking tokens at the same reasoning effort level
- Supports new `xhigh` reasoning effort for maximum capability
- Requires Codex CLI 0.59.0+ and ChatGPT Plus/Pro/Business/Edu/Enterprise subscription
**Backward Compatibility**
You can override to use older models when needed:
```bash
# Use older gpt-5 model explicitly
codex exec -m gpt-5 -s read-only "Design a data structure"
# Use older gpt-5-codex model explicitly
codex exec -m gpt-5-codex -s workspace-write "Implement feature X"
```
**When to Override**
- **Testing compatibility**: Verify behavior matches older model versions
- **Specific model requirements**: Project requires specific model version
- **Model comparison**: Compare outputs between model versions
**Model Override Examples**
Override via `-m` flag:
```bash
# Override to gpt-5 for general task
codex exec -m gpt-5 "Explain algorithm complexity"
# Override to gpt-5-codex for code task
codex exec -m gpt-5-codex -s workspace-write "Refactor authentication"
# Override to gpt-4 if available
codex exec -m gpt-4 "Review this code"
```
**Default Behavior**
Without explicit `-m` override:
- General tasks → `gpt-5.1`
- Code editing tasks → `gpt-5.1-codex-max` (recommended for best performance)
- Backward compatibility → `gpt-5.1-codex` still works if explicitly specified
## Session Continuation
### Detecting Continuation Requests
When user indicates they want to continue a previous Codex conversation:
- Keywords: "continue", "resume", "keep going", "add to that"
- Follow-up context referencing previous Codex work
- Explicit request like "continue where we left off"
### Resuming Sessions
For continuation requests, use the `codex resume` command:
#### Resume Most Recent Session (Recommended)
```bash
codex exec resume --last
```
This automatically continues the most recent Codex session with all previous context maintained.
#### Resume Specific Session
```bash
codex exec resume <session-id>
```
Resume a specific session by providing its UUID. Get session IDs from previous Codex output or by running `codex exec resume --last` to see the most recent session.
**Note**: The interactive session picker (`codex resume` without arguments) is NOT available in non-interactive/Claude Code environments. Always use `--last` or provide explicit session ID.
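Since `codex exec resume` also accepts an optional prompt argument (see `references/codex-help.md`), the follow-up instruction can be sent in the same call. A sketch (the UUID is hypothetical):
```bash
# Resume a specific session by UUID and send the next instruction
codex exec resume 3f2b9c1e-8a4d-4f6b-9c2e-1a2b3c4d5e6f "Add unit tests for the queue implementation"

# Resume the most recent session, reading the follow-up prompt from stdin
echo "Add unit tests for the queue implementation" | codex exec resume --last -
```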
### Decision Logic: New vs. Continue
**Use `codex exec -m ... "<prompt>"`** when:
- User makes a new, independent request
- No reference to previous Codex work
- User explicitly wants a "fresh" or "new" session
**Use `codex exec resume --last`** when:
- User indicates continuation ("continue", "resume", "add to that")
- Follow-up question building on previous Codex conversation
- Iterative development on same task
### Session History Management
- Codex CLI automatically saves session history
- No manual session ID tracking needed
- Sessions persist across Claude Code restarts
- Use `codex exec resume --last` to access most recent session
- Use `codex exec resume <session-id>` for specific sessions
## Error Handling
### Simple Error Response Strategy
When errors occur, return clear, actionable messages without complex diagnostics:
**Error Message Format:**
```
Error: [Clear description of what went wrong]
To fix: [Concrete remediation action]
[Optional: Specific command example]
```
### Common Errors
#### Command Not Found
```
Error: Codex CLI not found
To fix: Install Codex CLI and ensure it's available in your PATH
Check installation: codex --version
```
#### Authentication Required
```
Error: Not authenticated with Codex
To fix: Run 'codex login' to authenticate
After authentication, try your request again.
```
#### Invalid Configuration
```
Error: Invalid model specified
To fix: Use 'gpt-5.1' for general reasoning or 'gpt-5.1-codex-max' for code editing (gpt-5.1-codex also available for backward compatibility)
Example: codex exec -m gpt-5.1 "your prompt here"
Example: codex exec -m gpt-5.1-codex-max -s workspace-write "code editing task"
```
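A minimal preflight sketch covers the first error case before any `codex exec` command is built:
```bash
# Confirm the Codex CLI is installed and on PATH before invoking it
if ! command -v codex >/dev/null 2>&1; then
  echo "Error: Codex CLI not found" >&2
  echo "To fix: Install Codex CLI and ensure it's available in your PATH" >&2
  exit 1
fi
codex --version   # quick installation check
```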
### Troubleshooting
**First Steps for Any Issues:**
1. Check Codex CLI built-in help: `codex --help`, `codex exec --help`, `codex exec resume --help`
2. Consult official documentation: [https://github.com/openai/codex/tree/main/docs](https://github.com/openai/codex/tree/main/docs)
3. Verify skill resources in `references/` directory
**Skill not being invoked?**
- Check that request matches trigger keywords (Codex, complex coding, high reasoning, etc.)
- Explicitly mention "Codex" in your request
- Try: "Use Codex to help me with..."
**Session not resuming?**
- Verify you have a previous Codex session (check command output for session IDs)
- Try: `codex exec resume --last` to resume most recent session
- If no history exists, start a new session first
**"stdout is not a terminal" error?**
- Always use `codex exec` instead of plain `codex` in Claude Code
- Claude Code's bash environment is non-interactive/non-terminal
**Errors during execution?**
- Codex CLI errors are passed through directly
- Check Codex CLI logs for detailed diagnostics
- Verify working directory permissions if using workspace-write
- Check official Codex docs for latest updates and known issues
## Examples
### Example 1: General Reasoning Task (Architecture Design)
**User Request**: "Help me design a binary search tree architecture in Rust"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Help me design a binary search tree architecture in Rust"
```
**Result**: Codex provides high-reasoning architectural guidance using gpt-5.1. Session automatically saved for continuation.
---
### Example 2: Code Editing Task
**User Request**: "Edit this file to implement the BST insert method"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex-max -s workspace-write \
-c model_reasoning_effort=high \
"Edit this file to implement the BST insert method"
```
**Result**: Codex uses gpt-5.1-codex-max (maximum capability for coding - 27-42% faster) with workspace-write permissions to modify files.
---
### Example 3: Session Continuation
**User Request**: "Continue with the BST - add a deletion method"
**Skill Executes**:
```bash
codex exec resume --last
```
**Result**: Codex resumes the previous BST session and continues with deletion method implementation, maintaining full context.
---
### Example 4: Custom Configuration
**User Request**: "Use Codex with web search to research and implement async patterns"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex-max -s workspace-write \
-c model_reasoning_effort=high \
--enable web_search_request \
"Research and implement async patterns"
```
**Result**: Codex uses web search capability for latest information, then implements with high reasoning and maximum code editing capability.
---
### Example 5: Maximum Reasoning with xhigh
**User Request**: "Perform complex architectural refactoring of authentication system"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex-max -s workspace-write \
-c model_reasoning_effort=xhigh \
"Perform complex architectural refactoring of authentication system"
```
**Result**: Codex uses extra-high reasoning effort (xhigh) for maximum capability on complex long-horizon tasks. Ideal for architectural refactoring where quality is more important than speed.
---
## New in v0.53.0
### Feature Flags (`--enable` / `--disable`)
Enable or disable specific Codex features:
```bash
codex exec --enable web_search_request "Research latest patterns"
codex exec --disable some_feature "Run without feature"
```
### Image Attachment (`-i, --image`)
Attach images to prompts for visual analysis:
```bash
codex exec -i screenshot.png "Analyze this UI design"
codex exec -i diagram1.png -i diagram2.png "Compare these architectures"
```
### Non-Git Environments (`--skip-git-repo-check`)
Run Codex outside Git repositories:
```bash
codex exec --skip-git-repo-check "Help with this script"
```
### Structured Output (`--output-schema`)
Define JSON schema for model responses:
```bash
codex exec --output-schema schema.json "Generate structured data"
```
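A fuller sketch writes a small schema file first (the schema shape and file names are illustrative):
```bash
# Write a minimal JSON Schema and ask Codex to conform its final answer to it
cat > /tmp/review-schema.json <<'EOF'
{
  "type": "object",
  "properties": {
    "summary": { "type": "string" },
    "issues": { "type": "array", "items": { "type": "string" } }
  },
  "required": ["summary", "issues"]
}
EOF
codex exec -m gpt-5.1 -s read-only \
  --output-schema /tmp/review-schema.json \
  -o /tmp/review-result.json \
  "Review this module and report a summary plus a list of issues"
```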
### Output Coloring (`--color`)
Control colored output (always, never, auto):
```bash
codex exec --color never "Run in CI/CD pipeline"
```
### Web Search Migration
**Deprecated**: `--search` flag (not available in `codex exec`)
**New**: Use `--enable web_search_request` instead
```bash
# Old (invalid for codex exec)
codex --search "research topic"
# New (correct)
codex exec --enable web_search_request "research topic"
```
---
## When to Use GPT-5.1 vs GPT-5.1-Codex-Max
### Use GPT-5.1 (General High-Reasoning) For:
- Architecture and system design
- Code reviews and quality analysis
- Security audits and vulnerability assessment
- Performance optimization strategies
- Algorithm design and analysis
- Explaining complex concepts
- Planning and strategy
### Use GPT-5.1-Codex-Max (Maximum Code Capability) For:
- Editing existing code files (27-42% faster than standard codex)
- Implementing specific features
- Refactoring and code transformations
- Writing new code with file I/O
- Code generation tasks
- Debugging and fixes requiring file changes
- Complex architectural refactoring (with `xhigh` reasoning effort)
### Use GPT-5.1-Codex (Standard Code Model) For:
- Backward compatibility scenarios
- When you need to replicate behavior from earlier versions
- Explicit requirement to use the standard (non-max) model
**Default**: When in doubt, use `gpt-5.1` for general tasks. Use `gpt-5.1-codex-max` when specifically editing code for best performance and quality.
## Best Practices
### 1. Use Descriptive Requests
**Good**: "Help me implement a thread-safe queue with priority support in Python"
**Vague**: "Code help"
Clear, specific requests get better results from high-reasoning models.
### 2. Indicate Continuation Clearly
**Good**: "Continue with that queue implementation - add unit tests"
**Unclear**: "Add tests" (might start new session)
Explicit continuation keywords help the skill choose the right command.
### 3. Specify Permissions When Needed
**Good**: "Refactor this code (allow file writing)"
**Risky**: Assuming permissions without specifying
Make your intent clear when you need workspace-write permissions.
### 4. Leverage High Reasoning
The skill defaults to high reasoning effort - perfect for:
- Complex algorithms
- Architecture design
- Performance optimization
- Security reviews
## Platform & Capabilities (v0.53.0)
### Windows Sandbox Support
Windows sandbox is now available in alpha (experimental). Use with caution in production environments.
### Interactive Mode Features
The `/exit` slash-command alias is available in interactive `codex` mode (not applicable to `codex exec` non-interactive mode used by this skill).
### Model Verbosity Override
All code editing models (gpt-5.1-codex-max, gpt-5.1-codex) support verbosity override via `-c model_verbosity=<level>` for controlling output detail levels.
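For example (a sketch using only documented flags):
```bash
# Keep the final answer terse while still using high reasoning effort
codex exec -m gpt-5.1-codex-max -s workspace-write \
  -c model_reasoning_effort=high \
  -c model_verbosity=low \
  "Apply the agreed rename across the module"
```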
## Pattern References
For command construction examples and workflow patterns, Claude can reference:
- `references/command-patterns.md` - Common codex exec usage patterns
- `references/session-workflows.md` - Session continuation and resume workflows
- `references/advanced-patterns.md` - Complex configuration and flag combinations
These files provide detailed examples for constructing valid codex exec commands for various scenarios.
## Additional Resources
For more details, see:
- `references/codex-help.md` - Codex CLI command reference
- `references/codex-config.md` - Full configuration options
- `README.md` - Installation and quick start guide

skills/codex/references/advanced-patterns.md Normal file

@@ -0,0 +1,425 @@
# Advanced Configuration Examples
---
## ⚠️ CRITICAL: Always Use `codex exec`
**ALL commands in this document use `codex exec` - this is mandatory in Claude Code.**
**NEVER**: `codex -m ...` or `codex --flag ...` (will fail with "stdout is not a terminal")
**ALWAYS**: `codex exec -m ...` or `codex exec --flag ...` (correct non-interactive mode)
Claude Code's bash environment is non-terminal. Plain `codex` commands will NOT work.
---
## Custom Model Selection
### Example 1: Force GPT-5 for Code Task
**User Request**: "Use GPT-5.1 (not Codex) to review this code for architecture issues"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Review this code for architecture issues"
```
**Why**: Even though it's code-related, user wants architectural review (high-reasoning) rather than code editing.
---
### Example 2: Explicit GPT-5.1-Codex for Implementation
**User Request**: "Use GPT-5.1-Codex to implement the authentication module"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
"Implement the authentication module"
```
**Why**: Implementation requires file writing and code generation (gpt-5.1-codex specialty).
---
## Workspace Write Permission
### Example 3: Allow File Modifications
**User Request**: "Have Codex refactor this codebase (allow file writing)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
"Refactor this codebase for better maintainability"
```
**Permission**: `workspace-write` allows Codex to modify files directly.
⚠️ **Warning**: Only use `workspace-write` when you trust the operation and want file modifications.
---
### Example 4: Read-Only Code Review
**User Request**: "Review this code for security vulnerabilities (read-only)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Review this code for security vulnerabilities"
```
**Permission**: `read-only` prevents file modifications - safer for review tasks.
---
## Web Search Integration
### Example 5: Research Latest Patterns
**User Request**: "Research latest Python async patterns and implement them (enable web search)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
--enable web_search_request \
"Research latest Python async patterns and implement them"
```
**Feature**: `--enable web_search_request` turns on web search for up-to-date information.
---
### Example 6: Security Best Practices Research
**User Request**: "Use web search to find latest JWT security best practices, then review this auth code"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
--enable web_search_request \
"Find latest JWT security best practices and review this auth code"
```
---
## Reasoning Effort Control
### Example 7: Maximum Reasoning for Complex Algorithm
**User Request**: "Design an optimal algorithm for distributed consensus (maximum reasoning)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Design an optimal algorithm for distributed consensus"
```
**Default**: Already uses `high` reasoning effort.
---
### Example 8: Quick Code Review (Lower Reasoning)
**User Request**: "Quick syntax check on this code (low reasoning)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=low \
"Quick syntax check on this code"
```
**Use Case**: Fast turnaround for simple tasks.
---
## Verbosity Control
### Example 9: Detailed Explanation
**User Request**: "Explain this algorithm in detail (high verbosity)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
-c model_verbosity=high \
"Explain this algorithm in detail"
```
**Output**: Comprehensive, detailed explanation.
---
### Example 10: Concise Summary
**User Request**: "Briefly review this code (low verbosity)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
-c model_verbosity=low \
"Review this code"
```
**Output**: Concise, focused feedback.
---
## Working Directory Control
### Example 11: Specific Project Directory
**User Request**: "Work in the backend directory and review the API code"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
-C ./backend \
"Review the API code"
```
**Feature**: `-C` flag sets working directory for Codex.
---
## Approval Policy
### Example 12: Request Approval for Shell Commands
**User Request**: "Implement the build script (ask before running commands)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
-a on-request \
"Implement the build script"
```
**Safety**: `-a on-request` requires approval before executing shell commands.
---
## Combined Advanced Configuration
### Example 13: Full-Featured Request
**User Request**: "Use web search to find latest security practices, review my auth module in detail with high reasoning, allow file fixes if needed (ask for approval)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
-c model_verbosity=high \
-a on-request \
--enable web_search_request \
"Find latest security practices, review my auth module in detail, and fix issues"
```
**Features**:
- Web search enabled (`--enable web_search_request`)
- High reasoning (`model_reasoning_effort=high`)
- Detailed output (`model_verbosity=high`)
- File writing allowed (`workspace-write`)
- Requires approval for commands (`-a on-request`)
---
## Decision Tree: When to Use GPT-5.1 vs GPT-5.1-Codex
### Use GPT-5.1 For:
```
┌─────────────────────────────────────┐
│ Architecture & Design │
│ - System architecture │
│ - API design │
│ - Data structure design │
│ - Algorithm analysis │
├─────────────────────────────────────┤
│ Analysis & Review │
│ - Code reviews │
│ - Security audits │
│ - Performance analysis │
│ - Quality assessment │
├─────────────────────────────────────┤
│ Explanation & Learning │
│ - Concept explanations │
│ - Documentation review │
│ - Trade-off analysis │
│ - Best practices guidance │
└─────────────────────────────────────┘
```
### Use GPT-5.1-Codex For:
```
┌─────────────────────────────────────┐
│ Code Editing │
│ - Modify existing files │
│ - Implement features │
│ - Refactoring │
│ - Bug fixes │
├─────────────────────────────────────┤
│ Code Generation │
│ - Write new code │
│ - Generate boilerplate │
│ - Create test files │
│ - Scaffold projects │
├─────────────────────────────────────┤
│ File Operations │
│ - Multi-file changes │
│ - Batch updates │
│ - Migration scripts │
│ - Build configurations │
└─────────────────────────────────────┘
```
---
## Sandbox Mode Decision Matrix
| Task | Recommended Sandbox | Rationale |
|------|---------------------|-----------|
| Code review | `read-only` | No modifications needed |
| Architecture design | `read-only` | Planning phase only |
| Security audit | `read-only` | Analysis without changes |
| Implement feature | `workspace-write` | Requires file modifications |
| Refactor code | `workspace-write` | Must edit existing files |
| Generate new files | `workspace-write` | Creates new files |
| Bug fix | `workspace-write` | Edits source files |
---
## Configuration Profiles
### Create a Config Profile
You can create reusable configuration profiles in `~/.codex/config.toml`:
```toml
[profiles.review]
model = "gpt-5.1"
sandbox = "read-only"
model_reasoning_effort = "high"
model_verbosity = "medium"
[profiles.implement]
model = "gpt-5.1-codex"
sandbox = "workspace-write"
model_reasoning_effort = "high"
approval_policy = "on-request"
```
### Use Profile in Skill
**User Request**: "Use the review profile to analyze this code"
**Skill Executes**:
```bash
codex exec -p review "Analyze this code"
```
**Result**: Uses all settings from `[profiles.review]`.
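The `implement` profile from the same file can be used the same way for code-editing work (the prompt is illustrative):
```bash
codex exec -p implement "Implement the pagination feature"
```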
---
## Best Practices
### 1. Match Model to Task Type
- **Thinking/Design** → GPT-5.1
- **Doing/Coding** → GPT-5.1-Codex
### 2. Use Safe Defaults, Override Intentionally
- Default to `read-only` unless file writing is explicitly needed
- Default to `high` reasoning for complex tasks
- Reduce reasoning effort only for simple, quick tasks
### 3. Combine Web Search with High Reasoning
For best results researching current practices:
```bash
codex exec -m gpt-5.1 --enable web_search_request \
-c model_reasoning_effort=high \
"Research latest distributed systems patterns"
```
### 4. Request Approval for Risky Operations
Use `-a on-request` when:
- Working with production code
- Running shell commands
- Making broad changes
---
## Common Patterns
### Pattern 1: Research → Design → Implement
**Phase 1 - Research** (GPT-5.1 + web search):
```bash
codex exec -m gpt-5.1 --enable web_search_request \
-c model_reasoning_effort=high \
"Research latest authentication patterns"
```
**Phase 2 - Design** (GPT-5.1 + high reasoning):
```bash
codex exec resume --last
# "Design the authentication system based on research"
```
**Phase 3 - Implement** (GPT-5.1-Codex + workspace-write):
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
"Implement the authentication system we designed"
```
---
### Pattern 2: Review → Fix → Verify
**Review** (GPT-5.1 + read-only):
```bash
codex exec -m gpt-5.1 -s read-only \
"Review this code for security issues"
```
**Fix** (GPT-5.1-Codex + workspace-write):
```bash
codex exec resume --last
# "Fix the security issues identified"
```
**Verify** (GPT-5.1 + read-only):
```bash
codex exec resume --last
# "Verify the fixes are correct"
```
---
## Next Steps
- **Basic usage**: See [command-patterns.md](./command-patterns.md)
- **Session continuation**: See [session-workflows.md](./session-workflows.md)
- **Full documentation**: See [../SKILL.md](../SKILL.md)
- **CLI reference**: See [codex-help.md](./codex-help.md)
- **Config reference**: See [codex-config.md](./codex-config.md)

skills/codex/references/codex-config.md Normal file

@@ -0,0 +1,57 @@
# Config reference
| Key | Type / Values | Notes |
|-----|---------------|-------|
| `model` | string | Model to use (e.g., `gpt-5.1-codex`). |
| `model_provider` | string | Provider id from `model_providers` (default: `openai`). |
| `model_context_window` | number | Context window tokens. |
| `model_max_output_tokens` | number | Max output tokens. |
| `approval_policy` | `untrusted` \| `on-failure` \| `on-request` \| `never` | When to prompt for approval. |
| `sandbox_mode` | `read-only` \| `workspace-write` \| `danger-full-access` | OS sandbox policy. |
| `sandbox_workspace_write.writable_roots` | array | Extra writable roots in workspace-write. |
| `sandbox_workspace_write.network_access` | boolean | Allow network in workspace-write (default: false). |
| `sandbox_workspace_write.exclude_tmpdir_env_var` | boolean | Exclude `$TMPDIR` from writable roots (default: false). |
| `sandbox_workspace_write.exclude_slash_tmp` | boolean | Exclude `/tmp` from writable roots (default: false). |
| `notify` | array | External program for notifications. |
| `instructions` | string | Currently ignored; use `experimental_instructions_file` or `AGENTS.md`. |
| `mcp_servers.<id>.command` | string | MCP server launcher command (stdio servers only). |
| `mcp_servers.<id>.args` | array | MCP server args (stdio servers only). |
| `mcp_servers.<id>.env` | map<string,string> | MCP server env vars (stdio servers only). |
| `mcp_servers.<id>.url` | string | MCP server URL (streamable HTTP servers only). |
| `mcp_servers.<id>.bearer_token_env_var` | string | Environment variable containing a bearer token to use for auth (streamable HTTP servers only). |
| `mcp_servers.<id>.enabled` | boolean | When false, Codex skips starting the server (default: true). |
| `mcp_servers.<id>.startup_timeout_sec` | number | Startup timeout in seconds (default: 10). Applied both to initializing the MCP server and to initially listing tools. |
| `mcp_servers.<id>.tool_timeout_sec` | number | Per-tool timeout in seconds (default: 60). Accepts fractional values; omit to use the default. |
| `mcp_servers.<id>.enabled_tools` | array | Restrict the server to the listed tool names. |
| `mcp_servers.<id>.disabled_tools` | array | Remove the listed tool names after applying `enabled_tools`, if any. |
| `model_providers.<id>.name` | string | Display name. |
| `model_providers.<id>.base_url` | string | API base URL. |
| `model_providers.<id>.env_key` | string | Env var for API key. |
| `model_providers.<id>.wire_api` | `chat` \| `responses` | Protocol used (default: `chat`). |
| `model_providers.<id>.query_params` | map<string,string> | Extra query params (e.g., Azure `api-version`). |
| `model_providers.<id>.http_headers` | map<string,string> | Additional static headers. |
| `model_providers.<id>.env_http_headers` | map<string,string> | Headers sourced from env vars. |
| `model_providers.<id>.request_max_retries` | number | Per-provider HTTP retry count (default: 4). |
| `model_providers.<id>.stream_max_retries` | number | SSE stream retry count (default: 5). |
| `model_providers.<id>.stream_idle_timeout_ms` | number | SSE idle timeout (ms) (default: 300000). |
| `project_doc_max_bytes` | number | Max bytes to read from `AGENTS.md`. |
| `profile` | string | Active profile name. |
| `profiles.<name>.*` | various | Profile-scoped overrides of the same keys. |
| `history.persistence` | `save-all` \| `none` | History file persistence (default: `save-all`). |
| `history.max_bytes` | number | Currently ignored (not enforced). |
| `file_opener` | `vscode` \| `vscode-insiders` \| `windsurf` \| `cursor` \| `none` | URI scheme for clickable citations (default: `vscode`). |
| `tui` | table | TUI-specific options. |
| `tui.notifications` | boolean \| array | Enable desktop notifications in the TUI (default: false). |
| `hide_agent_reasoning` | boolean | Hide model reasoning events. |
| `show_raw_agent_reasoning` | boolean | Show raw reasoning (when available). |
| `model_reasoning_effort` | `minimal` \| `low` \| `medium` \| `high` | Responses API reasoning effort. |
| `model_reasoning_summary` | `auto` \| `concise` \| `detailed` \| `none` | Reasoning summaries. |
| `model_verbosity` | `low` \| `medium` \| `high` | GPT-5 text verbosity (Responses API). |
| `model_supports_reasoning_summaries` | boolean | Force-enable reasoning summaries. |
| `model_reasoning_summary_format` | `none` \| `experimental` | Force reasoning summary format. |
| `chatgpt_base_url` | string | Base URL for ChatGPT auth flow. |
| `experimental_instructions_file` | string (path) | Replace built-in instructions (experimental). |
| `experimental_use_exec_command_tool` | boolean | Use experimental exec command tool. |
| `projects.<path>.trust_level` | string | Mark project/worktree as trusted (only "trusted" is recognized). |
| `tools.web_search_request` | boolean | Enable web search tool (default: false). Deprecated alias: `tools.web_search`. |
| `forced_login_method` | `chatgpt` \| `api` | Only allow Codex to be used with ChatGPT or API keys. |
| `forced_chatgpt_workspace_id` | string (uuid) | Only allow Codex to be used with the specified ChatGPT workspace. |
| `tools.view_image` | boolean | Enable the `view_image` tool so Codex can attach local image files from the workspace (default: false). |

skills/codex/references/codex-help.md Normal file

@@ -0,0 +1,227 @@
# Codex CLI Help Reference
**Version**: 0.58.0
## Main Command: `codex --help`
```
Codex CLI
If no subcommand is specified, options will be forwarded to the interactive CLI.
Usage: codex [OPTIONS] [PROMPT]
codex [OPTIONS] <COMMAND> [ARGS]
Commands:
exec Run Codex non-interactively [aliases: e]
login Manage login
logout Remove stored authentication credentials
mcp [experimental] Run Codex as an MCP server and manage MCP servers
mcp-server [experimental] Run the Codex MCP server (stdio transport)
app-server [experimental] Run the app server or related tooling
completion Generate shell completion scripts
sandbox Run commands within a Codex-provided sandbox [aliases: debug]
apply Apply the latest diff produced by Codex agent as a `git apply` to your local working
tree [aliases: a]
resume Resume a previous interactive session (picker by default; use --last to continue the
most recent)
cloud [EXPERIMENTAL] Browse tasks from Codex Cloud and apply changes locally
features Inspect feature flags
help Print this message or the help of the given subcommand(s)
Arguments:
[PROMPT]
Optional user prompt to start the session
Options:
-c, --config <key=value>
Override a configuration value that would otherwise be loaded from `~/.codex/config.toml`.
Use a dotted path (`foo.bar.baz`) to override nested values. The `value` portion is parsed
as TOML. If it fails to parse as TOML, the raw string is used as a literal.
Examples: - `-c model="o3"` - `-c 'sandbox_permissions=["disk-full-read-access"]'` - `-c
shell_environment_policy.inherit=all`
--enable <FEATURE>
Enable a feature (repeatable). Equivalent to `-c features.<name>=true`
--disable <FEATURE>
Disable a feature (repeatable). Equivalent to `-c features.<name>=false`
-i, --image <FILE>...
Optional image(s) to attach to the initial prompt
-m, --model <MODEL>
Model the agent should use
--oss
Convenience flag to select the local open source model provider. Equivalent to -c
model_provider=oss; verifies a local Ollama server is running
-p, --profile <CONFIG_PROFILE>
Configuration profile from config.toml to specify default options
-s, --sandbox <SANDBOX_MODE>
Select the sandbox policy to use when executing model-generated shell commands
[possible values: read-only, workspace-write, danger-full-access]
-a, --ask-for-approval <APPROVAL_POLICY>
Configure when the model requires human approval before executing a command
Possible values:
- untrusted: Only run "trusted" commands (e.g. ls, cat, sed) without asking for user
approval. Will escalate to the user if the model proposes a command that is not in the
"trusted" set
- on-failure: Run all commands without asking for user approval. Only asks for approval if
a command fails to execute, in which case it will escalate to the user to ask for
un-sandboxed execution
- on-request: The model decides when to ask the user for approval
- never: Never ask for user approval Execution failures are immediately returned to
the model
--full-auto
Convenience alias for low-friction sandboxed automatic execution (-a on-request, --sandbox
workspace-write)
--dangerously-bypass-approvals-and-sandbox
Skip all confirmation prompts and execute commands without sandboxing. EXTREMELY
DANGEROUS. Intended solely for running in environments that are externally sandboxed
-C, --cd <DIR>
Tell the agent to use the specified directory as its working root
--search
Enable web search (off by default). When enabled, the native Responses `web_search` tool
is available to the model (no per-call approval)
--add-dir <DIR>
Additional directories that should be writable alongside the primary workspace
-h, --help
Print help (see a summary with '-h')
-V, --version
Print version
```
## Exec Command: `codex exec --help`
```
Run Codex non-interactively
Usage: codex exec [OPTIONS] [PROMPT] [COMMAND]
Commands:
resume Resume a previous session by id or pick the most recent with --last
help Print this message or the help of the given subcommand(s)
Arguments:
[PROMPT]
Initial instructions for the agent. If not provided as an argument (or if `-` is used),
instructions are read from stdin
Options:
-c, --config <key=value>
Override a configuration value that would otherwise be loaded from `~/.codex/config.toml`.
Use a dotted path (`foo.bar.baz`) to override nested values. The `value` portion is parsed
as TOML. If it fails to parse as TOML, the raw string is used as a literal.
Examples: - `-c model="o3"` - `-c 'sandbox_permissions=["disk-full-read-access"]'` - `-c
shell_environment_policy.inherit=all`
--enable <FEATURE>
Enable a feature (repeatable). Equivalent to `-c features.<name>=true`
-i, --image <FILE>...
Optional image(s) to attach to the initial prompt
--disable <FEATURE>
Disable a feature (repeatable). Equivalent to `-c features.<name>=false`
-m, --model <MODEL>
Model the agent should use
--oss
-s, --sandbox <SANDBOX_MODE>
Select the sandbox policy to use when executing model-generated shell commands
[possible values: read-only, workspace-write, danger-full-access]
-p, --profile <CONFIG_PROFILE>
Configuration profile from config.toml to specify default options
--full-auto
Convenience alias for low-friction sandboxed automatic execution (-a on-request, --sandbox
workspace-write)
--dangerously-bypass-approvals-and-sandbox
Skip all confirmation prompts and execute commands without sandboxing. EXTREMELY
DANGEROUS. Intended solely for running in environments that are externally sandboxed
-C, --cd <DIR>
Tell the agent to use the specified directory as its working root
--skip-git-repo-check
Allow running Codex outside a Git repository
--output-schema <FILE>
Path to a JSON Schema file describing the model's final response shape
--color <COLOR>
Specifies color settings for use in the output
[default: auto]
[possible values: always, never, auto]
--json
Print events to stdout as JSONL
-o, --output-last-message <FILE>
Specifies file where the last message from the agent should be written
-h, --help
Print help (see a summary with '-h')
-V, --version
Print version
```
## Exec Resume Command: `codex exec resume --help`
```
Resume a previous session by id or pick the most recent with --last
Usage: codex exec resume [OPTIONS] [SESSION_ID] [PROMPT]
Arguments:
[SESSION_ID]
Conversation/session id (UUID). When provided, resumes this session. If omitted, use
--last to pick the most recent recorded session
[PROMPT]
Prompt to send after resuming the session. If `-` is used, read from stdin
Options:
-c, --config <key=value>
Override a configuration value that would otherwise be loaded from `~/.codex/config.toml`.
Use a dotted path (`foo.bar.baz`) to override nested values. The `value` portion is parsed
as TOML. If it fails to parse as TOML, the raw string is used as a literal.
Examples: - `-c model="o3"` - `-c 'sandbox_permissions=["disk-full-read-access"]'` - `-c
shell_environment_policy.inherit=all`
--last
Resume the most recent recorded session (newest) without specifying an id
--enable <FEATURE>
Enable a feature (repeatable). Equivalent to `-c features.<name>=true`
--disable <FEATURE>
Disable a feature (repeatable). Equivalent to `-c features.<name>=false`
-h, --help
Print help (see a summary with '-h')
```

skills/codex/references/command-patterns.md Normal file

@@ -0,0 +1,188 @@
# Basic Usage Examples
---
## ⚠️ CRITICAL: Always Use `codex exec`
**ALL commands in this document use `codex exec` - this is mandatory in Claude Code.**
**NEVER**: `codex -m ...` (will fail with "stdout is not a terminal")
**ALWAYS**: `codex exec -m ...` (correct non-interactive mode)
Claude Code's bash environment is non-terminal. Plain `codex` commands will NOT work.
---
## Example 1: General Reasoning Task - Queue Design
### User Request
"Help me design a queue data structure in Python"
### What Happens
1. **Claude detects** the coding task (queue design)
2. **Skill is invoked** autonomously
3. **Codex CLI is called** with gpt-5.1 (general high-reasoning model):
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Help me design a queue data structure in Python"
```
4. **Codex responds** with high-reasoning architectural guidance on queue design
5. **Session is auto-saved** for potential continuation
### Expected Output
Codex provides:
- Queue design principles and trade-offs
- Multiple implementation approaches (list-based, deque, linked-list)
- Performance characteristics (O(1) enqueue/dequeue)
- Thread-safety considerations
- Usage examples and best practices
---
## Example 2: Code Editing Task - Implement Queue
### User Request
"Edit my Python file to implement the queue with thread-safety"
### What Happens
1. **Skill detects** code editing request
2. **Uses gpt-5.1-codex-max** (maximum capability for coding - 27-42% faster):
```bash
codex exec -m gpt-5.1-codex-max -s workspace-write \
-c model_reasoning_effort=high \
"Edit my Python file to implement the queue with thread-safety"
```
3. **Codex performs code editing** with maximum capability model
4. **Files are modified** (workspace-write sandbox)
### Expected Output
Codex:
- Edits the target Python file
- Implements thread-safe queue using `threading.Lock`
- Adds proper synchronization primitives
- Includes docstrings and type hints
- Provides usage examples
---
## Example 3: Explicit Codex Request
### User Request
"Use Codex to design a REST API for a blog system"
### What Happens
1. **Explicit "Codex" mention** triggers skill
2. **Codex invoked** with high-reasoning design settings:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Design a REST API for a blog system"
```
3. **High-reasoning analysis** provides comprehensive API design
### Expected Output
Codex delivers:
- RESTful endpoint design (GET/POST/PUT/DELETE)
- Resource modeling (posts, authors, comments)
- Authentication and authorization strategy
- Data validation approaches
- API versioning recommendations
- Error handling patterns
---
## Example 4: Complex Algorithm Design
### User Request
"Help me implement a binary search tree with balancing"
### What Happens
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Help me implement a binary search tree with balancing"
```
### Expected Output
Codex provides:
- BST fundamentals and invariants
- AVL vs Red-Black tree trade-offs
- Rotation algorithms (left, right, left-right, right-left)
- Insertion and deletion with rebalancing
- Complexity analysis
- Implementation guidance
---
## Example 5: Maximum Reasoning with xhigh
### User Request
"Refactor the authentication system with comprehensive security improvements"
### What Happens
```bash
codex exec -m gpt-5.1-codex-max -s workspace-write \
-c model_reasoning_effort=xhigh \
"Refactor the authentication system with comprehensive security improvements"
```
### Expected Output
Codex provides:
- Deep architectural analysis of current system
- Comprehensive security vulnerability assessment
- Multi-layered refactoring strategy
- Implementation of security best practices
- Detailed reasoning about trade-offs
- Long-horizon planning for complex changes
**When to use xhigh**: Complex architectural refactoring, security-critical changes, long-horizon tasks where quality is more important than speed.
---
## Model Selection Summary
| Task Type | Model | Sandbox | Example |
|-----------|-------|---------|---------|
| General reasoning | `gpt-5.1` | `read-only` | "Design a queue" |
| Architecture design | `gpt-5.1` | `read-only` | "Design REST API" |
| Code review | `gpt-5.1` | `read-only` | "Review this code" |
| Code editing (standard) | `gpt-5.1-codex-max` | `workspace-write` | "Edit file to add X" |
| Code editing (maximum reasoning) | `gpt-5.1-codex-max` + `xhigh` | `workspace-write` | "Complex refactoring" |
| Implementation | `gpt-5.1-codex-max` | `workspace-write` | "Implement function Y" |
| Backward compatibility | `gpt-5.1-codex` | `workspace-write` | "Use standard model" |
**Note**: `gpt-5.1-codex-max` is 27-42% faster than `gpt-5.1-codex` and uses ~30% fewer thinking tokens. It supports a new `xhigh` reasoning effort level for maximum capability.
---
## Tips for Best Results
1. **Be specific** in your requests - detailed prompts get better reasoning
2. **Indicate task type** clearly (design vs. implementation)
3. **Mention permissions** when you need file writes ("allow file writing")
4. **Use continuation** for iterative development (see session-workflows.md)
---
## Next Steps
- **Continue a session**: See [session-workflows.md](./session-workflows.md)
- **Advanced config**: See [advanced-patterns.md](./advanced-patterns.md)
- **Full documentation**: See [../SKILL.md](../SKILL.md)

skills/codex/references/session-workflows.md Normal file

@@ -0,0 +1,287 @@
# Session Continuation Examples
---
## ⚠️ CRITICAL: Always Use `codex exec`
**ALL commands in this document use `codex exec` - this is mandatory in Claude Code.**
**NEVER**: `codex resume ...` (will fail with "stdout is not a terminal")
**ALWAYS**: `codex exec resume ...` (correct non-interactive mode)
Claude Code's bash environment is non-terminal. Plain `codex` commands will NOT work.
---
## Example 1: Basic Session Continuation
### Initial Request
**User**: "Help me design a queue data structure in Python"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Help me design a queue data structure in Python"
```
**Codex Response**: Provides queue design with multiple approaches.
**Session Auto-Saved**: Codex CLI saves this session automatically.
---
### Follow-Up Request
**User**: "Continue with that queue - now add thread-safety"
**Skill Detects**: Continuation keywords ("continue with that")
**Skill Executes**:
```bash
codex exec resume --last
```
**Codex Response**: Resumes previous session, maintains context about the queue design, and adds thread-safety implementation building on the previous discussion.
**Context Maintained**: All previous conversation history is available to Codex.
---
## Example 2: Multi-Turn Iterative Development
### Turn 1: Initial Design
**User**: "Design a REST API for a blog system"
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Design a REST API for a blog system"
```
**Output**: API endpoint design, resource modeling, etc.
---
### Turn 2: Add Authentication
**User**: "Add authentication to that API design"
**Skill Executes**:
```bash
codex exec resume --last
```
**Output**: Codex continues from previous API design and adds JWT/OAuth authentication strategy.
---
### Turn 3: Add Error Handling
**User**: "Now add comprehensive error handling"
**Skill Executes**:
```bash
codex exec resume --last
```
**Output**: Codex builds on previous API + auth design and adds error handling patterns.
---
### Turn 4: Implementation
**User**: "Implement the user authentication endpoint"
**Skill Executes**:
```bash
codex exec resume --last
```
**Output**: Codex uses all previous context to implement the auth endpoint with full understanding of the API design.
**Result**: After 4 turns, you have a complete API with design, auth, error handling, and initial implementation - all with maintained context.
---
## Example 3: Explicit Resume Command
### When to Use Interactive Picker
If you have multiple Codex sessions and want to choose which one to continue:
**User**: "Show me my Codex sessions and let me pick which to resume"
**Manual Command** (run in a real terminal, outside Claude Code):
```bash
codex resume
```
This opens an interactive picker showing:
```
Recent Codex Sessions:
1. Queue data structure design (30 minutes ago)
2. REST API for blog system (2 hours ago)
3. Binary search tree implementation (yesterday)
Select session to resume:
```
---
## Example 4: Resuming After Claude Code Restart
### Scenario
1. You worked on a queue design with Codex
2. Closed Claude Code
3. Reopened Claude Code days later
### Resume Request
**User**: "Continue where we left off with the queue implementation"
**Skill Executes**:
```bash
codex exec resume --last
```
**Result**: Codex resumes the most recent session (the queue work) with full context maintained across Claude Code restarts.
**Why It Works**: Codex CLI persists session history independently of Claude Code.
---
## Continuation Keywords
The skill detects continuation requests when you use phrases like:
- "Continue with that"
- "Resume the previous session"
- "Keep going"
- "Add to that"
- "Now add X" (implies building on previous)
- "Continue where we left off"
- "Follow up on that"
---
## Decision Tree: New Session vs. Resume
```
User makes request
├─ Contains continuation keywords?
│ │
│ ├─ YES → Use `codex exec resume --last`
│ │
│ └─ NO → Check context
│ │
│ ├─ References previous Codex work?
│ │ │
│ │ ├─ YES → Use `codex exec resume --last`
│ │ │
│ │ └─ NO → New session: `codex exec -m ... "prompt"`
└─ User explicitly says "new" or "fresh"?
└─ YES → Force new session even if continuation keywords present
```
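A rough shell sketch of this decision logic (the keyword list is illustrative, not exhaustive):
```bash
#!/usr/bin/env bash
# Choose between starting a new Codex session and resuming the last one
request="$1"

if echo "$request" | grep -qiE '\b(new|fresh)\b'; then
  # Explicit "new"/"fresh" forces a new session even if continuation words appear
  codex exec -m gpt-5.1 -s read-only -c model_reasoning_effort=high "$request"
elif echo "$request" | grep -qiE 'continue|resume|keep going|add to that|where we left off'; then
  codex exec resume --last
else
  codex exec -m gpt-5.1 -s read-only -c model_reasoning_effort=high "$request"
fi
```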
---
## Session History Management
### Automatic Save
- Every Codex session is automatically saved by Codex CLI
- No manual session ID tracking needed
- Sessions persist across:
- Claude Code restarts
- Terminal sessions
- System reboots
### Accessing History
```bash
# Resume most recent (recommended for skill)
codex exec resume --last
# Interactive session picker (manual use, real terminal only)
codex resume
# Resume a specific session by id (non-interactive)
codex exec resume <session-id>
```
---
## Best Practices
### 1. Use Clear Continuation Language
**Good**:
- "Continue with that queue implementation - add unit tests"
- "Resume the API design session and add rate limiting"
**Less Clear**:
- "Add tests" (ambiguous - new or continue?)
- "Rate limiting" (no continuation context)
### 2. Build Incrementally
Start with high-level design, then iterate:
1. Design (new session)
2. Add feature A (resume)
3. Add feature B (resume)
4. Implement (resume with full context)
### 3. Leverage Context Accumulation
Each resumed session has ALL previous context:
- Design decisions
- Trade-offs discussed
- Code patterns chosen
- Error handling approaches
This allows Codex to provide increasingly sophisticated, context-aware assistance.
---
## Troubleshooting
### "No previous sessions found"
**Cause**: Codex CLI history is empty (no prior sessions)
**Fix**: Start a new session first:
```bash
codex exec -m gpt-5.1"Design a queue"
```
Then subsequent "continue" requests will work.
---
### Session Not Resuming Correctly
**Symptoms**: Resume works but context seems lost
**Possible Causes**:
- Multiple sessions mixed together
- User explicitly requested "fresh start"
**Fix**: Resume the specific session by id (or use the interactive `codex resume` picker in a real terminal):
```bash
codex exec resume <session-id>
```
---
### Multiple Sessions Confusion
**Scenario**: Working on two projects, want to resume specific one
**Solution**:
1. Be explicit: "Resume the queue design session" (the skill resumes the most recent session with `--last`)
2. Or resume a specific session manually: `codex exec resume <session-id>`, using an id from earlier Codex output (or pick one with `codex resume` in a real terminal)
---
## Next Steps
- **Advanced config**: See [advanced-patterns.md](./advanced-patterns.md)
- **Basic examples**: See [command-patterns.md](./command-patterns.md)
- **Full docs**: See [../SKILL.md](../SKILL.md)