Initial commit

commit f39a68f422
Author: Zhongwei Li
Date: 2025-11-30 08:38:18 +08:00

10 changed files with 1819 additions and 0 deletions


@@ -0,0 +1,425 @@
# Advanced Configuration Examples
---
## ⚠️ CRITICAL: Always Use `codex exec`
**ALL commands in this document use `codex exec` - this is mandatory in Claude Code.**
**NEVER**: `codex -m ...` or `codex --flag ...` (will fail with "stdout is not a terminal")
**ALWAYS**: `codex exec -m ...` or `codex exec --flag ...` (correct non-interactive mode)
Claude Code's bash environment is non-terminal. Plain `codex` commands will NOT work.
---
## Custom Model Selection
### Example 1: Force GPT-5.1 for a Code-Related Task
**User Request**: "Use GPT-5.1 (not Codex) to review this code for architecture issues"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Review this code for architecture issues"
```
**Why**: Even though the task is code-related, the user wants an architectural review (high reasoning) rather than code editing.
---
### Example 2: Explicit GPT-5-Codex for Implementation
**User Request**: "Use GPT-5.1-Codex to implement the authentication module"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
"Implement the authentication module"
```
**Why**: Implementation requires file writing and code generation (gpt-5.1-codex specialty).
---
## Workspace Write Permission
### Example 3: Allow File Modifications
**User Request**: "Have Codex refactor this codebase (allow file writing)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
"Refactor this codebase for better maintainability"
```
**Permission**: `workspace-write` allows Codex to modify files directly.
⚠️ **Warning**: Only use `workspace-write` when you trust the operation and want file modifications.
---
### Example 4: Read-Only Code Review
**User Request**: "Review this code for security vulnerabilities (read-only)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Review this code for security vulnerabilities"
```
**Permission**: `read-only` prevents file modifications - safer for review tasks.
---
## Web Search Integration
### Example 5: Research Latest Patterns
**User Request**: "Research latest Python async patterns and implement them (enable web search)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
--search \
"Research latest Python async patterns and implement them"
```
**Feature**: `--search` flag enables web search for up-to-date information.
---
### Example 6: Security Best Practices Research
**User Request**: "Use web search to find latest JWT security best practices, then review this auth code"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
--search \
"Find latest JWT security best practices and review this auth code"
```
---
## Reasoning Effort Control
### Example 7: Maximum Reasoning for Complex Algorithm
**User Request**: "Design an optimal algorithm for distributed consensus (maximum reasoning)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Design an optimal algorithm for distributed consensus"
```
**Default**: Already uses `high` reasoning effort.
---
### Example 8: Quick Code Review (Lower Reasoning)
**User Request**: "Quick syntax check on this code (low reasoning)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=low \
"Quick syntax check on this code"
```
**Use Case**: Fast turnaround for simple tasks.
---
## Verbosity Control
### Example 9: Detailed Explanation
**User Request**: "Explain this algorithm in detail (high verbosity)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
-c model_verbosity=high \
"Explain this algorithm in detail"
```
**Output**: Comprehensive, detailed explanation.
---
### Example 10: Concise Summary
**User Request**: "Briefly review this code (low verbosity)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
-c model_verbosity=low \
"Review this code"
```
**Output**: Concise, focused feedback.
---
## Working Directory Control
### Example 11: Specific Project Directory
**User Request**: "Work in the backend directory and review the API code"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
-C ./backend \
"Review the API code"
```
**Feature**: `-C` flag sets working directory for Codex.
---
## Approval Policy
### Example 12: Request Approval for Shell Commands
**User Request**: "Implement the build script (ask before running commands)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
-a on-request \
"Implement the build script"
```
**Safety**: `-a on-request` requires approval before executing shell commands.
---
## Combined Advanced Configuration
### Example 13: Full-Featured Request
**User Request**: "Use web search to find latest security practices, review my auth module in detail with high reasoning, allow file fixes if needed (ask for approval)"
**Skill Executes**:
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
-c model_verbosity=high \
-a on-request \
--search \
"Find latest security practices, review my auth module in detail, and fix issues"
```
**Features**:
- Web search enabled (`--search`)
- High reasoning (`model_reasoning_effort=high`)
- Detailed output (`model_verbosity=high`)
- File writing allowed (`workspace-write`)
- Requires approval for commands (`-a on-request`)
---
## Decision Tree: When to Use GPT-5.1 vs GPT-5.1-Codex
### Use GPT-5.1 For:
```
┌─────────────────────────────────────┐
│ Architecture & Design │
│ - System architecture │
│ - API design │
│ - Data structure design │
│ - Algorithm analysis │
├─────────────────────────────────────┤
│ Analysis & Review │
│ - Code reviews │
│ - Security audits │
│ - Performance analysis │
│ - Quality assessment │
├─────────────────────────────────────┤
│ Explanation & Learning │
│ - Concept explanations │
│ - Documentation review │
│ - Trade-off analysis │
│ - Best practices guidance │
└─────────────────────────────────────┘
```
### Use GPT-5.1-Codex For:
```
┌─────────────────────────────────────┐
│ Code Editing │
│ - Modify existing files │
│ - Implement features │
│ - Refactoring │
│ - Bug fixes │
├─────────────────────────────────────┤
│ Code Generation │
│ - Write new code │
│ - Generate boilerplate │
│ - Create test files │
│ - Scaffold projects │
├─────────────────────────────────────┤
│ File Operations │
│ - Multi-file changes │
│ - Batch updates │
│ - Migration scripts │
│ - Build configurations │
└─────────────────────────────────────┘
```
---
## Sandbox Mode Decision Matrix
| Task | Recommended Sandbox | Rationale |
|------|---------------------|-----------|
| Code review | `read-only` | No modifications needed |
| Architecture design | `read-only` | Planning phase only |
| Security audit | `read-only` | Analysis without changes |
| Implement feature | `workspace-write` | Requires file modifications |
| Refactor code | `workspace-write` | Must edit existing files |
| Generate new files | `workspace-write` | Creates new files |
| Bug fix | `workspace-write` | Edits source files |
---
## Configuration Profiles
### Create a Config Profile
You can create reusable configuration profiles in `~/.codex/config.toml`:
```toml
[profiles.review]
model = "gpt-5.1"
sandbox = "read-only"
model_reasoning_effort = "high"
model_verbosity = "medium"
[profiles.implement]
model = "gpt-5.1-codex"
sandbox = "workspace-write"
model_reasoning_effort = "high"
approval_policy = "on-request"
```
### Use Profile in Skill
**User Request**: "Use the review profile to analyze this code"
**Skill Executes**:
```bash
codex exec -p review "Analyze this code"
```
**Result**: Uses all settings from `[profiles.review]`.
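If a single setting needs to change for one run, the documented `-c` overrides can be layered on top of a profile. A minimal sketch, assuming per-invocation `-c` values take precedence over profile values (the prompt is illustrative):
```bash
# Sketch: start from the implement profile but request terser output for this run.
# Assumes -c overrides take precedence over profile settings.
codex exec -p implement \
  -c model_verbosity=low \
  "Implement the pagination helper for the posts endpoint"
```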
---
## Best Practices
### 1. Match Model to Task Type
- **Thinking/Design** → GPT-5.1
- **Doing/Coding** → GPT-5.1-Codex
### 2. Use Safe Defaults, Override Intentionally
- Default to `read-only` unless file writing is explicitly needed
- Default to `high` reasoning for complex tasks
- Reduce reasoning effort only for simple, quick tasks
### 3. Combine Web Search with High Reasoning
For best results researching current practices:
```bash
codex exec -m gpt-5.1 --search \
-c model_reasoning_effort=high \
"Research latest distributed systems patterns"
```
### 4. Request Approval for Risky Operations
Use `-a on-request` when:
- Working with production code
- Running shell commands
- Making broad changes
---
## Common Patterns
### Pattern 1: Research → Design → Implement
**Phase 1 - Research** (GPT-5.1 + web search):
```bash
codex exec -m gpt-5.1 --search \
-c model_reasoning_effort=high \
"Research latest authentication patterns"
```
**Phase 2 - Design** (GPT-5.1 + high reasoning):
```bash
codex exec resume --last
# "Design the authentication system based on research"
```
**Phase 3 - Implement** (GPT-5.1-Codex + workspace-write):
```bash
codex exec -m gpt-5.1-codex -s workspace-write \
-c model_reasoning_effort=high \
"Implement the authentication system we designed"
```
---
### Pattern 2: Review → Fix → Verify
**Review** (GPT-5.1 + read-only):
```bash
codex exec -m gpt-5.1 -s read-only \
"Review this code for security issues"
```
**Fix** (GPT-5.1-Codex + workspace-write):
```bash
codex exec resume --last
# "Fix the security issues identified"
```
**Verify** (GPT-5.1 + read-only):
```bash
codex exec resume --last
# "Verify the fixes are correct"
```
---
## Next Steps
- **Basic usage**: See [basic-usage.md](./basic-usage.md)
- **Session continuation**: See [session-continuation.md](./session-continuation.md)
- **Full documentation**: See [../SKILL.md](../SKILL.md)
- **CLI reference**: See [../resources/codex-help.md](../resources/codex-help.md)
- **Config reference**: See [../resources/codex-config.md](../resources/codex-config.md)


@@ -0,0 +1,57 @@
# Config Reference
| Key | Type / Values | Notes |
|-----|---------------|-------|
| `model` | string | Model to use (e.g., `gpt-5.1-codex`). |
| `model_provider` | string | Provider id from `model_providers` (default: `openai`). |
| `model_context_window` | number | Context window tokens. |
| `model_max_output_tokens` | number | Max output tokens. |
| `approval_policy` | `untrusted` \| `on-failure` \| `on-request` \| `never` | When to prompt for approval. |
| `sandbox_mode` | `read-only` \| `workspace-write` \| `danger-full-access` | OS sandbox policy. |
| `sandbox_workspace_write.writable_roots` | array | Extra writable roots in workspace-write. |
| `sandbox_workspace_write.network_access` | boolean | Allow network in workspace-write (default: false). |
| `sandbox_workspace_write.exclude_tmpdir_env_var` | boolean | Exclude `$TMPDIR` from writable roots (default: false). |
| `sandbox_workspace_write.exclude_slash_tmp` | boolean | Exclude `/tmp` from writable roots (default: false). |
| `notify` | array | External program for notifications. |
| `instructions` | string | Currently ignored; use `experimental_instructions_file` or `AGENTS.md`. |
| `mcp_servers.<id>.command` | string | MCP server launcher command (stdio servers only). |
| `mcp_servers.<id>.args` | array | MCP server args (stdio servers only). |
| `mcp_servers.<id>.env` | map<string,string> | MCP server env vars (stdio servers only). |
| `mcp_servers.<id>.url` | string | MCP server URL (streamable HTTP servers only). |
| `mcp_servers.<id>.bearer_token_env_var` | string | Environment variable containing a bearer token to use for auth (streamable HTTP servers only). |
| `mcp_servers.<id>.enabled` | boolean | When false, Codex skips starting the server (default: true). |
| `mcp_servers.<id>.startup_timeout_sec` | number | Startup timeout in seconds (default: 10). Timeout is applied both for initializing the MCP server and initially listing tools. |
| `mcp_servers.<id>.tool_timeout_sec` | number | Per-tool timeout in seconds (default: 60). Accepts fractional values; omit to use the default. |
| `mcp_servers.<id>.enabled_tools` | array | Restrict the server to the listed tool names. |
| `mcp_servers.<id>.disabled_tools` | array | Remove the listed tool names after applying `enabled_tools`, if any. |
| `model_providers.<id>.name` | string | Display name. |
| `model_providers.<id>.base_url` | string | API base URL. |
| `model_providers.<id>.env_key` | string | Env var for API key. |
| `model_providers.<id>.wire_api` | `chat` \| `responses` | Protocol used (default: `chat`). |
| `model_providers.<id>.query_params` | map<string,string> | Extra query params (e.g., Azure `api-version`). |
| `model_providers.<id>.http_headers` | map<string,string> | Additional static headers. |
| `model_providers.<id>.env_http_headers` | map<string,string> | Headers sourced from env vars. |
| `model_providers.<id>.request_max_retries` | number | Per-provider HTTP retry count (default: 4). |
| `model_providers.<id>.stream_max_retries` | number | SSE stream retry count (default: 5). |
| `model_providers.<id>.stream_idle_timeout_ms` | number | SSE idle timeout (ms) (default: 300000). |
| `project_doc_max_bytes` | number | Max bytes to read from `AGENTS.md`. |
| `profile` | string | Active profile name. |
| `profiles.<name>.*` | various | Profile-scoped overrides of the same keys. |
| `history.persistence` | `save-all` \| `none` | History file persistence (default: `save-all`). |
| `history.max_bytes` | number | Currently ignored (not enforced). |
| `file_opener` | `vscode` \| `vscode-insiders` \| `windsurf` \| `cursor` \| `none` | URI scheme for clickable citations (default: `vscode`). |
| `tui` | table | TUI-specific options. |
| `tui.notifications` | boolean \| array | Enable desktop notifications in the TUI (default: false). |
| `hide_agent_reasoning` | boolean | Hide model reasoning events. |
| `show_raw_agent_reasoning` | boolean | Show raw reasoning (when available). |
| `model_reasoning_effort` | `minimal` \| `low` \| `medium` \| `high` | Responses API reasoning effort. |
| `model_reasoning_summary` | `auto` \| `concise` \| `detailed` \| `none` | Reasoning summaries. |
| `model_verbosity` | `low` \| `medium` \| `high` | GPT-5 text verbosity (Responses API). |
| `model_supports_reasoning_summaries` | boolean | Force-enable reasoning summaries. |
| `model_reasoning_summary_format` | `none` \| `experimental` | Force reasoning summary format. |
| `chatgpt_base_url` | string | Base URL for ChatGPT auth flow. |
| `experimental_instructions_file` | string (path) | Replace built-in instructions (experimental). |
| `experimental_use_exec_command_tool` | boolean | Use experimental exec command tool. |
| `projects.<path>.trust_level` | string | Mark project/worktree as trusted (only `"trusted"` is recognized). |
| `tools.web_search_request` | boolean | Enable web search tool (default: false). Deprecated alias: `tools.web_search`. |
| `forced_login_method` | `chatgpt` \| `api` | Only allow Codex to be used with ChatGPT or API keys. |
| `forced_chatgpt_workspace_id` | string (uuid) | Only allow Codex to be used with the specified ChatGPT workspace. |
| `tools.view_image` | boolean | Enable the `view_image` tool so Codex can attach local image files from the workspace (default: false). |
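For orientation, a minimal `~/.codex/config.toml` sketch using a few of the keys above (values are illustrative placeholders, not recommendations):
```toml
# Illustrative only: keys come from the table above, values are placeholders.
model = "gpt-5.1-codex"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
model_reasoning_effort = "high"

[sandbox_workspace_write]
network_access = false            # keep network disabled inside the sandbox
writable_roots = ["/tmp/scratch"] # extra writable path, purely an example

[tools]
web_search_request = true         # enable the web search tool
```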


@@ -0,0 +1,227 @@
# Codex CLI Help Reference
**Version**: 0.58.0
## Main Command: `codex --help`
```
Codex CLI
If no subcommand is specified, options will be forwarded to the interactive CLI.
Usage: codex [OPTIONS] [PROMPT]
codex [OPTIONS] <COMMAND> [ARGS]
Commands:
exec Run Codex non-interactively [aliases: e]
login Manage login
logout Remove stored authentication credentials
mcp [experimental] Run Codex as an MCP server and manage MCP servers
mcp-server [experimental] Run the Codex MCP server (stdio transport)
app-server [experimental] Run the app server or related tooling
completion Generate shell completion scripts
sandbox Run commands within a Codex-provided sandbox [aliases: debug]
apply Apply the latest diff produced by Codex agent as a `git apply` to your local working
tree [aliases: a]
resume Resume a previous interactive session (picker by default; use --last to continue the
most recent)
cloud [EXPERIMENTAL] Browse tasks from Codex Cloud and apply changes locally
features Inspect feature flags
help Print this message or the help of the given subcommand(s)
Arguments:
[PROMPT]
Optional user prompt to start the session
Options:
-c, --config <key=value>
Override a configuration value that would otherwise be loaded from `~/.codex/config.toml`.
Use a dotted path (`foo.bar.baz`) to override nested values. The `value` portion is parsed
as TOML. If it fails to parse as TOML, the raw string is used as a literal.
Examples: - `-c model="o3"` - `-c 'sandbox_permissions=["disk-full-read-access"]'` - `-c
shell_environment_policy.inherit=all`
--enable <FEATURE>
Enable a feature (repeatable). Equivalent to `-c features.<name>=true`
--disable <FEATURE>
Disable a feature (repeatable). Equivalent to `-c features.<name>=false`
-i, --image <FILE>...
Optional image(s) to attach to the initial prompt
-m, --model <MODEL>
Model the agent should use
--oss
Convenience flag to select the local open source model provider. Equivalent to -c
model_provider=oss; verifies a local Ollama server is running
-p, --profile <CONFIG_PROFILE>
Configuration profile from config.toml to specify default options
-s, --sandbox <SANDBOX_MODE>
Select the sandbox policy to use when executing model-generated shell commands
[possible values: read-only, workspace-write, danger-full-access]
-a, --ask-for-approval <APPROVAL_POLICY>
Configure when the model requires human approval before executing a command
Possible values:
- untrusted: Only run "trusted" commands (e.g. ls, cat, sed) without asking for user
approval. Will escalate to the user if the model proposes a command that is not in the
"trusted" set
- on-failure: Run all commands without asking for user approval. Only asks for approval if
a command fails to execute, in which case it will escalate to the user to ask for
un-sandboxed execution
- on-request: The model decides when to ask the user for approval
- never: Never ask for user approval Execution failures are immediately returned to
the model
--full-auto
Convenience alias for low-friction sandboxed automatic execution (-a on-request, --sandbox
workspace-write)
--dangerously-bypass-approvals-and-sandbox
Skip all confirmation prompts and execute commands without sandboxing. EXTREMELY
DANGEROUS. Intended solely for running in environments that are externally sandboxed
-C, --cd <DIR>
Tell the agent to use the specified directory as its working root
--search
Enable web search (off by default). When enabled, the native Responses `web_search` tool
is available to the model (no per-call approval)
--add-dir <DIR>
Additional directories that should be writable alongside the primary workspace
-h, --help
Print help (see a summary with '-h')
-V, --version
Print version
```
## Exec Command: `codex exec --help`
```
Run Codex non-interactively
Usage: codex exec [OPTIONS] [PROMPT] [COMMAND]
Commands:
resume Resume a previous session by id or pick the most recent with --last
help Print this message or the help of the given subcommand(s)
Arguments:
[PROMPT]
Initial instructions for the agent. If not provided as an argument (or if `-` is used),
instructions are read from stdin
Options:
-c, --config <key=value>
Override a configuration value that would otherwise be loaded from `~/.codex/config.toml`.
Use a dotted path (`foo.bar.baz`) to override nested values. The `value` portion is parsed
as TOML. If it fails to parse as TOML, the raw string is used as a literal.
Examples: - `-c model="o3"` - `-c 'sandbox_permissions=["disk-full-read-access"]'` - `-c
shell_environment_policy.inherit=all`
--enable <FEATURE>
Enable a feature (repeatable). Equivalent to `-c features.<name>=true`
-i, --image <FILE>...
Optional image(s) to attach to the initial prompt
--disable <FEATURE>
Disable a feature (repeatable). Equivalent to `-c features.<name>=false`
-m, --model <MODEL>
Model the agent should use
--oss
-s, --sandbox <SANDBOX_MODE>
Select the sandbox policy to use when executing model-generated shell commands
[possible values: read-only, workspace-write, danger-full-access]
-p, --profile <CONFIG_PROFILE>
Configuration profile from config.toml to specify default options
--full-auto
Convenience alias for low-friction sandboxed automatic execution (-a on-request, --sandbox
workspace-write)
--dangerously-bypass-approvals-and-sandbox
Skip all confirmation prompts and execute commands without sandboxing. EXTREMELY
DANGEROUS. Intended solely for running in environments that are externally sandboxed
-C, --cd <DIR>
Tell the agent to use the specified directory as its working root
--skip-git-repo-check
Allow running Codex outside a Git repository
--output-schema <FILE>
Path to a JSON Schema file describing the model's final response shape
--color <COLOR>
Specifies color settings for use in the output
[default: auto]
[possible values: always, never, auto]
--json
Print events to stdout as JSONL
-o, --output-last-message <FILE>
Specifies file where the last message from the agent should be written
-h, --help
Print help (see a summary with '-h')
-V, --version
Print version
```
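For non-interactive pipelines, the documented `--json` and `-o/--output-last-message` flags can be combined to capture output. A sketch using only flags listed above (the output path and prompt are illustrative):
```bash
# Sketch: stream events as JSONL and save the agent's final message to a file.
codex exec -m gpt-5.1 -s read-only \
  -c model_reasoning_effort=high \
  --json \
  -o /tmp/codex-last-message.txt \
  "Summarize the open TODOs in this repository"
```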
## Exec Resume Command: `codex exec resume --help`
```
Resume a previous session by id or pick the most recent with --last
Usage: codex exec resume [OPTIONS] [SESSION_ID] [PROMPT]
Arguments:
[SESSION_ID]
Conversation/session id (UUID). When provided, resumes this session. If omitted, use
--last to pick the most recent recorded session
[PROMPT]
Prompt to send after resuming the session. If `-` is used, read from stdin
Options:
-c, --config <key=value>
Override a configuration value that would otherwise be loaded from `~/.codex/config.toml`.
Use a dotted path (`foo.bar.baz`) to override nested values. The `value` portion is parsed
as TOML. If it fails to parse as TOML, the raw string is used as a literal.
Examples: - `-c model="o3"` - `-c 'sandbox_permissions=["disk-full-read-access"]'` - `-c
shell_environment_policy.inherit=all`
--last
Resume the most recent recorded session (newest) without specifying an id
--enable <FEATURE>
Enable a feature (repeatable). Equivalent to `-c features.<name>=true`
--disable <FEATURE>
Disable a feature (repeatable). Equivalent to `-c features.<name>=false`
-h, --help
Print help (see a summary with '-h')
```


@@ -0,0 +1,188 @@
# Basic Usage Examples
---
## ⚠️ CRITICAL: Always Use `codex exec`
**ALL commands in this document use `codex exec` - this is mandatory in Claude Code.**
**NEVER**: `codex -m ...` (will fail with "stdout is not a terminal")
**ALWAYS**: `codex exec -m ...` (correct non-interactive mode)
Claude Code's bash environment is non-terminal. Plain `codex` commands will NOT work.
---
## Example 1: General Reasoning Task - Queue Design
### User Request
"Help me design a queue data structure in Python"
### What Happens
1. **Claude detects** the coding task (queue design)
2. **Skill is invoked** autonomously
3. **Codex CLI is called** with gpt-5.1 (general high-reasoning model):
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Help me design a queue data structure in Python"
```
4. **Codex responds** with high-reasoning architectural guidance on queue design
5. **Session is auto-saved** for potential continuation
### Expected Output
Codex provides:
- Queue design principles and trade-offs
- Multiple implementation approaches (list-based, deque, linked-list)
- Performance characteristics (O(1) enqueue/dequeue)
- Thread-safety considerations
- Usage examples and best practices
---
## Example 2: Code Editing Task - Implement Queue
### User Request
"Edit my Python file to implement the queue with thread-safety"
### What Happens
1. **Skill detects** code editing request
2. **Uses gpt-5.1-codex-max** (the maximum-capability coding model, 27-42% faster than `gpt-5.1-codex`):
```bash
codex exec -m gpt-5.1-codex-max -s workspace-write \
-c model_reasoning_effort=high \
"Edit my Python file to implement the queue with thread-safety"
```
3. **Codex performs code editing** with maximum capability model
4. **Files are modified** (workspace-write sandbox)
### Expected Output
Codex:
- Edits the target Python file
- Implements thread-safe queue using `threading.Lock`
- Adds proper synchronization primitives
- Includes docstrings and type hints
- Provides usage examples
---
## Example 3: Explicit Codex Request
### User Request
"Use Codex to design a REST API for a blog system"
### What Happens
1. **Explicit "Codex" mention** triggers skill
2. **Codex invoked** with design-oriented settings (general high-reasoning model, read-only):
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Design a REST API for a blog system"
```
3. **High-reasoning analysis** provides comprehensive API design
### Expected Output
Codex delivers:
- RESTful endpoint design (GET/POST/PUT/DELETE)
- Resource modeling (posts, authors, comments)
- Authentication and authorization strategy
- Data validation approaches
- API versioning recommendations
- Error handling patterns
---
## Example 4: Complex Algorithm Design
### User Request
"Help me implement a binary search tree with balancing"
### What Happens
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Help me implement a binary search tree with balancing"
```
### Expected Output
Codex provides:
- BST fundamentals and invariants
- AVL vs Red-Black tree trade-offs
- Rotation algorithms (left, right, left-right, right-left)
- Insertion and deletion with rebalancing
- Complexity analysis
- Implementation guidance
---
## Example 5: Maximum Reasoning with xhigh
### User Request
"Refactor the authentication system with comprehensive security improvements"
### What Happens
```bash
codex exec -m gpt-5.1-codex-max -s workspace-write \
-c model_reasoning_effort=xhigh \
"Refactor the authentication system with comprehensive security improvements"
```
### Expected Output
Codex provides:
- Deep architectural analysis of current system
- Comprehensive security vulnerability assessment
- Multi-layered refactoring strategy
- Implementation of security best practices
- Detailed reasoning about trade-offs
- Long-horizon planning for complex changes
**When to use xhigh**: Complex architectural refactoring, security-critical changes, long-horizon tasks where quality is more important than speed.
---
## Model Selection Summary
| Task Type | Model | Sandbox | Example |
|-----------|-------|---------|---------|
| General reasoning | `gpt-5.1` | `read-only` | "Design a queue" |
| Architecture design | `gpt-5.1` | `read-only` | "Design REST API" |
| Code review | `gpt-5.1` | `read-only` | "Review this code" |
| Code editing (standard) | `gpt-5.1-codex-max` | `workspace-write` | "Edit file to add X" |
| Code editing (maximum reasoning) | `gpt-5.1-codex-max` + `xhigh` | `workspace-write` | "Complex refactoring" |
| Implementation | `gpt-5.1-codex-max` | `workspace-write` | "Implement function Y" |
| Backward compatibility | `gpt-5.1-codex` | `workspace-write` | "Use standard model" |
**Note**: `gpt-5.1-codex-max` is 27-42% faster than `gpt-5.1-codex` and uses ~30% fewer thinking tokens. It supports a new `xhigh` reasoning effort level for maximum capability.
---
## Tips for Best Results
1. **Be specific** in your requests - detailed prompts get better reasoning
2. **Indicate task type** clearly (design vs. implementation)
3. **Mention permissions** when you need file writes ("allow file writing")
4. **Use continuation** for iterative development (see session-continuation.md)
---
## Next Steps
- **Continue a session**: See [session-continuation.md](./session-continuation.md)
- **Advanced config**: See [advanced-config.md](./advanced-config.md)
- **Full documentation**: See [../SKILL.md](../SKILL.md)


@@ -0,0 +1,287 @@
# Session Continuation Examples
---
## ⚠️ CRITICAL: Always Use `codex exec`
**ALL commands in this document use `codex exec` - this is mandatory in Claude Code.**
**NEVER**: `codex resume ...` (will fail with "stdout is not a terminal")
**ALWAYS**: `codex exec resume ...` (correct non-interactive mode)
Claude Code's bash environment is non-terminal. Plain `codex` commands will NOT work.
---
## Example 1: Basic Session Continuation
### Initial Request
**User**: "Help me design a queue data structure in Python"
**Skill Executes**:
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Help me design a queue data structure in Python"
```
**Codex Response**: Provides queue design with multiple approaches.
**Session Auto-Saved**: Codex CLI saves this session automatically.
---
### Follow-Up Request
**User**: "Continue with that queue - now add thread-safety"
**Skill Detects**: Continuation keywords ("continue with that")
**Skill Executes**:
```bash
codex exec resume --last
```
**Codex Response**: Resumes previous session, maintains context about the queue design, and adds thread-safety implementation building on the previous discussion.
**Context Maintained**: All previous conversation history is available to Codex.
---
## Example 2: Multi-Turn Iterative Development
### Turn 1: Initial Design
**User**: "Design a REST API for a blog system"
```bash
codex exec -m gpt-5.1 -s read-only \
-c model_reasoning_effort=high \
"Design a REST API for a blog system"
```
**Output**: API endpoint design, resource modeling, etc.
---
### Turn 2: Add Authentication
**User**: "Add authentication to that API design"
**Skill Executes**:
```bash
codex exec resume --last
```
**Output**: Codex continues from previous API design and adds JWT/OAuth authentication strategy.
---
### Turn 3: Add Error Handling
**User**: "Now add comprehensive error handling"
**Skill Executes**:
```bash
codex exec resume --last
```
**Output**: Codex builds on previous API + auth design and adds error handling patterns.
---
### Turn 4: Implementation
**User**: "Implement the user authentication endpoint"
**Skill Executes**:
```bash
codex exec resume --last
```
**Output**: Codex uses all previous context to implement the auth endpoint with full understanding of the API design.
**Result**: After 4 turns, you have a complete API with design, auth, error handling, and initial implementation - all with maintained context.
---
## Example 3: Explicit Resume Command
### When to Use Interactive Picker
If you have multiple Codex sessions and want to choose which one to continue:
**User**: "Show me my Codex sessions and let me pick which to resume"
**Manual Command** (run in a real terminal outside Claude Code, since the picker is interactive):
```bash
codex resume
```
This opens an interactive picker showing:
```
Recent Codex Sessions:
1. Queue data structure design (30 minutes ago)
2. REST API for blog system (2 hours ago)
3. Binary search tree implementation (yesterday)
Select session to resume:
```
---
## Example 4: Resuming After Claude Code Restart
### Scenario
1. You worked on a queue design with Codex
2. Closed Claude Code
3. Reopened Claude Code days later
### Resume Request
**User**: "Continue where we left off with the queue implementation"
**Skill Executes**:
```bash
codex exec resume --last
```
**Result**: Codex resumes the most recent session (the queue work) with full context maintained across Claude Code restarts.
**Why It Works**: Codex CLI persists session history independently of Claude Code.
---
## Continuation Keywords
The skill detects continuation requests when you use phrases like:
- "Continue with that"
- "Resume the previous session"
- "Keep going"
- "Add to that"
- "Now add X" (implies building on previous)
- "Continue where we left off"
- "Follow up on that"
---
## Decision Tree: New Session vs. Resume
```
User makes request
├─ Contains continuation keywords?
│ │
│ ├─ YES → Use `codex exec resume --last`
│ │
│ └─ NO → Check context
│ │
│ ├─ References previous Codex work?
│ │ │
│ │ ├─ YES → Use `codex exec resume --last`
│ │ │
│ │ └─ NO → New session: `codex exec -m ... "prompt"`
└─ User explicitly says "new" or "fresh"?
└─ YES → Force new session even if continuation keywords present
```
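Purely as an illustration of this routing (the skill performs the detection internally, and the keyword list is not exhaustive), a rough bash sketch:
```bash
# Illustrative only: crude keyword-based routing between resume and new session.
user_request="Continue with that queue - now add thread-safety"

if echo "$user_request" | grep -qiE 'new session|fresh start'; then
  # Explicit request for a fresh start overrides continuation cues
  codex exec -m gpt-5.1 -s read-only \
    -c model_reasoning_effort=high \
    "$user_request"
elif echo "$user_request" | grep -qiE 'continue|resume|keep going|add to that|where we left off|follow up'; then
  # Continuation detected: resume the most recent Codex session
  codex exec resume --last
else
  # No continuation cues: start a new session
  codex exec -m gpt-5.1 -s read-only \
    -c model_reasoning_effort=high \
    "$user_request"
fi
```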
---
## Session History Management
### Automatic Save
- Every Codex session is automatically saved by Codex CLI
- No manual session ID tracking needed
- Sessions persist across:
- Claude Code restarts
- Terminal sessions
- System reboots
### Accessing History
```bash
# Resume most recent (recommended for skill)
codex exec resume --last
# Resume a specific session by id
codex exec resume <session-id>
# Interactive picker (manual use, in a real terminal outside Claude Code)
codex resume
```
---
## Best Practices
### 1. Use Clear Continuation Language
**Good**:
- "Continue with that queue implementation - add unit tests"
- "Resume the API design session and add rate limiting"
**Less Clear**:
- "Add tests" (ambiguous - new or continue?)
- "Rate limiting" (no continuation context)
### 2. Build Incrementally
Start with high-level design, then iterate:
1. Design (new session)
2. Add feature A (resume)
3. Add feature B (resume)
4. Implement (resume with full context)
### 3. Leverage Context Accumulation
Each resumed session has ALL previous context:
- Design decisions
- Trade-offs discussed
- Code patterns chosen
- Error handling approaches
This allows Codex to provide increasingly sophisticated, context-aware assistance.
---
## Troubleshooting
### "No previous sessions found"
**Cause**: Codex CLI history is empty (no prior sessions)
**Fix**: Start a new session first:
```bash
codex exec -m gpt-5.1 "Design a queue"
```
Then subsequent "continue" requests will work.
---
### Session Not Resuming Correctly
**Symptoms**: Resume works but context seems lost
**Possible Causes**:
- Multiple sessions mixed together
- User explicitly requested "fresh start"
**Fix**: Resume the specific session by id, or use the interactive picker (`codex resume`) in a real terminal outside Claude Code:
```bash
codex exec resume <session-id>
```
---
### Multiple Sessions Confusion
**Scenario**: Working on two projects, want to resume specific one
**Solution**:
1. Be explicit: "Resume the queue design session" (skill will use --last)
2. Or target it directly: `codex exec resume <session-id>`, or run the interactive `codex resume` picker outside Claude Code; see the sketch below.
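Per the `codex exec resume` usage (`[SESSION_ID] [PROMPT]`), a specific session can be targeted directly. A sketch with a placeholder id and prompt:
```bash
# The UUID below is a placeholder; substitute the id of the session you want.
codex exec resume 123e4567-e89b-12d3-a456-426614174000 \
  "Continue the queue design - add a bounded capacity option"
```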
---
## Next Steps
- **Advanced config**: See [advanced-config.md](./advanced-config.md)
- **Basic examples**: See [basic-usage.md](./basic-usage.md)
- **Full docs**: See [../SKILL.md](../SKILL.md)