gh-glittercowboy-taches-cc-…/skills/create-agent-skills/references/executable-code.md

<when_to_use_scripts> Even if Claude could write a script, pre-made scripts offer advantages:

  • More reliable than generated code
  • Save tokens (no need to include code in context)
  • Save time (no code generation required)
  • Ensure consistency across uses

<execution_vs_reference> Make clear whether Claude should:

  • Execute the script (most common): "Run analyze_form.py to extract fields"
  • Read it as reference (for complex logic): "See analyze_form.py for the extraction algorithm"

For most utility scripts, execution is preferred. </execution_vs_reference>

<how_scripts_work> When Claude executes a script via bash:

  1. Script code never enters context window
  2. Only script output consumes tokens
  3. Far more efficient than having Claude generate equivalent code </how_scripts_work> </when_to_use_scripts>

<file_organization> <scripts_directory> Best practice: Place all executable scripts in a scripts/ subdirectory within the skill folder.

skill-name/
├── SKILL.md
├── scripts/
│   ├── main_utility.py
│   ├── helper_script.py
│   └── validator.py
└── references/
    └── api-docs.md

Benefits:

  • Keeps skill root clean and organized
  • Clear separation between documentation and executable code
  • Consistent pattern across all skills
  • Easy to reference: python scripts/script_name.py

Reference pattern: In SKILL.md, reference scripts using the scripts/ path:

python ~/.claude/skills/skill-name/scripts/analyze.py input.har

</scripts_directory> </file_organization>

<utility_scripts_pattern>

Utility scripts

analyze_form.py: Extract all form fields from PDF

python scripts/analyze_form.py input.pdf > fields.json

Output format:

{
  "field_name": { "type": "text", "x": 100, "y": 200 },
  "signature": { "type": "sig", "x": 150, "y": 500 }
}
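A minimal sketch of the shape such a script could take, assuming field records with `name`, `type`, `x`, and `y` keys (the hard-coded records stand in for whatever a PDF library would actually return; this is illustrative, not the real analyze_form.py):

```python
import json
import sys

def fields_to_schema(raw_fields):
    """Convert raw field records into the documented output shape."""
    return {
        f["name"]: {"type": f["type"], "x": f["x"], "y": f["y"]}
        for f in raw_fields
    }

if __name__ == "__main__":
    # In the real script these records would come from a PDF library
    # such as pypdf; hard-coded here to keep the sketch self-contained.
    records = [
        {"name": "field_name", "type": "text", "x": 100, "y": 200},
        {"name": "signature", "type": "sig", "x": 150, "y": 500},
    ]
    json.dump(fields_to_schema(records), sys.stdout, indent=2)
```

Keeping the conversion in a pure function makes the script easy to test without a sample PDF on hand.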

validate_boxes.py: Check for overlapping bounding boxes

python scripts/validate_boxes.py fields.json
# Returns: "OK" or lists conflicts
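A sketch of the overlap check such a validator might perform, assuming each field record carries a `box` as `(x, y, width, height)` — that key is an assumption for illustration, not part of the documented output format:

```python
def boxes_overlap(a, b):
    """Axis-aligned rectangle overlap test; boxes are (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def find_conflicts(fields):
    """Return pairs of field names whose bounding boxes overlap."""
    names = list(fields)
    return [
        (names[i], names[j])
        for i in range(len(names))
        for j in range(i + 1, len(names))
        if boxes_overlap(fields[names[i]]["box"], fields[names[j]]["box"])
    ]
```

The real script would print "OK" when `find_conflicts` returns an empty list, or list the conflicting pairs otherwise.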

fill_form.py: Apply field values to PDF

python scripts/fill_form.py input.pdf fields.json output.pdf

</utility_scripts_pattern>

<solve_dont_punt> Handle error conditions rather than punting to Claude.

<example type="good">
```python
def process_file(path):
    """Process a file, creating it if it doesn't exist."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        print(f"File {path} not found, creating default")
        with open(path, 'w') as f:
            f.write('')
        return ''
    except PermissionError:
        print(f"Cannot access {path}, using default")
        return ''
```
</example>

<example type="bad">
```python
def process_file(path):
    # Just fail and let Claude figure it out
    return open(path).read()
```
</example> </solve_dont_punt>

<configuration_values> Document configuration parameters to avoid "voodoo constants":

<example type="good">
```python
# HTTP requests typically complete within 30 seconds
REQUEST_TIMEOUT = 30

# Three retries balances reliability vs speed
MAX_RETRIES = 3
```
</example>

<example type="bad">
```python
TIMEOUT = 47  # Why 47?
RETRIES = 5   # Why 5?
```
</example> </configuration_values>
<package_dependencies> <runtime_constraints> Skills run in a code execution environment with platform-specific limitations:

  • claude.ai: Can install packages from npm and PyPI
  • Anthropic API: No network access and no runtime package installation </runtime_constraints>

List required packages in your SKILL.md and verify they're available. Install the required package: `pip install pypdf`

Then use it:

```python
from pypdf import PdfReader
reader = PdfReader("file.pdf")
```

Avoid vague instructions like "Use the pdf library to process the file." </package_dependencies>

<mcp_tool_references> If your Skill uses MCP (Model Context Protocol) tools, always use fully qualified tool names.

ServerName:tool_name

  • Use the BigQuery:bigquery_schema tool to retrieve table schemas.
  • Use the GitHub:create_issue tool to create issues.

Without the server prefix, Claude may fail to locate the tool, especially when multiple MCP servers are available. </mcp_tool_references>