Initial commit

Zhongwei Li
2025-11-30 08:24:46 +08:00
commit 49178918d7
24 changed files with 3940 additions and 0 deletions

@@ -0,0 +1,12 @@
{
"name": "elevenlabs-agents",
"description": "Build conversational AI voice agents with ElevenLabs Platform using React, JavaScript, React Native, or Swift SDKs. Configure agents, tools (client/server/MCP), RAG knowledge bases, multi-voice, and Scribe real-time STT. Use when: building voice chat interfaces, implementing AI phone agents with Twilio, configuring agent workflows or tools, adding RAG knowledge bases, testing with CLI agents as code, or troubleshooting deprecated @11labs packages, Android audio cutoff, CSP violations, dynamic variables, or WebRTC config.",
"version": "1.0.0",
"author": {
"name": "Jeremy Dawes",
"email": "jeremy@jezweb.net"
},
"skills": [
"./"
]
}

README.md

@@ -0,0 +1,3 @@
# elevenlabs-agents
Build conversational AI voice agents with ElevenLabs Platform using React, JavaScript, React Native, or Swift SDKs. Configure agents, tools (client/server/MCP), RAG knowledge bases, multi-voice, and Scribe real-time STT. Use when: building voice chat interfaces, implementing AI phone agents with Twilio, configuring agent workflows or tools, adding RAG knowledge bases, testing with CLI agents as code, or troubleshooting deprecated @11labs packages, Android audio cutoff, CSP violations, dynamic variables, or WebRTC config.

SKILL.md

@@ -0,0 +1,673 @@
---
name: elevenlabs-agents
description: |
Build conversational AI voice agents with ElevenLabs Platform using React, JavaScript, React Native, or Swift SDKs.
Configure agents, tools (client/server/MCP), RAG knowledge bases, multi-voice, and Scribe real-time STT.
Use when: building voice chat interfaces, implementing AI phone agents with Twilio, configuring agent workflows
or tools, adding RAG knowledge bases, testing with CLI "agents as code", or troubleshooting deprecated @11labs
packages, Android audio cutoff, CSP violations, dynamic variables, or WebRTC config.
Keywords: ElevenLabs Agents, ElevenLabs voice agents, AI voice agents, conversational AI, @elevenlabs/react, @elevenlabs/client, @elevenlabs/react-native, @elevenlabs/elevenlabs-js, @elevenlabs/agents-cli, elevenlabs SDK, voice AI, TTS, text-to-speech, ASR, speech recognition, turn-taking model, WebRTC voice, WebSocket voice, ElevenLabs conversation, agent system prompt, agent tools, agent knowledge base, RAG voice agents, multi-voice agents, pronunciation dictionary, voice speed control, elevenlabs scribe, @11labs deprecated, Android audio cutoff, CSP violation elevenlabs, dynamic variables elevenlabs, case-sensitive tool names, webhook authentication
license: MIT
metadata:
version: 1.2.0
last_updated: 2025-11-25
production_tested: true
packages:
- name: "@elevenlabs/elevenlabs-js"
version: 2.25.0
- name: "@elevenlabs/agents-cli"
version: 0.6.1
- name: "@elevenlabs/react"
version: 0.11.3
- name: "@elevenlabs/client"
version: 0.11.3
- name: "@elevenlabs/react-native"
version: 0.5.4
documentation:
- https://elevenlabs.io/docs/agents-platform/overview
- https://elevenlabs.io/docs/api-reference
- https://github.com/elevenlabs/elevenlabs-examples
errors_prevented: 17+
token_savings: ~73%
---
# ElevenLabs Agents Platform
## Overview
ElevenLabs Agents Platform is a comprehensive solution for building production-ready conversational AI voice agents. The platform coordinates four core components:
1. **ASR (Automatic Speech Recognition)** - Converts speech to text (32+ languages, sub-second latency)
2. **LLM (Large Language Model)** - Reasoning and response generation (GPT, Claude, Gemini, custom models)
3. **TTS (Text-to-Speech)** - Converts text to speech (5000+ voices, 31 languages, low latency)
4. **Turn-Taking Model** - Proprietary model that handles conversation timing and interruptions
### 🚨 Package Updates (November 2025)
ElevenLabs migrated to new scoped packages in August 2025. **Current packages:**
```bash
npm install @elevenlabs/react@0.11.3 # React SDK
npm install @elevenlabs/client@0.11.3 # JavaScript SDK
npm install @elevenlabs/react-native@0.5.4 # React Native SDK
npm install @elevenlabs/elevenlabs-js@2.25.0 # Base SDK (Python: elevenlabs@1.59.0)
npm install -g @elevenlabs/agents-cli@0.6.1 # CLI
```
**DEPRECATED:** `@11labs/react`, `@11labs/client` (uninstall if present)
**⚠️ CRITICAL:** v1 TTS models will be removed 2025-12-15. Migrate to Turbo v2/v2.5.
---
## 1. Quick Start
### React SDK
```bash
npm install @elevenlabs/react zod
```
```typescript
import { useConversation } from '@elevenlabs/react';
const { startConversation, stopConversation, status } = useConversation({
agentId: 'your-agent-id',
signedUrl: '/api/elevenlabs/auth', // Recommended (secure)
// OR apiKey: process.env.NEXT_PUBLIC_ELEVENLABS_API_KEY,
clientTools: { /* browser-side tools */ },
onEvent: (event) => { /* transcript, agent_response, tool_call */ },
serverLocation: 'us' // 'eu-residency' | 'in-residency' | 'global'
});
```
### CLI ("Agents as Code")
```bash
npm install -g @elevenlabs/agents-cli
elevenlabs auth login
elevenlabs agents init # Creates agents.json, tools.json, tests.json
elevenlabs agents add "Bot" --template customer-service
elevenlabs agents push --env dev # Deploy
elevenlabs agents test "Bot" # Test
```
### API (Programmatic)
```typescript
import { ElevenLabsClient } from '@elevenlabs/elevenlabs-js';
const client = new ElevenLabsClient({ apiKey: process.env.ELEVENLABS_API_KEY });
const agent = await client.agents.create({
name: 'Support Bot',
conversation_config: {
agent: { prompt: { prompt: "...", llm: "gpt-4o" }, language: "en" },
tts: { model_id: "eleven_turbo_v2_5", voice_id: "your-voice-id" }
}
});
```
---
## 2. Agent Configuration
### System Prompt Architecture (6 Components)
**1. Personality** - Identity, role, character traits
**2. Environment** - Communication context (phone, web, video)
**3. Tone** - Formality, speech patterns, verbosity
**4. Goal** - Objectives and success criteria
**5. Guardrails** - Boundaries, prohibited topics, ethical constraints
**6. Tools** - Available capabilities and when to use them
**Template:**
```json
{
"agent": {
"prompt": {
"prompt": "Personality:\n[Agent identity and role]\n\nEnvironment:\n[Communication context]\n\nTone:\n[Speech style]\n\nGoal:\n[Primary objectives]\n\nGuardrails:\n[Boundaries and constraints]\n\nTools:\n[Available tools and usage]",
"llm": "gpt-4o", // gpt-5.1, claude-sonnet-4-5, gemini-3-pro-preview
"temperature": 0.7
}
}
}
```
**2025 LLM Models:**
- `gpt-5.1`, `gpt-5.1-2025-11-13` (Nov 2025)
- `claude-sonnet-4-5`, `claude-sonnet-4-5@20250929` (Oct 2025)
- `gemini-3-pro-preview` (2025)
- `gemini-2.5-flash-preview-09-2025` (Oct 2025)
### Turn-Taking Modes
| Mode | Behavior | Best For |
|------|----------|----------|
| **Eager** | Responds quickly | Fast-paced support, quick orders |
| **Normal** | Balanced (default) | General customer service |
| **Patient** | Waits longer | Information collection, therapy |
```json
{ "conversation_config": { "turn": { "mode": "patient" } } }
```
### Workflows & Agent Management (2025)
**Workflow Features:**
- **Subagent Nodes** - Override prompt, voice, turn-taking per node
- **Tool Nodes** - Guarantee tool execution
- **Edges** - Conditional routing with `edge_order` (determinism, Oct 2025)
```json
{
"workflow": {
"nodes": [
{ "id": "node_1", "type": "subagent", "config": { "system_prompt": "...", "turn_eagerness": "patient" } },
{ "id": "node_2", "type": "tool", "tool_name": "transfer_to_human" }
],
"edges": [{ "from": "node_1", "to": "node_2", "condition": "escalation", "edge_order": 1 }]
}
}
```
**Agent Management (2025):**
- **Agent Archiving** - `archived: true` field (Oct 2025)
- **Agent Duplication** - Clone existing agents
- **Service Account API Keys** - Management endpoints (Jul 2025)
### Dynamic Variables
Use `{{var_name}}` syntax in prompts, messages, and tool parameters.
**System Variables:**
- `{{system__agent_id}}`, `{{system__conversation_id}}`
- `{{system__caller_id}}`, `{{system__called_number}}` (telephony)
- `{{system__call_duration_secs}}`, `{{system__time_utc}}`
- `{{system__call_sid}}` (Twilio only)
**Custom Variables:**
```typescript
await client.conversations.create({
agent_id: "agent_123",
dynamic_variables: { user_name: "John", account_tier: "premium" }
});
```
**Secret Variables:** `{{secret__api_key}}` (headers only, never sent to LLM)
**⚠️ Error:** Missing variables cause "Missing required dynamic variables" - always provide all referenced variables.
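A minimal pre-flight check can catch this error before a conversation starts. This sketch (the helper names and regex are ours, not part of the SDK) extracts `{{var_name}}` references from a prompt template and reports any that were not supplied, skipping platform-injected `system__` variables:

```typescript
// Sketch: detect "Missing required dynamic variables" before starting a conversation.
// extractVariables / findMissingVariables are illustrative helpers, not SDK APIs.
function extractVariables(template: string): string[] {
  const matches = template.match(/\{\{\s*(\w+)\s*\}\}/g) ?? [];
  return [...new Set(matches.map((m) => m.replace(/[{}\s]/g, '')))];
}

function findMissingVariables(
  template: string,
  provided: Record<string, unknown>,
): string[] {
  return extractVariables(template)
    .filter((name) => !name.startsWith('system__')) // injected by the platform
    .filter((name) => !(name in provided));
}
```

Run this against every prompt, first message, and tool parameter that uses `{{...}}` syntax before calling `conversations.create`.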
---
## 3. Voice & Language Features
### Multi-Voice, Pronunciation & Speed
**Multi-Voice** - Switch voices dynamically (adds ~200ms latency per switch):
```json
{ "prompt": "When speaking as customer, use voice_id 'voice_abc'. As agent, use 'voice_def'." }
```
**Pronunciation Dictionary** - IPA, CMU, word substitutions (Turbo v2/v2.5 only):
```json
{
"pronunciation_dictionary": [
{ "word": "API", "pronunciation": "ey-pee-ay", "format": "cmu" },
{ "word": "AI", "substitution": "artificial intelligence" }
]
}
```
**PATCH Support (Aug 2025)** - Update dictionaries without replacement
**Speed Control** - 0.7x-1.2x (use 0.9x-1.1x for natural sound):
```json
{ "voice_settings": { "speed": 1.0 } }
```
**Voice Cloning Best Practices:**
- Clean audio (no noise, music, pops)
- Consistent microphone distance
- 1-2 minutes of audio
- Use language-matched voices (English voices fail on non-English)
### Language Configuration
**32+ Languages** with automatic detection and in-conversation switching.
**Multi-Language Presets:**
```json
{
"language_presets": [
{ "language": "en", "voice_id": "en_voice", "first_message": "Hello!" },
{ "language": "es", "voice_id": "es_voice", "first_message": "¡Hola!" }
]
}
```
---
## 4. Knowledge Base & RAG
Enable agents to access large knowledge bases without loading entire documents into context.
**Workflow:**
1. Upload documents (PDF, TXT, DOCX)
2. Compute RAG index (vector embeddings)
3. Agent retrieves relevant chunks during conversation
**Configuration:**
```json
{
"agent": { "prompt": { "knowledge_base": ["doc_id_1", "doc_id_2"] } },
"knowledge_base_config": {
"max_chunks": 5,
"vector_distance_threshold": 0.8
}
}
```
**API Upload:**
```typescript
const doc = await client.knowledgeBase.upload({ file: fs.createReadStream('docs.pdf'), name: 'Docs' });
await client.knowledgeBase.computeRagIndex({ document_id: doc.id, embedding_model: 'e5_mistral_7b' });
```
**⚠️ Gotchas:** RAG adds ~500ms latency. Check index status before use - indexing can take minutes.
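Since indexing can take minutes, a small poller helps. This is a generic sketch: `getStatus` stands in for whatever call returns the index status (the `'ready'` value comes from the error-handling note below; `'failed'` is our assumption):

```typescript
// Sketch: wait until a RAG index reports 'ready' before attaching it to an agent.
// getStatus is any async function returning the current index status string.
async function waitForIndexReady(
  getStatus: () => Promise<string>,
  { intervalMs = 5_000, maxAttempts = 60 } = {},
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getStatus();
    if (status === 'ready') return;
    if (status === 'failed') throw new Error('RAG indexing failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for RAG index to become ready');
}
```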
---
## 5. Tools (4 Types)
### A. Client Tools (Browser/Mobile)
Execute in the browser or mobile app. **Tool names are case-sensitive.**
```typescript
clientTools: {
updateCart: {
description: "Update shopping cart",
parameters: z.object({ item: z.string(), quantity: z.number() }),
handler: async ({ item, quantity }) => {
// Client-side logic
return { success: true };
}
}
}
```
### B. Server Tools (Webhooks)
HTTP requests to external APIs. **PUT support added Apr 2025.**
```json
{
"name": "get_weather",
"url": "https://api.weather.com/{{user_id}}",
"method": "GET",
"headers": { "Authorization": "Bearer {{secret__api_key}}" },
"parameters": { "type": "object", "properties": { "city": { "type": "string" } } }
}
```
**⚠️ Secret variables** only in headers (not URL/body)
**2025 Features:**
- **transfer-to-human** system tool (Apr 2025)
- **tool_latency_secs** tracking (Apr 2025)
### C. MCP Tools (Model Context Protocol)
Connect to MCP servers for databases, IDEs, data sources.
**Configuration:** Dashboard → Add Custom MCP Server → Configure SSE/HTTP endpoint
**Approval Modes:** Always Ask | Fine-Grained | No Approval
**2025 Updates:**
- **disable_interruptions** flag (Oct 2025) - Prevents interruption during tool execution
- **Tools Management Interface** (Jun 2025)
**⚠️ Limitations:** SSE/HTTP only. Not available for Zero Retention or HIPAA.
### D. System Tools
Built-in conversation control (no external APIs):
- `end_call`, `detect_language`, `transfer_agent`
- `transfer_to_number` (telephony)
- `dtmf_playpad`, `voicemail_detection` (telephony)
**2025:** `use_out_of_band_dtmf` flag for telephony integration
---
## 6. SDK Integration
### useConversation Hook (React/React Native)
```typescript
const { startConversation, stopConversation, status, isSpeaking } = useConversation({
agentId: 'your-agent-id',
signedUrl: '/api/auth', // OR apiKey: process.env.NEXT_PUBLIC_ELEVENLABS_API_KEY
clientTools: { /* ... */ },
onEvent: (event) => { /* transcript, agent_response, tool_call, agent_tool_request (Oct 2025) */ },
// plus onConnect, onDisconnect, and onError handlers
serverLocation: 'us' // 'eu-residency' | 'in-residency' | 'global'
});
```
**2025 Events:**
- `agent_chat_response_part` - Streaming responses (Oct 2025)
- `agent_tool_request` - Tool interaction tracking (Oct 2025)
### Connection Types: WebRTC vs WebSocket
| Feature | WebSocket | WebRTC (Jul 2025 rollout) |
|---------|-----------|---------------------------|
| **Auth** | `signedUrl` | `conversationToken` |
| **Audio** | Configurable (16k/24k/48k) | PCM_48000 (hardcoded) |
| **Latency** | Standard | Lower |
| **Best For** | Flexibility | Low-latency |
**⚠️ WebRTC:** Hardcoded PCM_48000, limited device switching
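The sample-rate constraint above can be encoded defensively. A small sketch (helper name is ours): WebRTC always resolves to 48 kHz regardless of what was requested, while WebSocket validates against the configurable rates:

```typescript
// Sketch: resolve the effective output sample rate per connection type,
// reflecting that WebRTC is fixed at PCM 48 kHz.
type ConnectionType = 'webrtc' | 'websocket';

function resolveSampleRate(connection: ConnectionType, requested = 24_000): number {
  if (connection === 'webrtc') return 48_000; // hardcoded by the platform
  const supported = [16_000, 24_000, 48_000];
  if (!supported.includes(requested)) {
    throw new Error(`Unsupported WebSocket sample rate: ${requested}`);
  }
  return requested;
}
```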
### Platforms
- **React**: `@elevenlabs/react@0.11.3`
- **JavaScript**: `@elevenlabs/client@0.11.3` - `new Conversation({...})`
- **React Native**: `@elevenlabs/react-native@0.5.4` - Expo SDK 47+, iOS/macOS (custom build required, no Expo Go)
- **Swift**: iOS 14.0+, macOS 11.0+, Swift 5.9+
- **Embeddable Widget**: `<script src="https://elevenlabs.io/convai-widget/index.js"></script>`
### Scribe (Real-Time Speech-to-Text - Beta 2025)
Real-time transcription with word-level timestamps. **Single-use tokens**, not API keys.
```typescript
const { connect, startRecording, stopRecording, transcript, partialTranscript } = useScribe({
token: async () => (await fetch('/api/scribe/token')).json().then(d => d.token),
commitStrategy: 'vad', // 'vad' (auto on silence) | 'manual' (explicit .commit())
sampleRate: 16000, // 16000 or 24000
// plus onPartialTranscript, onFinalTranscript, and onError handlers
});
```
**Events:** PARTIAL_TRANSCRIPT, FINAL_TRANSCRIPT_WITH_TIMESTAMPS, SESSION_STARTED, ERROR
**⚠️ Closed Beta** - requires sales contact. For agents, use Agents Platform instead (LLM + TTS + two-way interaction).
---
## 7. Testing & Evaluation
### 🆕 Agent Testing Framework (Aug 2025)
Comprehensive automated testing with **9 new API endpoints** for creating, managing, and executing tests.
**Test Types:**
- **Scenario Testing** - LLM-based evaluation against success criteria
- **Tool Call Testing** - Verify correct tool usage and parameters
- **Load Testing** - High-concurrency capacity testing
**CLI Workflow:**
```bash
# Create test
elevenlabs tests add "Refund Test" --template basic-llm
# Configure in test_configs/refund-test.json
{
"name": "Refund Test",
"scenario": "Customer requests refund",
"success_criteria": ["Agent acknowledges empathetically", "Verifies order details"],
"expected_tool_call": { "tool_name": "lookup_order", "parameters": { "order_id": "..." } }
}
# Deploy and execute
elevenlabs tests push
elevenlabs agents test "Support Agent"
```
**9 New API Endpoints (Aug 2025):**
1. `POST /v1/convai/tests` - Create test
2. `GET /v1/convai/tests/:id` - Retrieve test
3. `PATCH /v1/convai/tests/:id` - Update test
4. `DELETE /v1/convai/tests/:id` - Delete test
5. `POST /v1/convai/tests/:id/execute` - Execute test
6. `GET /v1/convai/test-invocations` - List invocations (pagination, agent filtering)
7. `POST /v1/convai/test-invocations/:id/resubmit` - Resubmit failed test
8. `GET /v1/convai/test-results/:id` - Get results
9. `GET /v1/convai/test-results/:id/debug` - Detailed debugging info
**Test Invocation Listing (Oct 2025):**
```typescript
const invocations = await client.convai.testInvocations.list({
agent_id: 'agent_123', // Filter by agent
page_size: 30, // Default 30, max 100
cursor: 'next_page_cursor' // Pagination
});
// Returns: test run counts, pass/fail stats, titles
```
**Programmatic Testing:**
```typescript
const simulation = await client.agents.simulate({
agent_id: 'agent_123',
scenario: 'Refund request',
user_messages: ["I want a refund", "Order #12345"],
success_criteria: ["Acknowledges request", "Verifies order"]
});
console.log('Passed:', simulation.passed);
```
**Agent Tracking (Oct 2025):** Tests now include `agent_id` association for better organization
---
## 8. Analytics & Monitoring
**2025 Features:**
- **Custom Dashboard Charts** (Apr 2025) - Display evaluation criteria metrics over time
- **Call History Filtering** (Apr 2025) - `call_start_before_unix` parameter
- **Multi-Voice History** - Separate conversation history by voice
- **LLM Cost Tracking** - Per agent/conversation costs with `aggregation_interval` (hour/day/week/month)
- **Tool Latency** (Apr 2025) - `tool_latency_secs` tracking
- **Usage Metrics** - minutes_used, request_count, ttfb_avg, ttfb_p95
**Conversation Analysis:** Success evaluation (LLM-based), data collection fields, post-call webhooks
**Access:** Dashboard → Analytics | Post-call Webhooks | API
---
## 9. Privacy & Compliance
**Data Retention:** 2 years default (GDPR). Configure: `{ "transcripts": { "retention_days": 730 }, "audio": { "retention_days": 2190 } }`
**Encryption:** TLS 1.3 (transit), AES-256 (rest)
**Regional:** `serverLocation: 'eu-residency' | 'us' | 'global' | 'in-residency'`
**Zero Retention Mode:** Immediate deletion (no history, analytics, webhooks, or MCP)
**Compliance:** GDPR (1-2 years), HIPAA (6 years), SOC 2 (automatic encryption)
---
## 10. Cost Optimization
**LLM Caching:** Up to 90% savings on repeated inputs. `{ "caching": { "enabled": true, "ttl_seconds": 3600 } }`
**Model Swapping:** GPT-5.1, GPT-4o/mini, Claude Sonnet 4.5, Gemini 3 Pro/2.5 Flash (2025 models)
**Burst Pricing:** 3x concurrency limit at 2x cost. `{ "burst_pricing_enabled": true }`
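To make the burst trade-off concrete, here is a sketch of the arithmetic: calls up to the base limit bill at the normal rate, calls above it at 2x, and anything beyond 3x the base limit is rejected. `ratePerMinute` is a placeholder, not a real price:

```typescript
// Sketch: estimate per-minute cost under burst pricing
// (3x concurrency ceiling, 2x rate above the base limit).
function estimateCost(
  concurrentCalls: number,
  baseLimit: number,
  minutes: number,
  ratePerMinute: number, // placeholder rate, check your plan for real pricing
): number {
  const burstCeiling = baseLimit * 3;
  if (concurrentCalls > burstCeiling) {
    throw new Error('Exceeds burst concurrency ceiling');
  }
  const normalCalls = Math.min(concurrentCalls, baseLimit);
  const burstCalls = Math.max(concurrentCalls - baseLimit, 0);
  return (normalCalls * 1 + burstCalls * 2) * minutes * ratePerMinute;
}
```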
---
## 11. Advanced Features
**2025 Platform Updates:**
- **Azure OpenAI** (Jul 2025) - Custom LLM with Azure-hosted models (requires API version field)
- **Genesys Output Variables** (Jul 2025) - Enhanced call analytics
- **LLMReasoningEffort "none"** (Oct 2025) - Control model reasoning behavior
- **Streaming Voice Previews** (Jul 2025) - Real-time voice generation
- **pcm_48000** audio format (Apr 2025) - New output format support
**Events:** `audio`, `transcript`, `agent_response`, `tool_call`, `agent_chat_response_part` (streaming, Oct 2025), `agent_tool_request` (Oct 2025), `conversation_state`
**Custom Models:** Bring your own LLM (OpenAI-compatible endpoints). `{ "llm_config": { "custom": { "endpoint": "...", "api_key": "{{secret__key}}" } } }`
**Post-Call Webhooks:** HMAC verification required. Return 200 or auto-disable after 10 failures. Payload includes conversation_id, transcript, analysis.
**Chat Mode:** Text-only (no ASR/TTS). `{ "chat_mode": true }`. Saves ~200ms + costs.
**Telephony:** SIP (sip-static.rtc.elevenlabs.io), Twilio native, Vonage, RingCentral. **2025:** Twilio keypad fix (Jul), SIP TLS remote_domains validation (Oct)
---
## 12. CLI & DevOps ("Agents as Code")
**Installation & Auth:**
```bash
npm install -g @elevenlabs/agents-cli@0.6.1
elevenlabs auth login
elevenlabs auth residency eu-residency # 'in-residency' | 'global'
export ELEVENLABS_API_KEY=your-api-key # For CI/CD
```
**Project Structure:** `agents.json`, `tools.json`, `tests.json` + `agent_configs/`, `tool_configs/`, `test_configs/`
**Key Commands:**
```bash
elevenlabs agents init
elevenlabs agents add "Bot" --template customer-service
elevenlabs agents push --env prod --dry-run # Preview
elevenlabs agents push --env prod # Deploy
elevenlabs agents pull # Import existing
elevenlabs agents test "Bot" # 2025: Enhanced testing
elevenlabs tools add-webhook "Weather" --config-path tool_configs/weather.json
elevenlabs tools push
elevenlabs tests add "Test" --template basic-llm
elevenlabs tests push
```
**Multi-Environment:** Create `agent.dev.json`, `agent.staging.json`, `agent.prod.json` for overrides
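One way to think about environment overrides is a deep merge of the env file onto the base config. The merge rules here (recurse into plain objects, replace everything else) are our assumption for illustration, not the CLI's documented behavior:

```typescript
// Sketch: apply agent.prod.json overrides on top of a base agent config.
type Json = { [key: string]: unknown };

function mergeConfig(base: Json, override: Json): Json {
  const out: Json = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = out[key];
    const bothObjects =
      value !== null && typeof value === 'object' && !Array.isArray(value) &&
      existing !== null && typeof existing === 'object' && !Array.isArray(existing);
    out[key] = bothObjects
      ? mergeConfig(existing as Json, value as Json)
      : value; // primitives and arrays replace wholesale
  }
  return out;
}
```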
**CI/CD:** GitHub Actions with `--dry-run` validation before deploy
**.gitignore:** `.env`, `.elevenlabs/`, `*.secret.json`
---
## 13. Common Errors & Solutions (17 Documented)
### Error 1: Missing Required Dynamic Variables
**Cause:** Variables referenced in prompts not provided at conversation start
**Solution:** Provide all variables in `dynamic_variables: { user_name: "John", ... }`
### Error 2: Case-Sensitive Tool Names
**Cause:** Tool name mismatch (case-sensitive)
**Solution:** Ensure `tool_ids: ["orderLookup"]` matches `name: "orderLookup"` exactly
### Error 3: Webhook Authentication Failures
**Cause:** Incorrect HMAC signature, not returning 200, or 10+ failures
**Solution:** Verify `hmac = crypto.createHmac('sha256', SECRET).update(payload).digest('hex')` and return 200
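A fuller sketch of that verification, using a constant-time comparison so the check itself does not leak timing information. The header name and exact signature format are not specified here, so treat the surrounding wiring as an assumption and check the webhook docs:

```typescript
// Sketch: recompute the HMAC over the raw request body and compare in constant time.
import { createHmac, timingSafeEqual } from 'node:crypto';

function verifyWebhookSignature(
  rawBody: string,      // must be the raw bytes, not re-serialized JSON
  receivedHex: string,  // hex digest taken from the signature header
  secret: string,
): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected, 'hex');
  const b = Buffer.from(receivedHex, 'hex');
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Respond with 200 when verification passes; anything else counts toward the 10-failure auto-disable.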
### Error 4: Voice Consistency Issues
**Cause:** Background noise, inconsistent mic distance, extreme volumes in training
**Solution:** Use clean audio, consistent distance, avoid extremes
### Error 5: Wrong Language Voice
**Cause:** English-trained voice for non-English language
**Solution:** Use language-matched voices: `{ "language": "es", "voice_id": "spanish_voice" }`
### Error 6: Restricted API Keys Not Supported (CLI)
**Cause:** CLI doesn't support restricted API keys
**Solution:** Use unrestricted API key for CLI
### Error 7: Agent Configuration Push Conflicts
**Cause:** Hash-based change detection missed modification
**Solution:** `elevenlabs agents init --override` + `elevenlabs agents pull` + push
### Error 8: Tool Parameter Schema Mismatch
**Cause:** Schema doesn't match usage
**Solution:** Add clear descriptions: `"description": "Order ID (format: ORD-12345)"`
### Error 9: RAG Index Not Ready
**Cause:** Index still computing (takes minutes)
**Solution:** Check `index.status === 'ready'` before using
### Error 10: WebSocket Protocol Error (1002)
**Cause:** Network instability or incompatible browser
**Solution:** Use WebRTC instead, implement reconnection logic
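The reconnection logic can be as simple as exponential backoff with jitter. A sketch under the assumption that `connect` is any async function that resolves once the session is re-established:

```typescript
// Sketch: retry a dropped connection with exponential backoff plus jitter.
async function reconnectWithBackoff(
  connect: () => Promise<void>,
  { maxRetries = 5, baseDelayMs = 500 } = {},
): Promise<void> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      await connect();
      return;
    } catch {
      // 500ms, 1s, 2s, ... with up to 100ms of jitter to avoid thundering herds
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw new Error('Reconnect failed after max retries');
}
```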
### Error 11: 401 Unauthorized in Production
**Cause:** Agent visibility or API key config
**Solution:** Check visibility (public/private), verify API key in prod, check allowlist
### Error 12: Allowlist Connection Errors
**Cause:** Allowlist enabled but using shared link
**Solution:** Configure allowlist domains or disable for testing
### Error 13: Workflow Infinite Loops
**Cause:** Edge conditions creating loops
**Solution:** Add max iteration limits, test all paths, explicit exit conditions
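A hard iteration cap turns a silent loop into a loud error. This sketch walks edges using the workflow JSON shape from Section 2 (simplified to one outgoing edge per node for illustration):

```typescript
// Sketch: traverse workflow edges with a step cap so a looping edge
// condition fails fast instead of running forever.
interface Edge {
  from: string;
  to: string;
}

function walkWorkflow(start: string, edges: Edge[], maxSteps = 50): string[] {
  const path = [start];
  let current = start;
  for (let step = 0; step < maxSteps; step++) {
    const next = edges.find((e) => e.from === current);
    if (!next) return path; // terminal node reached
    current = next.to;
    path.push(current);
  }
  throw new Error(`Workflow exceeded ${maxSteps} steps; check edge conditions for loops`);
}
```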
### Error 14: Burst Pricing Not Enabled
**Cause:** Burst not enabled in settings
**Solution:** `{ "call_limits": { "burst_pricing_enabled": true } }`
### Error 15: MCP Server Timeout
**Cause:** MCP server slow/unreachable
**Solution:** Check URL accessible, verify transport (SSE/HTTP), check auth, monitor logs
### Error 16: First Message Cutoff on Android
**Cause:** Android needs time to switch audio mode
**Solution:** `connectionDelay: { android: 3_000, ios: 0 }` (3s for audio routing)
### Error 17: CSP (Content Security Policy) Violations
**Cause:** Strict CSP blocks `blob:` URLs. SDK uses Audio Worklets loaded as blobs
**Solution:** Self-host worklets:
1. `cp node_modules/@elevenlabs/client/dist/worklets/*.js public/elevenlabs/`
2. Configure: `workletPaths: { 'rawAudioProcessor': '/elevenlabs/rawAudioProcessor.worklet.js', 'audioConcatProcessor': '/elevenlabs/audioConcatProcessor.worklet.js' }`
3. Update CSP: `script-src 'self' https://elevenlabs.io; worker-src 'self';`
**Gotcha:** Update worklets when upgrading `@elevenlabs/client`
---
## Integration with Existing Skills
This skill composes well with:
- **cloudflare-worker-base** → Deploy agents on Cloudflare Workers edge network
- **cloudflare-workers-ai** → Use Cloudflare LLMs as custom models in agents
- **cloudflare-durable-objects** → Persistent conversation state and session management
- **cloudflare-kv** → Cache agent configurations and user preferences
- **nextjs** → React SDK integration in Next.js applications
- **ai-sdk-core** → Vercel AI SDK provider for unified AI interface
- **clerk-auth** → Authenticated voice sessions with user identity
- **hono-routing** → API routes for webhooks and server tools
---
## Additional Resources
**Official Documentation**:
- Platform Overview: https://elevenlabs.io/docs/agents-platform/overview
- API Reference: https://elevenlabs.io/docs/api-reference
- CLI GitHub: https://github.com/elevenlabs/cli
**Examples**:
- Official Examples: https://github.com/elevenlabs/elevenlabs-examples
- MCP Server: https://github.com/elevenlabs/elevenlabs-mcp
**Community**:
- Discord: https://discord.com/invite/elevenlabs
- Twitter: @elevenlabsio
---
**Production Tested**: WordPress Auditor, Customer Support Agents
**Last Updated**: 2025-11-25
**Package Versions**: elevenlabs@1.59.0, @elevenlabs/elevenlabs-js@2.25.0, @elevenlabs/agents-cli@0.6.1, @elevenlabs/react@0.11.3, @elevenlabs/client@0.11.3, @elevenlabs/react-native@0.5.4

@@ -0,0 +1,111 @@
{
"name": "Support Agent",
"conversation_config": {
"agent": {
"prompt": {
"prompt": "You are a helpful customer support agent...",
"llm": "gpt-4o-mini",
"temperature": 0.7,
"max_tokens": 500,
"tool_ids": ["tool_123"],
"knowledge_base": ["doc_456"],
"custom_llm": {
"endpoint": "https://api.openai.com/v1/chat/completions",
"api_key": "{{secret__openai_api_key}}",
"model": "gpt-4"
}
},
"first_message": "Hello! How can I help you today?",
"language": "en"
},
"tts": {
"model_id": "eleven_turbo_v2_5",
"voice_id": "your_voice_id",
"stability": 0.5,
"similarity_boost": 0.75,
"speed": 1.0,
"output_format": "pcm_22050"
},
"asr": {
"quality": "high",
"provider": "deepgram",
"keywords": ["product_name", "company_name"]
},
"turn": {
"mode": "normal",
"turn_timeout": 5000
},
"conversation": {
"max_duration_seconds": 600
},
"language_presets": [
{
"language": "en",
"voice_id": "en_voice_id",
"first_message": "Hello! How can I help you?"
},
{
"language": "es",
"voice_id": "es_voice_id",
"first_message": "¡Hola! ¿Cómo puedo ayudarte?"
}
]
},
"workflow": {
"nodes": [
{
"id": "node_1",
"type": "subagent",
"config": {
"system_prompt": "You are now handling technical support...",
"turn_eagerness": "patient",
"voice_id": "tech_voice_id"
}
},
{
"id": "node_2",
"type": "tool",
"tool_name": "transfer_to_human"
}
],
"edges": [
{
"from": "node_1",
"to": "node_2",
"condition": "user_requests_escalation"
}
]
},
"platform_settings": {
"widget": {
"theme": {
"primaryColor": "#3B82F6",
"backgroundColor": "#1F2937",
"textColor": "#F9FAFB"
},
"position": "bottom-right"
},
"authentication": {
"type": "signed_url",
"session_duration": 3600
},
"privacy": {
"transcripts": {
"retention_days": 730
},
"audio": {
"retention_days": 2190
},
"zero_retention": false
}
},
"webhooks": {
"post_call": {
"url": "https://api.example.com/webhook",
"headers": {
"Authorization": "Bearer {{secret__webhook_auth_token}}"
}
}
},
"tags": ["customer-support", "production"]
}

assets/ci-cd-example.yml

@@ -0,0 +1,77 @@
name: Deploy ElevenLabs Agent
on:
push:
branches: [main]
paths:
- 'agent_configs/**'
- 'tool_configs/**'
- 'test_configs/**'
pull_request:
branches: [main]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install ElevenLabs CLI
run: npm install -g @elevenlabs/agents-cli
- name: Dry Run (Preview Changes)
run: elevenlabs agents push --env staging --dry-run
env:
ELEVENLABS_API_KEY: ${{ secrets.ELEVENLABS_API_KEY_STAGING }}
- name: Push to Staging
if: github.event_name == 'pull_request'
run: elevenlabs agents push --env staging
env:
ELEVENLABS_API_KEY: ${{ secrets.ELEVENLABS_API_KEY_STAGING }}
- name: Run Tests
if: github.event_name == 'pull_request'
run: |
elevenlabs tests push --env staging
elevenlabs agents test "Support Agent"
env:
ELEVENLABS_API_KEY: ${{ secrets.ELEVENLABS_API_KEY_STAGING }}
deploy:
runs-on: ubuntu-latest
needs: test
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install ElevenLabs CLI
run: npm install -g @elevenlabs/agents-cli
- name: Deploy to Production
run: elevenlabs agents push --env prod
env:
ELEVENLABS_API_KEY: ${{ secrets.ELEVENLABS_API_KEY_PROD }}
- name: Verify Deployment
run: elevenlabs agents status
env:
ELEVENLABS_API_KEY: ${{ secrets.ELEVENLABS_API_KEY_PROD }}
- name: Notify on Success
if: success()
run: echo "✅ Agent deployed to production successfully"
- name: Notify on Failure
if: failure()
run: echo "❌ Deployment failed"

@@ -0,0 +1,215 @@
import { Conversation } from '@elevenlabs/client';
// Configuration
const AGENT_ID = 'your-agent-id';
const API_KEY = process.env.ELEVENLABS_API_KEY; // Server-side only, never expose in browser
// Initialize conversation
const conversation = new Conversation({
agentId: AGENT_ID,
// Authentication (choose one)
// Option 1: API key (for private agents)
apiKey: API_KEY,
// Option 2: Signed URL (most secure)
// signedUrl: 'https://api.elevenlabs.io/v1/convai/auth/...',
// Client tools (browser-side functions)
clientTools: {
updateCart: {
description: "Update shopping cart",
parameters: {
type: "object",
properties: {
item: { type: "string" },
quantity: { type: "number" }
},
required: ["item", "quantity"]
},
handler: async ({ item, quantity }) => {
console.log('Cart updated:', item, quantity);
// Your cart logic here
return { success: true };
}
}
},
// Event handlers
onConnect: () => {
console.log('Connected to agent');
updateStatus('connected');
clearTranscript();
},
onDisconnect: () => {
console.log('Disconnected from agent');
updateStatus('disconnected');
},
onEvent: (event) => {
switch (event.type) {
case 'transcript':
addToTranscript('user', event.data.text);
break;
case 'agent_response':
addToTranscript('agent', event.data.text);
break;
case 'tool_call':
console.log('Tool called:', event.data.tool_name);
break;
case 'error':
console.error('Agent error:', event.data);
showError(event.data.message);
break;
}
},
onError: (error) => {
console.error('Connection error:', error);
showError(error.message);
},
// Regional compliance
serverLocation: 'us' // 'us' | 'global' | 'eu-residency' | 'in-residency'
});
// UI Helpers
function updateStatus(status) {
const statusEl = document.getElementById('status');
if (statusEl) {
statusEl.textContent = `Status: ${status}`;
}
}
function addToTranscript(role, text) {
const transcriptEl = document.getElementById('transcript');
if (transcriptEl) {
const messageEl = document.createElement('div');
messageEl.className = `message ${role}`;
// Build nodes with textContent (not innerHTML) so transcript text cannot inject HTML
const strongEl = document.createElement('strong');
strongEl.textContent = `${role === 'user' ? 'You' : 'Agent'}:`;
const textEl = document.createElement('p');
textEl.textContent = text;
messageEl.append(strongEl, textEl);
transcriptEl.appendChild(messageEl);
transcriptEl.scrollTop = transcriptEl.scrollHeight;
}
}
function clearTranscript() {
const transcriptEl = document.getElementById('transcript');
if (transcriptEl) {
transcriptEl.innerHTML = '';
}
}
function showError(message) {
const errorEl = document.getElementById('error');
if (errorEl) {
errorEl.textContent = `Error: ${message}`;
errorEl.style.display = 'block';
}
}
function hideError() {
const errorEl = document.getElementById('error');
if (errorEl) {
errorEl.style.display = 'none';
}
}
// Button event listeners
document.getElementById('start-btn')?.addEventListener('click', async () => {
try {
hideError();
await conversation.start();
} catch (error) {
console.error('Failed to start conversation:', error);
showError(error.message);
}
});
document.getElementById('stop-btn')?.addEventListener('click', async () => {
try {
await conversation.stop();
} catch (error) {
console.error('Failed to stop conversation:', error);
showError(error.message);
}
});
// HTML Template
/*
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ElevenLabs Voice Agent</title>
<style>
body {
font-family: Arial, sans-serif;
max-width: 600px;
margin: 50px auto;
padding: 20px;
}
button {
padding: 10px 20px;
margin: 5px;
cursor: pointer;
}
#status {
margin: 10px 0;
padding: 10px;
background: #f0f0f0;
border-radius: 4px;
}
#error {
display: none;
margin: 10px 0;
padding: 10px;
background: #ffebee;
color: #c62828;
border-radius: 4px;
}
#transcript {
margin-top: 20px;
padding: 10px;
border: 1px solid #ddd;
border-radius: 4px;
max-height: 400px;
overflow-y: auto;
}
.message {
margin: 10px 0;
padding: 10px;
border-radius: 4px;
}
.message.user {
background: #e3f2fd;
}
.message.agent {
background: #f5f5f5;
}
</style>
</head>
<body>
<h1>ElevenLabs Voice Agent</h1>
<div>
<button id="start-btn">Start Conversation</button>
<button id="stop-btn">Stop</button>
</div>
<div id="status">Status: disconnected</div>
<div id="error"></div>
<div id="transcript"></div>
<script type="module" src="./app.js"></script>
</body>
</html>
*/


@@ -0,0 +1,62 @@
import { useConversation } from '@elevenlabs/react-native';
import { View, Button, Text, ScrollView } from 'react-native';
import { z } from 'zod';
import { useState } from 'react';
export default function VoiceAgent() {
const [transcript, setTranscript] = useState<Array<{ role: string; text: string }>>([]);
const { startConversation, stopConversation, status } = useConversation({
agentId: process.env.EXPO_PUBLIC_ELEVENLABS_AGENT_ID!,
// Use signed URL (most secure)
signedUrl: async () => {
const response = await fetch('https://your-api.com/elevenlabs/auth');
const { signedUrl } = await response.json();
return signedUrl;
},
clientTools: {
updateProfile: {
description: "Update user profile",
parameters: z.object({
name: z.string()
}),
handler: async ({ name }) => {
console.log('Updating profile:', name);
return { success: true };
}
}
},
onEvent: (event) => {
if (event.type === 'transcript') {
setTranscript(prev => [...prev, { role: 'user', text: event.data.text }]);
} else if (event.type === 'agent_response') {
setTranscript(prev => [...prev, { role: 'agent', text: event.data.text }]);
}
}
});
return (
<View style={{ padding: 20 }}>
<Text style={{ fontSize: 24, fontWeight: 'bold', marginBottom: 20 }}>Voice Agent</Text>
<View style={{ flexDirection: 'row', gap: 10, marginBottom: 20 }}>
<Button title="Start" onPress={startConversation} disabled={status === 'connected'} />
<Button title="Stop" onPress={stopConversation} disabled={status !== 'connected'} />
</View>
<Text>Status: {status}</Text>
<ScrollView style={{ marginTop: 20, maxHeight: 400 }}>
{transcript.map((msg, i) => (
<View key={i} style={{ padding: 10, marginBottom: 10, backgroundColor: msg.role === 'user' ? '#e3f2fd' : '#f5f5f5' }}>
<Text style={{ fontWeight: 'bold' }}>{msg.role === 'user' ? 'You' : 'Agent'}</Text>
<Text>{msg.text}</Text>
</View>
))}
</ScrollView>
</View>
);
}


@@ -0,0 +1,166 @@
import { useConversation } from '@elevenlabs/react';
import { z } from 'zod';
import { useState } from 'react';
export default function VoiceAgent() {
const [transcript, setTranscript] = useState<Array<{ role: 'user' | 'agent'; text: string }>>([]);
const [error, setError] = useState<string | null>(null);
const {
startConversation,
stopConversation,
status,
isSpeaking
} = useConversation({
// Agent Configuration
agentId: process.env.NEXT_PUBLIC_ELEVENLABS_AGENT_ID!,
// Authentication (choose one)
// Option 1: API key (for private agents, less secure)
// apiKey: process.env.NEXT_PUBLIC_ELEVENLABS_API_KEY,
// Option 2: Signed URL (most secure, recommended for production)
signedUrl: async () => {
const response = await fetch('/api/elevenlabs/auth');
const { signedUrl } = await response.json();
return signedUrl;
},
// Client-side tools (browser functions)
clientTools: {
updateCart: {
description: "Update the shopping cart with items",
parameters: z.object({
item: z.string().describe("The item name"),
quantity: z.number().describe("Quantity to add"),
action: z.enum(['add', 'remove']).describe("Add or remove item")
}),
handler: async ({ item, quantity, action }) => {
console.log(`${action} ${quantity}x ${item}`);
// Your cart logic here
return { success: true, total: 99.99 };
}
},
navigate: {
description: "Navigate to a different page",
parameters: z.object({
url: z.string().url().describe("The URL to navigate to")
}),
handler: async ({ url }) => {
window.location.href = url;
return { success: true };
}
}
},
// Event handlers
onConnect: () => {
console.log('Connected to agent');
setTranscript([]);
setError(null);
},
onDisconnect: () => {
console.log('Disconnected from agent');
},
onEvent: (event) => {
switch (event.type) {
case 'transcript':
setTranscript(prev => [
...prev,
{ role: 'user', text: event.data.text }
]);
break;
case 'agent_response':
setTranscript(prev => [
...prev,
{ role: 'agent', text: event.data.text }
]);
break;
case 'tool_call':
console.log('Tool called:', event.data.tool_name, event.data.parameters);
break;
case 'error':
console.error('Agent error:', event.data);
setError(event.data.message);
break;
}
},
onError: (error) => {
console.error('Connection error:', error);
setError(error.message);
},
// Regional compliance (for GDPR)
serverLocation: 'us' // 'us' | 'global' | 'eu-residency' | 'in-residency'
});
return (
<div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
<h1 className="text-2xl font-bold mb-4">Voice Agent</h1>
{/* Controls */}
<div className="flex gap-2 mb-4">
<button
onClick={startConversation}
disabled={status === 'connected'}
className="px-4 py-2 bg-blue-500 text-white rounded disabled:bg-gray-300"
>
Start Conversation
</button>
<button
onClick={stopConversation}
disabled={status !== 'connected'}
className="px-4 py-2 bg-red-500 text-white rounded disabled:bg-gray-300"
>
Stop
</button>
</div>
{/* Status */}
<div className="mb-4 p-2 bg-gray-100 rounded">
<p>Status: <span className="font-semibold">{status}</span></p>
{isSpeaking && <p className="text-blue-600">Agent is speaking...</p>}
</div>
{/* Error */}
{error && (
<div className="mb-4 p-2 bg-red-100 border border-red-400 text-red-700 rounded">
Error: {error}
</div>
)}
{/* Transcript */}
<div className="flex-1 overflow-y-auto border rounded p-4 space-y-2">
<h2 className="font-semibold mb-2">Transcript</h2>
{transcript.length === 0 ? (
<p className="text-gray-500">No conversation yet. Click "Start Conversation" to begin.</p>
) : (
transcript.map((message, i) => (
<div
key={i}
className={`p-2 rounded ${
message.role === 'user'
? 'bg-blue-100 ml-8'
: 'bg-gray-100 mr-8'
}`}
>
<p className="text-xs font-semibold mb-1">
{message.role === 'user' ? 'You' : 'Agent'}
</p>
<p>{message.text}</p>
</div>
))
)}
</div>
</div>
);
}


@@ -0,0 +1,70 @@
import SwiftUI
import ElevenLabs
struct VoiceAgentView: View {
@State private var isConnected = false
@State private var transcript: [(role: String, text: String)] = []
private let agentID = "your-agent-id"
private let apiKey = "your-api-key" // Use environment variable in production
var body: some View {
VStack {
Text("Voice Agent")
.font(.largeTitle)
.padding()
HStack {
Button("Start Conversation") {
startConversation()
}
.disabled(isConnected)
Button("Stop") {
stopConversation()
}
.disabled(!isConnected)
}
.padding()
Text("Status: \(isConnected ? "Connected" : "Disconnected")")
.padding()
ScrollView {
ForEach(transcript.indices, id: \.self) { index in
let message = transcript[index]
HStack {
VStack(alignment: .leading) {
Text(message.role == "user" ? "You" : "Agent")
.font(.caption)
.fontWeight(.bold)
Text(message.text)
}
.padding()
.background(message.role == "user" ? Color.blue.opacity(0.1) : Color.gray.opacity(0.1))
.cornerRadius(8)
Spacer()
}
.padding(.horizontal)
}
}
}
}
private func startConversation() {
// Initialize ElevenLabs conversation
// Implementation would use the ElevenLabs Swift SDK
isConnected = true
}
private func stopConversation() {
isConnected = false
}
}
#Preview {
VoiceAgentView()
}
// Note: This is a placeholder. Full Swift SDK documentation available at:
// https://github.com/elevenlabs/elevenlabs-swift-sdk


@@ -0,0 +1,210 @@
# System Prompt Template
Use this template to create structured, effective agent prompts.
---
## Personality
```
You are [NAME], a [ROLE/PROFESSION] at [COMPANY].
You have [YEARS] years of experience [DOING WHAT].
Your key traits: [LIST 3-5 PERSONALITY TRAITS].
```
**Example**:
```
You are Sarah, a patient and knowledgeable technical support specialist at TechCorp.
You have 7 years of experience helping customers troubleshoot software issues.
Your key traits: patient, empathetic, detail-oriented, solution-focused, friendly.
```
---
## Environment
```
You're communicating via [CHANNEL: phone/chat/video].
Context: [ENVIRONMENTAL FACTORS].
Communication style: [GUIDELINES].
```
**Example**:
```
You're speaking with customers over the phone.
Context: Background noise and poor connections are common.
Communication style: Speak clearly, use short sentences, pause occasionally for emphasis.
```
---
## Tone
```
Formality: [PROFESSIONAL/CASUAL/FORMAL].
Language: [CONTRACTIONS/JARGON GUIDELINES].
Verbosity: [SENTENCE/RESPONSE LENGTH].
Emotional Expression: [HOW TO EXPRESS EMPATHY/ENTHUSIASM].
```
**Example**:
```
Formality: Professional yet warm and approachable.
Language: Use contractions for natural conversation. Avoid jargon unless customer uses it first.
Verbosity: 2-3 sentences per response. Ask one question at a time.
Emotional Expression: Show empathy with phrases like "I understand how frustrating that must be."
```
---
## Goal
```
Primary Goal: [MAIN OBJECTIVE]
Secondary Goals:
- [SUPPORTING OBJECTIVE 1]
- [SUPPORTING OBJECTIVE 2]
- [SUPPORTING OBJECTIVE 3]
Success Criteria:
- [MEASURABLE OUTCOME 1]
- [MEASURABLE OUTCOME 2]
```
**Example**:
```
Primary Goal: Resolve customer technical issues on the first call.
Secondary Goals:
- Verify customer identity securely
- Document issue details accurately
- Provide proactive tips to prevent future issues
Success Criteria:
- Customer verbally confirms issue is resolved
- Issue documented in CRM
- Customer satisfaction ≥ 4/5
```
---
## Guardrails
```
Never:
- [PROHIBITED ACTION 1]
- [PROHIBITED ACTION 2]
- [PROHIBITED ACTION 3]
Always:
- [REQUIRED ACTION 1]
- [REQUIRED ACTION 2]
Escalate When:
- [ESCALATION TRIGGER 1]
- [ESCALATION TRIGGER 2]
```
**Example**:
```
Never:
- Provide medical, legal, or financial advice
- Share confidential company information
- Make promises about refunds without verification
- Continue if customer becomes abusive
Always:
- Verify customer identity before accessing account details
- Document all interactions
- Offer alternative solutions if first approach fails
Escalate When:
- Customer requests manager
- Issue requires account credit/refund approval
- Technical issue beyond knowledge base
- Customer exhibits abusive behavior
```
---
## Tools
```
Available Tools:
1. tool_name(param1, param2)
Purpose: [WHAT IT DOES]
Use When: [TRIGGER CONDITION]
Example: [SAMPLE USAGE]
2. ...
Guidelines:
- Always explain to customer before calling tool
- Wait for tool response before continuing
- If tool fails, offer alternative
```
**Example**:
```
Available Tools:
1. lookup_order(order_id: string)
Purpose: Fetch order details from database
Use When: Customer mentions order number or asks about order status
Example: "Let me look that up for you. [Call lookup_order('ORD-12345')]"
2. send_password_reset(email: string)
Purpose: Trigger password reset email
Use When: Customer can't access account and identity verified
Example: "I'll send a password reset email. [Call send_password_reset('user@example.com')]"
3. transfer_to_supervisor()
Purpose: Escalate to human agent
Use When: Issue requires manager approval or customer explicitly requests
Example: "Let me connect you with a supervisor. [Call transfer_to_supervisor()]"
Guidelines:
- Always explain what you're doing before calling tool
- Wait for tool response before continuing conversation
- If tool fails, acknowledge and offer alternative solution
```
---
## Complete Prompt
Combine all sections into your final system prompt:
```
Personality:
You are [NAME], a [ROLE] at [COMPANY]. You have [EXPERIENCE]. Your traits: [TRAITS].
Environment:
You're communicating via [CHANNEL]. [CONTEXT]. [COMMUNICATION STYLE].
Tone:
[FORMALITY]. [LANGUAGE]. [VERBOSITY]. [EMOTIONAL EXPRESSION].
Goal:
Primary: [PRIMARY GOAL]
Secondary: [SECONDARY GOALS]
Success: [SUCCESS CRITERIA]
Guardrails:
Never: [PROHIBITIONS]
Always: [REQUIREMENTS]
Escalate: [TRIGGERS]
Tools:
[TOOL DESCRIPTIONS WITH EXAMPLES]
```
---
## Testing Your Prompt
1. Create test scenarios covering common use cases
2. Run conversations and analyze transcripts
3. Check for:
- Tone consistency
- Goal achievement
- Guardrail adherence
- Tool usage accuracy
4. Iterate based on findings
5. Monitor analytics dashboard for real performance
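
Steps 1–2 can be scripted against the simulation endpoint documented in this skill's API reference. A minimal sketch (hedged: the endpoint shape follows references/api-reference.md, and `fetchImpl` is an injected assumption so the helper can be exercised without a live API key):

```javascript
// Hypothetical helper: runs one simulated conversation against an agent and
// reports which success criteria failed. Endpoint shape follows the
// POST /agents/:agent_id/simulate example in references/api-reference.md.
async function simulatePrompt(
  { agentId, apiKey, scenario, userMessages, successCriteria },
  fetchImpl = fetch
) {
  const response = await fetchImpl(
    `https://api.elevenlabs.io/v1/convai/agents/${agentId}/simulate`,
    {
      method: 'POST',
      headers: { 'xi-api-key': apiKey, 'Content-Type': 'application/json' },
      body: JSON.stringify({
        scenario,
        user_messages: userMessages,
        success_criteria: successCriteria
      })
    }
  );
  if (!response.ok) throw new Error(`Simulation failed: ${response.status}`);
  const result = await response.json();
  // Summarize: overall pass/fail plus any criteria that were not met
  const failed = result.evaluation.details
    .filter((d) => !d.passed)
    .map((d) => d.criterion);
  return { passed: result.passed, failed };
}
```

On a real run, omit the second argument and the global `fetch` is used; in CI you can inject a stub to test the evaluation handling.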


@@ -0,0 +1,78 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ElevenLabs Voice Agent Widget</title>
</head>
<body>
<h1>Welcome to Our Support</h1>
<p>Need help? Click the voice assistant button in the bottom-right corner!</p>
<!-- ElevenLabs Widget -->
<script src="https://elevenlabs.io/convai-widget/index.js"></script>
<script>
ElevenLabsWidget.init({
// Required: Your agent ID
agentId: 'your-agent-id',
// Optional: Theming
theme: {
primaryColor: '#3B82F6', // Blue
backgroundColor: '#1F2937', // Dark gray
textColor: '#F9FAFB', // Light gray
accentColor: '#10B981' // Green
},
// Optional: Position
position: 'bottom-right', // or 'bottom-left'
// Optional: Custom branding
branding: {
logo: 'https://example.com/logo.png',
name: 'Support Assistant',
tagline: 'How can I help you today?'
},
// Optional: Customize button
button: {
size: 'medium', // 'small' | 'medium' | 'large'
icon: 'microphone', // 'microphone' | 'chat' | 'phone'
text: 'Talk to us' // Optional button label
},
// Optional: Auto-open widget
autoOpen: false,
autoOpenDelay: 3000, // milliseconds
// Optional: Welcome message
welcomeMessage: {
enabled: true,
message: "Hi! I'm here to help. Click to start a voice conversation."
},
// Optional: Callbacks
onOpen: () => {
console.log('Widget opened');
},
onClose: () => {
console.log('Widget closed');
},
onConversationStart: () => {
console.log('Conversation started');
},
onConversationEnd: () => {
console.log('Conversation ended');
}
});
</script>
<!-- Optional: Custom styling -->
<style>
/* Override widget styles if needed */
.elevenlabs-widget {
/* Custom styles */
}
</style>
</body>
</html>

plugin.lock.json Normal file

@@ -0,0 +1,125 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:jezweb/claude-skills:skills/elevenlabs-agents",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "799ad0048eddd949e159d5079bd98723c7c83a86",
"treeHash": "c33b4ac9169f56f6656d7a375536a10474750451e17493db80bf59934dcecc7e",
"generatedAt": "2025-11-28T10:19:02.870530Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "elevenlabs-agents",
"description": "Build conversational AI voice agents with ElevenLabs Platform using React, JavaScript, React Native, or Swift SDKs. Configure agents, tools (client/server/MCP), RAG knowledge bases, multi-voice, and Scribe real-time STT. Use when: building voice chat interfaces, implementing AI phone agents with Twilio, configuring agent workflows or tools, adding RAG knowledge bases, testing with CLI agents as code, or troubleshooting deprecated @11labs packages, Android audio cutoff, CSP violations, dynamic va",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "7a738c23198bdc7b22d80a41efeb73dedc6620031e56a88f616ce2c773e64fac"
},
{
"path": "SKILL.md",
"sha256": "e3d13197550d228b52a12b67b88c954be6f673c8f8632d5d59864234e3e9cee9"
},
{
"path": "references/cost-optimization.md",
"sha256": "0b8bba82aea1132fd32f1f641a425f5c29d0e671780379f359751f315af984e3"
},
{
"path": "references/system-prompt-guide.md",
"sha256": "b7cb12ec838532ddf23750907f81a3cb074237a9ff86610e9503a96cd2893175"
},
{
"path": "references/cli-commands.md",
"sha256": "e572d52139ff613c2e8b014d3702333127c99833fe2fdb78282fa3ecd745504c"
},
{
"path": "references/compliance-guide.md",
"sha256": "57ad846c52b99829ff2c9fe8f3487021c758cea83327a5ba6d84820e7ab04242"
},
{
"path": "references/testing-guide.md",
"sha256": "23ff831a472080e8c4b0ddbd5897b018909c2a5a31bbbb3ee3d20668346e34c8"
},
{
"path": "references/workflow-examples.md",
"sha256": "ade5909e4c37fb574acead086181702c23fe1c4c796a01627dfdb0b007567a71"
},
{
"path": "references/tool-examples.md",
"sha256": "448239060b5c89acae1cb30dcdcbf27783a08344801f2c57e55f6cdbd7f450de"
},
{
"path": "references/api-reference.md",
"sha256": "e8ca6b215f8854696919e5d28d464ab7958c5cabad78721207ce18327a45085d"
},
{
"path": "scripts/test-agent.sh",
"sha256": "fd3b7e7f8e347d06c899d0bea2b6ab9464a76c6e9284890245f5c93a13b0d051"
},
{
"path": "scripts/deploy-agent.sh",
"sha256": "aa30e299d4db74ecfecffad5d088918ee21607c2a38c512c30e23b72eea9d680"
},
{
"path": "scripts/create-agent.sh",
"sha256": "103e8cbb95e7b75053585cbfc93ca01f98f3a4953521412fb9876d670a785b2d"
},
{
"path": "scripts/simulate-conversation.sh",
"sha256": "a8eb469088a969f9186ba6ffe83551040bb14627123817cf4b297290834151c5"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "b844509e8c6225e29536ad2e9d6059a0aecd474b77bb59be7c7997d2a57f3350"
},
{
"path": "assets/agent-config-schema.json",
"sha256": "d46b74c9896a0f4c73a62eddb2d470e2b80d0ac6cccb7713405c1e3ac9b8c408"
},
{
"path": "assets/react-native-boilerplate.tsx",
"sha256": "fcc88f1253a023ea23dc7f2d47733bce25dd1ee50d2e9d480396eb4c8b5c781d"
},
{
"path": "assets/widget-embed-template.html",
"sha256": "e9f76b2f5b962b2e9aa4b839a97836fa7ebd1d3b99c4d5f465c913888009e4a4"
},
{
"path": "assets/react-sdk-boilerplate.tsx",
"sha256": "83e08e4a943e636294e8666301e05a4efb7464d27165a9bd85a967cbf8b2953b"
},
{
"path": "assets/system-prompt-template.md",
"sha256": "b1e92c088d9a8a2a517d093b6d4444d030141c4504bb85bbbb237bc2a8ed1eac"
},
{
"path": "assets/ci-cd-example.yml",
"sha256": "84ddf10354ce34fd1caa19c4e7b58cee2b154e616c259cce34aedf98c53192e1"
},
{
"path": "assets/javascript-sdk-boilerplate.js",
"sha256": "9f7a0aa5147a52ec43a08a43fbb161a23cafeadec9ef15d9cd667ab1ba298982"
},
{
"path": "assets/swift-sdk-boilerplate.swift",
"sha256": "2b013f0c9eb369fb1689a92f23feed280c427563381964426291827f783924e5"
}
],
"dirSha256": "c33b4ac9169f56f6656d7a375536a10474750451e17493db80bf59934dcecc7e"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

references/api-reference.md Normal file

@@ -0,0 +1,413 @@
# ElevenLabs Agents API Reference
## Base URL
```
https://api.elevenlabs.io/v1/convai
```
## Authentication
All requests require an API key in the header:
```bash
curl -H "xi-api-key: YOUR_API_KEY" https://api.elevenlabs.io/v1/convai/agents
```
---
## Agents
### Create Agent
**Endpoint**: `POST /agents/create`
**Request Body**:
```json
{
"name": "Support Agent",
"conversation_config": {
"agent": {
"prompt": {
"prompt": "You are a helpful support agent.",
"llm": "gpt-4o",
"temperature": 0.7,
"max_tokens": 500,
"tool_ids": ["tool_123"],
"knowledge_base": ["doc_456"]
},
"first_message": "Hello! How can I help?",
"language": "en"
},
"tts": {
"model_id": "eleven_turbo_v2_5",
"voice_id": "voice_abc123",
"stability": 0.5,
"similarity_boost": 0.75,
"speed": 1.0
},
"asr": {
"quality": "high",
"provider": "deepgram"
},
"turn": {
"mode": "normal"
}
}
}
```
**Response**:
```json
{
"agent_id": "agent_abc123",
"name": "Support Agent",
"created_at": "2025-11-03T12:00:00Z"
}
```
### Update Agent
**Endpoint**: `PATCH /agents/:agent_id`
**Request Body**: Same as Create Agent
### Get Agent
**Endpoint**: `GET /agents/:agent_id`
**Response**:
```json
{
"agent_id": "agent_abc123",
"name": "Support Agent",
"conversation_config": { ... },
"created_at": "2025-11-03T12:00:00Z",
"updated_at": "2025-11-03T14:00:00Z"
}
```
### List Agents
**Endpoint**: `GET /agents`
**Response**:
```json
{
"agents": [
{
"agent_id": "agent_abc123",
"name": "Support Agent",
"created_at": "2025-11-03T12:00:00Z"
}
]
}
```
### Delete Agent
**Endpoint**: `DELETE /agents/:agent_id`
**Response**:
```json
{
"success": true
}
```
---
## Conversations
### Create Conversation
**Endpoint**: `POST /conversations/create`
**Request Body**:
```json
{
"agent_id": "agent_abc123",
"dynamic_variables": {
"user_name": "John",
"account_tier": "premium"
},
"overrides": {
"agent": {
"prompt": {
"prompt": "Custom prompt override"
}
}
}
}
```
**Response**:
```json
{
"conversation_id": "conv_xyz789",
"signed_url": "wss://api.elevenlabs.io/v1/convai/...",
"created_at": "2025-11-03T12:00:00Z"
}
```
### Get Conversation
**Endpoint**: `GET /conversations/:conversation_id`
**Response**:
```json
{
"conversation_id": "conv_xyz789",
"agent_id": "agent_abc123",
"transcript": "...",
"duration_seconds": 120,
"status": "completed",
"created_at": "2025-11-03T12:00:00Z",
"ended_at": "2025-11-03T12:02:00Z"
}
```
---
## Knowledge Base
### Upload Document
**Endpoint**: `POST /knowledge-base/upload`
**Request Body** (multipart/form-data):
```
file: <binary>
name: "Support Documentation"
```
**Response**:
```json
{
"document_id": "doc_456",
"name": "Support Documentation",
"status": "processing"
}
```
### Compute RAG Index
**Endpoint**: `POST /knowledge-base/:document_id/rag-index`
**Request Body**:
```json
{
"embedding_model": "e5_mistral_7b"
}
```
**Response**:
```json
{
"document_id": "doc_456",
"status": "computing"
}
```
### Get RAG Index Status
**Endpoint**: `GET /knowledge-base/:document_id/rag-index`
**Response**:
```json
{
"document_id": "doc_456",
"status": "ready",
"embedding_model": "e5_mistral_7b",
"created_at": "2025-11-03T12:00:00Z"
}
```
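
Indexing is asynchronous, so clients typically poll the status endpoint above until it reports `ready`. A sketch of that loop (hedged: `getStatus` stands in for a GET to the rag-index endpoint, and `sleep` is injectable for testing):

```javascript
// Poll the RAG index status until it is ready (or a limit is hit).
// getStatus: async () => ({ status: 'processing' | 'computing' | 'ready' | 'failed' })
async function waitForRagIndex(getStatus, { intervalMs = 2000, maxAttempts = 30, sleep } = {}) {
  const wait = sleep ?? ((ms) => new Promise((r) => setTimeout(r, ms)));
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status } = await getStatus();
    if (status === 'ready') return true;
    if (status === 'failed') throw new Error('RAG index computation failed');
    // Still processing/computing: wait before polling again
    await wait(intervalMs);
  }
  throw new Error('Timed out waiting for RAG index');
}
```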
---
## Tools
### Create Webhook Tool
**Endpoint**: `POST /tools/webhook`
**Request Body**:
```json
{
"name": "get_weather",
"description": "Fetch current weather for a city",
"url": "https://api.weather.com/v1/current",
"method": "GET",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "City name"
}
},
"required": ["city"]
},
"headers": {
"Authorization": "Bearer {{secret__weather_api_key}}"
}
}
```
**Response**:
```json
{
"tool_id": "tool_123",
"name": "get_weather",
"created_at": "2025-11-03T12:00:00Z"
}
```
---
## Testing
### Simulate Conversation
**Endpoint**: `POST /agents/:agent_id/simulate`
**Request Body**:
```json
{
"scenario": "Customer requests refund",
"user_messages": [
"I want a refund for order #12345",
"I ordered it last week"
],
"success_criteria": [
"Agent acknowledges request",
"Agent provides timeline"
]
}
```
**Response**:
```json
{
"simulation_id": "sim_123",
"passed": true,
"transcript": "...",
"evaluation": {
"criteria_met": 2,
"criteria_total": 2,
"details": [
{
"criterion": "Agent acknowledges request",
"passed": true
},
{
"criterion": "Agent provides timeline",
"passed": true
}
]
}
}
```
---
## Error Codes
| Code | Meaning | Solution |
|------|---------|----------|
| 400 | Bad Request | Check request body format |
| 401 | Unauthorized | Verify API key is correct |
| 403 | Forbidden | Check agent visibility settings |
| 404 | Not Found | Verify resource ID exists |
| 429 | Rate Limited | Implement backoff strategy |
| 500 | Server Error | Retry with exponential backoff |
---
## Rate Limits
- **Standard Tier**: 100 requests/minute
- **Pro Tier**: 500 requests/minute
- **Enterprise Tier**: Custom limits
**Rate Limit Headers**:
```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1730640000
```
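
The backoff strategy recommended for 429 and 5xx responses can be sketched as a wrapper around any request function (hedged: `doRequest` and `sleep` are injected assumptions, not SDK APIs):

```javascript
// Retry a request with exponential backoff on 429 and 5xx responses.
async function requestWithBackoff(doRequest, { maxRetries = 5, baseDelayMs = 500, sleep } = {}) {
  const wait = sleep ?? ((ms) => new Promise((r) => setTimeout(r, ms)));
  for (let attempt = 0; ; attempt++) {
    const response = await doRequest();
    const retryable = response.status === 429 || response.status >= 500;
    if (!retryable || attempt >= maxRetries) return response;
    // Exponential backoff with a little jitter: ~500ms, 1s, 2s, ...
    const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
    await wait(delay);
  }
}
```

In production, also consider honoring the `X-RateLimit-Reset` header when it is present instead of the computed delay.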
---
## Pagination
**Query Parameters**:
```
?page=1&per_page=50
```
**Response Headers**:
```
X-Total-Count: 250
X-Page: 1
X-Per-Page: 50
```
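
A client can walk every page by combining the query parameters with `X-Total-Count` (hedged sketch: `fetchPage` is an assumed helper that issues one `GET ?page=N&per_page=M` request and parses the items plus the `X-Total-Count` header):

```javascript
// Collect every item across paginated responses.
// fetchPage: async (page, perPage) => ({ items: [...], totalCount: number })
async function fetchAllPages(fetchPage, perPage = 50) {
  const all = [];
  let page = 1;
  while (true) {
    const { items, totalCount } = await fetchPage(page, perPage);
    all.push(...items);
    // Stop once we have everything (or the API returns an empty page)
    if (all.length >= totalCount || items.length === 0) break;
    page++;
  }
  return all;
}
```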
---
## Webhook Events
### Post-Call Webhook
**Event Type**: `post_call_transcription`
**Payload**:
```json
{
"type": "post_call_transcription",
"data": {
"conversation_id": "conv_xyz789",
"agent_id": "agent_abc123",
"transcript": "...",
"duration_seconds": 120,
"analysis": {
"sentiment": "positive",
"resolution": true,
"extracted_data": {}
}
},
"event_timestamp": "2025-11-03T12:02:00Z"
}
```
**Verification** (HMAC SHA-256):
```typescript
import crypto from 'crypto';
const signature = request.headers['elevenlabs-signature'];
const payload = JSON.stringify(request.body);
const hmac = crypto
.createHmac('sha256', process.env.WEBHOOK_SECRET)
.update(payload)
.digest('hex');
// Compare in constant time to avoid timing attacks
if (
  typeof signature !== 'string' ||
  signature.length !== hmac.length ||
  !crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(hmac))
) {
  // Invalid signature: reject the request
}
```
---
## SDK vs API
| Feature | SDK | API |
|---------|-----|-----|
| WebSocket Connection | ✅ | ❌ |
| Client Tools | ✅ | ❌ |
| Real-time Events | ✅ | ❌ |
| Agent Management | ❌ | ✅ |
| Tool Management | ❌ | ✅ |
| Knowledge Base | ❌ | ✅ |
**Recommendation**: Use SDK for conversations, API for agent management.

references/cli-commands.md Normal file

@@ -0,0 +1,274 @@
# CLI Commands Reference
## Installation
```bash
npm install -g @elevenlabs/cli
# or
pnpm install -g @elevenlabs/cli
```
---
## Authentication
### Login
```bash
elevenlabs auth login
```
### Check Current User
```bash
elevenlabs auth whoami
```
### Set Residency
```bash
elevenlabs auth residency eu-residency
# Options: global | eu-residency | in-residency
```
### Logout
```bash
elevenlabs auth logout
```
---
## Project Initialization
### Initialize New Project
```bash
elevenlabs agents init
```
### Recreate Project Structure
```bash
elevenlabs agents init --override
```
---
## Agent Management
### Add Agent
```bash
elevenlabs agents add "Agent Name" --template TEMPLATE
```
**Templates**: default | minimal | voice-only | text-only | customer-service | assistant
### Push to Platform
```bash
# Push all agents
elevenlabs agents push
# Push specific agent
elevenlabs agents push --agent "Agent Name"
# Push to environment
elevenlabs agents push --env prod
# Dry run (preview changes)
elevenlabs agents push --dry-run
```
### Pull from Platform
```bash
# Pull all agents
elevenlabs agents pull
# Pull specific agent
elevenlabs agents pull --agent "Agent Name"
```
### List Agents
```bash
elevenlabs agents list
```
### Check Sync Status
```bash
elevenlabs agents status
```
### Delete Agent
```bash
elevenlabs agents delete AGENT_ID
```
### Generate Widget
```bash
elevenlabs agents widget "Agent Name"
```
---
## Tool Management
### Add Webhook Tool
```bash
elevenlabs tools add-webhook "Tool Name" --config-path tool_configs/tool.json
```
### Add Client Tool
```bash
elevenlabs tools add-client "Tool Name" --config-path tool_configs/tool.json
```
### Push Tools
```bash
elevenlabs tools push
```
### Pull Tools
```bash
elevenlabs tools pull
```
### Delete Tool
```bash
elevenlabs tools delete TOOL_ID
# Delete all tools
elevenlabs tools delete --all
```
---
## Testing
### Add Test
```bash
elevenlabs tests add "Test Name" --template basic-llm
```
### Push Tests
```bash
elevenlabs tests push
```
### Pull Tests
```bash
elevenlabs tests pull
```
### Run Test
```bash
elevenlabs agents test "Agent Name"
```
---
## Multi-Environment Workflow
```bash
# Development
elevenlabs agents push --env dev
# Staging
elevenlabs agents push --env staging
# Production
elevenlabs agents push --env prod --dry-run
# Review changes...
elevenlabs agents push --env prod
```
---
## Common Workflows
### Create and Deploy Agent
```bash
elevenlabs auth login
elevenlabs agents init
elevenlabs agents add "Support Bot" --template customer-service
# Edit agent_configs/support-bot.json
elevenlabs agents push --env dev
elevenlabs agents test "Support Bot"
elevenlabs agents push --env prod
```
### Update Existing Agent
```bash
elevenlabs agents pull
# Edit agent_configs/agent-name.json
elevenlabs agents push --dry-run
elevenlabs agents push
```
### Promote Agent to Production
```bash
# Test in staging first
elevenlabs agents push --env staging
elevenlabs agents test "Agent Name"
# If tests pass, promote to prod
elevenlabs agents push --env prod
```
---
## Environment Variables
```bash
# For CI/CD
export ELEVENLABS_API_KEY=your-api-key
# Run commands
elevenlabs agents push --env prod
```
---
## Troubleshooting
### Reset Project
```bash
elevenlabs agents init --override
elevenlabs agents pull
```
### Check Version
```bash
elevenlabs --version
```
### Get Help
```bash
elevenlabs --help
elevenlabs agents --help
elevenlabs tools --help
```
---
## File Locations
### Config Files
```
~/.elevenlabs/api_key # API key (if not using keychain)
```
### Project Files
```
./agents.json # Agent registry
./tools.json # Tool registry
./tests.json # Test registry
./agent_configs/*.json # Individual agent configs
./tool_configs/*.json # Individual tool configs
./test_configs/*.json # Individual test configs
```
---
## Best Practices
1. **Always use --dry-run** before pushing to production
2. **Commit configs to Git** for version control
3. **Use environment-specific configs** (dev/staging/prod)
4. **Test agents** before deploying
5. **Pull before editing** to avoid conflicts
6. **Use templates** for consistency
7. **Document changes** in commit messages


@@ -0,0 +1,227 @@
# Privacy & Compliance Guide
## GDPR Compliance
### Data Retention
**Default**: 2 years (730 days)
```json
{
"privacy": {
"transcripts": {
"retention_days": 730
},
"audio": {
"retention_days": 730
}
}
}
```
### Right to Be Forgotten
Enable data deletion requests:
```typescript
await client.conversations.delete(conversation_id);
```
### Data Residency
```typescript
const { startConversation } = useConversation({
serverLocation: 'eu-residency' // GDPR-compliant EU data centers
});
```
### User Consent
Inform users before recording:
```json
{
"first_message": "This call will be recorded for quality and training purposes. Do you consent?"
}
```
---
## HIPAA Compliance
### Data Retention
**Minimum**: 6 years (2190 days)
```json
{
"privacy": {
"transcripts": {
"retention_days": 2190
},
"audio": {
"retention_days": 2190
}
}
}
```
### Encryption
- **In Transit**: TLS 1.3 (automatic)
- **At Rest**: AES-256 (automatic)
### Business Associate Agreement (BAA)
Contact ElevenLabs for HIPAA BAA.
### PHI Handling
**Never**:
- Store PHI in dynamic variables
- Log PHI in tool parameters
- Send PHI to third-party tools without BAA
**Always**:
- Use secure authentication
- Verify patient identity
- Document access logs
---
## SOC 2 Compliance
### Security Controls
✅ Encryption in transit and at rest (automatic)
✅ Access controls (API key management)
✅ Audit logs (conversation history)
✅ Incident response (automatic backups)
### Best Practices
```json
{
"authentication": {
"type": "signed_url", // Most secure
"session_duration": 3600 // 1 hour max
}
}
```
---
## Regional Compliance
### US Residency
```typescript
serverLocation: 'us'
```
### EU Residency (GDPR)
```typescript
serverLocation: 'eu-residency'
```
### India Residency
```typescript
serverLocation: 'in-residency'
```
---
## Zero Retention Mode
**Maximum Privacy**: Immediately delete all data after conversation ends.
```json
{
"privacy": {
"zero_retention": true
}
}
```
**Limitations**:
- No conversation history
- No analytics
- No post-call webhooks
- No MCP tool integrations
---
## PCI DSS (Payment Card Industry)
### Never:
❌ Store credit card numbers in conversation logs
❌ Send credit card data to LLM
❌ Log CVV or PIN numbers
### Always:
✅ Use PCI-compliant payment processors (Stripe, PayPal)
✅ Tokenize payment data
✅ Use DTMF keypad for card entry (telephony)
### Example: Secure Payment Collection
```json
{
"system_tools": [
{
"name": "dtmf_playpad",
"description": "Display keypad for secure card entry"
}
]
}
```
---
## Compliance Checklist
### GDPR
- [ ] Data retention ≤ 2 years (or justify longer)
- [ ] EU data residency enabled
- [ ] User consent obtained before recording
- [ ] Data deletion process implemented
- [ ] Privacy policy updated
### HIPAA
- [ ] Data retention ≥ 6 years
- [ ] BAA signed with ElevenLabs
- [ ] Encryption enabled (automatic)
- [ ] Access logs maintained
- [ ] Staff trained on PHI handling
### SOC 2
- [ ] API key security (never expose in client)
- [ ] Use signed URLs for authentication
- [ ] Monitor access logs
- [ ] Incident response plan documented
### PCI DSS
- [ ] Never log card data
- [ ] Use tokenization for payments
- [ ] DTMF keypad for card entry
- [ ] PCI-compliant payment processor
---
## Monitoring & Auditing
### Access Logs
```typescript
const logs = await client.conversations.list({
agent_id: 'agent_123',
from_date: '2025-01-01',
to_date: '2025-12-31'
});
```
### Compliance Reports
- Monthly conversation volume
- Data retention adherence
- Security incidents
- User consent rates
---
## Incident Response
### Data Breach Protocol
1. Identify affected conversations
2. Notify ElevenLabs immediately
3. Delete compromised data
4. Notify affected users (GDPR requirement)
5. Document incident
6. Review security controls
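Steps 1 and 3 can be scripted with the conversations API used above. A sketch, assuming the SDK exposes a `delete` call symmetrical to `list` (verify against the current API reference before relying on it):

```typescript
// List conversations in the affected window, delete them, and return the IDs
// for the incident report. `client.conversations.delete` is an assumption here.
async function purgeWindow(
  client: any,
  agentId: string,
  fromDate: string,
  toDate: string
): Promise<string[]> {
  const logs = await client.conversations.list({
    agent_id: agentId,
    from_date: fromDate,
    to_date: toDate,
  });
  const deleted: string[] = [];
  for (const conv of logs.conversations ?? []) {
    await client.conversations.delete({ conversation_id: conv.conversation_id });
    deleted.push(conv.conversation_id); // retain for incident documentation
  }
  return deleted;
}
```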
### Contact
security@elevenlabs.io

# Cost Optimization Guide
## 1. LLM Caching
### How It Works
- **First request**: Full cost (`input_cache_write`)
- **Subsequent requests**: 10% cost (`input_cache_read`)
- **Cache TTL**: 5 minutes to 1 hour (configurable)
### Configuration
```json
{
"llm_config": {
"caching": {
"enabled": true,
"ttl_seconds": 3600
}
}
}
```
### What Gets Cached
✅ System prompts
✅ Tool definitions
✅ Knowledge base context
✅ Recent conversation history
❌ User messages (always fresh)
❌ Dynamic variables
❌ Tool responses
### Savings
**Up to 90%** on cached inputs
**Example**:
- System prompt: 500 tokens
- Without caching: 500 tokens × 100 conversations = 50,000 tokens
- With caching: 500 tokens (first) + 50 tokens × 99 (cached) = 5,450 tokens
- **Savings**: 89%
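The arithmetic above generalizes to a small helper; the 10% cache-read rate is the figure quoted in "How It Works":

```typescript
// Prompt tokens billed across N conversations with caching enabled:
// the first request writes the cache at full price, the rest read it at 10%.
function billedPromptTokens(
  promptTokens: number,
  conversations: number,
  cacheReadRate = 0.1
): number {
  if (conversations <= 0) return 0;
  return promptTokens + (conversations - 1) * promptTokens * cacheReadRate;
}

const withoutCache = 500 * 100;                 // 50,000 tokens
const withCache = billedPromptTokens(500, 100); // 5,450 tokens
const savings = 1 - withCache / withoutCache;   // ≈ 89%
```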
---
## 2. Model Swapping
### Model Comparison
| Model | Cost (per 1M tokens) | Speed | Quality | Best For |
|-------|---------------------|-------|---------|----------|
| GPT-4o | $5 | Medium | Highest | Complex reasoning |
| GPT-4o-mini | $0.15 | Fast | High | Most use cases |
| Claude Sonnet 4.5 | $3 | Medium | Highest | Long context |
| Gemini 2.5 Flash | $0.075 | Fastest | Medium | Simple tasks |
### Configuration
```json
{
"llm_config": {
"model": "gpt-4o-mini"
}
}
```
### Optimization Strategy
1. **Start with gpt-4o-mini** for all agents
2. **Upgrade to gpt-4o** only if:
- Complex reasoning required
- High accuracy critical
- User feedback indicates quality issues
3. **Use Gemini 2.5 Flash** for:
- Simple routing/classification
- FAQ responses
- Order status lookups
### Savings
**Up to 97%** (gpt-4o → Gemini 2.5 Flash)
---
## 3. Burst Pricing
### How It Works
- **Normal**: Your subscription concurrency limit (e.g., 10 calls)
- **Burst**: Up to 3× your limit (e.g., 30 calls)
- **Cost**: 2× per-minute rate for burst calls
### Configuration
```json
{
"call_limits": {
"burst_pricing_enabled": true
}
}
```
### When to Use
✅ Black Friday traffic spikes
✅ Product launches
✅ Seasonal demand (holidays)
✅ Marketing campaigns
❌ Sustained high traffic (upgrade plan instead)
❌ Unpredictable usage patterns
### Cost Calculation
**Example**:
- Subscription: 10 concurrent calls ($0.10/min per call)
- Traffic spike: 25 concurrent calls
- Burst calls: 25 - 10 = 15 calls
- Burst cost: 15 × $0.20/min = $3/min
- Regular cost: 10 × $0.10/min = $1/min
- **Total**: $4/min during spike
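The same calculation as a helper for capacity planning (the rates are the example figures above, not real pricing):

```typescript
// Per-minute cost during a spike: calls within the subscription limit bill
// at the base rate; burst calls (up to 3x the limit) bill at 2x.
function spikeCostPerMinute(
  activeCalls: number,
  concurrencyLimit: number,
  baseRatePerMin: number
): number {
  const regular = Math.min(activeCalls, concurrencyLimit);
  const burst = Math.max(activeCalls - concurrencyLimit, 0);
  return regular * baseRatePerMin + burst * baseRatePerMin * 2;
}

const total = spikeCostPerMinute(25, 10, 0.10); // ≈ $4/min, matching the example
```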
---
## 4. Prompt Optimization
### Reduce Token Count
**Before** (~60 tokens):
```
You are a highly experienced and knowledgeable customer support specialist with extensive training in technical troubleshooting, customer service best practices, and empathetic communication. You should always maintain a professional yet friendly demeanor while helping customers resolve their issues efficiently and effectively.
```
**After** (~16 tokens):
```
You are an experienced support specialist. Be professional, friendly, and efficient.
```
**Savings**: roughly 70% token reduction
### Use Tools Instead of Context
**Before**: Include FAQ in system prompt (2,000 tokens)
**After**: Use RAG/knowledge base (100 tokens + retrieval)
**Savings**: 95% for large knowledge bases
---
## 5. Turn-Taking Optimization
### Impact on Cost
| Mode | Latency | LLM Calls | Cost Impact |
|------|---------|-----------|-------------|
| Eager | Low | More | Higher (more interruptions) |
| Normal | Medium | Medium | Balanced |
| Patient | High | Fewer | Lower (fewer interruptions) |
### Recommendation
Use **Patient** mode for cost-sensitive applications where speed is less critical.
---
## 6. Voice Settings
### Speed vs Cost
| Speed | TTS Cost | User Experience |
|-------|----------|-----------------|
| 0.7x | Higher (longer audio) | Slow |
| 1.0x | Baseline | Natural |
| 1.2x | Lower (shorter audio) | Fast |
### Recommendation
Use **1.1x speed** for slight cost savings without compromising experience.
---
## 7. Conversation Duration Limits
### Configuration
```json
{
"conversation": {
"max_duration_seconds": 300 // 5 minutes
}
}
```
### Use Cases
- FAQ bots (limit: 2-3 minutes)
- Order status (limit: 1 minute)
- Full support (limit: 10-15 minutes)
### Savings
Caps worst-case cost by preventing unexpectedly long conversations.
---
## 8. Analytics-Driven Optimization
### Monitor Metrics
1. **Average conversation duration**
2. **LLM tokens per conversation**
3. **Tool call frequency**
4. **Resolution rate**
### Identify Issues
- Long conversations → improve prompts or add escalation
- High token count → enable caching or shorten prompts
- Low resolution rate → upgrade model or improve knowledge base
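As a sketch, the "long conversations" case can be flagged from the conversation list (the `call_duration_secs` field name is an assumption; check the API response schema):

```typescript
// Flag conversations that exceeded a duration threshold for prompt review.
interface ConversationSummary {
  conversation_id: string;
  call_duration_secs: number; // assumed field name
}

function flagLongConversations(
  conversations: ConversationSummary[],
  thresholdSecs = 300
): string[] {
  return conversations
    .filter((c) => c.call_duration_secs > thresholdSecs)
    .map((c) => c.conversation_id);
}
```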
---
## 9. Cost Monitoring
### API Usage Tracking
```typescript
const usage = await client.analytics.getLLMUsage({
agent_id: 'agent_123',
from_date: '2025-11-01',
to_date: '2025-11-30'
});
console.log('Total tokens:', usage.total_tokens);
console.log('Cached tokens:', usage.cached_tokens);
console.log('Cost:', usage.total_cost);
```
### Set Budgets
```json
{
"cost_limits": {
"daily_budget_usd": 100,
"monthly_budget_usd": 2000
}
}
```
---
## 10. Cost Optimization Checklist
### Before Launch
- [ ] Enable LLM caching
- [ ] Use gpt-4o-mini (not gpt-4o)
- [ ] Optimize prompt length
- [ ] Set conversation duration limits
- [ ] Use RAG instead of large system prompts
- [ ] Configure burst pricing if needed
### During Operation
- [ ] Monitor LLM token usage weekly
- [ ] Review conversation analytics monthly
- [ ] Test cheaper models quarterly
- [ ] Optimize prompts based on analytics
- [ ] Review and remove unused tools
### Continuous Improvement
- [ ] A/B test cheaper models
- [ ] Analyze long conversations
- [ ] Improve resolution rates
- [ ] Reduce average conversation duration
- [ ] Increase cache hit rates
---
## Expected Savings
**Baseline Configuration**:
- Model: gpt-4o
- No caching
- Average prompt: 1,000 tokens
- Average conversation: 5 minutes
- Cost: ~$0.50/conversation
**Optimized Configuration**:
- Model: gpt-4o-mini
- Caching enabled
- Average prompt: 300 tokens
- Average conversation: 3 minutes
- Cost: ~$0.05/conversation
**Total Savings**: **90%** 🎉

# System Prompt Engineering Guide
## 6-Component Framework
### 1. Personality
Define who the agent is.
**Template**:
```
You are [NAME], a [ROLE/PROFESSION] at [COMPANY].
You have [EXPERIENCE/BACKGROUND].
Your traits: [LIST PERSONALITY TRAITS].
```
**Example**:
```
You are Sarah, a patient and knowledgeable technical support specialist at TechCorp.
You have 7 years of experience helping customers troubleshoot software issues.
Your traits: patient, empathetic, detail-oriented, solution-focused.
```
### 2. Environment
Describe the communication context.
**Template**:
```
You're communicating via [CHANNEL: phone/chat/video].
Consider [ENVIRONMENTAL FACTORS].
Adapt your communication style to [CONTEXT].
```
**Example**:
```
You're speaking with customers over the phone.
Background noise and poor connections are common.
Speak clearly, use short sentences, and occasionally pause for emphasis.
```
### 3. Tone
Specify speech patterns and formality.
**Template**:
```
Tone: [FORMALITY LEVEL].
Language: [CONTRACTIONS/JARGON GUIDELINES].
Verbosity: [SENTENCE LENGTH, RESPONSE LENGTH].
Emotional Expression: [GUIDELINES].
```
**Example**:
```
Tone: Professional yet warm and approachable.
Language: Use contractions ("I'm", "let's") for natural conversation. Avoid technical jargon unless the customer uses it first.
Verbosity: Keep responses to 2-3 sentences. Ask one question at a time.
Emotional Expression: Express empathy with phrases like "I understand how frustrating that must be."
```
### 4. Goal
Define objectives and success criteria.
**Template**:
```
Primary Goal: [MAIN OBJECTIVE]
Secondary Goals:
- [SUPPORTING OBJECTIVE 1]
- [SUPPORTING OBJECTIVE 2]
Success Criteria:
- [MEASURABLE OUTCOME 1]
- [MEASURABLE OUTCOME 2]
```
**Example**:
```
Primary Goal: Resolve customer technical issues on the first call.
Secondary Goals:
- Verify customer identity securely
- Document issue details accurately
- Provide proactive tips to prevent future issues
Success Criteria:
- Customer verbally confirms their issue is resolved
- Issue documented in CRM system
- Customer satisfaction score ≥ 4/5
```
### 5. Guardrails
Set boundaries and ethical constraints.
**Template**:
```
Never:
- [PROHIBITED ACTION 1]
- [PROHIBITED ACTION 2]
Always:
- [REQUIRED ACTION 1]
- [REQUIRED ACTION 2]
Escalation Triggers:
- [CONDITION REQUIRING HUMAN INTERVENTION]
```
**Example**:
```
Never:
- Provide medical, legal, or financial advice
- Share confidential company information
- Make promises about refunds without verification
- Continue conversation if customer becomes abusive
Always:
- Verify customer identity before accessing account details
- Document all interactions in CRM
- Offer alternative solutions if first approach doesn't work
Escalation Triggers:
- Customer requests manager
- Issue requires account credit/refund approval
- Technical issue beyond your knowledge base
- Customer exhibits abusive behavior
```
### 6. Tools
Describe available functions and when to use them.
**Template**:
```
Available Tools:
1. tool_name(parameters)
Purpose: [WHAT IT DOES]
Use When: [TRIGGER CONDITION]
Example: [SAMPLE USAGE]
2. ...
Guidelines:
- [GENERAL TOOL USAGE RULES]
```
**Example**:
```
Available Tools:
1. lookup_order(order_id: string)
Purpose: Fetch order details from database
Use When: Customer mentions an order number or asks about order status
Example: "Let me look that up for you. [Call lookup_order(order_id='ORD-12345')]"
2. send_password_reset(email: string)
Purpose: Trigger password reset email
Use When: Customer can't access account and identity is verified
Example: "I'll send you a password reset email. [Call send_password_reset(email='customer@example.com')]"
3. transfer_to_supervisor()
Purpose: Escalate to human agent
Use When: Issue requires manager approval or customer explicitly requests
Example: "Let me connect you with a supervisor. [Call transfer_to_supervisor()]"
Guidelines:
- Always explain to the customer what you're doing before calling a tool
- Wait for tool response before continuing
- If tool fails, acknowledge and offer alternative
```
---
## Complete Example Templates
### Customer Support Agent
```
Personality:
You are Alex, a friendly and knowledgeable customer support specialist at TechCorp. You have 5 years of experience helping customers solve technical issues. You're patient, empathetic, and always maintain a positive attitude.
Environment:
You're speaking with customers over the phone. Communication is voice-only. Customers may have background noise or poor connection quality. Speak clearly and use thoughtful pauses for emphasis.
Tone:
Professional yet warm. Use contractions ("I'm", "let's") to sound natural. Avoid jargon unless the customer uses it first. Keep responses concise (2-3 sentences max). Use encouraging phrases like "I'll be happy to help with that."
Goal:
Primary: Resolve customer technical issues on the first call.
Secondary: Verify customer identity, document issues accurately, provide proactive solutions.
Success: Customer verbally confirms issue is resolved.
Guardrails:
- Never provide medical/legal/financial advice
- Don't share confidential company information
- Escalate if customer becomes abusive
- Never make promises about refunds without verification
Tools:
1. lookup_order(order_id) - Fetch order details when customer mentions order number
2. transfer_to_supervisor() - Escalate when issue requires manager approval
3. send_password_reset(email) - Trigger reset when customer can't access account
Always explain what you're doing before calling tools.
```
### Educational Tutor
```
Personality:
You are Maya, a patient and encouraging math tutor. You have 10 years of experience teaching middle school students. You're enthusiastic about learning and celebrate every small victory.
Environment:
You're tutoring students via voice chat. Students may feel anxious or frustrated about math. Create a safe, judgment-free environment where mistakes are learning opportunities.
Tone:
Warm, encouraging, and patient. Never sound frustrated or disappointed. Use positive reinforcement frequently ("Great thinking!", "You're on the right track!"). Adjust complexity based on student's responses.
Goal:
Primary: Help students understand math concepts, not just get answers.
Secondary: Build confidence and reduce math anxiety.
Success: Student can explain the concept in their own words and solve similar problems independently.
Guardrails:
- Never give answers directly—guide students to discover solutions
- Don't move to next topic until current concept is mastered
- If student becomes frustrated, take a break or switch to easier problem
- Never compare students or use negative language
Tools:
1. show_visual_aid(concept) - Display diagram or graph to illustrate concept
2. generate_practice_problem(difficulty) - Create custom practice problem
3. celebrate_achievement() - Play positive feedback animation
Always make learning feel like an achievement, not a chore.
```
---
## Prompt Engineering Tips
### Do's:
✅ Use specific examples in guidelines
✅ Define success criteria clearly
✅ Include escalation conditions
✅ Explain tool usage thoroughly
✅ Test prompts with real conversations
✅ Iterate based on analytics
### Don'ts:
❌ Use overly long prompts (increases cost)
❌ Be vague about goals or boundaries
❌ Include conflicting instructions
❌ Forget to test edge cases
❌ Use negative language excessively
❌ Overcomplicate simple tasks
---
## Testing Your Prompts
1. **Scenario Testing**: Run automated tests with success criteria
2. **Edge Case Testing**: Test boundary conditions and unusual inputs
3. **Tone Testing**: Evaluate conversation tone and empathy
4. **Tool Testing**: Verify tools are called correctly
5. **Analytics Review**: Monitor real conversations for issues
---
## Prompt Iteration Workflow
```
1. Write initial prompt using 6-component framework
2. Deploy to dev environment
3. Run 5-10 test conversations
4. Analyze transcripts for issues
5. Refine prompt based on findings
6. Deploy to staging
7. Run automated tests
8. Review analytics dashboard
9. Deploy to production
10. Monitor and iterate
```

# Testing Guide
## 1. Scenario Testing (LLM-Based)
### Create Test
```bash
elevenlabs tests add "Refund Request" --template basic-llm
```
### Test Configuration
```json
{
"name": "Refund Request Test",
"scenario": "Customer requests refund for defective product",
"user_input": "I want a refund for order #12345. The product arrived broken.",
"success_criteria": [
"Agent acknowledges the issue empathetically",
"Agent asks for order number or uses provided number",
"Agent verifies order details",
"Agent provides clear next steps or refund timeline"
],
"evaluation_type": "llm"
}
```
### Run Test
```bash
elevenlabs agents test "Support Agent"
```
## 2. Tool Call Testing
### Test Configuration
```json
{
"name": "Order Lookup Test",
"scenario": "Customer asks about order status",
"user_input": "What's the status of order ORD-12345?",
"expected_tool_call": {
"tool_name": "lookup_order",
"parameters": {
"order_id": "ORD-12345"
}
}
}
```
## 3. Load Testing
### Basic Load Test
```bash
# 100 concurrent users, spawn 10/second, run for 5 minutes
elevenlabs test load \
--users 100 \
--spawn-rate 10 \
--duration 300
```
### With Burst Pricing
```json
{
"call_limits": {
"burst_pricing_enabled": true
}
}
```
## 4. Simulation API
### Programmatic Testing
```typescript
const simulation = await client.agents.simulate({
agent_id: 'agent_123',
scenario: 'Customer requests refund',
user_messages: [
"I want a refund for order #12345",
"It arrived broken",
"Yes, process the refund"
],
success_criteria: [
"Agent shows empathy",
"Agent verifies order",
"Agent provides timeline"
]
});
console.log('Passed:', simulation.passed);
console.log('Criteria met:', simulation.evaluation.criteria_met, '/', simulation.evaluation.criteria_total);
```
## 5. Convert Real Conversations to Tests
### From Dashboard
1. Navigate to Conversations
2. Select conversation
3. Click "Convert to Test"
4. Add success criteria
5. Save
### From API
```typescript
const test = await client.tests.createFromConversation({
conversation_id: 'conv_123',
success_criteria: [
"Issue was resolved",
"Customer satisfaction >= 4/5"
]
});
```
## 6. CI/CD Integration
### GitHub Actions
```yaml
name: Test Agent
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install CLI
run: npm install -g @elevenlabs/cli
- name: Push Tests
run: elevenlabs tests push
env:
ELEVENLABS_API_KEY: ${{ secrets.ELEVENLABS_API_KEY }}
- name: Run Tests
run: elevenlabs agents test "Support Agent"
env:
ELEVENLABS_API_KEY: ${{ secrets.ELEVENLABS_API_KEY }}
```
## 7. Test Organization
### Directory Structure
```
test_configs/
├── refund-tests/
│ ├── basic-refund.json
│ ├── duplicate-refund.json
│ └── expired-refund.json
├── order-lookup-tests/
│ ├── valid-order.json
│ └── invalid-order.json
└── escalation-tests/
├── angry-customer.json
└── complex-issue.json
```
## 8. Best Practices
### Do's:
✅ Test all conversation paths
✅ Include edge cases
✅ Test tool calls thoroughly
✅ Run tests before deployment
✅ Convert failed conversations to tests
✅ Monitor test trends over time
### Don'ts:
❌ Only test happy paths
❌ Ignore failing tests
❌ Skip load testing
❌ Test only in production
❌ Write vague success criteria
## 9. Metrics to Track
- **Pass Rate**: % of tests passing
- **Tool Accuracy**: % of correct tool calls
- **Response Time**: Average time to resolution
- **Load Capacity**: Max concurrent users before degradation
- **Error Rate**: % of conversations with errors
## 10. Debugging Failed Tests
1. Review conversation transcript
2. Check tool calls and parameters
3. Verify dynamic variables provided
4. Test prompt clarity
5. Check knowledge base content
6. Review guardrails and constraints
7. Iterate and retest

# Tool Examples
## Client Tools (Browser-Side)
### Update Shopping Cart
```typescript
import { useConversation } from '@elevenlabs/react';
import { z } from 'zod';

// Inside a React component; getCart() is your app's own cart accessor.
const conversation = useConversation({
  clientTools: {
    updateCart: {
      description: "Add or remove items from the shopping cart",
      parameters: z.object({
        action: z.enum(['add', 'remove']),
        item: z.string(),
        quantity: z.number().min(1)
      }),
      handler: async ({ action, item, quantity }) => {
        const cart = getCart();
        if (action === 'add') {
          cart.add(item, quantity);
        } else {
          cart.remove(item, quantity);
        }
        return { success: true, total: cart.total, items: cart.items.length };
      }
    }
  }
});
```
### Navigate to Page
```typescript
navigate: {
description: "Navigate user to a different page",
parameters: z.object({
url: z.string().url()
}),
handler: async ({ url }) => {
window.location.href = url;
return { success: true };
}
}
```
## Server Tools (Webhooks)
### Get Weather
```json
{
"name": "get_weather",
"description": "Fetch current weather for a city",
"url": "https://api.weather.com/v1/current",
"method": "GET",
"parameters": {
"type": "object",
"properties": {
"city": { "type": "string", "description": "City name (e.g., 'London')" }
},
"required": ["city"]
},
"headers": {
"Authorization": "Bearer {{secret__weather_api_key}}"
}
}
```
### Stripe Payment
```json
{
"name": "create_payment_intent",
"description": "Create a Stripe payment intent for order",
"url": "https://api.stripe.com/v1/payment_intents",
"method": "POST",
"parameters": {
"type": "object",
"properties": {
"amount": { "type": "number", "description": "Amount in cents" },
"currency": { "type": "string", "description": "Currency code (e.g., 'usd')" }
},
"required": ["amount", "currency"]
},
"headers": {
"Authorization": "Bearer {{secret__stripe_api_key}}"
}
}
```
### CRM Integration
```json
{
"name": "update_crm",
"description": "Update customer record in CRM",
"url": "https://api.salesforce.com/services/data/v57.0/sobjects/Contact/{{contact_id}}",
"method": "PATCH",
"parameters": {
"type": "object",
"properties": {
"notes": { "type": "string" },
"status": { "type": "string", "enum": ["active", "resolved", "pending"] }
}
},
"headers": {
"Authorization": "Bearer {{secret__salesforce_token}}",
"Content-Type": "application/json"
}
}
```
## MCP Tools
### Connect PostgreSQL MCP Server
```json
{
"name": "PostgreSQL Database",
"server_url": "https://mcp.example.com/postgres",
"transport": "sse",
"secret_token": "{{secret__mcp_auth_token}}",
"approval_mode": "fine_grained"
}
```
### Connect File System MCP Server
```json
{
"name": "File System Access",
"server_url": "https://mcp.example.com/filesystem",
"transport": "http",
"approval_mode": "always_ask"
}
```
## System Tools
### Update Conversation State
```json
{
"name": "update_state",
"description": "Update conversation context",
"parameters": {
"key": { "type": "string" },
"value": { "type": "string" }
}
}
```
### Transfer to Human
```json
{
"name": "transfer_to_human",
"description": "Transfer call to human agent",
"parameters": {
"reason": { "type": "string", "description": "Reason for transfer" }
}
}
```
## Best Practices
**Client Tools**:
- Keep handler logic simple
- Always return meaningful values
- Handle errors gracefully
**Server Tools**:
- Use secret variables for API keys
- Provide clear parameter descriptions
- Include format examples in descriptions
**MCP Tools**:
- Test connectivity before production
- Use appropriate approval modes
- Monitor tool usage and errors
**System Tools**:
- Use for workflow state management
- Document state schema
- Clean up state when conversation ends

# Workflow Examples
## Customer Support Routing
**Scenario**: Route calls to specialized agents based on customer needs.
```json
{
"workflow": {
"nodes": [
{
"id": "initial_routing",
"type": "subagent",
"config": {
"system_prompt": "Ask customer: Are you calling about billing, technical support, or sales?",
"turn_eagerness": "patient"
}
},
{
"id": "billing_agent",
"type": "subagent",
"config": {
"system_prompt": "You are a billing specialist. Help with invoices, payments, and account charges.",
"voice_id": "billing_voice_id"
}
},
{
"id": "technical_agent",
"type": "subagent",
"config": {
"system_prompt": "You are a technical support specialist. Troubleshoot product issues.",
"voice_id": "tech_voice_id"
}
},
{
"id": "sales_agent",
"type": "subagent",
"config": {
"system_prompt": "You are a sales representative. Help customers choose products.",
"voice_id": "sales_voice_id"
}
}
],
"edges": [
{ "from": "initial_routing", "to": "billing_agent", "condition": "user_mentions_billing" },
{ "from": "initial_routing", "to": "technical_agent", "condition": "user_mentions_technical" },
{ "from": "initial_routing", "to": "sales_agent", "condition": "user_mentions_sales" }
]
}
}
```
## Escalation Workflow
**Scenario**: Attempt self-service resolution, then escalate to human if needed.
```json
{
"workflow": {
"nodes": [
{
"id": "self_service",
"type": "subagent",
"config": {
"system_prompt": "Try to resolve issue using knowledge base and tools. If issue can't be resolved, offer human transfer.",
"knowledge_base": ["faq_doc_id"],
"tool_ids": ["lookup_order", "check_status"]
}
},
{
"id": "human_transfer",
"type": "tool",
"tool_name": "transfer_to_human"
}
],
"edges": [
{ "from": "self_service", "to": "human_transfer", "condition": "user_requests_human_or_issue_unresolved" }
]
}
}
```
## Multi-Language Support
**Scenario**: Detect language and route to appropriate voice/agent.
```json
{
"workflow": {
"nodes": [
{
"id": "language_detection",
"type": "subagent",
"config": {
"system_prompt": "Greet customer and detect language.",
"language": "auto"
}
},
{
"id": "english_agent",
"type": "subagent",
"config": {
"language": "en",
"voice_id": "en_voice_id",
"first_message": "Hello! How can I help you today?"
}
},
{
"id": "spanish_agent",
"type": "subagent",
"config": {
"language": "es",
"voice_id": "es_voice_id",
"first_message": "¡Hola! ¿Cómo puedo ayudarte hoy?"
}
}
],
"edges": [
{ "from": "language_detection", "to": "english_agent", "condition": "detected_language_en" },
{ "from": "language_detection", "to": "spanish_agent", "condition": "detected_language_es" }
]
}
}
```
## Best Practices
1. **Keep workflows simple** - Max 5-7 nodes for maintainability
2. **Test all paths** - Ensure every edge condition works
3. **Add fallbacks** - Always have a default path
4. **Monitor transitions** - Track which paths users take most
5. **Avoid loops** - Workflows can get stuck in infinite loops
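The "avoid loops" rule can be checked mechanically before pushing a workflow. A sketch that runs a depth-first search over the edge list (the edge shape matches the JSON examples above):

```typescript
// Detect cycles in a workflow graph via depth-first search over its edges.
interface Edge { from: string; to: string; }

function hasCycle(edges: Edge[]): boolean {
  const adjacency = new Map<string, string[]>();
  for (const { from, to } of edges) {
    if (!adjacency.has(from)) adjacency.set(from, []);
    adjacency.get(from)!.push(to);
  }
  const visiting = new Set<string>(); // nodes on the current DFS path
  const done = new Set<string>();     // fully explored nodes

  function dfs(node: string): boolean {
    if (visiting.has(node)) return true; // back edge -> cycle
    if (done.has(node)) return false;
    visiting.add(node);
    for (const next of adjacency.get(node) ?? []) {
      if (dfs(next)) return true;
    }
    visiting.delete(node);
    done.add(node);
    return false;
  }

  return [...adjacency.keys()].some((node) => dfs(node));
}
```

Run it over `workflow.edges` in CI to reject configurations that can trap users in a loop.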

scripts/create-agent.sh:
#!/bin/bash
# Create ElevenLabs agent using CLI
set -e
AGENT_NAME="${1:-Support Agent}"
TEMPLATE="${2:-customer-service}"
ENV="${3:-dev}"
echo "Creating ElevenLabs agent..."
echo "Name: $AGENT_NAME"
echo "Template: $TEMPLATE"
echo "Environment: $ENV"
# Check if CLI is installed
if ! command -v elevenlabs &> /dev/null; then
echo "Error: @elevenlabs/cli is not installed"
echo "Install with: npm install -g @elevenlabs/cli"
exit 1
fi
# Check if authenticated
if ! elevenlabs auth whoami &> /dev/null; then
echo "Not authenticated. Please login:"
elevenlabs auth login
fi
# Initialize project if not already initialized
if [ ! -f "agents.json" ]; then
echo "Initializing project..."
elevenlabs agents init
fi
# Create agent
echo "Creating agent: $AGENT_NAME"
elevenlabs agents add "$AGENT_NAME" --template "$TEMPLATE"
# Push to platform
echo "Deploying to environment: $ENV"
elevenlabs agents push --env "$ENV"
echo "✓ Agent created successfully!"
echo "Edit configuration in: agent_configs/"
echo "Test with: elevenlabs agents test \"$AGENT_NAME\""

scripts/deploy-agent.sh:
#!/bin/bash
# Multi-environment deployment for ElevenLabs agents
set -e
ENV="${1:-dev}"
AGENT_NAME="${2}"
echo "Deploying ElevenLabs agent to environment: $ENV"
# Check if CLI is installed
if ! command -v elevenlabs &> /dev/null; then
echo "Error: @elevenlabs/cli is not installed"
echo "Install with: npm install -g @elevenlabs/cli"
exit 1
fi
# Check if authenticated
if ! elevenlabs auth whoami &> /dev/null; then
echo "Not authenticated. Please login:"
elevenlabs auth login
fi
# Dry run first to show changes
echo "Preview of changes for $ENV:"
if [ -n "$AGENT_NAME" ]; then
elevenlabs agents push --env "$ENV" --agent "$AGENT_NAME" --dry-run
else
elevenlabs agents push --env "$ENV" --dry-run
fi
# Confirm deployment
read -p "Deploy to $ENV? (y/n) " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
if [ -n "$AGENT_NAME" ]; then
elevenlabs agents push --env "$ENV" --agent "$AGENT_NAME"
else
elevenlabs agents push --env "$ENV"
fi
echo "✓ Deployment to $ENV completed successfully!"
else
echo "Deployment cancelled"
exit 0
fi

#!/bin/bash
# Programmatically simulate conversation for testing
set -e
AGENT_ID="${1}"
SIMULATION_FILE="${2:-simulation.json}"
if [ -z "$AGENT_ID" ]; then
echo "Usage: ./simulate-conversation.sh <agent_id> [simulation_file]"
echo "Example: ./simulate-conversation.sh agent_abc123 simulation.json"
exit 1
fi
# Check if API key is set
if [ -z "$ELEVENLABS_API_KEY" ]; then
echo "Error: ELEVENLABS_API_KEY environment variable not set"
exit 1
fi
# Check if simulation file exists
if [ ! -f "$SIMULATION_FILE" ]; then
echo "Creating example simulation file: $SIMULATION_FILE"
cat > "$SIMULATION_FILE" << 'EOF'
{
"scenario": "Customer requests refund",
"user_messages": [
"I want a refund for order #12345",
"I ordered it last week",
"Yes, please process it"
],
"success_criteria": [
"Agent acknowledges request",
"Agent asks for order details",
"Agent provides refund timeline"
]
}
EOF
echo "Example simulation file created. Edit $SIMULATION_FILE and run again."
exit 0
fi
echo "Running conversation simulation..."
echo "Agent ID: $AGENT_ID"
echo "Simulation file: $SIMULATION_FILE"
# Run simulation
curl -X POST "https://api.elevenlabs.io/v1/convai/agents/$AGENT_ID/simulate" \
-H "xi-api-key: $ELEVENLABS_API_KEY" \
-H "Content-Type: application/json" \
-d @"$SIMULATION_FILE" | jq .
echo "✓ Simulation completed!"

scripts/test-agent.sh:
#!/bin/bash
# Run automated tests on ElevenLabs agent
set -e
AGENT_NAME="${1:-Support Agent}"
echo "Testing ElevenLabs agent: $AGENT_NAME"
# Check if CLI is installed
if ! command -v elevenlabs &> /dev/null; then
echo "Error: @elevenlabs/cli is not installed"
echo "Install with: npm install -g @elevenlabs/cli"
exit 1
fi
# Check if authenticated
if ! elevenlabs auth whoami &> /dev/null; then
echo "Not authenticated. Please login:"
elevenlabs auth login
fi
# Push tests to platform
if [ -f "tests.json" ]; then
echo "Deploying tests..."
elevenlabs tests push
fi
# Run agent tests
echo "Running tests for: $AGENT_NAME"
elevenlabs agents test "$AGENT_NAME"
echo "✓ Tests completed!"