Initial commit
12	.claude-plugin/plugin.json	Normal file
@@ -0,0 +1,12 @@
{
  "name": "thesys-generative-ui",
  "description": "Integrate TheSys C1 Generative UI API to stream interactive React components (forms, charts, tables) from LLM responses. Supports Vite+React, Next.js, and Cloudflare Workers with OpenAI, Anthropic Claude, and Workers AI. Use when building conversational UIs, AI assistants with rich interactions, or troubleshooting empty responses, theme application failures, streaming issues, or tool calling errors.",
  "version": "1.0.0",
  "author": {
    "name": "Jeremy Dawes",
    "email": "jeremy@jezweb.net"
  },
  "skills": [
    "./"
  ]
}
96	CHANGELOG.md	Normal file
@@ -0,0 +1,96 @@

# Changelog

All notable changes to the TheSys Generative UI skill will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

---

## [1.1.0] - 2025-10-26

### Updated
- **Model IDs to v-20250930** - Updated all model references to current stable versions
  - Claude Sonnet 4: `c1/anthropic/claude-sonnet-4/v-20250617` → `c1/anthropic/claude-sonnet-4/v-20250930`
  - GPT 5: `c1/openai/gpt-4` → `c1/openai/gpt-5/v-20250930`
- **Package versions** - Updated @crayonai/react-ui to 0.8.42 (from 0.8.27)
- **Template dependencies** - Updated all template package.json files with current versions
- **Reference documentation** - Completely revised `references/ai-provider-setup.md` with:
  - Current model IDs and pricing
  - Model selection guide
  - Python backend examples
  - Deprecation notices

### Added
- **Experimental models section** with `c1-exp/` prefix support
  - GPT 4.1: `c1-exp/openai/gpt-4.1/v-20250617`
  - Claude 3.5 Haiku: `c1-exp/anthropic/claude-3.5-haiku/v-20250709`
- **Pricing and specifications table** for all supported models
  - Input/output token costs
  - Context window sizes
  - Maximum output tokens
- **Python backend support** (major addition):
  - Complete Python integration section in SKILL.md
  - FastAPI template (`templates/python-backend/fastapi-chat.py`)
  - Flask template (`templates/python-backend/flask-chat.py`)
  - Python requirements.txt with all dependencies
  - Comprehensive Python backend README
- **New error documentation** (Error #13: Invalid Model ID Error)
  - Covers outdated model IDs
  - Lists current stable vs experimental models
  - Provides verification steps
- **Model version notes** section explaining version date format

### Removed
- **Non-existent model IDs**:
  - `c1/openai/gpt-5-mini` (never existed)
  - `c1/openai/gpt-5-nano` (never existed)
  - `c1/openai/gpt-4o` (not available via C1)
- **Outdated v-20250617 model versions** throughout documentation

### Fixed
- All model IDs now match official TheSys documentation (verified 2025-10-26)
- Version compatibility table updated with correct package versions
- Deprecated models (Claude 3.5 Sonnet, Claude 3.7 Sonnet) now explicitly noted

### Documentation
- Updated README.md with Python SDK package information
- Added model version checking notes
- Enhanced troubleshooting guide in `references/common-errors.md`
- Improved AI provider setup guide with pricing comparison

---

## [1.0.0] - 2025-10-26

### Added
- Initial release of TheSys Generative UI skill
- Complete integration guide for Vite + React
- Next.js App Router templates
- Cloudflare Workers integration patterns
- 15+ working templates across frameworks
- Tool calling with Zod schemas
- Theming and customization guides
- Thread management patterns
- Streaming implementation examples
- Common errors and solutions (12 documented issues)
- Component API reference
- AI provider integration (OpenAI, Anthropic, Cloudflare Workers AI)

### Metadata
- Package: `@thesysai/genui-sdk@0.6.40`
- Token savings: ~65-70% vs manual implementation
- Errors prevented: 12+ documented issues
- Production tested: ✅ Yes
- Official standards compliant: ✅ Yes

---

## Version History

- **1.1.0** (2025-10-26) - Model updates, Python support, pricing tables
- **1.0.0** (2025-10-26) - Initial release

---

**Note**: For detailed implementation guides and examples, see [SKILL.md](SKILL.md).
3	README.md	Normal file
@@ -0,0 +1,3 @@

# thesys-generative-ui

Integrate TheSys C1 Generative UI API to stream interactive React components (forms, charts, tables) from LLM responses. Supports Vite+React, Next.js, and Cloudflare Workers with OpenAI, Anthropic Claude, and Workers AI. Use when building conversational UIs, AI assistants with rich interactions, or troubleshooting empty responses, theme application failures, streaming issues, or tool calling errors.
145	plugin.lock.json	Normal file
@@ -0,0 +1,145 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:jezweb/claude-skills:skills/thesys-generative-ui",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "4c081e2eaa6b974d0c99ae84ab1887f0d61ffd09",
    "treeHash": "e53cfd96dbb9707586da114280b73473d4205d8e103c553873b82feeb7368ef0",
    "generatedAt": "2025-11-28T10:19:02.661304Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "thesys-generative-ui",
    "description": "Integrate TheSys C1 Generative UI API to stream interactive React components (forms, charts, tables) from LLM responses. Supports Vite+React, Next.js, and Cloudflare Workers with OpenAI, Anthropic Claude, and Workers AI. Use when building conversational UIs, AI assistants with rich interactions, or troubleshooting empty responses, theme application failures, streaming issues, or tool calling errors.",
    "version": "1.0.0"
  },
  "content": {
    "files": [
      {
        "path": "CHANGELOG.md",
        "sha256": "9e661a3a3b65cbcbc953d9f31dee115b6f8e88ddc0d8f4e589f6bba9b86636f3"
      },
      {
        "path": "README.md",
        "sha256": "babe5e371722d039aa6f38d5b134f1388c7d682d808eca7934e188d8e2042cdc"
      },
      {
        "path": "SKILL.md",
        "sha256": "44b807a63a5f63e33344d2924d2c04f650c843f7ee6b7ef189a07c5a74928312"
      },
      {
        "path": "references/ai-provider-setup.md",
        "sha256": "0a5f6c383fcc76b743aa74b681820f2fbe8540ec58c2e9d1991a991ea0662892"
      },
      {
        "path": "references/common-errors.md",
        "sha256": "6e1697157218e741c6bdc533d692069cd5219a3235d5596676a8cf43a732f14b"
      },
      {
        "path": "references/component-api.md",
        "sha256": "7b638dbe1320e792325d1c9ea06fb274aac943d75cf5d4da93b7c2ce794ef7b8"
      },
      {
        "path": "scripts/check-versions.sh",
        "sha256": "ed522fc4816b7774460eb083929fe08fea5d134097924b99c7487a1f2f11ef50"
      },
      {
        "path": "scripts/install-dependencies.sh",
        "sha256": "7c877d7e4e536f5abfe39469a390b6352d30fda0cf55f4a1b79737f2050c2578"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "3c263658066c013bbfebf250094f4ba4a7c4539ead3f90dd1dfe8753e0aad914"
      },
      {
        "path": "templates/shared/streaming-utils.ts",
        "sha256": "5bd6c6bee195e1932d05bf7f33ebfb7766e4482c146b48aca7beeb6b3ce8d8b0"
      },
      {
        "path": "templates/shared/theme-config.ts",
        "sha256": "b33fec27d1d035c526f153d6f1681f0b3477f32a1b289154f762f988f80633a6"
      },
      {
        "path": "templates/shared/tool-schemas.ts",
        "sha256": "88e4f0b1607f1731112f83773c224e1eefe8bd3c7f8a5b57f54c407a73890e53"
      },
      {
        "path": "templates/nextjs/api-chat-route.ts",
        "sha256": "bdf9292cd0d6452e0e5282cf407d8c964c8b13de2b920514f233f0f757d6df9e"
      },
      {
        "path": "templates/nextjs/app-page.tsx",
        "sha256": "7e3faa00f1685ae702863bbf3982f9778466778c1c9e94499e831a5770b2fe2e"
      },
      {
        "path": "templates/nextjs/package.json",
        "sha256": "773a05c37c6a8e98b589499d679cca1e74ad81055130c31b81f93967320fca2a"
      },
      {
        "path": "templates/nextjs/tool-calling-route.ts",
        "sha256": "51cbb2e5f2cd44b2d03484fb44a7f5cfce4057176730d43e260d36ee002e4fc2"
      },
      {
        "path": "templates/cloudflare-workers/wrangler.jsonc",
        "sha256": "bdd7538346bdb8450a5489d0b1d3420dec0c1c609c2d947238e30735567052c3"
      },
      {
        "path": "templates/cloudflare-workers/frontend-setup.tsx",
        "sha256": "74a06a2fab82ee77f1549496ffec0db0a77f87612f77578534e8d47d03ced57e"
      },
      {
        "path": "templates/cloudflare-workers/worker-backend.ts",
        "sha256": "85f3016d6e57a4f35a748264a0dec326c0832cc5ac907fff0d3293add85317e9"
      },
      {
        "path": "templates/vite-react/theme-dark-mode.tsx",
        "sha256": "005bb05a45f6d893d35e9d8a830ec9981db1370603ce06cec105edb4d0355d00"
      },
      {
        "path": "templates/vite-react/package.json",
        "sha256": "777f4fb23fa8cf2476a63b9c8b163d868d74fdacf8d7c0435588579875522628"
      },
      {
        "path": "templates/vite-react/basic-chat.tsx",
        "sha256": "7498273e9cb3e94f62c20ad4239070bdd4418934c3a54a68c924083e6d63052d"
      },
      {
        "path": "templates/vite-react/custom-component.tsx",
        "sha256": "6fcac6479d1820d6335c05755698984908e99268d75f5204193ea344bd9eeaf3"
      },
      {
        "path": "templates/vite-react/tool-calling.tsx",
        "sha256": "60d53767cd33c68c9d2b9122b5cd82d87dff40637e1d72a162209ef2fe47371a"
      },
      {
        "path": "templates/python-backend/requirements.txt",
        "sha256": "78650d4f06ec2a479933663e276116d2ab49f30109bdae389d45a1a4382c060a"
      },
      {
        "path": "templates/python-backend/flask-chat.py",
        "sha256": "4c26e87e583be3c0d22550c46312c194ad398c2705ee4e2c372b2594ea595033"
      },
      {
        "path": "templates/python-backend/fastapi-chat.py",
        "sha256": "4965d4f44ad869dce1f16dd716a2695d9f6ae213ac9c0fda92dc649b7d8c201a"
      },
      {
        "path": "templates/python-backend/README.md",
        "sha256": "97ab97dcaf6354f2424ffd6a01e4f6fe2654356c388912bef1c7c99da41862ab"
      }
    ],
    "dirSha256": "e53cfd96dbb9707586da114280b73473d4205d8e103c553873b82feeb7368ef0"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}
225	references/ai-provider-setup.md	Normal file
@@ -0,0 +1,225 @@

# AI Provider Setup Guide

Step-by-step setup for each AI provider with TheSys C1, including current model IDs, pricing, and specifications.

---

## OpenAI

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1/embed",
  apiKey: process.env.THESYS_API_KEY,
});
```

### Available Models

**Stable (Production)**:
- `c1/openai/gpt-5/v-20250930` - GPT 5
  - Input: $2.50/M | Output: $12.50/M
  - Context: 380K | Max Output: 128K

**Experimental**:
- `c1-exp/openai/gpt-4.1/v-20250617` - GPT 4.1
  - Input: $4.00/M | Output: $10.00/M
  - Context: 1M | Max Output: 32K

### Example Usage

```typescript
const response = await client.chat.completions.create({
  model: "c1/openai/gpt-5/v-20250930",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Create a product comparison table." }
  ],
  stream: true,
  temperature: 0.7,
  max_tokens: 2000
});
```

---

## Anthropic (Claude)

```typescript
// Same OpenAI client! TheSys handles the conversion
const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1/embed",
  apiKey: process.env.THESYS_API_KEY,
});
```

### Available Models

**Stable (Production)**:
- `c1/anthropic/claude-sonnet-4/v-20250930` - Claude Sonnet 4
  - Input: $6.00/M | Output: $18.00/M
  - Context: 180K | Max Output: 64K

**Experimental**:
- `c1-exp/anthropic/claude-3.5-haiku/v-20250709` - Claude 3.5 Haiku
  - Input: $1.60/M | Output: $5.00/M
  - Context: 180K | Max Output: 8K

**Deprecated** (not recommended):
- `c1/anthropic/claude-sonnet-3-5`
- `c1/anthropic/claude-3.7-sonnet`

### Example Usage

```typescript
const response = await client.chat.completions.create({
  model: "c1/anthropic/claude-sonnet-4/v-20250930",
  messages: [
    { role: "system", content: "You are Claude, an AI assistant." },
    { role: "user", content: "Generate a data visualization chart." }
  ],
  stream: true,
  temperature: 0.8,
  max_tokens: 4096
});
```

---

## Cloudflare Workers AI

### Option 1: Workers AI Only (No C1)

Use Workers AI directly for cost optimization on simple use cases.

```typescript
const aiResponse = await env.AI.run('@cf/meta/llama-3-8b-instruct', {
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" }
  ]
});
```

### Option 2: Hybrid Approach (Workers AI + C1)

Use Workers AI for processing, then TheSys C1 for UI generation.

```typescript
// Step 1: Process with Workers AI (cheap)
const analysis = await env.AI.run('@cf/meta/llama-3-8b-instruct', {
  messages: [{ role: "user", content: "Analyze this data..." }]
});

// Step 2: Generate UI with C1 (interactive components)
const c1Response = await fetch("https://api.thesys.dev/v1/embed/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${env.THESYS_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "c1/openai/gpt-5/v-20250930",
    messages: [
      {
        role: "system",
        content: "Create a chart visualization for this data."
      },
      {
        role: "user",
        content: analysis.response
      }
    ]
  })
});
```

**Cost Benefits**:
- Workers AI: Very low cost for text generation
- C1 API: Only used for final UI generation
- Combined: Best of both worlds

---

## Python Backend (FastAPI/Flask)

```python
import openai
import os

client = openai.OpenAI(
    base_url="https://api.thesys.dev/v1/embed",
    api_key=os.getenv("THESYS_API_KEY")
)
```

### Example with TheSys SDK

```python
from thesys_genui_sdk import with_c1_response, write_content

@app.post("/api/chat")
@with_c1_response
async def chat(request: dict):
    stream = client.chat.completions.create(
        model="c1/anthropic/claude-sonnet-4/v-20250930",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": request["prompt"]}
        ],
        stream=True
    )

    for chunk in stream:
        content = chunk.choices[0].delta.content
        if content:
            yield write_content(content)
```

See `templates/python-backend/` for complete examples.

---

## Model Selection Guide

### When to Use Each Provider

**GPT 5** (`c1/openai/gpt-5/v-20250930`):
- Best for: General-purpose applications
- Pros: Large context window (380K), lower cost
- Cons: Less nuanced than Claude for some tasks

**Claude Sonnet 4** (`c1/anthropic/claude-sonnet-4/v-20250930`):
- Best for: Complex reasoning, code generation
- Pros: Superior code understanding, detailed responses
- Cons: Higher cost, smaller context window

**Experimental Models** (`c1-exp/...`):
- Best for: Testing new features, non-production use
- Pros: Access to cutting-edge capabilities
- Cons: May have unexpected behavior, pricing subject to change
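
As a rough sketch, the guidance above can be encoded in a small helper. The task categories and the mapping are illustrative assumptions (not part of any TheSys SDK), and the model IDs are the October 2025 versions listed in this guide, so adjust them as versions change:

```typescript
// Hypothetical helper reflecting the selection guidance above.
// Model IDs are the October 2025 stable/experimental versions from this guide.
type Task = "general" | "reasoning" | "code" | "experiment";

const MODELS: Record<Task, string> = {
  general: "c1/openai/gpt-5/v-20250930",                // large context, lower cost
  reasoning: "c1/anthropic/claude-sonnet-4/v-20250930", // complex reasoning
  code: "c1/anthropic/claude-sonnet-4/v-20250930",      // superior code understanding
  experiment: "c1-exp/openai/gpt-4.1/v-20250617",       // non-production only
};

export function pickModel(task: Task): string {
  return MODELS[task];
}
```

A helper like this keeps the model choice in one place, so a version bump is a one-line change.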

---

## Environment Variables

```bash
# Required
THESYS_API_KEY=sk-th-your-api-key-here

# Optional (for CORS configuration)
ALLOWED_ORIGINS=http://localhost:5173,https://your-domain.com
```

Get your API key: https://console.thesys.dev/keys

---

## Version Notes

Model version identifiers (e.g., `v-20250930`) may change as new versions are released. Always check the [TheSys Playground](https://console.thesys.dev/playground) for the latest available versions.
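
Because the release date is embedded in the identifier, a small helper can flag stale pins at startup. This is an illustrative sketch, not part of the TheSys SDK:

```typescript
// Illustrative only: extract the v-YYYYMMDD date from a C1 model ID.
export function modelVersionDate(modelId: string): Date | null {
  const match = modelId.match(/\/v-(\d{4})(\d{2})(\d{2})$/);
  if (!match) return null; // no version suffix present
  const [, year, month, day] = match;
  return new Date(Date.UTC(Number(year), Number(month) - 1, Number(day)));
}

// Example: warn if the pinned model version is more than a year old.
const pinned = modelVersionDate("c1/openai/gpt-5/v-20250930");
if (pinned && Date.now() - pinned.getTime() > 365 * 24 * 60 * 60 * 1000) {
  console.warn("Pinned model version is over a year old; check the TheSys Playground.");
}
```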

---

For complete integration examples and advanced patterns, see the main SKILL.md documentation.
488	references/common-errors.md	Normal file
@@ -0,0 +1,488 @@

# Common Errors & Solutions

Complete troubleshooting guide for TheSys C1 Generative UI integration.

---

## 1. Empty Agent Responses

**Symptom**: AI returns empty responses, UI shows nothing or blank content.

**Causes**:
- Incorrect streaming transformation
- Response not properly extracted from API
- Empty content in stream chunks

**Solutions**:

```typescript
// ✅ Correct - use transformStream with fallback
import { transformStream } from "@crayonai/stream";

const c1Stream = transformStream(llmStream, (chunk) => {
  return chunk.choices[0]?.delta?.content || ""; // Empty string fallback
});

// ❌ Wrong - no fallback
const c1Stream = transformStream(llmStream, (chunk) => {
  return chunk.choices[0].delta.content; // May be undefined
});
```

**Verification**:
```bash
# Check API response format
curl -X POST https://api.thesys.dev/v1/embed/chat/completions \
  -H "Authorization: Bearer $THESYS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"c1/openai/gpt-5/v-20250930","messages":[{"role":"user","content":"test"}]}'
```

---

## 2. Model Not Following System Prompt

**Symptom**: AI ignores instructions, doesn't follow guidelines in system prompt.

**Cause**: System prompt is not first in messages array or formatted incorrectly.

**Solution**:

```typescript
// ✅ Correct - system prompt FIRST
const messages = [
  { role: "system", content: "You are a helpful assistant." }, // MUST BE FIRST
  ...conversationHistory,
  { role: "user", content: userPrompt },
];

// ❌ Wrong - system prompt after user messages
const messages = [
  { role: "user", content: "Hello" },
  { role: "system", content: "..." }, // TOO LATE
];

// ❌ Wrong - no system prompt at all
const messages = [
  { role: "user", content: userPrompt },
];
```

---

## 3. Version Compatibility Errors

**Symptom**: `TypeError: Cannot read property 'X' of undefined`, component crashes.

**Cause**: Mismatched package versions between SDK and API.

**Compatibility Matrix**:

| C1 API Version | @thesysai/genui-sdk | @crayonai/react-ui | @crayonai/react-core |
|----------------|---------------------|--------------------|----------------------|
| v-20250930     | ~0.6.40             | ~0.8.42            | ~0.7.6               |

**Solution**:

```bash
# Check current versions
npm list @thesysai/genui-sdk @crayonai/react-ui @crayonai/react-core

# Update to compatible versions (October 2025)
npm install @thesysai/genui-sdk@0.6.40 @crayonai/react-ui@0.8.42 @crayonai/react-core@0.7.6
```

---

## 4. Theme Not Applying

**Symptom**: UI components don't match custom theme, default styles persist.

**Cause**: Missing `ThemeProvider` wrapper.

**Solution**:

```typescript
// ❌ Wrong - no ThemeProvider
<C1Component c1Response={response} />

// ✅ Correct - wrapped with ThemeProvider
<ThemeProvider theme={customTheme}>
  <C1Component c1Response={response} />
</ThemeProvider>

// ✅ Also correct - for C1Chat
<C1Chat apiUrl="/api/chat" theme={customTheme} />
```

---

## 5. Streaming Not Working

**Symptom**: UI doesn't update in real time; it waits for the full response.

**Causes**:
- `stream: false` in API call
- Missing proper headers
- Not passing `isStreaming` prop

**Solutions**:

```typescript
// 1. Enable streaming in API call
const stream = await client.chat.completions.create({
  model: "c1/openai/gpt-5/v-20250930",
  messages: [...],
  stream: true, // ✅ IMPORTANT
});

// 2. Set proper response headers (Next.js)
return new NextResponse(responseStream, {
  headers: {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache, no-transform",
    "Connection": "keep-alive",
  },
});

// 3. Pass isStreaming prop
<C1Component
  c1Response={response}
  isStreaming={true} // ✅ Shows loading indicator
  updateMessage={(msg) => setResponse(msg)}
/>
```

---

## 6. Tool Calling Failures

**Symptom**: Tools not executing, validation errors, or crashes.

**Causes**:
- Invalid Zod schema
- Missing descriptions
- Incorrect tool format
- Arguments not parsed correctly

**Solutions**:

```typescript
import { z } from "zod";
import zodToJsonSchema from "zod-to-json-schema";

// ✅ Proper schema with descriptions
const toolSchema = z.object({
  query: z.string().describe("Search query"), // DESCRIBE all fields!
  limit: z.number().int().min(1).max(100).describe("Max results"),
});

// ✅ Convert to OpenAI format
const tool = {
  type: "function" as const,
  function: {
    name: "search_web",
    description: "Search the web for information", // Clear description
    parameters: zodToJsonSchema(toolSchema), // Convert schema
  },
};

// ✅ Validate incoming tool calls
try {
  const args = toolSchema.parse(JSON.parse(toolCall.function.arguments));
  const result = await executeTool(args);
} catch (error) {
  if (error instanceof z.ZodError) {
    console.error("Validation failed:", error.errors);
  }
}
```

---

## 7. Thread State Not Persisting

**Symptom**: Threads disappear on page refresh, conversation history lost.

**Cause**: Using in-memory storage instead of a database.

**Solution**:

```typescript
// ❌ Wrong - loses data on restart
const messageStore = new Map<string, Message[]>();

// ✅ Correct - use a database (D1 example)
import { db } from "./database";

export async function saveMessage(threadId: string, message: Message) {
  await db.insert(messages).values({
    threadId,
    role: message.role,
    content: message.content,
    createdAt: new Date(),
  });
}

export async function getThreadMessages(threadId: string) {
  return db
    .select()
    .from(messages)
    .where(eq(messages.threadId, threadId))
    .orderBy(messages.createdAt);
}
```

---

## 8. CSS Conflicts

**Symptom**: Styles from C1 components clash with app styles, layout breaks.

**Cause**: CSS import order or global styles overriding.

**Solution**:

```typescript
// ✅ Correct import order
import "@crayonai/react-ui/styles/index.css"; // C1 styles FIRST
import "./your-app.css"; // Your styles SECOND
```

```css
/* In your CSS, use specificity if needed */
.your-custom-class .c1-message {
  /* Override specific styles */
  background-color: var(--custom-bg);
}

/* Avoid global overrides */
/* ❌ Wrong - too broad */
* {
  margin: 0;
  padding: 0;
}

/* ✅ Better - scoped */
.app-container * {
  margin: 0;
  padding: 0;
}
```

---

## 9. TypeScript Type Errors

**Symptom**: TypeScript complains about missing types or incompatible types.

**Solutions**:

```bash
# 1. Update packages
npm install @thesysai/genui-sdk@latest @crayonai/react-ui@latest
```

```jsonc
// 2. Check tsconfig.json
{
  "compilerOptions": {
    "moduleResolution": "bundler", // or "node16"
    "skipLibCheck": true, // Skip type checking for node_modules
    "types": ["vite/client", "node"]
  }
}
```

```bash
# 3. If errors persist, regenerate types
rm -rf node_modules package-lock.json
npm install
```

---

## 10. CORS Errors

**Symptom**: `Access-Control-Allow-Origin` errors when calling the API.

**Solutions**:

```typescript
// Next.js API Route
export async function POST(req: NextRequest) {
  const response = new NextResponse(stream, {
    headers: {
      "Access-Control-Allow-Origin": "*", // Or a specific domain
      "Access-Control-Allow-Methods": "POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type",
    },
  });
  return response;
}

// Express
import cors from "cors";
app.use(cors({
  origin: "http://localhost:5173", // Your frontend URL
  methods: ["POST", "OPTIONS"],
}));

// Cloudflare Workers (Hono)
import { cors } from "hono/cors";
app.use("/*", cors({
  origin: "*",
  allowMethods: ["POST", "OPTIONS"],
}));
```

---

## 11. Rate Limiting Issues

**Symptom**: API calls fail with 429 errors, no retry mechanism.

**Solution**:

```typescript
async function callApiWithRetry(
  apiCall: () => Promise<any>,
  maxRetries: number = 3
) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await apiCall();
    } catch (error: any) {
      if (error.status === 429 && i < maxRetries - 1) {
        const waitTime = Math.pow(2, i) * 1000; // Exponential backoff
        console.log(`Rate limited. Waiting ${waitTime}ms...`);
        await new Promise((resolve) => setTimeout(resolve, waitTime));
        continue;
      }
      throw error;
    }
  }
}

// Usage
const response = await callApiWithRetry(() =>
  client.chat.completions.create({...})
);
```

---

## 12. Authentication Token Errors

**Symptom**: `401 Unauthorized` even with API key set.

**Solutions**:

```bash
# 1. Verify environment variable is set
echo $THESYS_API_KEY # Should show your key

# 2. Check .env file location and format
# .env (in project root)
THESYS_API_KEY=your_api_key_here

# 3. For Vite, use VITE_ prefix for client-side
VITE_THESYS_API_KEY=your_key # Client-side
THESYS_API_KEY=your_key      # Server-side

# 4. For Cloudflare Workers, use secrets
npx wrangler secret put THESYS_API_KEY
```

```typescript
// 5. Verify in code
if (!process.env.THESYS_API_KEY) {
  throw new Error("THESYS_API_KEY is not set");
}
```

---

## 13. Invalid Model ID Error

**Symptom**: API returns 400 error: "Model not found" or "Invalid model ID".

**Causes**:
- Using outdated model version identifier
- Typo in model name
- Using deprecated model
- Wrong prefix (`c1/` vs `c1-exp/`)

**Solutions**:

```typescript
// ❌ Wrong - old version
model: "c1/anthropic/claude-sonnet-4/v-20250617"

// ✅ Correct - current stable version (as of Oct 2025)
model: "c1/anthropic/claude-sonnet-4/v-20250930"

// ❌ Wrong - non-existent models
model: "c1/openai/gpt-5-mini" // Doesn't exist
model: "c1/openai/gpt-4o"     // Not available via C1

// ✅ Correct - actual models
model: "c1/openai/gpt-5/v-20250930"       // GPT 5 stable
model: "c1-exp/openai/gpt-4.1/v-20250617" // GPT 4.1 experimental

// ❌ Wrong - deprecated
model: "c1/anthropic/claude-sonnet-3-5"
model: "c1/anthropic/claude-3.7-sonnet"

// ✅ Correct - current stable
model: "c1/anthropic/claude-sonnet-4/v-20250930"
```

**Current Stable Models** (October 2025):

| Provider | Model ID | Status |
|----------|----------|--------|
| Anthropic | `c1/anthropic/claude-sonnet-4/v-20250930` | ✅ Stable |
| OpenAI | `c1/openai/gpt-5/v-20250930` | ✅ Stable |
| OpenAI | `c1-exp/openai/gpt-4.1/v-20250617` | ⚠️ Experimental |
| Anthropic | `c1-exp/anthropic/claude-3.5-haiku/v-20250709` | ⚠️ Experimental |

**How to Find Latest Models**:
1. Visit [TheSys Playground](https://console.thesys.dev/playground)
2. Check the model dropdown for current versions
3. Look for `v-YYYYMMDD` format in the model ID
4. Prefer stable (`c1/`) over experimental (`c1-exp/`) for production
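
To catch typos and missing version suffixes before they reach the API, the expected ID shape can be checked locally. The regex below reflects the `prefix/provider/model/v-date` pattern used throughout this guide; it is an illustrative sketch, not an official validator:

```typescript
// Illustrative check of the model ID shape used in this guide:
// "c1" or "c1-exp", then provider, model name, and a v-YYYYMMDD suffix.
const MODEL_ID_PATTERN = /^c1(-exp)?\/[a-z0-9.-]+\/[a-z0-9.-]+\/v-\d{8}$/;

export function looksLikeValidModelId(id: string): boolean {
  return MODEL_ID_PATTERN.test(id);
}
```

A check like this only validates the shape; the API is still the source of truth for which IDs exist.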

**Verification**:
```bash
# Test if model ID works
curl -X POST https://api.thesys.dev/v1/embed/chat/completions \
  -H "Authorization: Bearer $THESYS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "c1/anthropic/claude-sonnet-4/v-20250930",
    "messages": [{"role": "user", "content": "test"}]
  }'
```

---

## Debugging Checklist

When encountering issues:

- [ ] Check package versions match compatibility matrix
- [ ] Verify API key is set and correct
- [ ] Inspect network tab for actual API responses
- [ ] Check console for errors and warnings
- [ ] Verify streaming is enabled (`stream: true`)
- [ ] Confirm ThemeProvider is wrapping components
- [ ] Check message array format (system first)
- [ ] Validate Zod schemas have descriptions
- [ ] Test with minimal example first
- [ ] Check official TheSys docs for updates
|
||||
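
The Zod checklist item matters because C1 relies on parameter descriptions when deciding how to fill tool arguments. As a quick guard, a small helper can lint the JSON Schema that `zod-to-json-schema` emits for a tool; this is a hypothetical sketch, not part of the TheSys SDK.

```typescript
// Minimal shape of the JSON Schema object emitted by zod-to-json-schema
// for an object schema (only the fields this lint needs).
type JsonSchema = {
  properties?: Record<string, { description?: string }>;
};

// Hypothetical lint helper: return the names of tool parameters that are
// missing a description, so they can be flagged before shipping the tool.
function paramsMissingDescription(schema: JsonSchema): string[] {
  return Object.entries(schema.properties ?? {})
    .filter(([, prop]) => !prop.description)
    .map(([name]) => name);
}
```

Run it over each tool's schema in a test to keep the checklist item enforced automatically.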

---

## Getting Help

1. **Official Docs**: https://docs.thesys.dev
2. **Playground**: https://console.thesys.dev/playground
3. **GitHub Issues**: Search for similar errors
4. **Context7**: Use `/websites/thesys_dev` for latest docs

---

**Last Updated**: 2025-10-26
78
references/component-api.md
Normal file
@@ -0,0 +1,78 @@
# Component API Reference

Complete prop reference for all TheSys C1 components.

---

## `<C1Chat>`

Pre-built chat component with state management.

```typescript
import { C1Chat } from "@thesysai/genui-sdk";

<C1Chat
  apiUrl="/api/chat"
  agentName="Assistant"
  logoUrl="https://..."
  theme={themeObject}
  threadManager={threadManager}
  threadListManager={threadListManager}
  customizeC1={{ ... }}
/>
```

**Props**:

- `apiUrl` (required): Backend API endpoint
- `agentName`: Display name for the AI
- `logoUrl`: Avatar image URL
- `theme`: Theme configuration object
- `threadManager`: For multi-thread support
- `threadListManager`: For thread list UI
- `customizeC1`: Custom components object

---

## `<C1Component>`

Low-level renderer for custom integration.

```typescript
import { C1Component } from "@thesysai/genui-sdk";

<C1Component
  c1Response={response}
  isStreaming={boolean}
  updateMessage={(msg) => setResponse(msg)}
  onAction={({ llmFriendlyMessage, rawAction }) => {...}}
/>
```

**Props**:

- `c1Response` (required): C1 API response string
- `isStreaming`: Shows loading indicator
- `updateMessage`: Callback for response updates
- `onAction`: Handle user interactions

---

## `<ThemeProvider>`

Theme wrapper component.

```typescript
import { ThemeProvider } from "@thesysai/genui-sdk";

<ThemeProvider theme={customTheme} mode="dark">
  <C1Component {...} />
</ThemeProvider>
```

**Props**:

- `theme`: Theme object
- `mode`: "light" | "dark" | "system"
- `children`: React nodes to wrap

---

For complete details, see SKILL.md.
71
scripts/check-versions.sh
Executable file
@@ -0,0 +1,71 @@
#!/bin/bash

# TheSys Generative UI - Version Checker
#
# Verifies installed package versions match recommended versions
# Usage: ./scripts/check-versions.sh

set -e

echo "========================================="
echo "TheSys Generative UI - Version Checker"
echo "========================================="
echo ""

# Check if node_modules exists
if [ ! -d "node_modules" ]; then
  echo "❌ node_modules not found. Run npm/pnpm install first."
  exit 1
fi

# Recommended versions
declare -A RECOMMENDED=(
  ["@thesysai/genui-sdk"]="0.6.40"
  ["@crayonai/react-ui"]="0.8.42"
  ["@crayonai/react-core"]="0.7.6"
  ["openai"]="4.73.0"
  ["zod"]="3.24.1"
  ["react"]="19.0.0"
)

echo "Checking package versions..."
echo ""

ALL_OK=true

for package in "${!RECOMMENDED[@]}"; do
  recommended="${RECOMMENDED[$package]}"

  # Try to get installed version
  if [ -f "node_modules/$package/package.json" ]; then
    installed=$(node -p "require('./node_modules/$package/package.json').version" 2>/dev/null || echo "unknown")

    if [ "$installed" = "unknown" ]; then
      # Guard: a non-numeric version would break the arithmetic tests below
      echo "⚠️  $package: version unknown (recommended: ~$recommended)"
      ALL_OK=false
      continue
    fi

    # Simple version comparison: same major, minor may be newer (patch ignored)
    installed_major=$(echo "$installed" | cut -d. -f1)
    installed_minor=$(echo "$installed" | cut -d. -f2)
    recommended_major=$(echo "$recommended" | cut -d. -f1)
    recommended_minor=$(echo "$recommended" | cut -d. -f2)

    if [ "$installed_major" -eq "$recommended_major" ] && [ "$installed_minor" -ge "$recommended_minor" ]; then
      echo "✅ $package: $installed (recommended: ~$recommended)"
    else
      echo "⚠️  $package: $installed (recommended: ~$recommended)"
      ALL_OK=false
    fi
  else
    echo "❌ $package: NOT INSTALLED (recommended: ~$recommended)"
    ALL_OK=false
  fi
done

echo ""

if [ "$ALL_OK" = true ]; then
  echo "✅ All packages are at compatible versions!"
else
  echo "⚠️  Some packages need updating. Run:"
  echo "   npm install @thesysai/genui-sdk@^0.6.40 @crayonai/react-ui@^0.8.42 @crayonai/react-core@^0.7.6"
fi

echo ""
echo "For version compatibility matrix, see references/common-errors.md"
75
scripts/install-dependencies.sh
Executable file
@@ -0,0 +1,75 @@
#!/bin/bash

# TheSys Generative UI - Dependency Installation Script
#
# Installs all required packages for TheSys C1 integration
# Usage: ./scripts/install-dependencies.sh

set -e

echo "========================================="
echo "TheSys Generative UI - Dependency Installation"
echo "========================================="
echo ""

# Detect package manager
if command -v pnpm &> /dev/null; then
  PM="pnpm"
elif command -v npm &> /dev/null; then
  PM="npm"
else
  echo "❌ Error: No package manager found (npm or pnpm required)"
  exit 1
fi

echo "📦 Using package manager: $PM"
echo ""

# Core packages
echo "Installing core TheSys packages..."
$PM install @thesysai/genui-sdk@^0.6.40 \
  @crayonai/react-ui@^0.8.42 \
  @crayonai/react-core@^0.7.6 \
  @crayonai/stream@^0.1.0

# React dependencies (if not already installed)
echo ""
echo "Checking React dependencies..."
if ! $PM list react &> /dev/null; then
  echo "Installing React..."
  $PM install react@^19.0.0 react-dom@^19.0.0
fi

# Error boundary
echo ""
echo "Installing React Error Boundary..."
$PM install react-error-boundary@^5.0.0

# AI integration
echo ""
echo "Installing OpenAI SDK..."
$PM install openai@^4.73.0

# Tool calling
echo ""
echo "Installing Zod for tool calling..."
$PM install zod@^3.24.1 zod-to-json-schema@^3.24.1

# Optional dependencies
echo ""
read -p "Install optional dependencies (Tavily for web search)? [y/N]: " install_optional

if [[ $install_optional =~ ^[Yy]$ ]]; then
  echo "Installing optional dependencies..."
  $PM install @tavily/core@^1.0.0
fi

echo ""
echo "✅ Installation complete!"
echo ""
echo "Next steps:"
echo "1. Set THESYS_API_KEY environment variable"
echo "2. Choose a template from templates/ directory"
echo "3. Start building!"
echo ""
echo "For help, see README.md or SKILL.md"
191
templates/cloudflare-workers/frontend-setup.tsx
Normal file
@@ -0,0 +1,191 @@
/**
 * Cloudflare Workers + Vite Frontend Setup
 *
 * File: src/App.tsx
 *
 * Frontend configuration for a Vite + React app deployed with Cloudflare Workers.
 * Uses relative paths since the Worker and frontend run on the same origin.
 *
 * Key differences from standalone Vite:
 * - API URLs are relative (not absolute)
 * - No CORS issues (same origin)
 * - Worker handles routing, serves static assets
 */

import "@crayonai/react-ui/styles/index.css";
import { ThemeProvider, C1Component } from "@thesysai/genui-sdk";
import { useState } from "react";
import "./App.css";

export default function App() {
  const [isLoading, setIsLoading] = useState(false);
  const [c1Response, setC1Response] = useState("");
  const [question, setQuestion] = useState("");
  const [error, setError] = useState<string | null>(null);

  const makeApiCall = async (query: string, previousResponse?: string) => {
    if (!query.trim()) return;

    setIsLoading(true);
    setError(null);

    try {
      // NOTE: Using a relative path - the Worker handles this on the same domain
      const response = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          prompt: query,
          previousC1Response: previousResponse || c1Response,
        }),
      });

      if (!response.ok) {
        const errorData = await response.json();
        throw new Error(errorData.error || `HTTP ${response.status}`);
      }

      const data = await response.json();
      setC1Response(data.response);
      setQuestion("");
    } catch (err) {
      console.error("API Error:", err);
      setError(err instanceof Error ? err.message : "Failed to get response");
    } finally {
      setIsLoading(false);
    }
  };

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    makeApiCall(question);
  };

  return (
    <div className="app-container">
      <header className="app-header">
        <h1>Cloudflare AI Assistant</h1>
        <p>Powered by Workers + TheSys C1</p>
      </header>

      <form onSubmit={handleSubmit} className="input-form">
        <input
          type="text"
          value={question}
          onChange={(e) => setQuestion(e.target.value)}
          placeholder="Ask me anything..."
          disabled={isLoading}
          className="question-input"
          autoFocus
        />
        <button
          type="submit"
          disabled={isLoading || !question.trim()}
          className="submit-button"
        >
          {isLoading ? "Processing..." : "Send"}
        </button>
      </form>

      {error && (
        <div className="error-message">
          <strong>Error:</strong> {error}
        </div>
      )}

      {c1Response && (
        <div className="response-container">
          <ThemeProvider>
            <C1Component
              c1Response={c1Response}
              isStreaming={isLoading}
              updateMessage={(message) => setC1Response(message)}
              onAction={({ llmFriendlyMessage }) => {
                if (!isLoading) {
                  makeApiCall(llmFriendlyMessage, c1Response);
                }
              }}
            />
          </ThemeProvider>
        </div>
      )}
    </div>
  );
}

/**
 * vite.config.ts Configuration
 *
 * IMPORTANT: When using @cloudflare/vite-plugin, the Worker runs
 * alongside Vite on the same port, so use relative API paths.
 *
 * import { defineConfig } from "vite";
 * import react from "@vitejs/plugin-react";
 * import { cloudflare } from "@cloudflare/vite-plugin";
 *
 * export default defineConfig({
 *   plugins: [
 *     react(),
 *     cloudflare({
 *       configPath: "./wrangler.jsonc",
 *     }),
 *   ],
 *   build: {
 *     outDir: "dist",
 *   },
 * });
 */

/**
 * Alternative: Streaming Setup
 *
 * For streaming responses, modify the API call:
 *
 * const makeStreamingApiCall = async (query: string) => {
 *   setIsLoading(true);
 *   setC1Response("");
 *
 *   const response = await fetch("/api/chat/stream", {
 *     method: "POST",
 *     headers: { "Content-Type": "application/json" },
 *     body: JSON.stringify({ prompt: query }),
 *   });
 *
 *   if (!response.ok) {
 *     throw new Error("Stream failed");
 *   }
 *
 *   const reader = response.body?.getReader();
 *   if (!reader) return;
 *
 *   const decoder = new TextDecoder();
 *   let accumulated = "";
 *
 *   while (true) {
 *     const { done, value } = await reader.read();
 *     if (done) break;
 *
 *     const chunk = decoder.decode(value);
 *     accumulated += chunk;
 *     setC1Response(accumulated);
 *   }
 *
 *   setIsLoading(false);
 * };
 */

/**
 * Deployment Steps:
 *
 * 1. Build frontend:
 *    npm run build
 *
 * 2. Deploy to Cloudflare:
 *    npx wrangler deploy
 *
 * 3. Set secrets:
 *    npx wrangler secret put THESYS_API_KEY
 *
 * 4. Test:
 *    Visit your-worker.workers.dev
 */
247
templates/cloudflare-workers/worker-backend.ts
Normal file
@@ -0,0 +1,247 @@
/**
 * Cloudflare Worker Backend with Hono + TheSys C1
 *
 * File: backend/src/index.ts
 *
 * Features:
 * - Hono routing
 * - TheSys C1 API proxy
 * - Streaming support
 * - Static assets serving
 * - CORS handling
 */

import { Hono } from "hono";
import { cors } from "hono/cors";

type Bindings = {
  THESYS_API_KEY: string;
  ASSETS: Fetcher;
};

const app = new Hono<{ Bindings: Bindings }>();

// CORS middleware
app.use("/*", cors({
  origin: "*",
  allowMethods: ["GET", "POST", "OPTIONS"],
  allowHeaders: ["Content-Type", "Authorization"],
}));

// ============================================================================
// Chat API Endpoint
// ============================================================================

app.post("/api/chat", async (c) => {
  try {
    const { prompt, previousC1Response } = await c.req.json();

    if (!prompt || typeof prompt !== "string") {
      return c.json({ error: "Invalid prompt" }, 400);
    }

    // Check API key binding
    if (!c.env.THESYS_API_KEY) {
      console.error("THESYS_API_KEY binding not found");
      return c.json({ error: "Server configuration error" }, 500);
    }

    // Build messages
    const messages = [
      {
        role: "system",
        content: "You are a helpful AI assistant that generates interactive UI.",
      },
      {
        role: "user",
        content: prompt,
      },
    ];

    if (previousC1Response) {
      messages.splice(1, 0, {
        role: "assistant",
        content: previousC1Response,
      });
    }

    // Call TheSys C1 API
    const response = await fetch(
      "https://api.thesys.dev/v1/embed/chat/completions",
      {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${c.env.THESYS_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "c1/openai/gpt-5/v-20250930",
          messages,
          stream: false, // Or handle streaming
          temperature: 0.7,
          max_tokens: 2000,
        }),
      }
    );

    if (!response.ok) {
      const error = await response.text();
      console.error("TheSys API Error:", error);
      return c.json(
        { error: "Failed to get AI response" },
        response.status
      );
    }

    const data = await response.json();

    return c.json({
      response: data.choices[0]?.message?.content || "",
      usage: data.usage,
    });
  } catch (error) {
    console.error("Chat endpoint error:", error);
    return c.json(
      { error: error instanceof Error ? error.message : "Internal error" },
      500
    );
  }
});

// ============================================================================
// Streaming Chat Endpoint
// ============================================================================

app.post("/api/chat/stream", async (c) => {
  try {
    const { prompt } = await c.req.json();

    const response = await fetch(
      "https://api.thesys.dev/v1/embed/chat/completions",
      {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${c.env.THESYS_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "c1/openai/gpt-5/v-20250930",
          messages: [
            { role: "system", content: "You are a helpful assistant." },
            { role: "user", content: prompt },
          ],
          stream: true,
        }),
      }
    );

    if (!response.ok) {
      return c.json({ error: "Stream failed" }, response.status);
    }

    // Return the stream directly
    return new Response(response.body, {
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "Connection": "keep-alive",
      },
    });
  } catch (error) {
    console.error("Stream error:", error);
    return c.json({ error: "Stream failed" }, 500);
  }
});

// ============================================================================
// Health Check
// ============================================================================

app.get("/api/health", (c) => {
  return c.json({
    status: "ok",
    timestamp: new Date().toISOString(),
  });
});

// ============================================================================
// Serve Static Assets (Vite build output)
// ============================================================================

// The ASSETS binding configured in wrangler.jsonc serves the Vite build;
// forwarding unmatched requests to it avoids needing a static-file manifest
// in the Worker bundle.
app.get("/*", (c) => c.env.ASSETS.fetch(c.req.raw));

export default app;

/**
 * Alternative: Using Workers AI directly (cheaper for some models)
 *
 * type Bindings = {
 *   AI: any; // Cloudflare AI binding
 * };
 *
 * app.post("/api/chat", async (c) => {
 *   const { prompt } = await c.req.json();
 *
 *   const aiResponse = await c.env.AI.run('@cf/meta/llama-3-8b-instruct', {
 *     messages: [
 *       { role: "system", content: "You are a helpful assistant." },
 *       { role: "user", content: prompt },
 *     ],
 *   });
 *
 *   // Then optionally send to TheSys C1 for UI generation
 *   const c1Response = await fetch("https://api.thesys.dev/v1/embed/chat/completions", {
 *     method: "POST",
 *     headers: {
 *       "Authorization": `Bearer ${c.env.THESYS_API_KEY}`,
 *       "Content-Type": "application/json",
 *     },
 *     body: JSON.stringify({
 *       model: "c1/openai/gpt-5/v-20250930",
 *       messages: [
 *         {
 *           role: "system",
 *           content: "Generate a UI for this content: " + aiResponse.response,
 *         },
 *       ],
 *     }),
 *   });
 *
 *   // ... return c1Response
 * });
 */

/**
 * Alternative: With D1 Database for message persistence
 *
 * type Bindings = {
 *   THESYS_API_KEY: string;
 *   DB: D1Database; // D1 binding
 * };
 *
 * app.post("/api/chat", async (c) => {
 *   const { userId, threadId, prompt } = await c.req.json();
 *
 *   // Save user message
 *   await c.env.DB.prepare(
 *     "INSERT INTO messages (thread_id, user_id, role, content) VALUES (?, ?, ?, ?)"
 *   )
 *     .bind(threadId, userId, "user", prompt)
 *     .run();
 *
 *   // Get conversation history
 *   const { results } = await c.env.DB.prepare(
 *     "SELECT role, content FROM messages WHERE thread_id = ? ORDER BY created_at"
 *   )
 *     .bind(threadId)
 *     .all();
 *
 *   const messages = [
 *     { role: "system", content: "You are a helpful assistant." },
 *     ...results,
 *   ];
 *
 *   // Call TheSys API with full history...
 * });
 */
106
templates/cloudflare-workers/wrangler.jsonc
Normal file
@@ -0,0 +1,106 @@
{
  // Cloudflare Worker Configuration with Static Assets
  //
  // This configures a Worker that serves a Vite+React frontend
  // and handles API routes for TheSys C1 integration.
  //
  // Prerequisites:
  // 1. Set THESYS_API_KEY secret: npx wrangler secret put THESYS_API_KEY
  // 2. Build frontend: npm run build
  // 3. Deploy: npx wrangler deploy

  "name": "thesys-chat-worker",
  "compatibility_date": "2025-10-26",
  "compatibility_flags": ["nodejs_compat"],

  // Main worker file (Hono backend)
  "main": "backend/src/index.ts",

  // Static assets configuration (Vite build output)
  "assets": {
    "directory": "dist",
    "binding": "ASSETS",
    "html_handling": "auto-trailing-slash",
    "not_found_handling": "single-page-application"
  },

  // Environment variables (non-sensitive)
  "vars": {
    "ENVIRONMENT": "production",
    "LOG_LEVEL": "info"
  },

  // Secrets (set via CLI, not in this file!)
  // npx wrangler secret put THESYS_API_KEY
  // npx wrangler secret put TAVILY_API_KEY (optional, for tool calling)

  // Optional: D1 Database binding for message persistence
  // "d1_databases": [
  //   {
  //     "binding": "DB",
  //     "database_name": "thesys-chat-db",
  //     "database_id": "your-database-id"
  //   }
  // ],

  // Optional: KV namespace for caching
  // "kv_namespaces": [
  //   {
  //     "binding": "KV",
  //     "id": "your-kv-id"
  //   }
  // ],

  // Optional: Workers AI binding (for hybrid approach)
  // "ai": {
  //   "binding": "AI"
  // },

  // Optional: Durable Objects for real-time features
  // "durable_objects": {
  //   "bindings": [
  //     {
  //       "name": "CHAT_SESSION",
  //       "class_name": "ChatSession",
  //       "script_name": "thesys-chat-worker"
  //     }
  //   ]
  // },

  // Node.js compatibility for packages like the OpenAI SDK comes from the
  // "nodejs_compat" compatibility flag above; do not also set the legacy
  // "node_compat" key, as Wrangler rejects the two together.

  // Build configuration
  "build": {
    "command": "npm run build"
  },

  // Development settings
  "dev": {
    "port": 8787,
    "local_protocol": "http"
  },

  // Observability
  "observability": {
    "enabled": true
  },

  // Routes (optional - for custom domains)
  // "routes": [
  //   {
  //     "pattern": "chat.yourdomain.com/*",
  //     "zone_name": "yourdomain.com"
  //   }
  // ],

  // Workers Limits
  "limits": {
    "cpu_ms": 50000
  },

  // Placement (optional - for closer to users)
  // "placement": {
  //   "mode": "smart"
  // }
}
175
templates/nextjs/api-chat-route.ts
Normal file
@@ -0,0 +1,175 @@
/**
 * Next.js App Router - API Route for Chat
 *
 * File: app/api/chat/route.ts
 *
 * Handles streaming chat completions with TheSys C1 API.
 *
 * Features:
 * - Streaming responses
 * - OpenAI SDK integration
 * - Error handling
 * - CORS headers
 */

import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";
import { transformStream } from "@crayonai/stream";

const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1/embed",
  apiKey: process.env.THESYS_API_KEY,
});

// System prompt for the AI
const SYSTEM_PROMPT = `You are a helpful AI assistant that generates interactive user interfaces.
When responding:
- Use clear, concise language
- Generate appropriate UI components (charts, tables, forms) when beneficial
- Ask clarifying questions when needed
- Be friendly and professional`;

export async function POST(req: NextRequest) {
  try {
    const { prompt, previousC1Response } = await req.json();

    if (!prompt || typeof prompt !== "string") {
      return NextResponse.json(
        { error: "Invalid prompt" },
        { status: 400 }
      );
    }

    // Check API key
    if (!process.env.THESYS_API_KEY) {
      console.error("THESYS_API_KEY is not set");
      return NextResponse.json(
        { error: "Server configuration error" },
        { status: 500 }
      );
    }

    // Build messages array
    const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: prompt },
    ];

    // If there's previous context, include it
    if (previousC1Response) {
      messages.splice(1, 0, {
        role: "assistant",
        content: previousC1Response,
      });
    }

    // Create streaming completion
    const stream = await client.chat.completions.create({
      model: "c1/openai/gpt-5/v-20250930", // or claude-sonnet-4/v-20250930
      messages,
      stream: true,
      temperature: 0.7,
      max_tokens: 2000,
    });

    // Transform OpenAI stream to C1 format
    const responseStream = transformStream(stream, (chunk) => {
      return chunk.choices[0]?.delta?.content || "";
    }) as ReadableStream<string>;

    return new NextResponse(responseStream, {
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache, no-transform",
        "Connection": "keep-alive",
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Methods": "POST, OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type",
      },
    });
  } catch (error) {
    console.error("Chat API Error:", error);

    // Handle specific OpenAI errors
    if (error instanceof OpenAI.APIError) {
      return NextResponse.json(
        {
          error: error.message,
          type: error.type,
          code: error.code,
        },
        { status: error.status || 500 }
      );
    }

    return NextResponse.json(
      { error: "Internal server error" },
      { status: 500 }
    );
  }
}

// Handle preflight requests
export async function OPTIONS() {
  return new NextResponse(null, {
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type",
    },
  });
}

/**
 * Alternative: Using Anthropic (Claude) models
 *
 * const stream = await client.chat.completions.create({
 *   model: "c1/anthropic/claude-sonnet-4/v-20250930",
 *   messages,
 *   stream: true,
 *   temperature: 0.8,
 *   max_tokens: 4096,
 * });
 */

/**
 * Alternative: With message persistence
 *
 * import { db } from "@/lib/db";
 *
 * export async function POST(req: NextRequest) {
 *   const { userId } = auth(); // Clerk, NextAuth, etc.
 *   const { prompt, threadId } = await req.json();
 *
 *   // Save user message
 *   await db.insert(messages).values({
 *     threadId,
 *     userId,
 *     role: "user",
 *     content: prompt,
 *   });
 *
 *   // Get conversation history
 *   const history = await db
 *     .select()
 *     .from(messages)
 *     .where(eq(messages.threadId, threadId))
 *     .orderBy(messages.createdAt);
 *
 *   const llmMessages = history.map((m) => ({
 *     role: m.role,
 *     content: m.content,
 *   }));
 *
 *   const stream = await client.chat.completions.create({
 *     model: "c1/openai/gpt-5/v-20250930",
 *     messages: [{ role: "system", content: SYSTEM_PROMPT }, ...llmMessages],
 *     stream: true,
 *   });
 *
 *   // ... transform and return stream
 *
 *   // Save assistant response after streaming completes
 *   // (You'd need to handle this in the client or use a callback)
 * }
 */
128
templates/nextjs/app-page.tsx
Normal file
@@ -0,0 +1,128 @@
/**
 * Next.js App Router - Page Component with C1Chat
 *
 * File: app/page.tsx
 *
 * Simplest possible integration - just drop in C1Chat and point to the API route.
 *
 * Features:
 * - Pre-built C1Chat component
 * - Automatic state management
 * - Thread support (optional)
 * - Responsive design
 */

"use client";

import { C1Chat } from "@thesysai/genui-sdk";
import { themePresets } from "@crayonai/react-ui";
import "@crayonai/react-ui/styles/index.css";

export default function Home() {
  return (
    <main className="min-h-screen bg-gray-50 dark:bg-gray-900">
      <div className="container mx-auto p-4">
        <C1Chat
          apiUrl="/api/chat"
          agentName="AI Assistant"
          logoUrl="https://placehold.co/100x100/3b82f6/ffffff?text=AI"
          theme={themePresets.default}
        />
      </div>
    </main>
  );
}

/**
 * Alternative: With custom theme and dark mode
 *
 * import { useState, useEffect } from "react";
 *
 * function useSystemTheme() {
 *   const [theme, setTheme] = useState<"light" | "dark">("light");
 *
 *   useEffect(() => {
 *     const mediaQuery = window.matchMedia("(prefers-color-scheme: dark)");
 *     setTheme(mediaQuery.matches ? "dark" : "light");
 *
 *     const handler = (e: MediaQueryListEvent) => {
 *       setTheme(e.matches ? "dark" : "light");
 *     };
 *
 *     mediaQuery.addEventListener("change", handler);
 *     return () => mediaQuery.removeEventListener("change", handler);
 *   }, []);
 *
 *   return theme;
 * }
 *
 * export default function Home() {
 *   const systemTheme = useSystemTheme();
 *
 *   return (
 *     <C1Chat
 *       apiUrl="/api/chat"
 *       theme={{ ...themePresets.candy, mode: systemTheme }}
 *     />
 *   );
 * }
 */

/**
 * Alternative: With thread management
 *
 * import {
 *   useThreadListManager,
 *   useThreadManager,
 * } from "@thesysai/genui-sdk";
 *
 * export default function Home() {
 *   const threadListManager = useThreadListManager({
 *     fetchThreadList: async () => {
 *       const res = await fetch("/api/threads");
 *       return res.json();
 *     },
 *     deleteThread: async (threadId: string) => {
 *       await fetch(`/api/threads/${threadId}`, { method: "DELETE" });
 *     },
 *     updateThread: async (thread) => {
 *       const res = await fetch(`/api/threads/${thread.threadId}`, {
 *         method: "PUT",
 *         body: JSON.stringify(thread),
 *       });
 *       return res.json();
 *     },
 *     createThread: async (firstMessage) => {
 *       const res = await fetch("/api/threads", {
 *         method: "POST",
 *         body: JSON.stringify({ title: firstMessage.message }),
 *       });
 *       return res.json();
 *     },
 *     onSwitchToNew: () => {
 *       window.history.replaceState(null, "", "/");
 *     },
 *     onSelectThread: (threadId) => {
 *       window.history.replaceState(null, "", `/?threadId=${threadId}`);
 *     },
 *   });
 *
 *   const threadManager = useThreadManager({
 *     threadListManager,
 *     loadThread: async (threadId) => {
 *       const res = await fetch(`/api/threads/${threadId}/messages`);
 *       return res.json();
 *     },
 *     onUpdateMessage: async ({ message }) => {
 *       // Handle message updates
 *     },
 *   });
 *
 *   return (
 *     <C1Chat
 *       threadManager={threadManager}
 *       threadListManager={threadListManager}
 *     />
 *   );
 * }
 */
43
templates/nextjs/package.json
Normal file
@@ -0,0 +1,43 @@
{
  "name": "thesys-nextjs-example",
  "version": "1.0.0",
  "private": true,
  "description": "Next.js App Router integration with TheSys Generative UI",
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  },
  "dependencies": {
    "@thesysai/genui-sdk": "^0.6.40",
    "@crayonai/react-ui": "^0.8.42",
    "@crayonai/react-core": "^0.7.6",
    "@crayonai/stream": "^0.1.0",
    "next": "^15.1.4",
    "react": "^19.0.0",
    "react-dom": "^19.0.0",
    "react-error-boundary": "^5.0.0",
    "openai": "^4.73.0",
    "zod": "^3.24.1",
    "zod-to-json-schema": "^3.24.1"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "@types/react": "^19.0.0",
    "@types/react-dom": "^19.0.0",
    "typescript": "^5.7.3",
    "eslint": "^9.0.0",
    "eslint-config-next": "^15.1.4",
    "tailwindcss": "^4.1.14",
    "postcss": "^8.4.49",
    "autoprefixer": "^10.4.20"
  },
  "optionalDependencies": {
    "@tavily/core": "^1.0.0",
    "@clerk/nextjs": "^6.10.0"
  },
  "engines": {
    "node": ">=20.0.0"
  }
}
325
templates/nextjs/tool-calling-route.ts
Normal file
@@ -0,0 +1,325 @@
/**
 * Next.js API Route with Tool Calling
 *
 * File: app/api/chat-with-tools/route.ts
 *
 * Demonstrates tool calling integration with TheSys C1.
 * Includes:
 * - Zod schema definitions
 * - Web search tool (Tavily)
 * - Product inventory tool
 * - Order creation tool
 * - Streaming with tool execution
 */

import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";
import { z } from "zod";
import zodToJsonSchema from "zod-to-json-schema";
import { transformStream } from "@crayonai/stream";
import { TavilySearchAPIClient } from "@tavily/core";

const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1/embed",
  apiKey: process.env.THESYS_API_KEY,
});

const tavily = new TavilySearchAPIClient({
  apiKey: process.env.TAVILY_API_KEY || "",
});

// ============================================================================
// Tool Schemas
// ============================================================================

const webSearchSchema = z.object({
  query: z.string().describe("The search query"),
  max_results: z
    .number()
    .int()
    .min(1)
    .max(10)
    .default(5)
    .describe("Maximum number of results"),
});

const productLookupSchema = z.object({
  product_type: z
    .enum(["gloves", "hat", "scarf", "all"])
    .optional()
    .describe("Type of product to lookup, or 'all' for everything"),
});

const orderItemSchema = z.discriminatedUnion("type", [
  z.object({
    type: z.literal("gloves"),
    size: z.enum(["S", "M", "L", "XL"]),
    color: z.string(),
    quantity: z.number().int().min(1),
  }),
  z.object({
    type: z.literal("hat"),
    style: z.enum(["beanie", "baseball", "fedora"]),
    color: z.string(),
    quantity: z.number().int().min(1),
  }),
  z.object({
    type: z.literal("scarf"),
    length: z.enum(["short", "medium", "long"]),
    material: z.enum(["wool", "cotton", "silk"]),
    quantity: z.number().int().min(1),
  }),
]);

const createOrderSchema = z.object({
  customer_email: z.string().email().describe("Customer email address"),
  items: z.array(orderItemSchema).min(1).describe("Items to order"),
});

// ============================================================================
// Tool Definitions
// ============================================================================

const webSearchTool = {
  type: "function" as const,
  function: {
    name: "web_search",
    description: "Search the web for current information using Tavily API",
    parameters: zodToJsonSchema(webSearchSchema),
  },
};

const productLookupTool = {
  type: "function" as const,
  function: {
    name: "lookup_product",
    description: "Look up products in inventory",
    parameters: zodToJsonSchema(productLookupSchema),
  },
};

const createOrderTool = {
  type: "function" as const,
  function: {
    name: "create_order",
    description: "Create a new product order",
    parameters: zodToJsonSchema(createOrderSchema),
  },
};

// ============================================================================
// Tool Execution Functions
// ============================================================================

async function executeWebSearch(args: z.infer<typeof webSearchSchema>) {
  const validated = webSearchSchema.parse(args);

  const results = await tavily.search(validated.query, {
    maxResults: validated.max_results,
    includeAnswer: true,
  });

  return {
    query: validated.query,
    answer: results.answer,
    results: results.results.map((r) => ({
      title: r.title,
      url: r.url,
      snippet: r.content,
    })),
  };
}

async function executeProductLookup(
  args: z.infer<typeof productLookupSchema>
) {
  const validated = productLookupSchema.parse(args);

  // Mock inventory - replace with actual database query
  const inventory = {
    gloves: [
      { id: 1, size: "M", color: "blue", price: 29.99, stock: 15 },
      { id: 2, size: "L", color: "red", price: 29.99, stock: 8 },
    ],
    hat: [
      { id: 3, style: "beanie", color: "black", price: 19.99, stock: 20 },
      { id: 4, style: "baseball", color: "navy", price: 24.99, stock: 12 },
    ],
    scarf: [
      { id: 5, length: "medium", material: "wool", price: 34.99, stock: 10 },
    ],
  };

  if (validated.product_type && validated.product_type !== "all") {
    return {
      type: validated.product_type,
      products: inventory[validated.product_type],
    };
  }

  return { type: "all", inventory };
}

async function executeCreateOrder(args: z.infer<typeof createOrderSchema>) {
  const validated = createOrderSchema.parse(args);

  // Mock order creation - replace with actual database insert
  const orderId = `ORD-${Date.now()}`;

  // Simulate saving to database
  console.log("Creating order:", {
    orderId,
    customer: validated.customer_email,
    items: validated.items,
  });

  return {
    success: true,
    orderId,
    customer_email: validated.customer_email,
    items: validated.items,
    total: validated.items.reduce(
      (sum, item) => sum + item.quantity * 29.99,
      0
    ), // Mock price
    message: `Order ${orderId} created successfully`,
  };
}

// ============================================================================
// API Route Handler
// ============================================================================

export async function POST(req: NextRequest) {
  try {
    const { prompt, previousC1Response } = await req.json();

    if (!prompt || typeof prompt !== "string") {
      return NextResponse.json(
        { error: "Invalid prompt" },
        { status: 400 }
      );
    }

    const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
      {
        role: "system",
        content: `You are a helpful shopping assistant with access to tools.
You can:
1. Search the web for product information
2. Look up products in our inventory
3. Create orders for customers

Always use tools when appropriate. Be friendly and helpful.`,
      },
      { role: "user", content: prompt },
    ];

    if (previousC1Response) {
      messages.splice(1, 0, {
        role: "assistant",
        content: previousC1Response,
      });
    }

    // Create streaming completion with tools
    const llmStream = await client.beta.chat.completions.runTools({
      model: "c1/anthropic/claude-sonnet-4/v-20250930",
      messages,
      stream: true,
      tools: [webSearchTool, productLookupTool, createOrderTool],
      tool_choice: "auto", // Let AI decide when to use tools
      temperature: 0.7,
    });

    // Handle tool execution
    llmStream.on("message", async (event) => {
      if (event.tool_calls) {
        for (const toolCall of event.tool_calls) {
          try {
            let result;

            switch (toolCall.function.name) {
              case "web_search": {
                const searchArgs = JSON.parse(toolCall.function.arguments);
                result = await executeWebSearch(searchArgs);
                break;
              }

              case "lookup_product": {
                const lookupArgs = JSON.parse(toolCall.function.arguments);
                result = await executeProductLookup(lookupArgs);
                break;
              }

              case "create_order": {
                const orderArgs = JSON.parse(toolCall.function.arguments);
                result = await executeCreateOrder(orderArgs);
                break;
              }

              default:
                throw new Error(`Unknown tool: ${toolCall.function.name}`);
            }

            console.log(`Tool ${toolCall.function.name} executed:`, result);

            // Tool results are automatically sent back to the LLM
            // by the runTools method
          } catch (error) {
            console.error(`Tool execution error:`, error);
            // Error will be sent back to LLM
          }
        }
      }
    });

    // Transform stream to C1 format
    const responseStream = transformStream(llmStream, (chunk) => {
      return chunk.choices[0]?.delta?.content || "";
    }) as ReadableStream<string>;

    return new NextResponse(responseStream, {
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache, no-transform",
        "Connection": "keep-alive",
        "Access-Control-Allow-Origin": "*",
      },
    });
  } catch (error) {
    console.error("Chat API Error:", error);

    if (error instanceof z.ZodError) {
      return NextResponse.json(
        {
          error: "Validation error",
          details: error.errors,
        },
        { status: 400 }
      );
    }

    if (error instanceof OpenAI.APIError) {
      return NextResponse.json(
        {
          error: error.message,
          type: error.type,
        },
        { status: error.status || 500 }
      );
    }

    return NextResponse.json(
      { error: "Internal server error" },
      { status: 500 }
    );
  }
}

export async function OPTIONS() {
  return new NextResponse(null, {
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type",
    },
  });
}
267
templates/python-backend/README.md
Normal file
@@ -0,0 +1,267 @@
# Python Backend Templates for TheSys Generative UI

This directory contains production-ready Python backend templates for integrating the TheSys C1 Generative UI API.

## Available Templates

### 1. FastAPI Backend (`fastapi-chat.py`)

Modern async web framework with automatic API documentation.

**Features**:
- Async streaming support
- Built-in request validation with Pydantic
- Automatic OpenAPI docs
- CORS middleware configured
- Type hints throughout

**Run**:
```bash
# Install dependencies
pip install -r requirements.txt

# Set environment variable
export THESYS_API_KEY=sk-th-your-key-here

# Run server
python fastapi-chat.py

# Or with uvicorn directly
uvicorn fastapi-chat:app --reload --port 8000
```

**API Docs**: Visit `http://localhost:8000/docs` for interactive API documentation.

---

### 2. Flask Backend (`flask-chat.py`)

Lightweight and flexible web framework.

**Features**:
- Simple and familiar Flask API
- CORS support with flask-cors
- Streaming response handling
- Easy to customize and extend

**Run**:
```bash
# Install dependencies
pip install -r requirements.txt

# Set environment variable
export THESYS_API_KEY=sk-th-your-key-here

# Run server
python flask-chat.py

# Or with the flask CLI
export FLASK_APP=flask-chat.py
flask run --port 5000
```

---

## Setup

### 1. Install Dependencies

```bash
# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install all dependencies
pip install -r requirements.txt

# OR install only what you need
pip install thesys-genui-sdk openai python-dotenv

# For FastAPI
pip install fastapi uvicorn

# For Flask
pip install flask flask-cors
```

### 2. Environment Variables

Create a `.env` file:

```bash
THESYS_API_KEY=sk-th-your-api-key-here
```

Get your API key from: https://console.thesys.dev/keys

### 3. Choose Your Model

The two templates use different default models to show the range of options:

**FastAPI**: Uses Claude Sonnet 4
```python
model="c1/anthropic/claude-sonnet-4/v-20250930"
```

**Flask**: Uses GPT 5
```python
model="c1/openai/gpt-5/v-20250930"
```

Change to any supported model:
- `c1/anthropic/claude-sonnet-4/v-20250930` - Claude Sonnet 4 (stable)
- `c1/openai/gpt-5/v-20250930` - GPT 5 (stable)
- `c1-exp/openai/gpt-4.1/v-20250617` - GPT 4.1 (experimental)
- `c1-exp/anthropic/claude-3.5-haiku/v-20250709` - Claude 3.5 Haiku (experimental)
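The default can also be made switchable per deployment. A minimal sketch, assuming a `C1_MODEL` environment variable that is illustrative and not defined by the templates:

```python
import os

# C1_MODEL is a hypothetical override variable (not part of the templates);
# it lets a deployment switch models without editing code.
DEFAULT_MODEL = "c1/anthropic/claude-sonnet-4/v-20250930"


def resolve_model() -> str:
    """Return the C1 model ID, preferring the C1_MODEL env override."""
    return os.getenv("C1_MODEL", DEFAULT_MODEL)
```

Pass `resolve_model()` as the `model=` argument in either template.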
---

## Frontend Integration

### React + Vite Example

```typescript
const makeApiCall = async (prompt: string) => {
  const response = await fetch("http://localhost:8000/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt })
  });

  const reader = response.body?.getReader();
  if (!reader) throw new Error("Response has no body");
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    const chunk = decoder.decode(value, { stream: true });
    setC1Response(prev => prev + chunk);
  }
};
```

### Next.js API Route (Proxy)

```typescript
// app/api/chat/route.ts
export async function POST(req: Request) {
  const { prompt } = await req.json();

  const response = await fetch("http://localhost:8000/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt })
  });

  return new Response(response.body, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache"
    }
  });
}
```

---

## Production Deployment

### Environment Variables

```bash
# Production
THESYS_API_KEY=sk-th-production-key
HOST=0.0.0.0
PORT=8000
ENVIRONMENT=production
ALLOWED_ORIGINS=https://your-frontend.com
```

### FastAPI (Recommended for Production)

```bash
# Install production server
pip install gunicorn

# Run with Gunicorn
gunicorn fastapi-chat:app \
  --workers 4 \
  --worker-class uvicorn.workers.UvicornWorker \
  --bind 0.0.0.0:8000 \
  --timeout 120
```

### Flask Production

```bash
# Install production server
pip install gunicorn

# Run with Gunicorn
gunicorn flask-chat:app \
  --workers 4 \
  --bind 0.0.0.0:5000 \
  --timeout 120
```

### Docker Example

```dockerfile
FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY fastapi-chat.py .

ENV THESYS_API_KEY=""
ENV PORT=8000

CMD ["uvicorn", "fastapi-chat:app", "--host", "0.0.0.0", "--port", "8000"]
```

---

## Troubleshooting

### Common Issues

**1. Import Error: `thesys_genui_sdk` not found**
```bash
pip install thesys-genui-sdk
```

**2. CORS Errors**

Update the CORS configuration in the template to match your frontend URL:
```python
allow_origins=["http://localhost:5173"]  # Vite default
```

**3. Streaming Not Working**

Ensure that:
- `stream=True` is set in the API call
- the `@with_c1_response` decorator is applied
- the proper response headers are set

**4. Authentication Failed (401)**

Check that `THESYS_API_KEY` is set correctly:
```python
import os
print(os.getenv("THESYS_API_KEY"))  # Should not be None
```
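Several of the issues above come down to missing environment variables, so it can help to check them once at startup. A minimal sketch (the `check_env` helper is illustrative, not part of either template):

```python
import os

# Required configuration for the templates; extend as needed
# (e.g. with TAVILY_API_KEY if you add web search).
REQUIRED_VARS = ["THESYS_API_KEY"]


def check_env(environ=os.environ) -> list[str]:
    """Return the names of required variables that are missing or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]
```

Call `check_env()` before creating the OpenAI client and exit with a clear message if it returns anything.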
---

## Next Steps

1. Copy the template you want to use
2. Install dependencies from `requirements.txt`
3. Set your `THESYS_API_KEY` in `.env`
4. Run the server
5. Connect your React frontend
6. Customize the system prompt and model as needed

For more examples, see the main SKILL.md documentation.
125
templates/python-backend/fastapi-chat.py
Normal file
@@ -0,0 +1,125 @@
"""
TheSys Generative UI - FastAPI Backend Example

This example demonstrates how to set up a FastAPI backend that integrates
with TheSys C1 API for streaming generative UI responses.

Dependencies:
- fastapi
- uvicorn
- thesys-genui-sdk
- openai
- python-dotenv
"""

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from thesys_genui_sdk import with_c1_response, write_content
import openai
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize FastAPI app
app = FastAPI(
    title="TheSys C1 API Backend",
    description="FastAPI backend for TheSys Generative UI",
    version="1.0.0"
)

# Configure CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Configure for your frontend URL in production
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Initialize OpenAI client for TheSys C1 API
client = openai.OpenAI(
    base_url="https://api.thesys.dev/v1/embed",
    api_key=os.getenv("THESYS_API_KEY")
)


# Request model
class ChatRequest(BaseModel):
    prompt: str
    thread_id: str | None = None
    response_id: str | None = None


@app.get("/")
async def root():
    """Health check endpoint"""
    return {
        "status": "ok",
        "message": "TheSys C1 API Backend is running"
    }


@app.post("/api/chat")
@with_c1_response  # Automatically handles streaming headers
async def chat_endpoint(request: ChatRequest):
    """
    Streaming chat endpoint that generates UI components.

    Args:
        request: ChatRequest with prompt and optional thread/response IDs

    Returns:
        StreamingResponse with C1-formatted UI chunks
    """
    try:
        # Create streaming completion request
        stream = client.chat.completions.create(
            model="c1/anthropic/claude-sonnet-4/v-20250930",
            messages=[
                {
                    "role": "system",
                    "content": "You are a helpful AI assistant that creates interactive user interfaces."
                },
                {
                    "role": "user",
                    "content": request.prompt
                }
            ],
            stream=True,
            temperature=0.7,
            max_tokens=4096
        )

        # Stream chunks to frontend
        async def generate():
            for chunk in stream:
                content = chunk.choices[0].delta.content
                if content:
                    yield write_content(content)

        return StreamingResponse(
            generate(),
            media_type="text/event-stream"
        )

    except Exception as e:
        return {
            "error": str(e),
            "message": "Failed to generate response"
        }


if __name__ == "__main__":
    import uvicorn

    # Run the server
    uvicorn.run(
        "fastapi-chat:app",
        host="0.0.0.0",
        port=8000,
        reload=True,
        log_level="info"
    )
119
templates/python-backend/flask-chat.py
Normal file
@@ -0,0 +1,119 @@
"""
TheSys Generative UI - Flask Backend Example

This example demonstrates how to set up a Flask backend that integrates
with TheSys C1 API for streaming generative UI responses.

Dependencies:
- flask
- flask-cors
- thesys-genui-sdk
- openai
- python-dotenv
"""

from flask import Flask, request, Response, jsonify
from flask_cors import CORS
from thesys_genui_sdk import with_c1_response, write_content
import openai
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize Flask app
app = Flask(__name__)

# Configure CORS
CORS(app, resources={
    r"/api/*": {
        "origins": "*",  # Configure for your frontend URL in production
        "allow_headers": "*",
        "expose_headers": "*"
    }
})

# Initialize OpenAI client for TheSys C1 API
client = openai.OpenAI(
    base_url="https://api.thesys.dev/v1/embed",
    api_key=os.getenv("THESYS_API_KEY")
)


@app.route("/")
def root():
    """Health check endpoint"""
    return jsonify({
        "status": "ok",
        "message": "TheSys C1 API Backend is running"
    })


@app.route("/api/chat", methods=["POST"])
@with_c1_response  # Automatically handles streaming headers
def chat():
    """
    Streaming chat endpoint that generates UI components.

    Request JSON:
        {
            "prompt": str,
            "thread_id": str (optional),
            "response_id": str (optional)
        }

    Returns:
        StreamingResponse with C1-formatted UI chunks
    """
    try:
        data = request.get_json()
        prompt = data.get("prompt")

        if not prompt:
            return jsonify({"error": "Prompt is required"}), 400

        # Create streaming completion request
        stream = client.chat.completions.create(
            model="c1/openai/gpt-5/v-20250930",
            messages=[
                {
                    "role": "system",
                    "content": "You are a helpful AI assistant that creates interactive user interfaces."
                },
                {
                    "role": "user",
                    "content": prompt
                }
            ],
            stream=True,
            temperature=0.7,
            max_tokens=4096
        )

        # Stream chunks to frontend
        def generate():
            for chunk in stream:
                content = chunk.choices[0].delta.content
                if content:
                    yield write_content(content)

        return Response(
            generate(),
            mimetype="text/event-stream"
        )

    except Exception as e:
        return jsonify({
            "error": str(e),
            "message": "Failed to generate response"
        }), 500


if __name__ == "__main__":
    # Run the server
    app.run(
        host="0.0.0.0",
        port=5000,
        debug=True
    )
18
templates/python-backend/requirements.txt
Normal file
@@ -0,0 +1,18 @@
# TheSys Generative UI - Python Backend Dependencies

# Core dependencies
thesys-genui-sdk>=0.1.0
openai>=1.59.5
python-dotenv>=1.0.1

# FastAPI dependencies (for fastapi-chat.py)
fastapi>=0.115.6
uvicorn[standard]>=0.34.0
pydantic>=2.10.5

# Flask dependencies (for flask-chat.py)
flask>=3.1.0
flask-cors>=5.0.0

# Optional: For enhanced error handling
python-multipart>=0.0.20
409
templates/shared/streaming-utils.ts
Normal file
@@ -0,0 +1,409 @@
|
||||
/**
|
||||
* Streaming Utilities for TheSys C1
|
||||
*
|
||||
* Helper functions for handling streaming responses from
|
||||
* OpenAI SDK, TheSys API, and transforming streams for C1.
|
||||
*
|
||||
* Works with any framework (Vite, Next.js, Cloudflare Workers).
|
||||
*/
|
||||
|
||||
/**
|
||||
* Convert a ReadableStream to a string
|
||||
*/
|
||||
export async function streamToString(stream: ReadableStream<string>): Promise<string> {
|
||||
const reader = stream.getReader();
|
||||
const decoder = new TextDecoder();
|
||||
let result = "";
|
||||
|
||||
try {
|
||||
while (true) {
|
||||
const { done, value } = await reader.read();
|
||||
if (done) break;
|
||||
|
||||
// value might be string or Uint8Array
|
||||
if (typeof value === "string") {
|
||||
result += value;
|
||||
} else {
|
||||
result += decoder.decode(value, { stream: true });
|
||||
}
|
||||
}
|
||||
|
||||
// Final decode with stream: false
|
||||
result += decoder.decode();
|
||||
|
||||
return result;
|
||||
} finally {
|
||||
reader.releaseLock();
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert a ReadableStream to an array of chunks
|
||||
*/
|
||||
export async function streamToArray<T>(stream: ReadableStream<T>): Promise<T[]> {
|
||||
const reader = stream.getReader();
|
||||
const chunks: T[] = [];
|
||||
|
||||
try {
|
||||
while (true) {
|
||||
const { done, value } = await reader.read();
|
||||
if (done) break;
|
||||
chunks.push(value);
|
||||
}
|
||||
|
||||
return chunks;
|
||||
} finally {
|
||||
reader.releaseLock();
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a pass-through stream that allows reading while data flows
|
||||
*/
|
||||
export function createPassThroughStream<T>(): {
|
||||
readable: ReadableStream<T>;
|
||||
writable: WritableStream<T>;
|
||||
} {
|
||||
const { readable, writable } = new TransformStream<T, T>();
|
||||
return { readable, writable };
|
||||
}
|
||||
|
||||
/**
|
||||
* Transform a stream with a callback function
|
||||
* Similar to @crayonai/stream's transformStream
|
||||
*/
|
||||
export function transformStream<TInput, TOutput>(
|
||||
source: ReadableStream<TInput>,
|
||||
transformer: (chunk: TInput) => TOutput | null,
|
||||
options?: {
|
||||
onStart?: () => void;
|
||||
onEnd?: (data: { accumulated: TOutput[] }) => void;
|
||||
onError?: (error: Error) => void;
|
||||
}
|
||||
): ReadableStream<TOutput> {
|
||||
const accumulated: TOutput[] = [];
|
||||
|
||||
return new ReadableStream<TOutput>({
|
||||
async start(controller) {
|
||||
options?.onStart?.();
|
||||
|
||||
const reader = source.getReader();
|
||||
|
||||
try {
|
||||
while (true) {
|
||||
const { done, value } = await reader.read();
|
||||
|
||||
if (done) {
|
||||
options?.onEnd?.({ accumulated });
|
||||
controller.close();
|
||||
break;
|
||||
}
|
||||
|
||||
const transformed = transformer(value);
|
||||
|
||||
if (transformed !== null) {
|
||||
accumulated.push(transformed);
|
||||
controller.enqueue(transformed);
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
const err = error instanceof Error ? error : new Error(String(error));
|
||||
options?.onError?.(err);
|
||||
controller.error(err);
|
||||
} finally {
|
||||
reader.releaseLock();
|
||||
}
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Merge multiple streams into one
|
||||
*/
|
||||
export function mergeStreams<T>(...streams: ReadableStream<T>[]): ReadableStream<T> {
|
||||
return new ReadableStream<T>({
|
||||
async start(controller) {
|
||||
try {
|
||||
await Promise.all(
|
||||
streams.map(async (stream) => {
|
||||
const reader = stream.getReader();
|
||||
try {
|
||||
while (true) {
|
||||
const { done, value } = await reader.read();
|
||||
if (done) break;
|
||||
controller.enqueue(value);
|
||||
}
|
||||
} finally {
|
||||
reader.releaseLock();
|
||||
}
|
||||
})
|
||||
);
|
||||
controller.close();
|
||||
} catch (error) {
|
||||
controller.error(error);
|
||||
}
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
/**
 * Split a stream into multiple streams
 */
export function splitStream<T>(
  source: ReadableStream<T>,
  count: number
): ReadableStream<T>[] {
  if (count < 2) throw new Error("Count must be at least 2");

  const controllers: ReadableStreamDefaultController<T>[] = [];
  const streams = Array.from({ length: count }, () => {
    return new ReadableStream<T>({
      start(controller) {
        controllers.push(controller);
      },
    });
  });

  (async () => {
    const reader = source.getReader();
    try {
      while (true) {
        const { done, value } = await reader.read();

        if (done) {
          controllers.forEach((c) => c.close());
          break;
        }

        controllers.forEach((c) => c.enqueue(value));
      }
    } catch (error) {
      controllers.forEach((c) => c.error(error));
    } finally {
      reader.releaseLock();
    }
  })();

  return streams;
}

/**
 * Buffer chunks until a condition is met, then flush
 */
export function bufferStream<T>(
  source: ReadableStream<T>,
  shouldFlush: (buffer: T[]) => boolean
): ReadableStream<T[]> {
  return new ReadableStream<T[]>({
    async start(controller) {
      const reader = source.getReader();
      let buffer: T[] = [];

      try {
        while (true) {
          const { done, value } = await reader.read();

          if (done) {
            if (buffer.length > 0) {
              controller.enqueue([...buffer]);
            }
            controller.close();
            break;
          }

          buffer.push(value);

          if (shouldFlush(buffer)) {
            controller.enqueue([...buffer]);
            buffer = [];
          }
        }
      } catch (error) {
        controller.error(error);
      } finally {
        reader.releaseLock();
      }
    },
  });
}

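// Illustrative sketch (not part of the original helpers): a synchronous
// version of bufferStream's flush rule, so the batching behavior is easy to
// see. Items accumulate until shouldFlush(buffer) returns true, then the
// batch is emitted and the buffer reset; a non-empty remainder is emitted
// at the end, mirroring the `done` branch above.
function batchBy<T>(items: T[], shouldFlush: (buffer: T[]) => boolean): T[][] {
  const batches: T[][] = [];
  let buffer: T[] = [];

  for (const item of items) {
    buffer.push(item);
    if (shouldFlush(buffer)) {
      batches.push([...buffer]);
      buffer = [];
    }
  }

  // Final partial batch, like the `if (buffer.length > 0)` branch above
  if (buffer.length > 0) batches.push([...buffer]);

  return batches;
}

// e.g. flush every 2 items:
const exampleBatches = batchBy([1, 2, 3, 4, 5], (b) => b.length >= 2);
// exampleBatches is [[1, 2], [3, 4], [5]]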
/**
 * Rate limit a stream (delay between chunks)
 */
export function rateLimit<T>(
  source: ReadableStream<T>,
  delayMs: number
): ReadableStream<T> {
  return new ReadableStream<T>({
    async start(controller) {
      const reader = source.getReader();

      try {
        while (true) {
          const { done, value } = await reader.read();

          if (done) {
            controller.close();
            break;
          }

          controller.enqueue(value);

          // Wait before next chunk
          if (delayMs > 0) {
            await new Promise((resolve) => setTimeout(resolve, delayMs));
          }
        }
      } catch (error) {
        controller.error(error);
      } finally {
        reader.releaseLock();
      }
    },
  });
}

/**
 * Retry a stream creation if it fails
 */
export async function retryStream<T>(
  createStream: () => Promise<ReadableStream<T>>,
  maxRetries: number = 3,
  delayMs: number = 1000
): Promise<ReadableStream<T>> {
  let lastError: Error | null = null;

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await createStream();
    } catch (error) {
      lastError = error instanceof Error ? error : new Error(String(error));
      console.error(`Stream creation attempt ${attempt + 1} failed:`, lastError);

      if (attempt < maxRetries - 1) {
        // Exponential backoff
        const waitTime = delayMs * Math.pow(2, attempt);
        await new Promise((resolve) => setTimeout(resolve, waitTime));
      }
    }
  }

  throw lastError || new Error("Failed to create stream");
}

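// Illustrative sketch (not part of the original file): retryStream's waits
// follow delayMs * 2^attempt, and no wait happens after the final failure.
// With the defaults (3 attempts, 1000 ms base) the schedule is 1000 ms then
// 2000 ms.
function backoffSchedule(maxRetries: number, delayMs: number): number[] {
  const waits: number[] = [];
  // One fewer wait than attempts: the last failure throws immediately
  for (let attempt = 0; attempt < maxRetries - 1; attempt++) {
    waits.push(delayMs * Math.pow(2, attempt));
  }
  return waits;
}

const defaultWaits = backoffSchedule(3, 1000);
// defaultWaits is [1000, 2000]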
/**
 * Parse Server-Sent Events (SSE) stream
 */
export function parseSSE(
  source: ReadableStream<Uint8Array>
): ReadableStream<{ event?: string; data: string }> {
  const decoder = new TextDecoder();
  let buffer = "";

  return new ReadableStream({
    async start(controller) {
      const reader = source.getReader();

      // Field state must survive across chunk boundaries, so it lives
      // outside the read loop.
      let event = "";
      let data = "";

      try {
        while (true) {
          const { done, value } = await reader.read();

          if (done) {
            controller.close();
            break;
          }

          buffer += decoder.decode(value, { stream: true });
          const lines = buffer.split("\n");
          buffer = lines.pop() || "";

          for (const line of lines) {
            if (line.startsWith("event:")) {
              event = line.slice(6).trim();
            } else if (line.startsWith("data:")) {
              // Per the SSE format, consecutive data lines join with "\n"
              const chunk = line.slice(5).trim();
              data = data ? `${data}\n${chunk}` : chunk;
            } else if (line === "") {
              // Empty line signals end of message
              if (data) {
                controller.enqueue({ event: event || undefined, data });
                event = "";
                data = "";
              }
            }
          }
        }
      } catch (error) {
        controller.error(error);
      } finally {
        reader.releaseLock();
      }
    },
  });
}

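// Hypothetical synchronous sketch of the same line-level rules, for
// illustration: given already-decoded SSE text, collect event/data fields and
// emit one message at each blank line.
function parseSSELines(text: string): { event?: string; data: string }[] {
  const messages: { event?: string; data: string }[] = [];
  let event = "";
  let data = "";

  for (const line of text.split("\n")) {
    if (line.startsWith("event:")) {
      event = line.slice(6).trim();
    } else if (line.startsWith("data:")) {
      const chunk = line.slice(5).trim();
      data = data ? `${data}\n${chunk}` : chunk;
    } else if (line === "") {
      if (data) {
        messages.push({ event: event || undefined, data });
        event = "";
        data = "";
      }
    }
  }

  return messages;
}

const sseMessages = parseSSELines("event: update\ndata: hello\n\ndata: world\n\n");
// sseMessages is [{ event: "update", data: "hello" }, { event: undefined, data: "world" }]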
/**
 * Handle backpressure in streams
 */
export function handleBackpressure<T>(
  source: ReadableStream<T>,
  highWaterMark: number = 10
): ReadableStream<T> {
  return new ReadableStream<T>(
    {
      async start(controller) {
        const reader = source.getReader();

        try {
          while (true) {
            const { done, value } = await reader.read();

            if (done) {
              controller.close();
              break;
            }

            controller.enqueue(value);

            // Check if we need to apply backpressure
            if (controller.desiredSize !== null && controller.desiredSize <= 0) {
              // Wait a bit before continuing
              await new Promise((resolve) => setTimeout(resolve, 10));
            }
          }
        } catch (error) {
          controller.error(error);
        } finally {
          reader.releaseLock();
        }
      },
    },
    { highWaterMark }
  );
}

/**
 * Log stream chunks for debugging
 */
export function debugStream<T>(
  source: ReadableStream<T>,
  label: string = "Stream"
): ReadableStream<T> {
  let count = 0;

  return transformStream(
    source,
    (chunk) => {
      console.log(`[${label}] Chunk ${++count}:`, chunk);
      return chunk;
    },
    {
      onStart: () => console.log(`[${label}] Stream started`),
      onEnd: ({ accumulated }) =>
        console.log(`[${label}] Stream ended. Total chunks: ${accumulated.length}`),
      onError: (error) => console.error(`[${label}] Stream error:`, error),
    }
  );
}
318
templates/shared/theme-config.ts
Normal file
@@ -0,0 +1,318 @@
/**
 * Reusable Theme Configurations for TheSys C1
 *
 * Collection of custom theme objects that can be used across
 * any framework (Vite, Next.js, Cloudflare Workers).
 *
 * Usage:
 *   import { darkTheme, lightTheme, oceanTheme } from "./theme-config";
 *
 *   <C1Chat theme={oceanTheme} />
 */

export interface C1Theme {
  mode: "light" | "dark";
  colors: {
    primary: string;
    secondary: string;
    background: string;
    foreground: string;
    border: string;
    muted: string;
    accent: string;
    destructive?: string;
    success?: string;
    warning?: string;
  };
  fonts: {
    body: string;
    heading: string;
    mono?: string;
  };
  borderRadius: string;
  spacing: {
    base: string;
  };
}

// ============================================================================
// Light Themes
// ============================================================================

export const lightTheme: C1Theme = {
  mode: "light",
  colors: {
    primary: "#3b82f6", // Blue
    secondary: "#8b5cf6", // Purple
    background: "#ffffff",
    foreground: "#1f2937",
    border: "#e5e7eb",
    muted: "#f3f4f6",
    accent: "#10b981", // Green
    destructive: "#ef4444", // Red
    success: "#10b981", // Green
    warning: "#f59e0b", // Amber
  },
  fonts: {
    body: "'Inter', -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif",
    heading: "'Inter', sans-serif",
    mono: "'Fira Code', 'Courier New', monospace",
  },
  borderRadius: "8px",
  spacing: {
    base: "16px",
  },
};

export const oceanTheme: C1Theme = {
  mode: "light",
  colors: {
    primary: "#0ea5e9", // Sky blue
    secondary: "#06b6d4", // Cyan
    background: "#f0f9ff",
    foreground: "#0c4a6e",
    border: "#bae6fd",
    muted: "#e0f2fe",
    accent: "#0891b2",
    destructive: "#dc2626",
    success: "#059669",
    warning: "#d97706",
  },
  fonts: {
    body: "'Nunito', sans-serif",
    heading: "'Nunito', sans-serif",
    mono: "'JetBrains Mono', monospace",
  },
  borderRadius: "12px",
  spacing: {
    base: "16px",
  },
};

export const sunsetTheme: C1Theme = {
  mode: "light",
  colors: {
    primary: "#f59e0b", // Amber
    secondary: "#f97316", // Orange
    background: "#fffbeb",
    foreground: "#78350f",
    border: "#fed7aa",
    muted: "#fef3c7",
    accent: "#ea580c",
    destructive: "#dc2626",
    success: "#16a34a",
    warning: "#f59e0b",
  },
  fonts: {
    body: "'Poppins', sans-serif",
    heading: "'Poppins', sans-serif",
    mono: "'Source Code Pro', monospace",
  },
  borderRadius: "6px",
  spacing: {
    base: "16px",
  },
};

// ============================================================================
// Dark Themes
// ============================================================================

export const darkTheme: C1Theme = {
  mode: "dark",
  colors: {
    primary: "#60a5fa", // Light blue
    secondary: "#a78bfa", // Light purple
    background: "#111827",
    foreground: "#f9fafb",
    border: "#374151",
    muted: "#1f2937",
    accent: "#34d399",
    destructive: "#f87171",
    success: "#34d399",
    warning: "#fbbf24",
  },
  fonts: {
    body: "'Inter', -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif",
    heading: "'Inter', sans-serif",
    mono: "'Fira Code', 'Courier New', monospace",
  },
  borderRadius: "8px",
  spacing: {
    base: "16px",
  },
};

export const midnightTheme: C1Theme = {
  mode: "dark",
  colors: {
    primary: "#818cf8", // Indigo
    secondary: "#c084fc", // Purple
    background: "#0f172a",
    foreground: "#e2e8f0",
    border: "#334155",
    muted: "#1e293b",
    accent: "#8b5cf6",
    destructive: "#f87171",
    success: "#4ade80",
    warning: "#facc15",
  },
  fonts: {
    body: "'Roboto', sans-serif",
    heading: "'Roboto', sans-serif",
    mono: "'IBM Plex Mono', monospace",
  },
  borderRadius: "10px",
  spacing: {
    base: "16px",
  },
};

export const forestTheme: C1Theme = {
  mode: "dark",
  colors: {
    primary: "#4ade80", // Green
    secondary: "#22d3ee", // Cyan
    background: "#064e3b",
    foreground: "#d1fae5",
    border: "#065f46",
    muted: "#047857",
    accent: "#10b981",
    destructive: "#fca5a5",
    success: "#6ee7b7",
    warning: "#fde047",
  },
  fonts: {
    body: "'Lato', sans-serif",
    heading: "'Lato', sans-serif",
    mono: "'Consolas', monospace",
  },
  borderRadius: "8px",
  spacing: {
    base: "18px",
  },
};

// ============================================================================
// High Contrast Themes (Accessibility)
// ============================================================================

export const highContrastLight: C1Theme = {
  mode: "light",
  colors: {
    primary: "#0000ff", // Pure blue
    secondary: "#ff00ff", // Pure magenta
    background: "#ffffff",
    foreground: "#000000",
    border: "#000000",
    muted: "#f5f5f5",
    accent: "#008000", // Pure green
    destructive: "#ff0000",
    success: "#008000",
    warning: "#ff8800",
  },
  fonts: {
    body: "'Arial', sans-serif",
    heading: "'Arial', sans-serif",
    mono: "'Courier New', monospace",
  },
  borderRadius: "2px",
  spacing: {
    base: "20px",
  },
};

export const highContrastDark: C1Theme = {
  mode: "dark",
  colors: {
    primary: "#00ccff", // Bright cyan
    secondary: "#ff00ff", // Bright magenta
    background: "#000000",
    foreground: "#ffffff",
    border: "#ffffff",
    muted: "#1a1a1a",
    accent: "#00ff00", // Bright green
    destructive: "#ff0000",
    success: "#00ff00",
    warning: "#ffaa00",
  },
  fonts: {
    body: "'Arial', sans-serif",
    heading: "'Arial', sans-serif",
    mono: "'Courier New', monospace",
  },
  borderRadius: "2px",
  spacing: {
    base: "20px",
  },
};

// ============================================================================
// Theme Utilities
// ============================================================================

/**
 * Get system theme preference
 */
export function getSystemTheme(): "light" | "dark" {
  if (typeof window === "undefined") return "light";
  return window.matchMedia("(prefers-color-scheme: dark)").matches
    ? "dark"
    : "light";
}

/**
 * Listen to system theme changes
 */
export function onSystemThemeChange(callback: (theme: "light" | "dark") => void) {
  if (typeof window === "undefined") return () => {};

  const mediaQuery = window.matchMedia("(prefers-color-scheme: dark)");

  const handler = (e: MediaQueryListEvent) => {
    callback(e.matches ? "dark" : "light");
  };

  mediaQuery.addEventListener("change", handler);

  return () => mediaQuery.removeEventListener("change", handler);
}

/**
 * Get theme based on user preference
 */
export function getTheme(
  preference: "light" | "dark" | "system",
  lightThemeConfig: C1Theme = lightTheme,
  darkThemeConfig: C1Theme = darkTheme
): C1Theme {
  if (preference === "system") {
    const systemPref = getSystemTheme();
    return systemPref === "dark" ? darkThemeConfig : lightThemeConfig;
  }

  return preference === "dark" ? darkThemeConfig : lightThemeConfig;
}

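// Hypothetical pure variant of getTheme's resolution rule, with the system
// preference injected instead of read from window, so the logic is easy to
// test outside a browser.
function resolvePreference(
  preference: "light" | "dark" | "system",
  systemPref: "light" | "dark"
): "light" | "dark" {
  if (preference === "system") return systemPref;
  return preference;
}

const resolvedMode = resolvePreference("system", "dark");
// resolvedMode is "dark"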
/**
 * All available themes by name
 */
export const themes = {
  light: lightTheme,
  dark: darkTheme,
  ocean: oceanTheme,
  sunset: sunsetTheme,
  midnight: midnightTheme,
  forest: forestTheme,
  "high-contrast-light": highContrastLight,
  "high-contrast-dark": highContrastDark,
} as const;

export type ThemeName = keyof typeof themes;

/**
 * Get theme by name
 */
export function getThemeByName(name: ThemeName): C1Theme {
  return themes[name];
}
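// Illustrative sketch (hypothetical, self-contained): how a `const` record plus
// `keyof typeof` gives the type-safe lookup that `themes`/`getThemeByName` use,
// shown with a stub record instead of the real theme objects.
const demoThemes = {
  light: { mode: "light" },
  dark: { mode: "dark" },
} as const;

type DemoThemeName = keyof typeof demoThemes; // "light" | "dark"

function getDemoTheme(name: DemoThemeName) {
  return demoThemes[name];
}

const picked = getDemoTheme("dark");
// picked.mode is "dark"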
327
templates/shared/tool-schemas.ts
Normal file
@@ -0,0 +1,327 @@
/**
 * Common Zod Schemas for Tool Calling
 *
 * Reusable schemas for common tools across any framework.
 * These schemas provide runtime validation and type safety.
 *
 * Usage:
 *   import { webSearchTool, createOrderTool } from "./tool-schemas";
 *   import zodToJsonSchema from "zod-to-json-schema";
 *
 *   const tools = [webSearchTool, createOrderTool];
 *
 *   await client.beta.chat.completions.runTools({
 *     model: "c1/openai/gpt-5/v-20250930",
 *     messages: [...],
 *     tools,
 *   });
 */

import { z } from "zod";
import zodToJsonSchema from "zod-to-json-schema";

// ============================================================================
// Web Search Tool
// ============================================================================

export const webSearchSchema = z.object({
  query: z.string().min(1).describe("The search query"),
  max_results: z
    .number()
    .int()
    .min(1)
    .max(10)
    .default(5)
    .describe("Maximum number of results to return (1-10)"),
  include_answer: z
    .boolean()
    .default(true)
    .describe("Include AI-generated answer summary"),
});

export type WebSearchArgs = z.infer<typeof webSearchSchema>;

export const webSearchTool = {
  type: "function" as const,
  function: {
    name: "web_search",
    description:
      "Search the web for current information using a search API. Use this for recent events, news, or information that may have changed recently.",
    parameters: zodToJsonSchema(webSearchSchema),
  },
};

// ============================================================================
// Product/Inventory Tools
// ============================================================================

export const productLookupSchema = z.object({
  product_type: z
    .enum(["gloves", "hat", "scarf", "all"])
    .optional()
    .describe("Type of product to lookup, or 'all' for entire inventory"),
  filter: z
    .object({
      min_price: z.number().optional(),
      max_price: z.number().optional(),
      in_stock_only: z.boolean().default(true),
    })
    .optional()
    .describe("Optional filters for product search"),
});

export type ProductLookupArgs = z.infer<typeof productLookupSchema>;

export const productLookupTool = {
  type: "function" as const,
  function: {
    name: "lookup_product",
    description:
      "Look up products in the inventory database. Returns product details including price, availability, and specifications.",
    parameters: zodToJsonSchema(productLookupSchema),
  },
};

// ============================================================================
// Order Creation Tool
// ============================================================================

const orderItemSchema = z.discriminatedUnion("type", [
  z.object({
    type: z.literal("gloves"),
    size: z.enum(["XS", "S", "M", "L", "XL", "XXL"]),
    color: z.string().min(1),
    quantity: z.number().int().min(1).max(100),
  }),
  z.object({
    type: z.literal("hat"),
    style: z.enum(["beanie", "baseball", "fedora", "bucket"]),
    color: z.string().min(1),
    quantity: z.number().int().min(1).max(100),
  }),
  z.object({
    type: z.literal("scarf"),
    length: z.enum(["short", "medium", "long"]),
    material: z.enum(["wool", "cotton", "silk", "cashmere"]),
    quantity: z.number().int().min(1).max(100),
  }),
]);

export const createOrderSchema = z.object({
  customer_email: z
    .string()
    .email()
    .describe("Customer's email address for order confirmation"),
  items: z
    .array(orderItemSchema)
    .min(1)
    .max(20)
    .describe("Array of items to include in the order (max 20)"),
  shipping_address: z.object({
    street: z.string().min(1),
    city: z.string().min(1),
    state: z.string().length(2), // US state code
    zip: z.string().regex(/^\d{5}(-\d{4})?$/), // ZIP or ZIP+4
    country: z.string().default("US"),
  }),
  notes: z.string().optional().describe("Optional order notes or instructions"),
});

export type CreateOrderArgs = z.infer<typeof createOrderSchema>;
export type OrderItem = z.infer<typeof orderItemSchema>;

export const createOrderTool = {
  type: "function" as const,
  function: {
    name: "create_order",
    description:
      "Create a new product order with customer information, items, and shipping address. Returns order ID and confirmation details.",
    parameters: zodToJsonSchema(createOrderSchema),
  },
};

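// Illustration: the shipping_address `zip` pattern above accepts 5-digit ZIP
// and ZIP+4 forms and rejects everything else. A stand-alone check of the
// same regex:
const zipPattern = /^\d{5}(-\d{4})?$/;

const zipSamples: Array<[string, boolean]> = [
  ["94103", true], // 5-digit ZIP
  ["94103-1234", true], // ZIP+4
  ["9410", false], // too short
  ["94103-12", false], // ZIP+4 suffix must be 4 digits
];

const zipResults = zipSamples.map(([zip]) => zipPattern.test(zip));
// zipResults is [true, true, false, false]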
// ============================================================================
// Database Query Tool
// ============================================================================

export const databaseQuerySchema = z.object({
  query_type: z
    .enum(["select", "aggregate", "search"])
    .describe("Type of database query to perform"),
  table: z
    .string()
    .describe("Database table name (e.g., 'users', 'products', 'orders')"),
  filters: z
    .record(z.any())
    .optional()
    .describe("Filter conditions as key-value pairs"),
  limit: z.number().int().min(1).max(100).default(20).describe("Result limit"),
});

export type DatabaseQueryArgs = z.infer<typeof databaseQuerySchema>;

export const databaseQueryTool = {
  type: "function" as const,
  function: {
    name: "query_database",
    description:
      "Query the database for information. Supports select, aggregate, and search operations on various tables.",
    parameters: zodToJsonSchema(databaseQuerySchema),
  },
};

// ============================================================================
// Data Visualization Tool
// ============================================================================

export const createVisualizationSchema = z.object({
  chart_type: z
    .enum(["bar", "line", "pie", "scatter", "area"])
    .describe("Type of chart to create"),
  data: z
    .array(
      z.object({
        label: z.string(),
        value: z.number(),
      })
    )
    .min(1)
    .describe("Data points for the visualization"),
  title: z.string().min(1).describe("Chart title"),
  x_label: z.string().optional().describe("X-axis label"),
  y_label: z.string().optional().describe("Y-axis label"),
});

export type CreateVisualizationArgs = z.infer<typeof createVisualizationSchema>;

export const createVisualizationTool = {
  type: "function" as const,
  function: {
    name: "create_visualization",
    description:
      "Create a data visualization chart. Returns chart configuration that will be rendered in the UI.",
    parameters: zodToJsonSchema(createVisualizationSchema),
  },
};

// ============================================================================
// Email Tool
// ============================================================================

export const sendEmailSchema = z.object({
  to: z.string().email().describe("Recipient email address"),
  subject: z.string().min(1).max(200).describe("Email subject line"),
  body: z.string().min(1).describe("Email body content (supports HTML)"),
  cc: z.array(z.string().email()).optional().describe("CC recipients"),
  bcc: z.array(z.string().email()).optional().describe("BCC recipients"),
});

export type SendEmailArgs = z.infer<typeof sendEmailSchema>;

export const sendEmailTool = {
  type: "function" as const,
  function: {
    name: "send_email",
    description:
      "Send an email to one or more recipients. Use this to send notifications, confirmations, or responses to customers.",
    parameters: zodToJsonSchema(sendEmailSchema),
  },
};

// ============================================================================
// Calendar/Scheduling Tool
// ============================================================================

export const scheduleEventSchema = z.object({
  title: z.string().min(1).describe("Event title"),
  start_time: z.string().datetime().describe("Event start time (ISO 8601)"),
  end_time: z.string().datetime().describe("Event end time (ISO 8601)"),
  description: z.string().optional().describe("Event description"),
  attendees: z
    .array(z.string().email())
    .optional()
    .describe("List of attendee email addresses"),
  location: z.string().optional().describe("Event location or meeting link"),
  reminder_minutes: z
    .number()
    .int()
    .min(0)
    .default(15)
    .describe("Minutes before event to send reminder"),
});

export type ScheduleEventArgs = z.infer<typeof scheduleEventSchema>;

export const scheduleEventTool = {
  type: "function" as const,
  function: {
    name: "schedule_event",
    description:
      "Schedule a calendar event with attendees, location, and reminders.",
    parameters: zodToJsonSchema(scheduleEventSchema),
  },
};

// ============================================================================
// File Upload Tool
// ============================================================================

export const uploadFileSchema = z.object({
  file_name: z.string().min(1).describe("Name of the file"),
  file_type: z
    .string()
    .describe("MIME type (e.g., 'image/png', 'application/pdf')"),
  file_size: z.number().int().min(1).describe("File size in bytes"),
  description: z.string().optional().describe("File description or metadata"),
});

export type UploadFileArgs = z.infer<typeof uploadFileSchema>;

export const uploadFileTool = {
  type: "function" as const,
  function: {
    name: "upload_file",
    description:
      "Upload a file to cloud storage. Returns storage URL and file metadata.",
    parameters: zodToJsonSchema(uploadFileSchema),
  },
};

// ============================================================================
// Export All Tools
// ============================================================================

export const allTools = [
  webSearchTool,
  productLookupTool,
  createOrderTool,
  databaseQueryTool,
  createVisualizationTool,
  sendEmailTool,
  scheduleEventTool,
  uploadFileTool,
];

/**
 * Helper to get tools by category
 */
export function getToolsByCategory(category: "ecommerce" | "data" | "communication" | "all") {
  const categories = {
    ecommerce: [productLookupTool, createOrderTool],
    data: [databaseQueryTool, createVisualizationTool],
    communication: [sendEmailTool, scheduleEventTool],
    all: allTools,
  };

  return categories[category];
}

/**
 * Validation helper
 */
export function validateToolArgs<T extends z.ZodType>(
  schema: T,
  args: unknown
): z.infer<T> {
  return schema.parse(args);
}
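// Hypothetical stand-alone sketch of getToolsByCategory's lookup pattern,
// using the tool names as string stubs instead of the real zod-backed tool
// objects so the example needs no dependencies.
const demoCategories = {
  ecommerce: ["lookup_product", "create_order"],
  data: ["query_database", "create_visualization"],
  communication: ["send_email", "schedule_event"],
} as const;

function toolNamesFor(category: keyof typeof demoCategories): readonly string[] {
  return demoCategories[category];
}

const dataToolNames = toolNamesFor("data");
// dataToolNames is ["query_database", "create_visualization"]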
118
templates/vite-react/basic-chat.tsx
Normal file
@@ -0,0 +1,118 @@
/**
 * Basic C1Chat Integration for Vite + React
 *
 * Minimal setup showing how to integrate TheSys Generative UI
 * into a Vite + React application with a custom backend.
 *
 * Features:
 * - Simple form input
 * - C1Component for custom UI control
 * - Manual state management
 * - Basic error handling
 *
 * Prerequisites:
 * - Backend API endpoint at /api/chat
 * - Environment variable: VITE_API_URL (optional, defaults to relative path)
 */

import "@crayonai/react-ui/styles/index.css";
import { ThemeProvider, C1Component } from "@thesysai/genui-sdk";
import { useState } from "react";
import "./App.css";

export default function App() {
  const [isLoading, setIsLoading] = useState(false);
  const [c1Response, setC1Response] = useState("");
  const [question, setQuestion] = useState("");
  const [error, setError] = useState<string | null>(null);

  const apiUrl = import.meta.env.VITE_API_URL || "/api/chat";

  const makeApiCall = async (query: string, previousResponse?: string) => {
    if (!query.trim()) return;

    setIsLoading(true);
    setError(null);

    try {
      const response = await fetch(apiUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          prompt: query,
          previousC1Response: previousResponse || c1Response,
        }),
      });

      if (!response.ok) {
        throw new Error(`API Error: ${response.status} ${response.statusText}`);
      }

      const data = await response.json();
      setC1Response(data.response || data.c1Response);
      setQuestion(""); // Clear input after successful request
    } catch (err) {
      console.error("Error calling API:", err);
      setError(err instanceof Error ? err.message : "Failed to get response");
    } finally {
      setIsLoading(false);
    }
  };

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    makeApiCall(question);
  };

  return (
    <div className="app-container">
      <header>
        <h1>TheSys AI Assistant</h1>
        <p>Ask me anything and I'll generate an interactive response</p>
      </header>

      <form onSubmit={handleSubmit} className="input-form">
        <input
          type="text"
          value={question}
          onChange={(e) => setQuestion(e.target.value)}
          placeholder="Ask me anything..."
          className="question-input"
          disabled={isLoading}
          autoFocus
        />
        <button
          type="submit"
          className="submit-button"
          disabled={isLoading || !question.trim()}
        >
          {isLoading ? "Processing..." : "Send"}
        </button>
      </form>

      {error && (
        <div className="error-message">
          <strong>Error:</strong> {error}
        </div>
      )}

      {c1Response && (
        <div className="response-container">
          <ThemeProvider>
            <C1Component
              c1Response={c1Response}
              isStreaming={isLoading}
              updateMessage={(message) => setC1Response(message)}
              onAction={({ llmFriendlyMessage }) => {
                // Handle interactive actions from generated UI
                if (!isLoading) {
                  makeApiCall(llmFriendlyMessage, c1Response);
                }
              }}
            />
          </ThemeProvider>
        </div>
      )}
    </div>
  );
}
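// Hypothetical helper (not in the template) that mirrors the request body
// makeApiCall posts to /api/chat, extracted so the payload shape the backend
// must accept is visible in one place.
function buildChatBody(prompt: string, previousC1Response: string): string {
  return JSON.stringify({ prompt, previousC1Response });
}

const exampleBody = JSON.parse(buildChatBody("hello", ""));
// exampleBody.prompt is "hello"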
208
templates/vite-react/custom-component.tsx
Normal file
@@ -0,0 +1,208 @@
/**
 * Custom C1Component Integration with Advanced State Management
 *
 * Shows how to use C1Component with full control over:
 * - Message history
 * - Conversation state
 * - Custom UI layout
 * - Error boundaries
 *
 * Use this when you need more control than C1Chat provides.
 */

import "@crayonai/react-ui/styles/index.css";
import { ThemeProvider, C1Component } from "@thesysai/genui-sdk";
import { useState, useRef, useEffect } from "react";
import { ErrorBoundary } from "react-error-boundary";
import "./App.css";

interface Message {
  id: string;
  role: "user" | "assistant";
  content: string;
  timestamp: Date;
}

function ErrorFallback({ error, resetErrorBoundary }: {
  error: Error;
  resetErrorBoundary: () => void;
}) {
  return (
    <div className="error-boundary">
      <h2>Something went wrong</h2>
      <pre className="error-details">{error.message}</pre>
      <button onClick={resetErrorBoundary} className="retry-button">
        Try again
      </button>
    </div>
  );
}

export default function App() {
|
||||
const [messages, setMessages] = useState<Message[]>([]);
|
||||
const [currentResponse, setCurrentResponse] = useState("");
|
||||
const [isStreaming, setIsStreaming] = useState(false);
|
||||
const [inputValue, setInputValue] = useState("");
|
||||
const messagesEndRef = useRef<HTMLDivElement>(null);
|
||||
|
||||
// Auto-scroll to bottom when new messages arrive
|
||||
useEffect(() => {
|
||||
messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
|
||||
}, [messages, currentResponse]);
|
||||
|
||||
const sendMessage = async (userMessage: string) => {
|
||||
if (!userMessage.trim() || isStreaming) return;
|
||||
|
||||
// Add user message
|
||||
const userMsg: Message = {
|
||||
id: crypto.randomUUID(),
|
||||
role: "user",
|
||||
content: userMessage,
|
||||
timestamp: new Date(),
|
||||
};
|
||||
|
||||
setMessages((prev) => [...prev, userMsg]);
|
||||
setInputValue("");
|
||||
setIsStreaming(true);
|
||||
setCurrentResponse("");
|
||||
|
||||
try {
|
||||
const response = await fetch("/api/chat", {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({
|
||||
messages: [...messages, userMsg].map((m) => ({
|
||||
role: m.role,
|
||||
content: m.content,
|
||||
})),
|
||||
}),
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
|
||||
}
|
||||
|
||||
const data = await response.json();
|
||||
|
||||
// Add assistant response
|
||||
const assistantMsg: Message = {
|
||||
id: crypto.randomUUID(),
|
||||
role: "assistant",
|
||||
content: data.response,
|
||||
timestamp: new Date(),
|
||||
};
|
||||
|
||||
setCurrentResponse(data.response);
|
||||
setMessages((prev) => [...prev, assistantMsg]);
|
||||
} catch (error) {
|
||||
console.error("Error sending message:", error);
|
||||
|
||||
// Add error message
|
||||
const errorMsg: Message = {
|
||||
id: crypto.randomUUID(),
|
||||
role: "assistant",
|
||||
content: `Error: ${error instanceof Error ? error.message : "Failed to get response"}`,
|
||||
timestamp: new Date(),
|
||||
};
|
||||
|
||||
setMessages((prev) => [...prev, errorMsg]);
|
||||
} finally {
|
||||
setIsStreaming(false);
|
||||
}
|
||||
};
|
||||
|
||||
const handleSubmit = (e: React.FormEvent) => {
|
||||
e.preventDefault();
|
||||
sendMessage(inputValue);
|
||||
};
|
||||
|
||||
const clearConversation = () => {
|
||||
setMessages([]);
|
||||
setCurrentResponse("");
|
||||
};
|
||||
|
||||
return (
|
||||
<ErrorBoundary FallbackComponent={ErrorFallback}>
|
||||
<div className="chat-container">
|
||||
<div className="chat-header">
|
||||
<h1>AI Assistant</h1>
|
||||
<button onClick={clearConversation} className="clear-button">
|
||||
Clear
|
||||
</button>
|
||||
</div>
|
||||
|
||||
<div className="messages-container">
|
||||
{messages.map((message, index) => (
|
||||
<div
|
||||
key={message.id}
|
||||
className={`message message-${message.role}`}
|
||||
>
|
||||
<div className="message-header">
|
||||
<span className="message-role">
|
||||
{message.role === "user" ? "You" : "AI"}
|
||||
</span>
|
||||
<span className="message-time">
|
||||
{message.timestamp.toLocaleTimeString()}
|
||||
</span>
|
||||
</div>
|
||||
|
||||
{message.role === "assistant" ? (
|
||||
<ThemeProvider>
|
||||
<C1Component
|
||||
c1Response={message.content}
|
||||
isStreaming={
|
||||
index === messages.length - 1 && isStreaming
|
||||
}
|
||||
updateMessage={(updatedContent) => {
|
||||
setCurrentResponse(updatedContent);
|
||||
setMessages((prev) =>
|
||||
prev.map((m) =>
|
||||
m.id === message.id
|
||||
? { ...m, content: updatedContent }
|
||||
: m
|
||||
)
|
||||
);
|
||||
}}
|
||||
onAction={({ llmFriendlyMessage }) => {
|
||||
sendMessage(llmFriendlyMessage);
|
||||
}}
|
||||
/>
|
||||
</ThemeProvider>
|
||||
) : (
|
||||
<div className="message-content">{message.content}</div>
|
||||
)}
|
||||
</div>
|
||||
))}
|
||||
|
||||
{isStreaming && !currentResponse && (
|
||||
<div className="loading-indicator">
|
||||
<div className="spinner" />
|
||||
<span>AI is thinking...</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
<div ref={messagesEndRef} />
|
||||
</div>
|
||||
|
||||
<form onSubmit={handleSubmit} className="input-container">
|
||||
<input
|
||||
type="text"
|
||||
value={inputValue}
|
||||
onChange={(e) => setInputValue(e.target.value)}
|
||||
placeholder="Type your message..."
|
||||
disabled={isStreaming}
|
||||
className="message-input"
|
||||
autoFocus
|
||||
/>
|
||||
<button
|
||||
type="submit"
|
||||
disabled={!inputValue.trim() || isStreaming}
|
||||
className="send-button"
|
||||
>
|
||||
{isStreaming ? "..." : "Send"}
|
||||
</button>
|
||||
</form>
|
||||
</div>
|
||||
</ErrorBoundary>
|
||||
);
|
||||
}
|
||||
40
templates/vite-react/package.json
Normal file
@@ -0,0 +1,40 @@
{
  "name": "thesys-vite-react-example",
  "private": true,
  "version": "1.0.0",
  "type": "module",
  "description": "Vite + React integration with TheSys Generative UI",
  "scripts": {
    "dev": "vite",
    "build": "tsc && vite build",
    "preview": "vite preview",
    "lint": "eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0"
  },
  "dependencies": {
    "@thesysai/genui-sdk": "^0.6.40",
    "@crayonai/react-ui": "^0.8.42",
    "@crayonai/react-core": "^0.7.6",
    "@crayonai/stream": "^0.1.0",
    "react": "^19.0.0",
    "react-dom": "^19.0.0",
    "react-error-boundary": "^5.0.0",
    "openai": "^4.73.0",
    "zod": "^3.24.1",
    "zod-to-json-schema": "^3.24.1"
  },
  "devDependencies": {
    "@types/react": "^19.0.0",
    "@types/react-dom": "^19.0.0",
    "@typescript-eslint/eslint-plugin": "^8.0.0",
    "@typescript-eslint/parser": "^8.0.0",
    "@vitejs/plugin-react": "^4.3.4",
    "eslint": "^9.0.0",
    "eslint-plugin-react-hooks": "^5.0.0",
    "eslint-plugin-react-refresh": "^0.4.16",
    "typescript": "^5.7.3",
    "vite": "^6.0.5"
  },
  "optionalDependencies": {
    "@tavily/core": "^1.0.0"
  }
}
220
templates/vite-react/theme-dark-mode.tsx
Normal file
@@ -0,0 +1,220 @@
/**
 * TheSys C1 with Custom Theming and Dark Mode
 *
 * Demonstrates:
 * - Custom theme configuration
 * - Dark mode toggle
 * - System theme detection
 * - Theme presets
 * - CSS variable overrides
 */

import "@crayonai/react-ui/styles/index.css";
import { C1Chat, ThemeProvider } from "@thesysai/genui-sdk";
import { themePresets } from "@crayonai/react-ui";
import { useState, useEffect } from "react";
import "./App.css";

type ThemeMode = "light" | "dark" | "system";

// Custom theme object
const customLightTheme = {
  mode: "light" as const,
  colors: {
    primary: "#3b82f6",
    secondary: "#8b5cf6",
    background: "#ffffff",
    foreground: "#1f2937",
    border: "#e5e7eb",
    muted: "#f3f4f6",
    accent: "#10b981",
  },
  fonts: {
    body: "'Inter', -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif",
    heading: "'Poppins', sans-serif",
    mono: "'Fira Code', 'Courier New', monospace",
  },
  borderRadius: "8px",
  spacing: {
    base: "16px",
  },
};

const customDarkTheme = {
  ...customLightTheme,
  mode: "dark" as const,
  colors: {
    primary: "#60a5fa",
    secondary: "#a78bfa",
    background: "#111827",
    foreground: "#f9fafb",
    border: "#374151",
    muted: "#1f2937",
    accent: "#34d399",
  },
};

function useSystemTheme(): "light" | "dark" {
  const [systemTheme, setSystemTheme] = useState<"light" | "dark">(
    () =>
      window.matchMedia("(prefers-color-scheme: dark)").matches
        ? "dark"
        : "light"
  );

  useEffect(() => {
    const mediaQuery = window.matchMedia("(prefers-color-scheme: dark)");

    const handler = (e: MediaQueryListEvent) => {
      setSystemTheme(e.matches ? "dark" : "light");
    };

    mediaQuery.addEventListener("change", handler);
    return () => mediaQuery.removeEventListener("change", handler);
  }, []);

  return systemTheme;
}

export default function ThemedChat() {
  const [themeMode, setThemeMode] = useState<ThemeMode>(
    () => (localStorage.getItem("theme-mode") as ThemeMode) || "system"
  );
  const [usePreset, setUsePreset] = useState(false);
  const systemTheme = useSystemTheme();

  // Determine actual theme to use
  const actualTheme =
    themeMode === "system" ? systemTheme : themeMode;

  // Choose theme object
  const theme = usePreset
    ? themePresets.candy // Use built-in preset
    : actualTheme === "dark"
      ? customDarkTheme
      : customLightTheme;

  // Persist theme preference
  useEffect(() => {
    localStorage.setItem("theme-mode", themeMode);

    // Apply to document for app-wide styling
    document.documentElement.setAttribute("data-theme", actualTheme);
  }, [themeMode, actualTheme]);

  return (
    <div className="themed-app">
      <div className="theme-controls">
        <div className="theme-selector">
          <h3>Theme Mode</h3>
          <div className="button-group">
            <button
              className={themeMode === "light" ? "active" : ""}
              onClick={() => setThemeMode("light")}
            >
              ☀️ Light
            </button>
            <button
              className={themeMode === "dark" ? "active" : ""}
              onClick={() => setThemeMode("dark")}
            >
              🌙 Dark
            </button>
            <button
              className={themeMode === "system" ? "active" : ""}
              onClick={() => setThemeMode("system")}
            >
              💻 System
            </button>
          </div>
        </div>

        <div className="theme-type">
          <h3>Theme Type</h3>
          <div className="button-group">
            <button
              className={!usePreset ? "active" : ""}
              onClick={() => setUsePreset(false)}
            >
              Custom
            </button>
            <button
              className={usePreset ? "active" : ""}
              onClick={() => setUsePreset(true)}
            >
              Preset (Candy)
            </button>
          </div>
        </div>
      </div>

      <div className="chat-wrapper">
        <ThemeProvider theme={{ ...theme, mode: actualTheme }}>
          <C1Chat
            apiUrl="/api/chat"
            agentName="Themed AI Assistant"
            logoUrl="https://placehold.co/100x100/3b82f6/ffffff?text=AI"
          />
        </ThemeProvider>
      </div>

      <div className="theme-info">
        <h3>Current Theme</h3>
        <pre className="theme-preview">
          {JSON.stringify(
            {
              mode: actualTheme,
              usingPreset: usePreset,
              preferredMode: themeMode,
              systemPreference: systemTheme,
            },
            null,
            2
          )}
        </pre>
      </div>
    </div>
  );
}

/**
 * CSS Example (App.css):
 *
 * [data-theme="light"] {
 *   --app-bg: #ffffff;
 *   --app-text: #1f2937;
 * }
 *
 * [data-theme="dark"] {
 *   --app-bg: #111827;
 *   --app-text: #f9fafb;
 * }
 *
 * .themed-app {
 *   background: var(--app-bg);
 *   color: var(--app-text);
 *   min-height: 100vh;
 *   transition: background-color 0.3s ease, color 0.3s ease;
 * }
 *
 * .theme-controls {
 *   padding: 2rem;
 *   display: flex;
 *   gap: 2rem;
 *   border-bottom: 1px solid var(--app-text);
 * }
 *
 * .button-group button {
 *   padding: 0.5rem 1rem;
 *   border: 1px solid var(--app-text);
 *   background: transparent;
 *   color: var(--app-text);
 *   cursor: pointer;
 *   transition: all 0.2s;
 * }
 *
 * .button-group button.active {
 *   background: var(--app-text);
 *   color: var(--app-bg);
 * }
 */
276
templates/vite-react/tool-calling.tsx
Normal file
@@ -0,0 +1,276 @@
/**
 * Tool Calling Integration Example
 *
 * Demonstrates how to integrate tool calling (function calling) with TheSys C1.
 * Shows:
 * - Web search tool with Tavily API
 * - Product inventory lookup
 * - Order creation with Zod validation
 * - Interactive UI for tool results
 *
 * Backend Requirements:
 * - OpenAI SDK with runTools support
 * - Zod for schema validation
 * - Tool execution handlers
 */

import "@crayonai/react-ui/styles/index.css";
import { ThemeProvider, C1Component } from "@thesysai/genui-sdk";
import { useState } from "react";
import "./App.css";

// Example tool schemas (these match backend Zod schemas)
interface WebSearchTool {
  name: "web_search";
  args: {
    query: string;
    max_results: number;
  };
}

interface ProductLookupTool {
  name: "lookup_product";
  args: {
    product_type?: "gloves" | "hat" | "scarf";
  };
}

interface CreateOrderTool {
  name: "create_order";
  args: {
    customer_email: string;
    items: Array<{
      type: "gloves" | "hat" | "scarf";
      quantity: number;
      [key: string]: any;
    }>;
  };
}

type ToolCall = WebSearchTool | ProductLookupTool | CreateOrderTool;

export default function ToolCallingExample() {
  const [isLoading, setIsLoading] = useState(false);
  const [c1Response, setC1Response] = useState("");
  const [question, setQuestion] = useState("");
  const [activeTools, setActiveTools] = useState<string[]>([]);

  const makeApiCall = async (query: string, previousResponse?: string) => {
    if (!query.trim()) return;

    setIsLoading(true);
    setActiveTools([]);

    try {
      const response = await fetch("/api/chat-with-tools", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          prompt: query,
          previousC1Response: previousResponse,
        }),
      });

      if (!response.ok) {
        throw new Error(`API Error: ${response.status}`);
      }

      // Handle streaming response
      const reader = response.body?.getReader();
      if (!reader) throw new Error("No response body");

      const decoder = new TextDecoder();
      let accumulatedResponse = "";

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        const chunk = decoder.decode(value);
        const lines = chunk.split("\n");

        for (const line of lines) {
          if (line.startsWith("data: ")) {
            try {
              const data = JSON.parse(line.slice(6));

              if (data.type === "tool_call") {
                // Track which tools are being called
                setActiveTools((prev) => [...prev, data.tool_name]);
              } else if (data.type === "content") {
                accumulatedResponse += data.content;
                setC1Response(accumulatedResponse);
              }
            } catch (e) {
              // Skip invalid JSON
            }
          }
        }
      }

      setQuestion("");
    } catch (err) {
      console.error("Error:", err);
      setC1Response(
        `Error: ${err instanceof Error ? err.message : "Failed to get response"}`
      );
    } finally {
      setIsLoading(false);
    }
  };

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    makeApiCall(question);
  };

  // Example prompts to demonstrate tools
  const examplePrompts = [
    "Search the web for the latest AI news",
    "Show me available products in the inventory",
    "Create an order for 2 blue gloves size M and 1 red hat",
  ];

  return (
    <div className="tool-calling-container">
      <header>
        <h1>AI Assistant with Tools</h1>
        <p>Ask me to search the web, check inventory, or create orders</p>
      </header>

      <div className="example-prompts">
        <h3>Try these examples:</h3>
        {examplePrompts.map((prompt, index) => (
          <button
            key={index}
            onClick={() => {
              setQuestion(prompt);
              makeApiCall(prompt);
            }}
            className="example-button"
            disabled={isLoading}
          >
            {prompt}
          </button>
        ))}
      </div>

      <form onSubmit={handleSubmit} className="input-form">
        <input
          type="text"
          value={question}
          onChange={(e) => setQuestion(e.target.value)}
          placeholder="Ask me to use a tool..."
          className="question-input"
          disabled={isLoading}
        />
        <button
          type="submit"
          className="submit-button"
          disabled={isLoading || !question.trim()}
        >
          {isLoading ? "Processing..." : "Send"}
        </button>
      </form>

      {activeTools.length > 0 && (
        <div className="active-tools">
          <h4>Active Tools:</h4>
          <div className="tool-badges">
            {activeTools.map((tool, index) => (
              <span key={index} className="tool-badge">
                {tool}
              </span>
            ))}
          </div>
        </div>
      )}

      {c1Response && (
        <div className="response-container">
          <ThemeProvider>
            <C1Component
              c1Response={c1Response}
              isStreaming={isLoading}
              updateMessage={(message) => setC1Response(message)}
              onAction={({ llmFriendlyMessage, rawAction }) => {
                console.log("Tool action:", rawAction);

                if (!isLoading) {
                  makeApiCall(llmFriendlyMessage, c1Response);
                }
              }}
            />
          </ThemeProvider>
        </div>
      )}

      <div className="tool-info">
        <h3>Available Tools</h3>
        <ul>
          <li>
            <strong>web_search</strong> - Search the web for current information
          </li>
          <li>
            <strong>lookup_product</strong> - Check product inventory
          </li>
          <li>
            <strong>create_order</strong> - Create a new product order
          </li>
        </ul>
      </div>
    </div>
  );
}

/**
 * Backend API Example (route.ts or server.ts):
 *
 * import { z } from "zod";
 * import zodToJsonSchema from "zod-to-json-schema";
 * import OpenAI from "openai";
 * import { TavilySearchAPIClient } from "@tavily/core";
 *
 * const webSearchSchema = z.object({
 *   query: z.string(),
 *   max_results: z.number().int().min(1).max(10).default(5),
 * });
 *
 * const webSearchTool = {
 *   type: "function" as const,
 *   function: {
 *     name: "web_search",
 *     description: "Search the web for current information",
 *     parameters: zodToJsonSchema(webSearchSchema),
 *   },
 * };
 *
 * const client = new OpenAI({
 *   baseURL: "https://api.thesys.dev/v1/embed",
 *   apiKey: process.env.THESYS_API_KEY,
 * });
 *
 * const tavily = new TavilySearchAPIClient({
 *   apiKey: process.env.TAVILY_API_KEY,
 * });
 *
 * export async function POST(req) {
 *   const { prompt } = await req.json();
 *
 *   const stream = await client.beta.chat.completions.runTools({
 *     model: "c1/openai/gpt-5/v-20250930",
 *     messages: [
 *       {
 *         role: "system",
 *         content: "You are a helpful assistant with access to tools.",
 *       },
 *       { role: "user", content: prompt },
 *     ],
 *     stream: true,
 *     tools: [webSearchTool, productLookupTool, createOrderTool],
 *     tool_choice: "auto",
 *   });
 *
 *   // Handle tool execution and streaming...
 * }
 */
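The frontend parser in this file expects server-sent lines of the form `data: {"type": "tool_call" | "content", ...}`. The backend comment above elides how those events are serialized; a minimal sketch of one possible serializer follows. The helper name `toSSE` and the `StreamEvent` shape are illustrative assumptions, not part of the TheSys or OpenAI SDKs.

```typescript
// Hypothetical SSE serializer matching the event shapes the frontend
// parser expects. Each frame is a "data: <json>" line plus a blank line.
type StreamEvent =
  | { type: "tool_call"; tool_name: string }
  | { type: "content"; content: string };

function toSSE(event: StreamEvent): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// Example frames a backend might write to the response stream:
const toolFrame = toSSE({ type: "tool_call", tool_name: "web_search" });
const contentFrame = toSSE({ type: "content", content: "Here are the results" });
```

On the server, each frame would be written to the response as the `runTools` stream emits tool calls and content deltas, so the frontend's `line.startsWith("data: ")` branch can route them.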