Initial commit
12 .claude-plugin/plugin.json Normal file
@@ -0,0 +1,12 @@
{
  "name": "ai-sdk-core",
  "description": "Build backend AI features with Vercel AI SDK v5: text generation, structured output (Zod schemas), tool calling, and agents. Multi-provider support (OpenAI, Anthropic, Google, Cloudflare). Use when: implementing server-side AI, generating text/structured data, building AI agents, streaming responses, or troubleshooting AI_APICallError, AI_NoObjectGeneratedError.",
  "version": "1.0.0",
  "author": {
    "name": "Jeremy Dawes",
    "email": "jeremy@jezweb.net"
  },
  "skills": [
    "./"
  ]
}
3 README.md Normal file
@@ -0,0 +1,3 @@
# ai-sdk-core

Build backend AI features with Vercel AI SDK v5: text generation, structured output (Zod schemas), tool calling, and agents. Multi-provider support (OpenAI, Anthropic, Google, Cloudflare). Use when: implementing server-side AI, generating text/structured data, building AI agents, streaming responses, or troubleshooting AI_APICallError, AI_NoObjectGeneratedError.
849 SKILL.md Normal file
@@ -0,0 +1,849 @@
---
name: ai-sdk-core
description: |
  Build backend AI with Vercel AI SDK v5/v6. Covers v6 beta (Agent abstraction, tool approval, reranking),
  v4→v5 migration (breaking changes), latest models (GPT-5/5.1, Claude 4.x, Gemini 2.5), Workers startup
  fix, and 12 error solutions (AI_APICallError, AI_NoObjectGeneratedError, streamText silent errors).

  Use when: implementing AI SDK v5/v6, migrating v4→v5, troubleshooting errors, fixing Workers startup
  issues, or updating to latest models.
license: MIT
metadata:
  version: 1.2.0
  last_verified: 2025-11-22
  ai_sdk_version: 5.0.98 stable / 6.0.0-beta.107
  breaking_changes: true (v4→v5 migration guide included)
  production_tested: true
keywords:
  - ai sdk core
  - vercel ai sdk
  - ai sdk v5
  - ai sdk v6 beta
  - ai sdk 6
  - agent abstraction
  - tool approval
  - reranking support
  - human in the loop
  - generateText
  - streamText
  - generateObject
  - streamObject
  - ai sdk node
  - ai sdk server
  - zod ai schema
  - ai schema validation
  - ai tool calling
  - ai agent class
  - openai sdk
  - anthropic sdk
  - google gemini sdk
  - cloudflare workers ai
  - workers-ai-provider
  - gpt-5
  - gpt-5.1
  - claude 4
  - claude sonnet 4.5
  - claude opus 4.1
  - gemini 2.5
  - ai streaming backend
  - multi-provider ai
  - ai provider abstraction
  - AI_APICallError
  - AI_NoObjectGeneratedError
  - ai sdk errors
  - structured ai output
  - backend llm integration
  - server-side ai generation
---

# AI SDK Core

Backend AI with Vercel AI SDK v5 and v6 Beta.

**Installation:**
```bash
npm install ai @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google zod
# Beta: npm install ai@beta @ai-sdk/openai@beta
```

---

## AI SDK 6 Beta (November 2025)

**Status:** Beta (stable release planned end of 2025)
**Latest:** ai@6.0.0-beta.107 (Nov 22, 2025)

### New Features

**1. Agent Abstraction**
Unified interface for building agents with the `ToolLoopAgent` class:
- Full control over execution flow, tool loops, and state management
- Replaces manual tool calling orchestration

**2. Tool Execution Approval (Human-in-the-Loop)**
Request user confirmation before executing tools:
- Static approval: Always ask for specific tools
- Dynamic approval: Conditional based on tool inputs
- Native human-in-the-loop pattern

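
The flow can be sketched without the SDK. The following self-contained TypeScript sketch shows the static + dynamic approval pattern; `ToolCall`, `requiresApproval`, and `runWithApproval` are illustrative names, not the v6 beta API (whose exact surface may still change during the beta):

```typescript
// Illustrative sketch of the human-in-the-loop pattern (not the v6 beta API).
type ToolCall = { toolName: string; input: unknown };

// Static approval: a fixed set of tools that always need confirmation.
const alwaysApprove = new Set(['deleteRecord']);

// Dynamic approval: a predicate over the tool input.
const needsDynamicApproval = (call: ToolCall): boolean =>
  call.toolName === 'transferFunds' &&
  (call.input as { amount: number }).amount > 1000;

function requiresApproval(call: ToolCall): boolean {
  return alwaysApprove.has(call.toolName) || needsDynamicApproval(call);
}

// Execute the tool only when approval was granted (or none was required).
function runWithApproval(
  call: ToolCall,
  approved: boolean,
  execute: (input: unknown) => string,
): string {
  if (requiresApproval(call) && !approved) {
    return `Tool ${call.toolName} denied by user`;
  }
  return execute(call.input);
}
```

The key design point is that the approval decision sits between the model's tool call and its execution, so the tool loop can pause, surface the call to a human, and resume with the decision.
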
**3. Reranking Support**
|
||||||
|
Improve search relevance by reordering documents:
|
||||||
|
- Supported providers: Cohere, Amazon Bedrock, Together.ai
|
||||||
|
- Specialized reranking models for RAG workflows
|
||||||
|
|
||||||
|
**4. Structured Output (Stable)**
|
||||||
|
Combine multi-step tool calling with structured data generation:
|
||||||
|
- Multiple output strategies: objects, arrays, choices, text formats
|
||||||
|
- Now stable and production-ready in v6
|
||||||
|
|
||||||
|
**5. Call Options**
|
||||||
|
Dynamic runtime configuration:
|
||||||
|
- Type-safe parameter passing
|
||||||
|
- RAG integration, model selection, tool customization
|
||||||
|
- Provider-specific settings adjustments
|
||||||
|
|
||||||
|
**6. Image Editing (Coming Soon)**
|
||||||
|
Native support for image transformation workflows.
|
||||||
|
|
||||||
|
### Migration from v5
|
||||||
|
|
||||||
|
**Unlike v4→v5, v6 has minimal breaking changes:**
|
||||||
|
- Powered by v3 Language Model Specification
|
||||||
|
- Most users require no code changes
|
||||||
|
- Agent abstraction is additive (opt-in)
|
||||||
|
|
||||||
|
**Install Beta:**
|
||||||
|
```bash
|
||||||
|
npm install ai@beta @ai-sdk/openai@beta @ai-sdk/react@beta
|
||||||
|
```
|
||||||
|
|
||||||
|
**Official Docs:** https://ai-sdk.dev/docs/announcing-ai-sdk-6-beta
|
||||||
|
|
||||||
|
---

## Latest AI Models (2025)

### OpenAI

**GPT-5** (Aug 7, 2025):
- 45% less hallucination than GPT-4o
- State-of-the-art in math, coding, visual perception, health
- Available in ChatGPT, API, GitHub Models, Microsoft Copilot

**GPT-5.1** (Nov 13, 2025):
- Improved speed and efficiency over GPT-5
- Available in the API platform

```typescript
import { openai } from '@ai-sdk/openai';
const gpt5 = openai('gpt-5');
const gpt51 = openai('gpt-5.1');
```

### Anthropic

**Claude 4 Family** (May-Oct 2025):
- **Opus 4** (May 22): Best for complex reasoning, $15/$75 per million tokens
- **Sonnet 4** (May 22): Balanced performance, $3/$15 per million tokens
- **Opus 4.1** (Aug 5): Enhanced agentic tasks, real-world coding
- **Sonnet 4.5** (Sept 29): Most capable for coding, agents, computer use
- **Haiku 4.5** (Oct 15): Small, fast, low-latency model

```typescript
import { anthropic } from '@ai-sdk/anthropic';
const sonnet45 = anthropic('claude-sonnet-4-5-20250929'); // Latest
const opus41 = anthropic('claude-opus-4-1-20250805');
const haiku45 = anthropic('claude-haiku-4-5-20251015');
```

### Google

**Gemini 2.5 Family** (Mar-Sept 2025):
- **Pro** (March 2025): Most intelligent, #1 on LMArena at launch
- **Pro Deep Think** (May 2025): Enhanced reasoning mode
- **Flash** (May 2025): Fast, cost-effective
- **Flash-Lite** (Sept 2025): Updated efficiency

```typescript
import { google } from '@ai-sdk/google';
const pro = google('gemini-2.5-pro');
const flash = google('gemini-2.5-flash');
const lite = google('gemini-2.5-flash-lite');
```

---

## v5 Core Functions (Basics)

**generateText()** - Text completion with tools
**streamText()** - Real-time streaming
**generateObject()** - Structured output (Zod schemas)
**streamObject()** - Streaming structured data

See official docs for usage: https://ai-sdk.dev/docs/ai-sdk-core

---

## Cloudflare Workers Startup Fix

**Problem:** AI SDK v5 + Zod can add over 270ms of initialization time, pushing a Worker toward the 400ms startup limit.

**Solution:**
```typescript
// ❌ BAD: Top-level imports cause startup overhead
import { createWorkersAI } from 'workers-ai-provider';
const workersai = createWorkersAI({ binding: env.AI });

// ✅ GOOD: Lazy initialization inside the handler
app.post('/chat', async (c) => {
  const { createWorkersAI } = await import('workers-ai-provider');
  const workersai = createWorkersAI({ binding: c.env.AI });
  // ...
});
```

**Additional:**
- Minimize top-level Zod schemas
- Move complex schemas into route handlers
- Monitor startup time with Wrangler
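
A common refinement is to cache the lazily created provider per isolate so the dynamic-import cost is paid once, not on every request. A minimal self-contained sketch — `makeProvider` is a stand-in for an expensive factory such as `createWorkersAI`, which is not loaded here:

```typescript
// Cache a lazily initialized provider per isolate.
// `makeProvider` stands in for an expensive factory like createWorkersAI().
let initCount = 0;

type Provider = { name: string };

function makeProvider(): Provider {
  initCount++; // tracks how many times the expensive path actually runs
  return { name: 'workers-ai' };
}

let cached: Provider | undefined;

function getProvider(): Provider {
  // First call pays the initialization cost; later calls reuse the instance.
  cached ??= makeProvider();
  return cached;
}
```

In a Worker, the module-level `cached` variable survives across requests within the same isolate, so only the first request after a cold start pays the cost.
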

---

## v5 Tool Calling Changes

**Breaking Changes:**
- `parameters` → `inputSchema` (Zod schema)
- Tool properties: `args` → `input`, `result` → `output`
- `ToolExecutionError` removed (now `tool-error` content parts)
- `maxSteps` parameter removed → use `stopWhen: stepCountIs(n)`

**New in v5:**
- Dynamic tools (add tools at runtime based on context)
- Agent class (multi-step execution with tools)
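
The `stopWhen` semantics can be illustrated without the SDK. Below is a self-contained sketch of a tool loop that stops when a condition over the accumulated steps is met; `stepCountIs` and `hasToolCall` here are re-implementations for illustration, not the SDK exports:

```typescript
// Illustrative re-implementation of stopWhen-style stop conditions.
type Step = { toolCalls: string[] };
type StopCondition = (steps: Step[]) => boolean;

const stepCountIs = (n: number): StopCondition => (steps) => steps.length >= n;
const hasToolCall = (name: string): StopCondition => (steps) =>
  steps.some((s) => s.toolCalls.includes(name));

// Run a mock tool loop until the stop condition triggers.
function runLoop(produceStep: (i: number) => Step, stopWhen: StopCondition): Step[] {
  const steps: Step[] = [];
  while (!stopWhen(steps)) {
    steps.push(produceStep(steps.length));
    if (steps.length > 100) break; // safety guard against runaway loops
  }
  return steps;
}
```

The point of the v5 design is that the stop criterion is a predicate over the steps taken so far, which composes more flexibly than a single `maxSteps` integer.
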

---

## Critical v4→v5 Migration

AI SDK v5 introduced extensive breaking changes. If you are migrating from v4, follow this guide.

### Breaking Changes Overview

1. **Parameter Renames**
   - `maxTokens` → `maxOutputTokens`
   - `providerMetadata` → `providerOptions`

2. **Tool Definitions**
   - `parameters` → `inputSchema`
   - Tool properties: `args` → `input`, `result` → `output`

3. **Message Types**
   - `CoreMessage` → `ModelMessage`
   - `Message` → `UIMessage`
   - `convertToCoreMessages` → `convertToModelMessages`

4. **Tool Error Handling**
   - `ToolExecutionError` class removed
   - Errors now surface as `tool-error` content parts
   - Enables automated retry

5. **Multi-Step Execution**
   - `maxSteps` → `stopWhen`
   - Use `stepCountIs()` or `hasToolCall()`

6. **Message Structure**
   - Simple `content` string → `parts` array
   - Parts: text, file, reasoning, tool-call, tool-result

7. **Streaming Architecture**
   - Single chunk → start/delta/end lifecycle
   - Unique IDs for concurrent streams

8. **Tool Streaming**
   - Enabled by default
   - `toolCallStreaming` option removed

9. **Package Reorganization**
   - `ai/rsc` → `@ai-sdk/rsc`
   - `ai/react` → `@ai-sdk/react`
   - `LangChainAdapter` → `@ai-sdk/langchain`
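
Item 6 can be made concrete with a small conversion helper. This is an illustrative, self-contained sketch of the shape change (a plain `content` string becoming a `parts` array), not an SDK utility:

```typescript
// Illustrative sketch: converting a v4-style message (plain content string)
// into the v5-style parts-array shape. Not an SDK function.
type V4Message = { role: 'user' | 'assistant'; content: string };
type TextPart = { type: 'text'; text: string };
type V5Message = { role: 'user' | 'assistant'; parts: TextPart[] };

function toPartsMessage(msg: V4Message): V5Message {
  // A plain string becomes a single text part; real v5 messages can also
  // carry file, reasoning, tool-call, and tool-result parts.
  return { role: msg.role, parts: [{ type: 'text', text: msg.content }] };
}
```
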

### Migration Examples

**Before (v4):**
```typescript
import { generateText } from 'ai';

const result = await generateText({
  model: openai.chat('gpt-4'),
  maxTokens: 500,
  providerMetadata: { openai: { user: 'user-123' } },
  tools: {
    weather: {
      description: 'Get weather',
      parameters: z.object({ location: z.string() }),
      execute: async (args) => { /* args.location */ },
    },
  },
  maxSteps: 5,
});
```

**After (v5):**
```typescript
import { generateText, tool, stepCountIs } from 'ai';

const result = await generateText({
  model: openai('gpt-4'),
  maxOutputTokens: 500,
  providerOptions: { openai: { user: 'user-123' } },
  tools: {
    weather: tool({
      description: 'Get weather',
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ location }) => { /* use location */ },
    }),
  },
  stopWhen: stepCountIs(5),
});
```

### Migration Checklist

- [ ] Update all `maxTokens` to `maxOutputTokens`
- [ ] Update `providerMetadata` to `providerOptions`
- [ ] Convert tool `parameters` to `inputSchema`
- [ ] Update tool execute functions: `args` → `input`
- [ ] Replace `maxSteps` with `stopWhen: stepCountIs(n)`
- [ ] Update message types: `CoreMessage` → `ModelMessage`
- [ ] Remove `ToolExecutionError` handling
- [ ] Update package imports (`ai/rsc` → `@ai-sdk/rsc`)
- [ ] Test streaming behavior (architecture changed)
- [ ] Update TypeScript types

### Automated Migration

AI SDK provides a migration tool:

```bash
npx ai migrate
```

This updates most breaking changes automatically. Review the changes carefully.

**Official Migration Guide:**
https://ai-sdk.dev/docs/migration-guides/migration-guide-5-0

---

## Top 12 Errors & Solutions

### 1. AI_APICallError

**Cause:** API request failed (network, auth, rate limit).

**Solution:**
```typescript
import { APICallError } from 'ai'; // the docs list this error as AI_APICallError

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Hello',
  });
} catch (error) {
  if (APICallError.isInstance(error)) {
    console.error('API call failed:', error.message);
    console.error('Status code:', error.statusCode);
    console.error('Response:', error.responseBody);

    // Check common causes
    if (error.statusCode === 401) {
      // Invalid API key
    } else if (error.statusCode === 429) {
      // Rate limit - implement backoff
    } else if (error.statusCode && error.statusCode >= 500) {
      // Provider issue - retry
    }
  }
}
```

**Prevention:**
- Validate API keys at startup
- Implement retry logic with exponential backoff
- Monitor rate limits
- Handle network errors gracefully

---

### 2. AI_NoObjectGeneratedError

**Cause:** The model didn't generate a valid object matching the schema.

**Solution:**
```typescript
import { NoObjectGeneratedError } from 'ai'; // the docs list this error as AI_NoObjectGeneratedError

try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: z.object({ /* complex schema */ }),
    prompt: 'Generate data',
  });
} catch (error) {
  if (NoObjectGeneratedError.isInstance(error)) {
    console.error('No valid object generated');

    // Solutions:
    // 1. Simplify the schema
    // 2. Add more context to the prompt
    // 3. Provide examples in the prompt
    // 4. Try a different model (GPT-4 handles complex objects better than GPT-3.5)
  }
}
```

**Prevention:**
- Start with simple schemas, add complexity incrementally
- Include examples in the prompt: "Generate a person like: { name: 'Alice', age: 30 }"
- Use GPT-4 for complex structured output
- Test schemas with sample data first

---

### 3. Worker Startup Limit (270ms+)

**Cause:** AI SDK v5 + Zod initialization overhead in Cloudflare Workers exceeds startup limits.

**Solution:**
```typescript
// BAD: Top-level imports cause startup overhead
import { createWorkersAI } from 'workers-ai-provider';
import { complexSchema } from './schemas';

const workersai = createWorkersAI({ binding: env.AI });

// GOOD: Lazy initialization inside handler
export default {
  async fetch(request, env) {
    const { createWorkersAI } = await import('workers-ai-provider');
    const workersai = createWorkersAI({ binding: env.AI });

    // Use workersai here
  },
};
```

**Prevention:**
- Move AI SDK imports inside route handlers
- Minimize top-level Zod schemas
- Monitor Worker startup time (must stay under 400ms)
- Use Wrangler's startup time reporting

**GitHub Issue:** Search for "Workers startup limit" in Vercel AI SDK issues
---

### 4. streamText Fails Silently

**Cause:** Stream errors can be swallowed by `createDataStreamResponse`.

**Status:** ✅ **RESOLVED** - Fixed in ai@4.1.22 (February 2025)

**Solution (Recommended):**
```typescript
// Use the onError callback (added in v4.1.22)
const stream = streamText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  onError({ error }) {
    console.error('Stream error:', error);
    // Custom error logging and handling
  },
});

// Stream safely
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```

**Alternative (Manual try-catch):**
```typescript
// Fallback if not using the onError callback
try {
  const stream = streamText({
    model: openai('gpt-4'),
    prompt: 'Hello',
  });

  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }
} catch (error) {
  console.error('Stream error:', error);
}
```

**Prevention:**
- **Use the `onError` callback** for proper error capture (recommended)
- Implement server-side error monitoring
- Test stream error handling explicitly
- Always log on the server side in production

**GitHub Issue:** #4726 (RESOLVED)
---

### 5. AI_LoadAPIKeyError

**Cause:** Missing or invalid API key.

**Solution:**
```typescript
import { LoadAPIKeyError } from 'ai'; // the docs list this error as AI_LoadAPIKeyError

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Hello',
  });
} catch (error) {
  if (LoadAPIKeyError.isInstance(error)) {
    console.error('API key error:', error.message);

    // Check:
    // 1. .env file exists and is loaded
    // 2. Correct env variable name (OPENAI_API_KEY)
    // 3. Key format is valid (starts with sk-)
  }
}
```

**Prevention:**
- Validate API keys at application startup
- Use environment variable validation (e.g., zod)
- Provide clear error messages in development
- Document required environment variables
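
The startup validation can be a few lines of plain TypeScript. A minimal sketch — the specific checks (variable name, `sk-` prefix) are illustrative for OpenAI; adapt them per provider:

```typescript
// Minimal startup check for a required AI provider key (illustrative).
function validateApiKey(env: Record<string, string | undefined>): string[] {
  const problems: string[] = [];
  const key = env.OPENAI_API_KEY;
  if (!key) {
    problems.push('OPENAI_API_KEY is not set');
  } else if (!key.startsWith('sk-')) {
    problems.push('OPENAI_API_KEY does not look like an OpenAI key (expected "sk-" prefix)');
  }
  return problems; // an empty array means the key passed the checks
}
```

Run this once at boot and fail fast with the collected messages instead of letting the first request surface a confusing provider error.
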

---

### 6. AI_InvalidArgumentError

**Cause:** Invalid parameters passed to a function.

**Solution:**
```typescript
import { InvalidArgumentError } from 'ai'; // the docs list this error as AI_InvalidArgumentError

try {
  const result = await generateText({
    model: openai('gpt-4'),
    maxOutputTokens: -1, // Invalid!
    prompt: 'Hello',
  });
} catch (error) {
  if (InvalidArgumentError.isInstance(error)) {
    console.error('Invalid argument:', error.message);
    // Check parameter types and values
  }
}
```

**Prevention:**
- Use TypeScript for type checking
- Validate inputs before calling AI SDK functions
- Read function signatures carefully
- Check official docs for parameter constraints

---

### 7. AI_NoContentGeneratedError

**Cause:** The model generated no content (safety filters, etc.).

**Solution:**
```typescript
import { NoContentGeneratedError } from 'ai'; // the docs list this error as AI_NoContentGeneratedError

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Some prompt',
  });
} catch (error) {
  if (NoContentGeneratedError.isInstance(error)) {
    console.error('No content generated');

    // Possible causes:
    // 1. Safety filters blocked output
    // 2. Prompt triggered content policy
    // 3. Model configuration issue

    // Handle gracefully:
    return { text: 'Unable to generate response. Please try different input.' };
  }
}
```

**Prevention:**
- Sanitize user inputs
- Avoid prompts that may trigger safety filters
- Have fallback messaging
- Log occurrences for analysis

---

### 8. AI_TypeValidationError

**Cause:** Zod schema validation failed on the generated output.

**Solution:**
```typescript
import { TypeValidationError } from 'ai'; // the docs list this error as AI_TypeValidationError

try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: z.object({
      age: z.number().min(0).max(120), // Strict validation
    }),
    prompt: 'Generate person',
  });
} catch (error) {
  if (TypeValidationError.isInstance(error)) {
    console.error('Validation failed:', error.message);

    // Solutions:
    // 1. Relax schema constraints
    // 2. Add more guidance in the prompt
    // 3. Use .optional() for unreliable fields
  }
}
```

**Prevention:**
- Start with lenient schemas, tighten gradually
- Use `.optional()` for fields that may not always be present
- Add validation hints in field descriptions
- Test with various prompts

---

### 9. AI_RetryError

**Cause:** All retry attempts failed.

**Solution:**
```typescript
import { RetryError } from 'ai'; // the docs list this error as AI_RetryError

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Hello',
    maxRetries: 3, // Default is 2
  });
} catch (error) {
  if (RetryError.isInstance(error)) {
    console.error('All retries failed');
    console.error('Last error:', error.lastError);

    // Check root cause:
    // - Persistent network issue
    // - Provider outage
    // - Invalid configuration
  }
}
```

**Prevention:**
- Investigate the root cause of failures
- Adjust retry configuration if needed
- Implement a circuit breaker pattern for provider outages
- Have fallback providers
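
The circuit-breaker idea from the prevention list fits in a few lines. A minimal, self-contained state machine (the threshold, naming, and lack of a cool-down timer are simplifications; production breakers usually add a half-open state):

```typescript
// Minimal circuit breaker: open after N consecutive failures, reject until reset.
class CircuitBreaker {
  private failures = 0;
  constructor(private readonly threshold: number) {}

  get isOpen(): boolean {
    return this.failures >= this.threshold;
  }

  run<T>(op: () => T): T {
    if (this.isOpen) throw new Error('circuit open: provider unavailable');
    try {
      const result = op();
      this.failures = 0; // a success resets the counter
      return result;
    } catch (err) {
      this.failures++;
      throw err;
    }
  }

  reset(): void {
    this.failures = 0;
  }
}
```

Wrapping provider calls this way turns a provider outage into fast, cheap rejections instead of a pile-up of doomed retries.
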

---

### 10. Rate Limiting Errors

**Cause:** Exceeded provider rate limits (RPM/TPM).

**Solution:**
```typescript
import { APICallError, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Implement exponential backoff
async function generateWithBackoff(prompt: string, retries = 3) {
  for (let i = 0; i < retries; i++) {
    try {
      return await generateText({
        model: openai('gpt-4'),
        prompt,
      });
    } catch (error) {
      if (APICallError.isInstance(error) && error.statusCode === 429) {
        const delay = Math.pow(2, i) * 1000; // Exponential backoff
        console.log(`Rate limited, waiting ${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Rate limit retries exhausted');
}
```

**Prevention:**
- Monitor rate limit headers
- Queue requests to stay under limits
- Upgrade provider tier if needed
- Implement request throttling
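
Request throttling can be as simple as a sliding-window counter. A self-contained sketch — the window size and limit are illustrative, and timestamps are passed in explicitly so the logic is deterministic and easy to test:

```typescript
// Sliding-window rate limiter: allow at most `limit` requests per `windowMs`.
class SlidingWindowLimiter {
  private timestamps: number[] = [];
  constructor(
    private readonly limit: number,
    private readonly windowMs: number,
  ) {}

  // Returns true if a request at time `now` (ms) is allowed, recording it.
  tryAcquire(now: number): boolean {
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Calling `tryAcquire(Date.now())` before each provider request keeps you under a self-imposed ceiling instead of discovering the provider's limit via 429s.
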

---

### 11. TypeScript Performance with Zod

**Cause:** Complex Zod schemas slow down TypeScript type checking.

**Solution:**
```typescript
// Instead of deeply nested schemas at top level:
// const complexSchema = z.object({ /* 100+ fields */ });

// Define inside functions or use type assertions:
function generateData() {
  const schema = z.object({ /* complex schema */ });
  return generateObject({ model: openai('gpt-4'), schema, prompt: '...' });
}

// Or use z.lazy() for recursive schemas:
type Category = { name: string; subcategories?: Category[] };
const CategorySchema: z.ZodType<Category> = z.lazy(() =>
  z.object({
    name: z.string(),
    subcategories: z.array(CategorySchema).optional(),
  })
);
```

**Prevention:**
- Avoid top-level complex schemas
- Use `z.lazy()` for recursive types
- Split large schemas into smaller ones
- Use type assertions where appropriate

**Official Docs:**
https://ai-sdk.dev/docs/troubleshooting/common-issues/slow-type-checking

---

### 12. Invalid JSON Response (Provider-Specific)

**Cause:** Some models occasionally return invalid JSON.

**Solution:**
```typescript
// Use built-in retry and mode selection
const result = await generateObject({
  model: openai('gpt-4'),
  schema: mySchema,
  prompt: 'Generate data',
  mode: 'json', // Force JSON mode (supported by GPT-4)
  maxRetries: 3, // Retry on invalid JSON
});

// Or catch and retry manually:
try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: mySchema,
    prompt: 'Generate data',
  });
} catch (error) {
  // Retry with a different model
  const result = await generateObject({
    model: openai('gpt-4-turbo'),
    schema: mySchema,
    prompt: 'Generate data',
  });
}
```

**Prevention:**
- Use `mode: 'json'` when available
- Prefer GPT-4 for structured output
- Implement retry logic
- Validate responses

**GitHub Issue:** #4302 (Imagen 3.0 Invalid JSON)
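
The manual retry generalizes to a small helper that tries candidates in order. A self-contained sketch — the `attempts` thunks stand in for `generateObject` calls against successively cheaper or stronger models:

```typescript
// Try a list of operations in order, returning the first success (illustrative).
function withFallback<T>(attempts: Array<() => T>): T {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return attempt();
    } catch (err) {
      lastError = err; // remember the failure and try the next candidate
    }
  }
  throw lastError; // every candidate failed; surface the last error
}
```
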

---

**More Errors:** https://ai-sdk.dev/docs/reference/ai-sdk-errors (28 total)

---

## When to Use This Skill

### Use ai-sdk-core when:

- Building backend AI features (server-side text generation)
- Implementing server-side text generation (Node.js, Workers, Next.js)
- Creating structured AI outputs (JSON, forms, data extraction)
- Building AI agents with tools (multi-step workflows)
- Integrating multiple AI providers (OpenAI, Anthropic, Google, Cloudflare)
- Migrating from AI SDK v4 to v5
- Encountering AI SDK errors (AI_APICallError, AI_NoObjectGeneratedError, etc.)
- Using AI in Cloudflare Workers (with workers-ai-provider)
- Using AI in Next.js Server Components/Actions
- Needing a consistent API across different LLM providers

### Don't use this skill when:

- Building React chat UIs (use the **ai-sdk-ui** skill instead)
- You need frontend hooks like useChat (use the **ai-sdk-ui** skill instead)
- You need advanced topics like embeddings or image generation (check the official docs)
- Building native Cloudflare Workers AI apps without multi-provider support (use the **cloudflare-workers-ai** skill instead)
- You need Generative UI / RSC (see https://ai-sdk.dev/docs/ai-sdk-rsc)

---

## Versions

**AI SDK:**
- Stable: ai@5.0.98 (Nov 20, 2025)
- Beta: ai@6.0.0-beta.107 (Nov 22, 2025)
- Zod 3.x/4.x both supported (3.23.8 recommended)

**Latest Models (2025):**
- OpenAI: GPT-5.1, GPT-5, o3
- Anthropic: Claude Sonnet 4.5, Opus 4.1, Haiku 4.5
- Google: Gemini 2.5 Pro/Flash/Lite

**Check Latest:**
```bash
npm view ai version
npm view ai dist-tags  # See beta versions
```

---

## Official Docs

**Core:**
- AI SDK 6 Beta: https://ai-sdk.dev/docs/announcing-ai-sdk-6-beta
- AI SDK Core: https://ai-sdk.dev/docs/ai-sdk-core/overview
- v4→v5 Migration: https://ai-sdk.dev/docs/migration-guides/migration-guide-5-0
- All Errors (28): https://ai-sdk.dev/docs/reference/ai-sdk-errors
- Providers (25+): https://ai-sdk.dev/providers/overview

**GitHub:**
- Repository: https://github.com/vercel/ai
- Issues: https://github.com/vercel/ai/issues

---

**Last Updated:** 2025-11-22
**Skill Version:** 1.2.0
**AI SDK:** 5.0.98 stable / 6.0.0-beta.107
521 VERIFICATION_REPORT.md Normal file
@@ -0,0 +1,521 @@
|
|||||||
|
# Skill Verification Report: ai-sdk-core

**Date**: 2025-10-29
**Verifier**: Claude Code (Sonnet 4.5)
**Standard**: claude-code-skill-standards.md
**Last Skill Update**: 2025-10-21 (8 days ago)

---

## Executive Summary

**Status**: ⚠️ **WARNING - Multiple Updates Needed**

**Issues Found**: 12 total

- **Critical**: 3 (Claude models outdated, model availability statements, Zod version)
- **Moderate**: 5 (package versions, fixed issue still documented)
- **Minor**: 4 (missing new features, documentation enhancements)

**Overall Assessment**: The skill's core API patterns and architecture are correct, but model information is significantly outdated (the Claude 3.x → 4.x transition was missed), and several package versions need updating. One documented issue (#4726) has been fixed but is still listed as active.

---

## Detailed Findings

### 1. YAML Frontmatter ✅ **PASS**

**Status**: Compliant with official standards

**Validation**:

- [x] YAML frontmatter present (lines 1-17)
- [x] `name` field present: "AI SDK Core" (matches directory)
- [x] `description` field comprehensive (3+ sentences, use cases, keywords)
- [x] Third-person voice used correctly
- [x] `license` field present: MIT
- [x] No non-standard frontmatter fields
- [x] Keywords comprehensive (technologies, errors, use cases)

**Notes**: Frontmatter is well-structured and follows all standards.

---
### 2. Package Versions ⚠️ **WARNING**

**Status**: Multiple packages outdated (not critical, but recommended to update)

| Package | Documented | Latest (npm) | Gap | Severity |
|---------|------------|--------------|-----|----------|
| `ai` | ^5.0.76 | **5.0.81** | +5 patches | LOW |
| `@ai-sdk/openai` | ^2.0.53 | **2.0.56** | +3 patches | LOW |
| `@ai-sdk/anthropic` | ^2.0.0 | **2.0.38** | +38 patches | MODERATE |
| `@ai-sdk/google` | ^2.0.0 | **2.0.24** | +24 patches | MODERATE |
| `workers-ai-provider` | ^2.0.0 | **2.0.0** | ✅ Current | ✅ |
| `zod` | ^3.23.8 | **4.1.12** | Major version | MODERATE |

**Findings**:

1. **ai (5.0.76 → 5.0.81)**: +5 patch versions
   - **Impact**: Minor bug fixes and improvements
   - **Breaking changes**: None (patch updates)
   - **Recommendation**: Update to latest

2. **@ai-sdk/anthropic (2.0.0 → 2.0.38)**: +38 patch versions
   - **Impact**: Significant bug fixes accumulated
   - **Breaking changes**: None (patch updates)
   - **Recommendation**: **Update immediately** (most outdated)

3. **@ai-sdk/google (2.0.0 → 2.0.24)**: +24 patch versions
   - **Impact**: Multiple bug fixes
   - **Breaking changes**: None (patch updates)
   - **Recommendation**: Update to latest

4. **zod (3.23.8 → 4.1.12)**: Major version jump
   - **Impact**: Zod 4.0 has breaking changes (error APIs, `.default()` behavior, `ZodError.errors` removed)
   - **AI SDK Compatibility**: AI SDK 5 officially supports both Zod 3 and Zod 4 (Zod 4 support added July 31, 2025)
   - **Vercel Recommendation**: Use Zod 4 for new projects
   - **Known Issues**: Some peer dependency warnings with the `zod-to-json-schema` package
   - **Recommendation**: Document Zod 4 compatibility, keep examples compatible with both versions

**Sources**:

- npm registry (checked 2025-10-29)
- Vercel AI SDK 5 blog: https://vercel.com/blog/ai-sdk-5
- Zod v4 migration guide: https://zod.dev/v4/changelog
- AI SDK Zod 4 support: https://github.com/vercel/ai/issues/5682

---
### 3. Model Names ❌ **CRITICAL**

**Status**: Significant inaccuracies - Claude models are a full generation behind, availability statements outdated

#### Finding 3.1: Claude Models **MAJOR VERSION BEHIND** ❌

**Documented**:

```typescript
const sonnet = anthropic('claude-3-5-sonnet-20241022'); // OLD
const opus = anthropic('claude-3-opus-20240229');       // OLD
const haiku = anthropic('claude-3-haiku-20240307');     // OLD
```

**Current Reality**:

- **Claude Sonnet 4** released: May 22, 2025
- **Claude Opus 4** released: May 22, 2025
- **Claude Sonnet 4.5** released: September 29, 2025
- **Naming convention changed**: `claude-sonnet-4-5-20250929` (not `claude-3-5-sonnet-YYYYMMDD`)
- **Anthropic deprecated Claude 3.x models** to focus on the Claude 4.x family

**Lines affected**: 71, 605-610, references throughout

**Severity**: **CRITICAL** - Users following this skill will use deprecated models

**Recommendation**:

1. Update all Claude model examples to Claude 4.x
2. Add Claude 3.x to a legacy/migration section with a deprecation warning
3. Document the new naming convention

**Sources**:

- Anthropic Claude models: https://docs.claude.com/en/docs/about-claude/models/overview
- Claude Sonnet 4.5 announcement: https://www.anthropic.com/claude/sonnet

---

#### Finding 3.2: GPT-5 and Gemini 2.5 Availability ⚠️ **MODERATE**

**Documented**:

```typescript
const gpt5 = openai('gpt-5');                 // If available (line 573)
const lite = google('gemini-2.5-flash-lite'); // If available (line 642)
```

**Current Reality**:

- **GPT-5**: Released August 7, 2025 (nearly 3 months ago)
  - Models available: `gpt-5`, `gpt-5-mini`, `gpt-5-nano`
  - Status: Generally available through OpenAI API
  - Default model in ChatGPT for all users

- **Gemini 2.5**: All models generally available
  - Gemini 2.5 Pro: GA since June 17, 2025
  - Gemini 2.5 Flash: GA since June 17, 2025
  - Gemini 2.5 Flash-Lite: GA since July 2025

**Lines affected**: 32, 573, 642

**Severity**: **MODERATE** - Not critical but creates confusion

**Recommendation**:

1. Remove "If available" comments
2. Update to "Currently available" or similar
3. Verify exact model identifiers with providers

**Sources**:

- OpenAI GPT-5: https://openai.com/index/introducing-gpt-5/
- Google Gemini 2.5: https://developers.googleblog.com/en/gemini-2-5-thinking-model-updates/

---
### 4. Documentation Accuracy ⚠️ **WARNING**

**Status**: Core patterns correct, but missing new features and has outdated information

#### Finding 4.1: Missing New Features (Minor)

**New AI SDK 5 Features Not Documented**:

1. **`onError` callback for streamText** (IMPORTANT!)
   - Added in ai@4.1.22 (now standard in v5)
   - Critical for proper error handling
   - Fixes the "silent failure" issue (#4726)
   - **Recommendation**: Add section on streamText error handling

   ```typescript
   streamText({
     model: openai('gpt-4'),
     prompt: 'Hello',
     onError({ error }) {
       console.error('Stream error:', error);
     }
   });
   ```

2. **`experimental_transform` for stream transformations**
   - Allows custom pipeline support (e.g., `smoothStream()`)
   - **Recommendation**: Add to advanced features or mention in "not covered"

3. **`sources` support**
   - Web references from providers like Perplexity/Google
   - **Recommendation**: Add to "Advanced Topics (Not Replicated in This Skill)"

4. **`fullStream` property**
   - Fine-grained event handling for tool calls and reasoning
   - Already mentioned briefly, but could be expanded

**Severity**: **LOW** - Core functionality documented correctly

**Recommendation**: Add a section on new v5 features or update the "Advanced Topics" list

---

#### Finding 4.2: Code Examples (Pass)

**Status**: All tested code patterns are valid for AI SDK 5.0.76+

**Validation**:

- [x] Function signatures correct (`generateText`, `streamText`, `generateObject`, `streamObject`)
- [x] Parameter names accurate (`maxOutputTokens`, `temperature`, `stopWhen`)
- [x] Tool calling patterns correct (`tool()` function, `inputSchema`)
- [x] Agent class usage correct
- [x] Error handling classes correct
- [x] TypeScript types valid

**Notes**: Core API documentation is accurate and production-ready.

---
### 5. Known Issues Accuracy ⚠️ **WARNING**

**Status**: One issue fixed but still documented as active, one correctly documented

#### Finding 5.1: Issue #4726 (streamText fails silently) - **FIXED BUT STILL DOCUMENTED** ⚠️

**Documented** (lines 1130-1161):

```typescript
// Add explicit error handling
const stream = streamText({
  model: openai('gpt-4'),
  prompt: 'Hello',
});

try {
  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }
} catch (error) {
  console.error('Stream error:', error);
  // Check server logs - errors may not reach client
}

// GitHub Issue: #4726
```

**Actual Status**:

- **CLOSED**: February 6, 2025
- **Fixed in**: ai@4.1.22
- **Solution**: `onError` callback parameter added

**Impact**: Users may think this is still an unsolved issue when it has actually been fixed

**Recommendation**:

1. Update to note the issue was resolved
2. Show the `onError` callback as the preferred solution
3. Keep the manual try-catch as a secondary approach
4. Update line: `// GitHub Issue: #4726 (RESOLVED in v4.1.22)`

**Source**: https://github.com/vercel/ai/issues/4726
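To make the recommended fix concrete, the sketch below shows the control-flow difference `onError` introduces: errors are delivered to the callback instead of escaping the stream iterator. Note this uses a local mock in place of the real `streamText` from the `ai` package, so the function shape and the `failAfter` trigger here are illustrative assumptions, not SDK API.

```typescript
// Mock of the onError flow: a stand-in for streamText that routes stream
// errors to an onError callback instead of throwing from the async iterator.
// Illustrative only -- the real `streamText` lives in the `ai` package.
type StreamOptions = {
  chunks: string[];
  failAfter?: number; // index at which the mock stream "fails" (assumption)
  onError?: (event: { error: unknown }) => void;
};

function mockStreamText(opts: StreamOptions) {
  async function* textStream() {
    for (let i = 0; i < opts.chunks.length; i++) {
      if (opts.failAfter !== undefined && i === opts.failAfter) {
        // With onError registered, the error is reported, not thrown,
        // so the consuming loop ends cleanly with the partial text.
        opts.onError?.({ error: new Error("provider failure") });
        return;
      }
      yield opts.chunks[i];
    }
  }
  return { textStream: textStream() };
}

async function demo(): Promise<{ text: string; errors: string[] }> {
  const errors: string[] = [];
  const stream = mockStreamText({
    chunks: ["Hel", "lo", " world"],
    failAfter: 2,
    onError({ error }) {
      errors.push((error as Error).message);
    },
  });
  let text = "";
  for await (const chunk of stream.textStream) {
    text += chunk;
  }
  return { text, errors };
}
```

This is why the callback is the preferred fix: the consumer keeps the partial output and still observes the failure, rather than relying on a try-catch that may never fire.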
---

#### Finding 5.2: Issue #4302 (Imagen 3.0 Invalid JSON) - **CORRECTLY DOCUMENTED** ✅

**Documented** (lines 1406-1445):

```typescript
// GitHub Issue: #4302 (Imagen 3.0 Invalid JSON)
```

**Actual Status**:

- **OPEN**: Reported January 7, 2025
- **Still unresolved**: Intermittent empty JSON responses from Vertex AI
- **Affects**: `@ai-sdk/google-vertex` version 2.0.13+

**Impact**: Correctly informs users of an ongoing issue

**Status**: ✅ **ACCURATE** - No changes needed

**Source**: https://github.com/vercel/ai/issues/4302

---

### 6. Templates Functionality ⏸️ **NOT TESTED**

**Status**: Not tested in this verification (would require creating a test project)

**Files to Test** (13 templates):

- `templates/generate-text-basic.ts`
- `templates/stream-text-chat.ts`
- `templates/generate-object-zod.ts`
- `templates/stream-object-zod.ts`
- `templates/tools-basic.ts`
- `templates/agent-with-tools.ts`
- `templates/multi-step-execution.ts`
- `templates/openai-setup.ts`
- `templates/anthropic-setup.ts`
- `templates/google-setup.ts`
- `templates/cloudflare-worker-integration.ts`
- `templates/nextjs-server-action.ts`
- `templates/package.json`

**Recommendation**: Test templates in Phase 3 verification (create a test project with the latest packages)

**Assumption**: Templates follow documented patterns, so they likely work correctly (but need verification)

---
### 7. Standards Compliance ✅ **PASS**

**Status**: Fully compliant with Anthropic official standards

**Validation**:

- [x] Follows agent_skills_spec.md structure
- [x] Directory structure correct (`scripts/`, `references/`, `templates/`)
- [x] README.md has comprehensive auto-trigger keywords
- [x] Writing style: imperative instructions, third-person descriptions
- [x] No placeholder text (TODO, FIXME) found
- [x] Skill installed correctly in `~/.claude/skills/`

**Comparison**:

- Matches gold standard: `tailwind-v4-shadcn/`
- Follows repo standards: `claude-code-skill-standards.md`
- Example audit: `CLOUDFLARE_SKILLS_AUDIT.md` patterns

---

### 8. Metadata & Metrics ✅ **PASS**

**Status**: Well-documented and credible

**Validation**:

- [x] Production testing mentioned: "Production-ready backend AI"
- [x] Token efficiency: Implied by "13 templates, comprehensive docs"
- [x] Errors prevented: "Top 12 Errors" documented with solutions
- [x] Status: "Production Ready" (implicit, no beta/experimental warnings)
- [x] Last Updated: 2025-10-21 (8 days ago - recent)
- [x] Version tracking: Skill v1.0.0, AI SDK v5.0.76+

**Notes**:

- No explicit "N% token savings" metric (consider adding)
- "Errors prevented: 12" is clear
- Production evidence: Comprehensive documentation suggests real-world usage

---

### 9. Links & External Resources ⏸️ **NOT TESTED**

**Status**: Not tested in this verification (would require checking each URL)

**Links to Verify** (sample):

- https://ai-sdk.dev/docs/introduction
- https://ai-sdk.dev/docs/ai-sdk-core/overview
- https://github.com/vercel/ai
- https://vercel.com/blog/ai-sdk-5
- https://developers.cloudflare.com/workers-ai/
- [50+ more links in SKILL.md]

**Recommendation**: Automated link checker or manual spot-check in Phase 3

**Assumption**: Official Vercel/Anthropic/OpenAI/Google docs are stable

---

### 10. v4→v5 Migration Guide ✅ **PASS**

**Status**: Comprehensive and accurate

**Sections Reviewed**:

- Breaking changes overview (lines 908-1018)
- Migration examples (lines 954-990)
- Migration checklist (lines 993-1004)
- Automated migration tool mentioned (lines 1007-1017)

**Validation**:

- [x] Breaking changes match official guide
- [x] `maxTokens` → `maxOutputTokens` documented
- [x] `providerMetadata` → `providerOptions` documented
- [x] Tool API changes documented
- [x] `maxSteps` → `stopWhen` migration documented
- [x] Package reorganization noted

**Source**: https://ai-sdk.dev/docs/migration-guides/migration-guide-5-0
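The two pure renames in the checklist above (`maxTokens` → `maxOutputTokens`, `providerMetadata` → `providerOptions`) can be sketched as a small lookup over an options object. This is illustrative only, not the official migration tool; `maxSteps` → `stopWhen` is a semantic change, not a rename, and is deliberately left out.

```typescript
// Illustrative v4 -> v5 option renamer for the two straight renames.
// `maxSteps` -> `stopWhen` changes semantics and must be migrated by hand.
const V4_TO_V5_RENAMES: Record<string, string> = {
  maxTokens: "maxOutputTokens",
  providerMetadata: "providerOptions",
};

function renameV4Options(options: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(options)) {
    // Renamed keys are mapped; everything else passes through unchanged.
    out[V4_TO_V5_RENAMES[key] ?? key] = value;
  }
  return out;
}
```

A lookup table like this is also a convenient checklist artifact: it documents exactly which keys the migration touches.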
---

## Recommendations by Priority

### 🔴 Critical (Fix Immediately)

1. **Update Claude Model Names** (Finding 3.1)
   - Replace all Claude 3.x references with Claude 4.x
   - Document new naming convention: `claude-sonnet-4-5-YYYYMMDD`
   - Add deprecation warning for Claude 3.x models
   - **Files**: SKILL.md (lines 71, 605-610, examples throughout)

2. **Remove "If Available" for GPT-5 and Gemini 2.5** (Finding 3.2)
   - GPT-5 released August 7, 2025 (3 months ago)
   - Gemini 2.5 models GA since June-July 2025
   - **Files**: SKILL.md (lines 32, 573, 642)

3. **Update Anthropic Provider Package** (Finding 2)
   - `@ai-sdk/anthropic`: 2.0.0 → 2.0.38 (+38 patches)
   - Most outdated package, likely includes Claude 4 support
   - **Files**: SKILL.md (line 1678), templates/package.json

---

### 🟡 Moderate (Update Soon)

4. **Update GitHub Issue #4726 Status** (Finding 5.1)
   - Mark as RESOLVED (closed Feb 6, 2025)
   - Document `onError` callback as the solution
   - **Files**: SKILL.md (lines 1130-1161)

5. **Update Package Versions** (Finding 2)
   - `ai`: 5.0.76 → 5.0.81
   - `@ai-sdk/openai`: 2.0.53 → 2.0.56
   - `@ai-sdk/google`: 2.0.0 → 2.0.24
   - **Files**: SKILL.md (lines 1673-1687), templates/package.json

6. **Document Zod 4 Compatibility** (Finding 2)
   - Add note that AI SDK 5 supports both Zod 3 and 4
   - Mention Zod 4 is recommended for new projects
   - Note potential peer dependency warnings
   - **Files**: SKILL.md (lines 1690-1695, dependencies section)

---

### 🟢 Minor (Nice to Have)

7. **Add `onError` Callback Documentation** (Finding 4.1)
   - Document the `onError` callback for streamText
   - Show as preferred error handling method
   - **Files**: SKILL.md (streamText section, error handling)

8. **Add "New in v5" Section** (Finding 4.1)
   - Document: `onError`, `experimental_transform`, `sources`, `fullStream`
   - Or add to "Advanced Topics (Not Replicated in This Skill)"

9. **Update "Last Verified" Date** (Metadata)
   - Change from 2025-10-21 to 2025-10-29
   - **Files**: SKILL.md (line 1778), README.md (line 87)

10. **Add Token Efficiency Metric** (Finding 8)
    - Calculate approximate token savings vs manual implementation
    - Add to metadata section
    - Example: "~60% token savings (12k → 4.5k tokens)"

---

## Verification Checklist Progress

- [x] YAML frontmatter valid ✅
- [x] Package versions checked ⚠️ (outdated)
- [x] Model names verified ❌ (critical issues)
- [x] API patterns checked ✅ (mostly correct)
- [x] Known issues validated ⚠️ (one fixed but documented as active)
- [ ] Templates tested ⏸️ (not tested - requires project creation)
- [x] Standards compliance verified ✅
- [x] Metadata reviewed ✅
- [ ] Links checked ⏸️ (not tested - would need automated tool)
- [x] Documentation accuracy ⚠️ (missing new features)

---

## Next Steps

### Phase 1: Critical Updates (Immediate)

1. Update Claude model names to 4.x throughout
2. Remove "if available" comments for GPT-5 and Gemini 2.5
3. Update `@ai-sdk/anthropic` to 2.0.38

### Phase 2: Moderate Updates (This Week)

4. Mark issue #4726 as resolved, document onError callback
5. Update remaining package versions
6. Add Zod 4 compatibility note

### Phase 3: Testing & Verification (Next Session)

7. Create test project with all templates
8. Verify templates work with latest packages
9. Test with updated model names
10. Check external links (automated or spot-check)
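For the link-check step, one lightweight approach is to first extract the unique URLs from the skill's markdown and then feed them to whatever checker is available. A minimal sketch of the extraction half (the network check itself is not shown; the regex and function name are illustrative):

```typescript
// Pull the unique URLs out of a markdown string so they can be fed
// to a link checker. Sorting makes diffs between runs stable.
function extractUrls(markdown: string): string[] {
  const matches = markdown.match(/https?:\/\/[^\s)>"\]]+/g) ?? [];
  return [...new Set(matches)].sort();
}
```

Running this over SKILL.md would yield the "50+ links" list mentioned above, deduplicated, ready for a spot-check or an automated pass.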

### Phase 4: Enhancements (Optional)

11. Add new v5 features documentation
12. Add token efficiency metrics
13. Update "Last Verified" date
14. Consider adding examples for Claude 4.5 Sonnet

---

## Next Verification

**Scheduled**: 2026-01-29 (3 months from now, per quarterly maintenance policy)

**Priority Items to Check**:

- AI SDK version (watch for v6 GA)
- Claude 5.x release (if any)
- GPT-6 announcements (unlikely but monitor)
- Zod 5.x release (if any)
- New AI SDK features

---

## Appendix: Version Comparison Table

| Component | Documented | Current | Status | Action |
|-----------|------------|---------|--------|--------|
| **Skill** | 1.0.0 | 1.0.0 | ✅ | - |
| **AI SDK** | 5.0.76+ | 5.0.81 | ⚠️ | Update to 5.0.81 |
| **OpenAI Provider** | 2.0.53 | 2.0.56 | ⚠️ | Update to 2.0.56 |
| **Anthropic Provider** | 2.0.0 | 2.0.38 | ❌ | **Update to 2.0.38** |
| **Google Provider** | 2.0.0 | 2.0.24 | ⚠️ | Update to 2.0.24 |
| **Workers AI Provider** | 2.0.0 | 2.0.0 | ✅ | - |
| **Zod** | 3.23.8 | 4.1.12 | ⚠️ | Document Zod 4 support |
| **GPT-5** | "If available" | Available (Aug 2025) | ❌ | **Update availability** |
| **Gemini 2.5** | "If available" | GA (Jun-Jul 2025) | ❌ | **Update availability** |
| **Claude 3.x** | Primary examples | Deprecated | ❌ | **Migrate to Claude 4.x** |
| **Claude 4.x** | Not mentioned | Current (May 2025) | ❌ | **Add as primary** |
| **Claude 4.5** | Not mentioned | Current (Sep 2025) | ❌ | **Add as recommended** |

---

**Report Generated**: 2025-10-29 by Claude Code (Sonnet 4.5)
**Review Status**: Ready for implementation
**Estimated Update Time**: 2-3 hours for all changes
125
plugin.lock.json
Normal file
@@ -0,0 +1,125 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:jezweb/claude-skills:skills/ai-sdk-core",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "1c2857142ee6e3d85df7bbe0d3015fc0971f01ad",
    "treeHash": "90b58c54ad625fd18dad4840d80133a7da6b47909e9ef70f14ef386f40b53972",
    "generatedAt": "2025-11-28T10:19:00.280928Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "ai-sdk-core",
    "description": "Build backend AI features with Vercel AI SDK v5: text generation, structured output (Zod schemas), tool calling, and agents. Multi-provider support (OpenAI, Anthropic, Google, Cloudflare). Use when: implementing server-side AI, generating text/structured data, building AI agents, streaming responses, or troubleshooting AI_APICallError, AI_NoObjectGeneratedError.",
    "version": "1.0.0"
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "40d7043d7a9cbc19e2967e487b401e62415e00637691f74c8468d0ace5960edf"
      },
      {
        "path": "SKILL.md",
        "sha256": "5028044396d2df40ced1626d74b5700fe18282855c8d6d0792e63ba6a2d7c80f"
      },
      {
        "path": "VERIFICATION_REPORT.md",
        "sha256": "b658585656e45ec1ee2efece6b4e7fb45972612051994154e18381ad612ced52"
      },
      {
        "path": "references/providers-quickstart.md",
        "sha256": "d607a8c1cccedb52633eac231901f1020e02a5d8b18a2dc4ea01b679836d5f0e"
      },
      {
        "path": "references/v5-breaking-changes.md",
        "sha256": "0f3e11c07044675f6e3078ea3d3482e0d26cbcd8b4af686cda4819f28c348446"
      },
      {
        "path": "references/top-errors.md",
        "sha256": "7b60f3ac1d0d845070a1a93941b4173cd6b4fbed311c4a23f61504c74856046a"
      },
      {
        "path": "references/production-patterns.md",
        "sha256": "757dd5293a63a5f3e0f013cc86ac6ac6de6d79589a848d27ea5b8aefa8c6186c"
      },
      {
        "path": "references/links-to-official-docs.md",
        "sha256": "c28ec873f07b78c519ceaf3866e03402466c83e41f18069a722937c1b67fbcb8"
      },
      {
        "path": "scripts/check-versions.sh",
        "sha256": "cf24c7435ab34c784ac1537b70b1a886e8d8c11ace85a1fb404e9f1b7965b4f8"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "7f788cd071bcb480f3d31fdf12bf6ba345b4eeabbfd8da2b5b320a051e25c21f"
      },
      {
        "path": "templates/tools-basic.ts",
        "sha256": "7c51a6b2b51611dc20e3de1ce348f36c4347a08fce8e9f41c34a550830c10706"
      },
      {
        "path": "templates/cloudflare-worker-integration.ts",
        "sha256": "19880356c1f7a1c621750e727f2c3668cd0e129084a6259d0941bdb2c365e3a7"
      },
      {
        "path": "templates/generate-object-zod.ts",
        "sha256": "8257e05614541fed5eb48ac51bac08e1959a54747de6246093088f4516094eda"
      },
      {
        "path": "templates/nextjs-server-action.ts",
        "sha256": "b1fcdff3d2b27b4d1a7ca9c68370eaa259378b51cf4e7efe6e535f2203449ec0"
      },
      {
        "path": "templates/stream-object-zod.ts",
        "sha256": "cf8df737215d1eb07a241126b9e4cb329efa189b9649939a5c9d849670e10aa0"
      },
      {
        "path": "templates/package.json",
        "sha256": "386a0998b0931330aff6e20f0a55dd60387d83f8561a6e93642ebd53e2c62a8b"
      },
      {
        "path": "templates/agent-with-tools.ts",
        "sha256": "7af59dd3256bf5ba6c1becc2ad67f20e12239191c3bc87be153390788fb43d38"
      },
      {
        "path": "templates/anthropic-setup.ts",
        "sha256": "67c33b0e9a87de6167954101bf8d5dd7d4e5e0a6b879a716641f66f3515da388"
      },
      {
        "path": "templates/multi-step-execution.ts",
        "sha256": "38431267b3e11ead3d9a08daaba6e15ae915e32a3b36c2d6297bc22eeb1b9052"
      },
      {
        "path": "templates/google-setup.ts",
        "sha256": "7194a7e5953c58da27e2602e81f87856eed55c21077de25a822a1f131167ec9e"
      },
      {
        "path": "templates/generate-text-basic.ts",
        "sha256": "e04407d3ef478e12a22e3e855f901f2005d265f3b0c047756f9fad9eaab2d55f"
      },
      {
        "path": "templates/openai-setup.ts",
        "sha256": "2c6724cf76f6d13541ec8991114940bf681811c013107d4b4b0a94b22f78682d"
      },
      {
        "path": "templates/stream-text-chat.ts",
        "sha256": "8cae34234f6a8f1061ba9c186728ff488047b35a6b6c5e24c22e5c394e67900f"
      }
    ],
    "dirSha256": "90b58c54ad625fd18dad4840d80133a7da6b47909e9ef70f14ef386f40b53972"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}
209
references/links-to-official-docs.md
Normal file
@@ -0,0 +1,209 @@
# AI SDK Core - Official Documentation Links

Organized links to official AI SDK and provider documentation.

---

## AI SDK Core Documentation

### Getting Started

- **Introduction:** https://ai-sdk.dev/docs/introduction
- **AI SDK Core Overview:** https://ai-sdk.dev/docs/ai-sdk-core/overview
- **Foundations:** https://ai-sdk.dev/docs/foundations/overview

### Core Functions

- **Generating Text:** https://ai-sdk.dev/docs/ai-sdk-core/generating-text
- **Streaming Text:** https://ai-sdk.dev/docs/ai-sdk-core/streaming-text
- **Generating Structured Data:** https://ai-sdk.dev/docs/ai-sdk-core/generating-structured-data
- **Streaming Structured Data:** https://ai-sdk.dev/docs/ai-sdk-core/streaming-structured-data

### Tool Calling & Agents

- **Tools and Tool Calling:** https://ai-sdk.dev/docs/ai-sdk-core/tools-and-tool-calling
- **Agents Overview:** https://ai-sdk.dev/docs/agents/overview
- **Building Agents:** https://ai-sdk.dev/docs/agents/building-agents

### Advanced Topics (Not Replicated in This Skill)

- **Embeddings:** https://ai-sdk.dev/docs/ai-sdk-core/embeddings
- **Image Generation:** https://ai-sdk.dev/docs/ai-sdk-core/generating-images
- **Transcription (Audio to Text):** https://ai-sdk.dev/docs/ai-sdk-core/generating-transcriptions
- **Speech (Text to Audio):** https://ai-sdk.dev/docs/ai-sdk-core/generating-speech
- **MCP Tools:** https://ai-sdk.dev/docs/ai-sdk-core/mcp-tools
- **Telemetry:** https://ai-sdk.dev/docs/ai-sdk-core/telemetry
- **Generative UI (RSC):** https://ai-sdk.dev/docs/ai-sdk-rsc

---

## Migration & Troubleshooting

- **v4 → v5 Migration Guide:** https://ai-sdk.dev/docs/migration-guides/migration-guide-5-0
- **All Error Types (28 total):** https://ai-sdk.dev/docs/reference/ai-sdk-errors
- **Troubleshooting Guide:** https://ai-sdk.dev/docs/troubleshooting
- **Common Issues:** https://ai-sdk.dev/docs/troubleshooting/common-issues
- **Slow Type Checking:** https://ai-sdk.dev/docs/troubleshooting/common-issues/slow-type-checking

---

## Provider Documentation

### Provider Overview

- **All Providers:** https://ai-sdk.dev/providers/overview
- **Provider Selection Guide:** https://ai-sdk.dev/providers/overview#provider-selection

### OpenAI

- **OpenAI Provider Docs:** https://ai-sdk.dev/providers/ai-sdk-providers/openai
- **OpenAI Platform:** https://platform.openai.com/
- **API Keys:** https://platform.openai.com/api-keys
- **Rate Limits:** https://platform.openai.com/account/rate-limits
- **Pricing:** https://openai.com/api/pricing/

### Anthropic

- **Anthropic Provider Docs:** https://ai-sdk.dev/providers/ai-sdk-providers/anthropic
- **Anthropic Console:** https://console.anthropic.com/
- **Claude Models:** https://docs.anthropic.com/en/docs/models-overview
- **Rate Limits:** https://docs.anthropic.com/en/api/rate-limits
- **Pricing:** https://www.anthropic.com/pricing

### Google

- **Google Provider Docs:** https://ai-sdk.dev/providers/ai-sdk-providers/google
- **Google AI Studio:** https://aistudio.google.com/
- **API Keys:** https://aistudio.google.com/app/apikey
- **Gemini Models:** https://ai.google.dev/models/gemini
- **Pricing:** https://ai.google.dev/pricing

### Cloudflare Workers AI

- **Workers AI Provider (Community):** https://ai-sdk.dev/providers/community-providers/cloudflare-workers-ai
- **Cloudflare Workers AI Docs:** https://developers.cloudflare.com/workers-ai/
- **AI SDK Configuration:** https://developers.cloudflare.com/workers-ai/configuration/ai-sdk/
- **Available Models:** https://developers.cloudflare.com/workers-ai/models/
- **GitHub (workers-ai-provider):** https://github.com/cloudflare/ai/tree/main/packages/workers-ai-provider
- **Pricing:** https://developers.cloudflare.com/workers-ai/platform/pricing/

### Community Providers

- **Community Providers List:** https://ai-sdk.dev/providers/community-providers
- **Ollama:** https://ai-sdk.dev/providers/community-providers/ollama
- **FriendliAI:** https://ai-sdk.dev/providers/community-providers/friendliai
- **LM Studio:** https://ai-sdk.dev/providers/community-providers/lmstudio

---

## Framework Integration

### Next.js

- **Next.js App Router Integration:** https://ai-sdk.dev/docs/getting-started/nextjs-app-router
- **Next.js Pages Router Integration:** https://ai-sdk.dev/docs/getting-started/nextjs-pages-router
- **Next.js Documentation:** https://nextjs.org/docs

### Node.js

- **Node.js Integration:** https://ai-sdk.dev/docs/getting-started/nodejs

### Vercel Deployment

- **Vercel Functions:** https://vercel.com/docs/functions
- **Vercel Streaming:** https://vercel.com/docs/functions/streaming
|
||||||
|
- **Vercel Environment Variables:** https://vercel.com/docs/projects/environment-variables
|
||||||
|
|
||||||
|
### Cloudflare Workers
|
||||||
|
|
||||||
|
- **Cloudflare Workers Docs:** https://developers.cloudflare.com/workers/
|
||||||
|
- **Wrangler CLI:** https://developers.cloudflare.com/workers/wrangler/
|
||||||
|
- **Workers Configuration:** https://developers.cloudflare.com/workers/wrangler/configuration/
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## API Reference
|
||||||
|
|
||||||
|
- **generateText:** https://ai-sdk.dev/docs/reference/ai-sdk-core/generate-text
|
||||||
|
- **streamText:** https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-text
|
||||||
|
- **generateObject:** https://ai-sdk.dev/docs/reference/ai-sdk-core/generate-object
|
||||||
|
- **streamObject:** https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-object
|
||||||
|
- **Tool:** https://ai-sdk.dev/docs/reference/ai-sdk-core/tool
|
||||||
|
- **Agent:** https://ai-sdk.dev/docs/reference/ai-sdk-core/agent
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## GitHub & Community
|
||||||
|
|
||||||
|
- **GitHub Repository:** https://github.com/vercel/ai
|
||||||
|
- **GitHub Issues:** https://github.com/vercel/ai/issues
|
||||||
|
- **GitHub Discussions:** https://github.com/vercel/ai/discussions
|
||||||
|
- **Discord Community:** https://discord.gg/vercel
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Blog Posts & Announcements
|
||||||
|
|
||||||
|
- **AI SDK 5.0 Release:** https://vercel.com/blog/ai-sdk-5
|
||||||
|
- **Vercel AI Blog:** https://vercel.com/blog/category/ai
|
||||||
|
- **Engineering Blog (Agents):** https://www.anthropic.com/engineering
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## TypeScript & Zod
|
||||||
|
|
||||||
|
- **Zod Documentation:** https://zod.dev/
|
||||||
|
- **TypeScript Handbook:** https://www.typescriptlang.org/docs/
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Complementary Skills
|
||||||
|
|
||||||
|
For complete AI SDK coverage, also see:
|
||||||
|
|
||||||
|
- **ai-sdk-ui skill:** Frontend React hooks (useChat, useCompletion, useObject)
|
||||||
|
- **cloudflare-workers-ai skill:** Native Cloudflare Workers AI binding (no multi-provider)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Quick Navigation
|
||||||
|
|
||||||
|
### I want to...
|
||||||
|
|
||||||
|
**Generate text:**
|
||||||
|
- Docs: https://ai-sdk.dev/docs/ai-sdk-core/generating-text
|
||||||
|
- Template: `templates/generate-text-basic.ts`
|
||||||
|
|
||||||
|
**Stream text:**
|
||||||
|
- Docs: https://ai-sdk.dev/docs/ai-sdk-core/streaming-text
|
||||||
|
- Template: `templates/stream-text-chat.ts`
|
||||||
|
|
||||||
|
**Generate structured output:**
|
||||||
|
- Docs: https://ai-sdk.dev/docs/ai-sdk-core/generating-structured-data
|
||||||
|
- Template: `templates/generate-object-zod.ts`
|
||||||
|
|
||||||
|
**Use tools:**
|
||||||
|
- Docs: https://ai-sdk.dev/docs/ai-sdk-core/tools-and-tool-calling
|
||||||
|
- Template: `templates/tools-basic.ts`
|
||||||
|
|
||||||
|
**Build an agent:**
|
||||||
|
- Docs: https://ai-sdk.dev/docs/agents/overview
|
||||||
|
- Template: `templates/agent-with-tools.ts`
|
||||||
|
|
||||||
|
**Migrate from v4:**
|
||||||
|
- Docs: https://ai-sdk.dev/docs/migration-guides/migration-guide-5-0
|
||||||
|
- Reference: `references/v5-breaking-changes.md`
|
||||||
|
|
||||||
|
**Fix an error:**
|
||||||
|
- Docs: https://ai-sdk.dev/docs/reference/ai-sdk-errors
|
||||||
|
- Reference: `references/top-errors.md`
|
||||||
|
|
||||||
|
**Set up a provider:**
|
||||||
|
- Reference: `references/providers-quickstart.md`
|
||||||
|
|
||||||
|
**Deploy to production:**
|
||||||
|
- Reference: `references/production-patterns.md`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Last Updated:** 2025-10-21
621
references/production-patterns.md
Normal file
@@ -0,0 +1,621 @@

# AI SDK Core - Production Patterns

Best practices for deploying AI SDK Core in production environments.

---

## Performance Optimization

### 1. Streaming for Long-Form Content

**Always use streaming for user-facing long-form content:**

```typescript
// ✅ GOOD: User-facing (better perceived performance)
// Note: returning a Response object works in frameworks whose handlers
// return responses (Hono, Next.js route handlers); with Express, pipe the
// stream into `res` instead.
app.post('/chat', async (req, res) => {
  const stream = streamText({
    model: openai('gpt-4'),
    prompt: req.body.message,
  });

  return stream.toDataStreamResponse();
});

// ❌ BAD: User waits for the entire response
app.post('/chat', async (req, res) => {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: req.body.message,
  });

  return res.json({ response: result.text });
});

// ✅ GOOD: Background tasks (no user waiting)
async function processDocument(doc: string) {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: `Analyze: ${doc}`,
  });

  await saveToDatabase(result.text);
}
```

### 2. Set Appropriate maxOutputTokens

```typescript
// ✅ GOOD: Limit token usage based on use case
const shortSummary = await generateText({
  model: openai('gpt-4'),
  prompt: 'Summarize in 2 sentences',
  maxOutputTokens: 100, // Prevents over-generation
});

const article = await generateText({
  model: openai('gpt-4'),
  prompt: 'Write article',
  maxOutputTokens: 2000, // Appropriate for long-form
});

// ❌ BAD: No limit (can waste tokens/money)
const unlimited = await generateText({
  model: openai('gpt-4'),
  prompt: 'Write something',
  // No maxOutputTokens
});
```

### 3. Cache Provider Instances

```typescript
// ✅ GOOD: Reuse provider instances
const gpt4 = openai('gpt-4-turbo');
const claude = anthropic('claude-3-5-sonnet-20241022');

app.post('/chat', async (req, res) => {
  const result = await generateText({
    model: gpt4, // Reuse
    prompt: req.body.message,
  });
  return res.json({ response: result.text });
});

// ❌ BAD: Create a new instance on every request
app.post('/chat', async (req, res) => {
  const result = await generateText({
    model: openai('gpt-4-turbo'), // New instance each call
    prompt: req.body.message,
  });
});
```

### 4. Optimize Zod Schemas (Especially in Workers)

```typescript
// ❌ BAD: Complex schema at top level (slow startup)
const ComplexSchema = z.object({
  // 50+ fields with deep nesting
});

// ✅ GOOD: Define schemas inside functions
function generateStructuredData() {
  const schema = z.object({
    // Schema definition here
  });

  return generateObject({ model: openai('gpt-4'), schema, prompt: '...' });
}

// ✅ GOOD: Split into smaller reusable schemas
const AddressSchema = z.object({ street: z.string(), city: z.string() });
const PersonSchema = z.object({ name: z.string(), address: AddressSchema });
```

---

## Error Handling

### 1. Wrap All AI Calls in Try-Catch

```typescript
// The error classes are exported without the `AI_` prefix; the prefixed names
// (AI_APICallError, AI_NoContentGeneratedError, ...) are the runtime `error.name` values.
import { APICallError, NoContentGeneratedError } from 'ai';

async function generateSafely(prompt: string) {
  try {
    const result = await generateText({
      model: openai('gpt-4'),
      prompt,
    });

    return { success: true, data: result.text };
  } catch (error) {
    if (APICallError.isInstance(error)) {
      console.error('API call failed:', error.statusCode, error.message);
      return { success: false, error: 'AI service temporarily unavailable' };
    } else if (NoContentGeneratedError.isInstance(error)) {
      console.error('No content generated');
      return { success: false, error: 'Unable to generate response' };
    } else {
      console.error('Unknown error:', error);
      return { success: false, error: 'An error occurred' };
    }
  }
}
```

### 2. Handle Specific Error Types

```typescript
import {
  APICallError,
  NoContentGeneratedError,
  RetryError,
} from 'ai';

const wait = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function robustGeneration(prompt: string) {
  try {
    return await generateText({ model: openai('gpt-4'), prompt });
  } catch (error) {
    if (APICallError.isInstance(error)) {
      if (error.statusCode === 429) {
        // Rate limit - wait, then retry once
        await wait(5000);
        return generateText({ model: openai('gpt-4'), prompt });
      } else if ((error.statusCode ?? 0) >= 500) {
        // Provider issue - try fallback
        return generateText({ model: anthropic('claude-3-5-sonnet-20241022'), prompt });
      }
    }

    if (RetryError.isInstance(error)) {
      // All retries failed - use fallback provider
      return generateText({ model: google('gemini-2.5-pro'), prompt });
    }

    if (NoContentGeneratedError.isInstance(error)) {
      // Content filtered - return safe message
      return { text: 'Unable to generate response for this input.' };
    }

    throw error;
  }
}
```

### 3. Implement Retry Logic

```typescript
async function generateWithRetry(prompt: string, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await generateText({
        model: openai('gpt-4'),
        prompt,
        maxRetries: 2, // Built-in retry
      });
    } catch (error) {
      if (i === maxRetries - 1) throw error; // Last attempt failed

      // Exponential backoff
      const delay = Math.pow(2, i) * 1000;
      console.log(`Retry ${i + 1}/${maxRetries} after ${delay}ms`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```

### 4. Log Errors Properly

```typescript
function logAIError(error: any, context: Record<string, any>) {
  const errorLog = {
    timestamp: new Date().toISOString(),
    type: error.constructor.name,
    message: error.message,
    statusCode: error.statusCode,
    responseBody: error.responseBody,
    context,
    stack: error.stack,
  };

  // Send to monitoring service (e.g., Sentry, Datadog)
  console.error('AI SDK Error:', JSON.stringify(errorLog));

  // Track metrics (`metrics` is your monitoring client, not part of the SDK)
  metrics.increment('ai.error', {
    type: error.constructor.name,
    statusCode: error.statusCode,
  });
}

try {
  const result = await generateText({ model: openai('gpt-4'), prompt });
} catch (error) {
  logAIError(error, { prompt, model: 'gpt-4' });
  throw error;
}
```

---

## Cost Optimization

### 1. Choose Appropriate Models

```typescript
// Model selection based on task complexity
async function generateWithCostOptimization(
  prompt: string,
  complexity: 'simple' | 'medium' | 'complex'
) {
  const models = {
    simple: openai('gpt-3.5-turbo'), // $0.50 / 1M tokens
    medium: openai('gpt-4-turbo'), // $10 / 1M tokens
    complex: openai('gpt-4'), // $30 / 1M tokens
  };

  return generateText({
    model: models[complexity],
    prompt,
  });
}

// Usage
await generateWithCostOptimization('Translate to Spanish', 'simple');
await generateWithCostOptimization('Analyze sentiment', 'medium');
await generateWithCostOptimization('Complex reasoning task', 'complex');
```

### 2. Set Token Limits

```typescript
// Prevent runaway costs
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Write essay',
  maxOutputTokens: 500, // Hard limit
});

// Adjust limits per use case
const limits = {
  chatMessage: 200,
  summary: 300,
  article: 2000,
  analysis: 1000,
};
```
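
A small helper can wire a limits table like this into calls so no request goes out unbounded. This is a sketch; `limitsFor` and the `UseCase` union are illustrative names, not part of the SDK:

```typescript
// Per-use-case token budgets (mirrors the `limits` table above)
type UseCase = 'chatMessage' | 'summary' | 'article' | 'analysis';

const TOKEN_LIMITS: Record<UseCase, number> = {
  chatMessage: 200,
  summary: 300,
  article: 2000,
  analysis: 1000,
};

// Returns an options fragment to spread into a generateText() call
function limitsFor(useCase: UseCase): { maxOutputTokens: number } {
  return { maxOutputTokens: TOKEN_LIMITS[useCase] };
}

// Usage sketch:
//   generateText({ model, prompt, ...limitsFor('summary') })
console.log(limitsFor('summary').maxOutputTokens); // 300
```

Centralizing the budgets means changing a limit in one place instead of hunting through every call site.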

### 3. Cache Results

```typescript
import { createHash } from 'node:crypto';
import { LRUCache } from 'lru-cache';

// Stable cache key for a prompt
const hash = (s: string) => createHash('sha256').update(s).digest('hex');

const cache = new LRUCache<string, string>({
  max: 1000, // Max 1000 items
  ttl: 1000 * 60 * 60, // 1 hour TTL
});

async function generateWithCache(prompt: string) {
  const cacheKey = `ai:${hash(prompt)}`;

  // Check cache
  const cached = cache.get(cacheKey);
  if (cached) {
    console.log('Cache hit');
    return { text: cached, cached: true };
  }

  // Generate
  const result = await generateText({
    model: openai('gpt-4'),
    prompt,
  });

  // Store in cache
  cache.set(cacheKey, result.text);

  return { text: result.text, cached: false };
}
```

### 4. Monitor Usage

```typescript
// Track token usage
let totalTokensUsed = 0;
let totalCost = 0;

async function generateWithTracking(prompt: string) {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt,
  });

  // Track tokens
  totalTokensUsed += result.usage.totalTokens;

  // Estimate cost (GPT-4: $30 / 1M tokens)
  const cost = (result.usage.totalTokens / 1_000_000) * 30;
  totalCost += cost;

  console.log(`Tokens: ${result.usage.totalTokens}, Cost: $${cost.toFixed(4)}`);
  console.log(`Total tokens: ${totalTokensUsed}, Total cost: $${totalCost.toFixed(2)}`);

  return result;
}
```

---

## Cloudflare Workers Best Practices

### 1. Lazy Initialization

```typescript
// ✅ GOOD: Import inside the handler
export default {
  async fetch(request, env) {
    const { generateText } = await import('ai');
    const { createWorkersAI } = await import('workers-ai-provider');

    const workersai = createWorkersAI({ binding: env.AI });

    const result = await generateText({
      model: workersai('@cf/meta/llama-3.1-8b-instruct'),
      prompt: 'Hello',
    });

    return new Response(result.text);
  }
};

// ❌ BAD: Top-level imports (startup overhead)
import { generateText } from 'ai';
const workersai = createWorkersAI({ binding: env.AI }); // Runs at startup!
```

### 2. Monitor Startup Time

```bash
# Wrangler reports startup time on deploy
npx wrangler deploy

# Output shows:
# Startup Time: 287ms (must be <400ms)
```

### 3. Handle Streaming Properly

```typescript
app.post('/chat/stream', async (c) => {
  const workersai = createWorkersAI({ binding: c.env.AI });

  const result = streamText({
    model: workersai('@cf/meta/llama-3.1-8b-instruct'),
    prompt: 'Hello',
  });

  // Return a streaming Response for Workers
  return result.toTextStreamResponse({
    headers: {
      'Content-Type': 'text/plain; charset=utf-8',
      'X-Content-Type-Options': 'nosniff',
    },
  });
});
```

---

## Next.js / Vercel Best Practices

### 1. Server Actions for Mutations

```typescript
// app/actions.ts
'use server';

export async function generateContent(input: string) {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: input,
    maxOutputTokens: 500,
  });

  return result.text;
}

// app/page.tsx (Client Component)
'use client';

import { useState } from 'react';
import { generateContent } from './actions';

export default function Page() {
  const [loading, setLoading] = useState(false);

  async function handleSubmit(formData: FormData) {
    setLoading(true);
    const result = await generateContent(formData.get('input') as string);
    setLoading(false);
  }

  return <form action={handleSubmit}>...</form>;
}
```

### 2. Server Components for Initial Loads

```typescript
// app/page.tsx (Server Component)
export default async function Page() {
  // Generate on the server
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Welcome message',
  });

  // No loading state needed
  return <div>{result.text}</div>;
}
```

### 3. API Routes for Streaming

```typescript
// app/api/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(request: Request) {
  const { messages } = await request.json();

  const stream = streamText({
    model: openai('gpt-4-turbo'),
    messages,
  });

  return stream.toDataStreamResponse();
}
```

---

## Monitoring and Logging

### 1. Track Key Metrics

```typescript
// `metrics` is your monitoring client (e.g., StatsD/Datadog), not part of the SDK.
// Token usage (v5 renamed promptTokens/completionTokens to inputTokens/outputTokens)
metrics.gauge('ai.tokens.total', result.usage.totalTokens);
metrics.gauge('ai.tokens.input', result.usage.inputTokens);
metrics.gauge('ai.tokens.output', result.usage.outputTokens);

// Response time
const startTime = Date.now();
const result = await generateText({ model: openai('gpt-4'), prompt });
metrics.timing('ai.response_time', Date.now() - startTime);

// Error rate
metrics.increment('ai.errors', { type: error.constructor.name });
```

### 2. Structured Logging

```typescript
import winston from 'winston';

const logger = winston.createLogger({
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
});

logger.info('AI generation started', {
  model: 'gpt-4',
  promptLength: prompt.length,
  userId: user.id,
});

const startTime = Date.now();
const result = await generateText({ model: openai('gpt-4'), prompt });

logger.info('AI generation completed', {
  model: 'gpt-4',
  tokensUsed: result.usage.totalTokens,
  responseLength: result.text.length,
  duration: Date.now() - startTime,
});
```

---

## Rate Limiting

### 1. Queue Requests

```typescript
import PQueue from 'p-queue';

// Limit: 50 requests per minute, at most 5 in flight
const queue = new PQueue({
  concurrency: 5,
  interval: 60000,
  intervalCap: 50,
});

async function generateQueued(prompt: string) {
  return queue.add(() =>
    generateText({ model: openai('gpt-4'), prompt })
  );
}
```

### 2. Monitor Rate Limits

```typescript
async function generateWithRateCheck(prompt: string) {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt,
  });

  // Check rate limit headers (provider-specific; exposed on result.response)
  console.log('Remaining requests:', result.response.headers['x-ratelimit-remaining']);
  console.log('Resets at:', result.response.headers['x-ratelimit-reset']);

  return result;
}
```

---

## Security

### 1. Sanitize User Inputs

```typescript
// Basic mitigation only - keyword filters cannot fully prevent prompt injection
function sanitizePrompt(userInput: string): string {
  return userInput
    .replace(/system:/gi, '')
    .replace(/ignore previous/gi, '')
    .slice(0, 1000); // Limit length
}

const result = await generateText({
  model: openai('gpt-4'),
  prompt: sanitizePrompt(req.body.message),
});
```

### 2. Validate API Keys

```typescript
// Startup validation
function validateEnv() {
  const required = ['OPENAI_API_KEY', 'ANTHROPIC_API_KEY'];

  for (const key of required) {
    if (!process.env[key]) {
      throw new Error(`Missing: ${key}`);
    }

    if (!process.env[key].match(/^sk-/)) {
      throw new Error(`Invalid format: ${key}`);
    }
  }
}

validateEnv();
```

---

## Deployment

See Vercel's official deployment documentation:
https://vercel.com/docs/functions

For Cloudflare Workers:
https://developers.cloudflare.com/workers/

---

**Last Updated:** 2025-10-21

329
references/providers-quickstart.md
Normal file
@@ -0,0 +1,329 @@

# AI SDK Core - Providers Quick Start

Quick reference for setting up the top 4 AI providers with AI SDK v5.

---

## OpenAI

**Package:** `@ai-sdk/openai`
**Version:** 2.0.53+
**Maturity:** Excellent

### Setup

```bash
npm install @ai-sdk/openai
```

```bash
# .env
OPENAI_API_KEY=sk-...
```

### Usage

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Hello',
});
```

### Available Models

| Model | Use Case | Cost | Speed |
|-------|----------|------|-------|
| gpt-5 | Latest (if available) | High | Medium |
| gpt-4-turbo | Complex reasoning | High | Medium |
| gpt-4 | High quality | High | Slow |
| gpt-3.5-turbo | Simple tasks | Low | Fast |

### Common Errors

- **401 Unauthorized**: Invalid API key
- **429 Rate Limit**: Exceeded RPM/TPM limits
- **500 Server Error**: OpenAI service issue

### Links

- Docs: https://ai-sdk.dev/providers/ai-sdk-providers/openai
- API Keys: https://platform.openai.com/api-keys
- Rate Limits: https://platform.openai.com/account/rate-limits

## Anthropic

**Package:** `@ai-sdk/anthropic`
**Version:** 2.0.0+
**Maturity:** Excellent

### Setup

```bash
npm install @ai-sdk/anthropic
```

```bash
# .env
ANTHROPIC_API_KEY=sk-ant-...
```

### Usage

```typescript
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  prompt: 'Hello',
});
```

### Available Models

| Model | Use Case | Context | Speed |
|-------|----------|---------|-------|
| claude-3-5-sonnet-20241022 | Best balance | 200K | Medium |
| claude-3-opus-20240229 | Highest intelligence | 200K | Slow |
| claude-3-haiku-20240307 | Fast and cheap | 200K | Fast |

### Common Errors

- **authentication_error**: Invalid API key
- **rate_limit_error**: Rate limit exceeded
- **overloaded_error**: Service overloaded; retry with backoff

### Links

- Docs: https://ai-sdk.dev/providers/ai-sdk-providers/anthropic
- API Keys: https://console.anthropic.com/
- Model Details: https://docs.anthropic.com/en/docs/models-overview

---

## Google

**Package:** `@ai-sdk/google`
**Version:** 2.0.0+
**Maturity:** Excellent

### Setup

```bash
npm install @ai-sdk/google
```

```bash
# .env
GOOGLE_GENERATIVE_AI_API_KEY=...
```

### Usage

```typescript
import { google } from '@ai-sdk/google';
import { generateText } from 'ai';

const result = await generateText({
  model: google('gemini-2.5-pro'),
  prompt: 'Hello',
});
```

### Available Models

| Model | Use Case | Context | Free Tier |
|-------|----------|---------|-----------|
| gemini-2.5-pro | Complex reasoning | 1M | Generous |
| gemini-2.5-flash | Fast & efficient | 1M | Generous |
| gemini-2.5-flash-lite | Ultra-fast | 1M | Generous |

### Common Errors

- **SAFETY**: Content filtered by safety settings
- **QUOTA_EXCEEDED**: Rate limit exceeded
- **INVALID_ARGUMENT**: Invalid parameters

### Links

- Docs: https://ai-sdk.dev/providers/ai-sdk-providers/google
- API Keys: https://aistudio.google.com/app/apikey
- Model Details: https://ai.google.dev/models/gemini

---

## Cloudflare Workers AI
|
||||||
|
|
||||||
|
**Package:** `workers-ai-provider`
|
||||||
|
**Version:** 2.0.0+
|
||||||
|
**Type:** Community Provider
|
||||||
|
**Maturity:** Good
|
||||||
|
|
||||||
|
### Setup
|
||||||
|
|
||||||
|
```bash
|
||||||
|
npm install workers-ai-provider
|
||||||
|
```
|
||||||
|
|
||||||
|
```jsonc
|
||||||
|
// wrangler.jsonc
|
||||||
|
{
|
||||||
|
"ai": {
|
||||||
|
"binding": "AI"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Usage
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
import { createWorkersAI } from 'workers-ai-provider';
|
||||||
|
import { generateText } from 'ai';
|
||||||
|
|
||||||
|
// In Cloudflare Worker handler
|
||||||
|
const workersai = createWorkersAI({ binding: env.AI });
|
||||||
|
|
||||||
|
const result = await generateText({
|
||||||
|
model: workersai('@cf/meta/llama-3.1-8b-instruct'),
|
||||||
|
prompt: 'Hello',
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
### Available Models
| Model | Use Case | Notes |
|-------|----------|-------|
| @cf/meta/llama-3.1-8b-instruct | General purpose | Recommended |
| @cf/meta/llama-3.1-70b-instruct | Complex tasks | Slower |
| @cf/mistral/mistral-7b-instruct-v0.1 | Alternative | Good quality |

### Common Issues

- **Startup limit exceeded**: Move AI SDK imports inside request handlers
- **Slow startup (270ms+)**: Lazy-load the AI SDK to avoid startup overhead
### Important Notes
1. **Startup Optimization Required:**

```typescript
// BAD: top-level initialization runs at Worker startup (and `env` is not available here)
const workersai = createWorkersAI({ binding: env.AI });

// GOOD: lazy initialization inside the handler
export default {
  async fetch(request, env) {
    const workersai = createWorkersAI({ binding: env.AI });
    // Use it here
  }
}
```

2. **When to Use This Provider:**
   - Multi-provider scenarios (OpenAI + Workers AI)
   - Using AI SDK UI hooks
   - Need a consistent API across providers

3. **When to Use the Native Binding:**
   - Cloudflare-only deployment
   - Maximum performance
   - See: `cloudflare-workers-ai` skill
### Links

- Docs: https://ai-sdk.dev/providers/community-providers/cloudflare-workers-ai
- Models: https://developers.cloudflare.com/workers-ai/models/
- GitHub: https://github.com/cloudflare/ai/tree/main/packages/workers-ai-provider

---
## Provider Comparison
| Feature | OpenAI | Anthropic | Google | Cloudflare |
|---------|--------|-----------|--------|------------|
| **Quality** | Excellent | Excellent | Excellent | Good |
| **Speed** | Medium | Medium | Fast | Fast |
| **Cost** | Medium | Medium | Low | Lowest |
| **Context** | 128K | 200K | 1M | 128K |
| **Structured Output** | ✅ | ✅ | ✅ | ⚠️ |
| **Tool Calling** | ✅ | ✅ | ✅ | ⚠️ |
| **Streaming** | ✅ | ✅ | ✅ | ✅ |
| **Free Tier** | ❌ | ❌ | ✅ | ✅ |

---
## Multi-Provider Setup
```typescript
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';
import { generateText } from 'ai';

// Use different providers for different tasks
const complexTask = await generateText({
  model: openai('gpt-4'), // Best reasoning
  prompt: 'Complex analysis...',
});

const longContext = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'), // Long context
  prompt: 'Document: ' + longDocument,
});

const fastTask = await generateText({
  model: google('gemini-2.5-flash'), // Fast and cheap
  prompt: 'Quick summary...',
});
```

---
## Fallback Pattern
```typescript
async function generateWithFallback(prompt: string) {
  try {
    return await generateText({ model: openai('gpt-4'), prompt });
  } catch (error) {
    console.error('OpenAI failed, trying Anthropic...');
    try {
      return await generateText({ model: anthropic('claude-3-5-sonnet-20241022'), prompt });
    } catch (error2) {
      console.error('Anthropic failed, trying Google...');
      return await generateText({ model: google('gemini-2.5-pro'), prompt });
    }
  }
}
```
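The nested try/catch above generalizes to an ordered list of attempts. A minimal sketch (the `withFallbacks` helper is ours, not part of the SDK):

```typescript
// Try each async attempt in order; return the first success, rethrow the last failure.
async function withFallbacks<T>(attempts: Array<() => Promise<T>>): Promise<T> {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (error) {
      lastError = error;
      console.error('Attempt failed, trying next provider...');
    }
  }
  throw lastError;
}

// Usage with the AI SDK:
// const result = await withFallbacks([
//   () => generateText({ model: openai('gpt-4'), prompt }),
//   () => generateText({ model: anthropic('claude-3-5-sonnet-20241022'), prompt }),
//   () => generateText({ model: google('gemini-2.5-pro'), prompt }),
// ]);
```

Adding or reordering providers is then a one-line change instead of another nesting level.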

---

## All Providers

The AI SDK supports 25+ providers. See the full list:
https://ai-sdk.dev/providers/overview

**Other Official Providers:**
- xAI (Grok)
- Mistral
- Azure OpenAI
- Amazon Bedrock
- DeepSeek
- Groq

**Community Providers:**
- Ollama (local models)
- FriendliAI
- Portkey
- LM Studio
- Baseten

---

**Last Updated:** 2025-10-21
739
references/top-errors.md
Normal file
# AI SDK Core - Top 12 Errors & Solutions

Comprehensive guide to the most common AI SDK Core errors with actionable solutions.

---
## 1. AI_APICallError

**Type:** Network/API Error
**Frequency:** Very Common
**Severity:** High

### Cause

API request to provider failed due to:
- Invalid API key
- Network connectivity issues
- Rate limit exceeded
- Provider service outage

### Solution

```typescript
// The class is exported as APICallError; its runtime name is "AI_APICallError"
import { APICallError } from 'ai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Hello',
  });
} catch (error) {
  if (APICallError.isInstance(error)) {
    console.error('API call failed:', error.message);
    console.error('Status code:', error.statusCode);
    console.error('Response:', error.responseBody);

    // Handle specific status codes
    if (error.statusCode === 401) {
      // Invalid API key
      console.error('Check OPENAI_API_KEY environment variable');
    } else if (error.statusCode === 429) {
      // Rate limit - exponential backoff (retryCount comes from your retry loop)
      await new Promise((resolve) => setTimeout(resolve, Math.pow(2, retryCount) * 1000));
      // retry...
    } else if (error.statusCode >= 500) {
      // Provider issue - retry later
      console.error('Provider service issue, retry in 1 minute');
    }
  }
}
```

### Prevention

- Validate API keys at application startup
- Implement retry logic with exponential backoff
- Monitor rate limits via response headers
- Handle network errors gracefully
- Set reasonable timeouts
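For the rate-limit bullet, `APICallError` exposes `responseHeaders`; a small sketch of reading them (the `x-ratelimit-*` names follow OpenAI's convention and vary by provider):

```typescript
// Log remaining request quota from an APICallError's response headers.
function logRateLimitHeaders(headers: Record<string, string> | undefined): string | undefined {
  if (!headers) return undefined;
  const remaining = headers['x-ratelimit-remaining-requests'];
  const reset = headers['x-ratelimit-reset-requests'];
  if (remaining === undefined) return undefined;
  const message = `Rate limit: ${remaining} requests remaining (resets in ${reset})`;
  console.warn(message);
  return message;
}

// In a catch block: logRateLimitHeaders(error.responseHeaders);
```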
### Resources

- Docs: https://ai-sdk.dev/docs/reference/ai-sdk-errors/ai-api-call-error

---
## 2. AI_NoObjectGeneratedError

**Type:** Generation Error
**Frequency:** Common
**Severity:** Medium

### Cause

Model didn't generate a valid object matching the Zod schema:
- Schema too complex for model
- Prompt doesn't provide enough context
- Model capabilities exceeded
- Safety filters triggered

### Solution

```typescript
// The class is exported as NoObjectGeneratedError; its runtime name is "AI_NoObjectGeneratedError"
import { NoObjectGeneratedError } from 'ai';

try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: z.object({
      // Complex schema
      name: z.string(),
      age: z.number(),
      nested: z.object({ /* ... */ }),
    }),
    prompt: 'Generate a person',
  });
} catch (error) {
  if (NoObjectGeneratedError.isInstance(error)) {
    console.error('No valid object generated');

    // Solutions:
    // 1. Simplify the schema
    const simpler = z.object({
      name: z.string(),
      age: z.number(),
    });

    // 2. Add more context
    const betterPrompt = 'Generate a person profile with name (string) and age (number, 18-80)';

    // 3. Try a more capable model (GPT-4 handles complex objects better than 3.5)
    const result2 = await generateObject({
      model: openai('gpt-4'),
      schema: simpler,
      prompt: betterPrompt,
    });
  }
}
```

### Prevention

- Start with simple schemas, add complexity gradually
- Include examples in prompt: `"Generate like: { name: 'Alice', age: 30 }"`
- Use GPT-4 or Claude for complex structured output
- Test schemas with sample data first
- Add descriptions to schema fields using `.describe()`
### Resources

- Docs: https://ai-sdk.dev/docs/reference/ai-sdk-errors/ai-no-object-generated-error

---
## 3. Worker Startup Limit (270ms+)

**Type:** Cloudflare Workers Issue
**Frequency:** Common (Workers only)
**Severity:** High (blocks deployment)

### Cause

AI SDK v5 + Zod initialization overhead exceeds the Cloudflare Workers startup limit (must be under 400ms):
- Top-level imports of AI SDK packages
- Complex Zod schemas at module level
- Provider initialization at startup

### Solution

```typescript
// ❌ BAD: Top-level imports cause startup overhead
import { createWorkersAI } from 'workers-ai-provider';
import { generateText } from 'ai';
import { complexSchema } from './schemas'; // Heavy Zod schemas

const workersai = createWorkersAI({ binding: env.AI }); // Runs at startup!

// ✅ GOOD: Lazy initialization inside the handler
export default {
  async fetch(request, env) {
    // Import inside the handler
    const { createWorkersAI } = await import('workers-ai-provider');
    const { generateText } = await import('ai');

    const workersai = createWorkersAI({ binding: env.AI });

    const result = await generateText({
      model: workersai('@cf/meta/llama-3.1-8b-instruct'),
      prompt: 'Hello',
    });

    return new Response(result.text);
  }
};
```

### Alternative Solution (Move Schemas Inside Routes)

```typescript
// ❌ BAD: Top-level schema
import { z } from 'zod';
const PersonSchema = z.object({ /* complex schema */ });

// ✅ GOOD: Schema inside the handler
export default {
  async fetch(request, env) {
    const { z } = await import('zod');
    const PersonSchema = z.object({ /* complex schema */ });

    // Use the schema here
  }
};
```

### Prevention

- Never initialize the AI SDK at module top level in Workers
- Move all imports inside route handlers
- Minimize top-level Zod schemas
- Monitor Worker startup time: `wrangler deploy` shows startup duration
- Target < 270ms startup time to be safe (limit is 400ms)

### Resources

- Cloudflare Workers AI Docs: https://developers.cloudflare.com/workers-ai/configuration/ai-sdk/
- GitHub: Search "Workers startup limit" in Vercel AI SDK issues

---
## 4. streamText Fails Silently

**Type:** Streaming Error
**Frequency:** Occasional
**Severity:** Medium (hard to debug)

### Cause

Stream errors are swallowed by `createDataStreamResponse()` or framework response handling:
- Error occurs during streaming
- Error handler not set up
- Response already committed
- Client disconnects

### Solution

```typescript
// ✅ GOOD: Add explicit error handling around stream consumption
const stream = streamText({
  model: openai('gpt-4'),
  prompt: 'Hello',
});

try {
  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }
} catch (error) {
  // The error may not reach here if the response stream swallows it
  console.error('Stream error:', error);
}

// ✅ BETTER: Use the onError callback to always log on the server side
const stream2 = streamText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  onError({ error }) {
    console.error('Stream failed:', error); // Catches errors even when the response hides them
  },
});

return stream2.toDataStreamResponse();
```

### Prevention

- Always check server logs for stream errors
- Implement server-side error monitoring (e.g., Sentry)
- Test stream error handling explicitly
- Use `try-catch` around stream consumption
- Monitor for unexpected stream terminations

### Resources

- GitHub Issue: #4726

---
## 5. AI_LoadAPIKeyError

**Type:** Configuration Error
**Frequency:** Very Common (setup)
**Severity:** High (blocks usage)

### Cause

API key missing or invalid:
- `.env` file not loaded
- Wrong environment variable name
- API key format invalid
- Environment variable not set in deployment

### Solution

```typescript
// The class is exported as LoadAPIKeyError; its runtime name is "AI_LoadAPIKeyError"
import { LoadAPIKeyError } from 'ai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Hello',
  });
} catch (error) {
  if (LoadAPIKeyError.isInstance(error)) {
    console.error('API key error:', error.message);

    // Debugging steps:
    console.log('OPENAI_API_KEY exists:', !!process.env.OPENAI_API_KEY);
    console.log('Key starts with sk-:', process.env.OPENAI_API_KEY?.startsWith('sk-'));

    // Common issues:
    // 1. .env not loaded → use dotenv or similar
    // 2. Wrong variable name → check provider docs
    // 3. Key format wrong → verify in provider dashboard
  }
}
```

### Prevention

```typescript
// Validate at startup
function validateEnv() {
  const required = ['OPENAI_API_KEY', 'ANTHROPIC_API_KEY'];

  for (const key of required) {
    if (!process.env[key]) {
      throw new Error(`Missing required environment variable: ${key}`);
    }
  }
}

validateEnv();
```

### Environment Variable Names

| Provider | Variable Name |
|----------|---------------|
| OpenAI | `OPENAI_API_KEY` |
| Anthropic | `ANTHROPIC_API_KEY` |
| Google | `GOOGLE_GENERATIVE_AI_API_KEY` |

### Resources

- Docs: https://ai-sdk.dev/docs/reference/ai-sdk-errors/ai-load-api-key-error

---
## 6. AI_InvalidArgumentError

**Type:** Validation Error
**Frequency:** Common (development)
**Severity:** Low (easy to fix)

### Cause

Invalid parameters passed to an AI SDK function:
- Negative `maxOutputTokens`
- Invalid temperature (must be 0-2)
- Wrong parameter types
- Missing required parameters

### Solution

```typescript
// The class is exported as InvalidArgumentError; its runtime name is "AI_InvalidArgumentError"
import { InvalidArgumentError } from 'ai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    maxOutputTokens: -1, // ❌ Invalid!
    temperature: 3.0, // ❌ Must be 0-2
    prompt: 'Hello',
  });
} catch (error) {
  if (InvalidArgumentError.isInstance(error)) {
    console.error('Invalid argument:', error.message);
    // Fix: Check parameter types and values
  }
}
```

### Prevention

- Use TypeScript for compile-time type checking
- Validate inputs before calling AI SDK functions
- Read function signatures carefully
- Check official docs for parameter constraints
- Use IDE autocomplete
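One way to apply the second tip is to clamp user-supplied sampling parameters to valid ranges before the SDK call (a sketch; the helper is ours):

```typescript
// Clamp sampling parameters to the ranges the SDK expects.
function sanitizeParams(params: { temperature?: number; maxOutputTokens?: number }) {
  return {
    temperature:
      params.temperature === undefined
        ? undefined
        : Math.min(2, Math.max(0, params.temperature)), // temperature must be 0-2
    maxOutputTokens:
      params.maxOutputTokens === undefined
        ? undefined
        : Math.max(1, Math.floor(params.maxOutputTokens)), // must be a positive integer
  };
}

// const result = await generateText({ model: openai('gpt-4'), prompt, ...sanitizeParams(userParams) });
```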
### Resources

- Docs: https://ai-sdk.dev/docs/reference/ai-sdk-errors/ai-invalid-argument-error

---
## 7. AI_NoContentGeneratedError

**Type:** Generation Error
**Frequency:** Occasional
**Severity:** Medium

### Cause

Model generated no content:
- Safety filters blocked output
- Prompt triggered content policy
- Model configuration issue
- Empty prompt

### Solution

```typescript
// The class is exported as NoContentGeneratedError; its runtime name is "AI_NoContentGeneratedError"
import { NoContentGeneratedError } from 'ai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Some potentially problematic prompt',
  });
} catch (error) {
  if (NoContentGeneratedError.isInstance(error)) {
    console.error('No content generated');

    // Return a user-friendly message
    return {
      text: 'Unable to generate response. Please try different input.',
      error: true,
    };
  }
}
```

### Prevention

- Sanitize user inputs
- Avoid prompts that may trigger safety filters
- Have fallback messaging
- Log occurrences for analysis
- Test with edge cases

### Resources

- Docs: https://ai-sdk.dev/docs/reference/ai-sdk-errors/ai-no-content-generated-error

---
## 8. AI_TypeValidationError

**Type:** Validation Error
**Frequency:** Common (with generateObject)
**Severity:** Medium

### Cause

Zod schema validation failed on generated output:
- Model output doesn't match schema
- Schema too strict
- Model misunderstood schema
- Invalid JSON generated

### Solution

```typescript
// The class is exported as TypeValidationError; its runtime name is "AI_TypeValidationError"
import { TypeValidationError } from 'ai';

try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: z.object({
      age: z.number().min(0).max(120), // Strict validation
      email: z.string().email(), // Strict format
    }),
    prompt: 'Generate person',
  });
} catch (error) {
  if (TypeValidationError.isInstance(error)) {
    console.error('Validation failed:', error.message);

    // Solutions:
    // 1. Relax the schema
    const relaxed = z.object({
      age: z.number(), // Remove min/max
      email: z.string().optional(), // Make optional
    });

    // 2. Add guidance in the prompt
    const better = await generateObject({
      model: openai('gpt-4'),
      schema: relaxed,
      prompt: 'Generate person with age 18-80 and valid email',
    });
  }
}
```

### Prevention

- Start with lenient schemas, tighten gradually
- Use `.optional()` for unreliable fields
- Add validation hints in field descriptions
- Test with various prompts
- Use `mode: 'json'` when available

### Resources

- Docs: https://ai-sdk.dev/docs/reference/ai-sdk-errors/ai-type-validation-error

---
## 9. AI_RetryError

**Type:** Network Error
**Frequency:** Occasional
**Severity:** High

### Cause

All retry attempts failed:
- Persistent network issue
- Provider outage
- Invalid configuration
- Unreachable API endpoint

### Solution

```typescript
// The class is exported as RetryError; its runtime name is "AI_RetryError"
import { RetryError } from 'ai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Hello',
    maxRetries: 3, // Default is 2
  });
} catch (error) {
  if (RetryError.isInstance(error)) {
    console.error('All retries failed');
    console.error('Last error:', error.lastError);
    console.error('Errors from each attempt:', error.errors);

    // Implement a circuit breaker (isProviderDown and
    // switchToFallbackProvider are placeholders for your own logic)
    if (isProviderDown()) {
      switchToFallbackProvider();
    }
  }
}
```

### Prevention

- Investigate the root cause of failures
- Adjust retry configuration if needed
- Implement a circuit breaker pattern
- Have fallback providers
- Monitor provider status pages

### Resources

- Docs: https://ai-sdk.dev/docs/reference/ai-sdk-errors/ai-retry-error

---
## 10. Rate Limiting Errors

**Type:** API Limit Error
**Frequency:** Common (production)
**Severity:** High

### Cause

Exceeded provider rate limits:
- RPM (Requests Per Minute) exceeded
- TPM (Tokens Per Minute) exceeded
- Concurrent request limit hit
- Free tier limits reached

### Solution

```typescript
// Implement exponential backoff
async function generateWithBackoff(prompt: string, retries = 3) {
  for (let i = 0; i < retries; i++) {
    try {
      return await generateText({
        model: openai('gpt-4'),
        prompt,
      });
    } catch (error: any) {
      if (error.statusCode === 429) {
        const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s...
        console.log(`Rate limited, waiting ${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Rate limit retries exhausted');
}

// Or use a queue
import PQueue from 'p-queue';

const queue = new PQueue({ concurrency: 5, interval: 60000, intervalCap: 50 });

async function generateQueued(prompt: string) {
  return queue.add(() => generateText({ model: openai('gpt-4'), prompt }));
}
```

### Prevention

- Monitor rate limit headers in responses
- Queue requests to stay under limits
- Upgrade provider tier if needed
- Implement request throttling
- Cache results when possible
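A minimal sketch of the caching tip, keyed by prompt (in-memory only; a shared store such as Redis or Workers KV would be used in production):

```typescript
// Return a cached completion for a repeated prompt instead of re-calling the provider.
const cache = new Map<string, string>();

async function generateCached(
  prompt: string,
  generate: (p: string) => Promise<string>,
): Promise<string> {
  const hit = cache.get(prompt);
  if (hit !== undefined) return hit;
  const text = await generate(prompt);
  cache.set(prompt, text);
  return text;
}
```

In practice `generate` would wrap `generateText` and return `result.text`; every cache hit is one fewer request counted against your rate limit.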
### Resources

- OpenAI Rate Limits: https://platform.openai.com/account/rate-limits
- Anthropic Rate Limits: https://docs.anthropic.com/en/api/rate-limits

---
## 11. TypeScript Performance with Zod

**Type:** Development Issue
**Frequency:** Occasional
**Severity:** Low (annoying)

### Cause

Complex Zod schemas slow down TypeScript type checking:
- Deeply nested schemas
- Many union types
- Recursive types
- Top-level complex schemas

### Solution

```typescript
// ❌ BAD: Complex schema at top level
const ComplexSchema = z.object({
  // 100+ fields with nested objects...
});

// ✅ GOOD: Define inside a function
function generateData() {
  const schema = z.object({
    // Complex schema here
  });

  return generateObject({ model: openai('gpt-4'), schema, prompt: '...' });
}

// ✅ GOOD: Use z.lazy() for recursive types
type Category = { name: string; subcategories?: Category[] };

const CategorySchema: z.ZodType<Category> = z.lazy(() =>
  z.object({
    name: z.string(),
    subcategories: z.array(CategorySchema).optional(),
  })
);

// ✅ GOOD: Split large schemas
const AddressSchema = z.object({ /* ... */ });
const PersonSchema = z.object({
  address: AddressSchema, // Reuse smaller schemas
});
```

### Prevention

- Avoid top-level complex schemas
- Use `z.lazy()` for recursive types
- Split large schemas into smaller ones
- Use type assertions where appropriate
- Enable `skipLibCheck` in tsconfig.json as a last resort

### Resources

- Troubleshooting: https://ai-sdk.dev/docs/troubleshooting/common-issues/slow-type-checking

---
## 12. Invalid JSON Response (Provider-Specific)

**Type:** Provider Issue
**Frequency:** Rare
**Severity:** Medium

### Cause

Some models occasionally return invalid JSON:
- Model error
- Provider API issue
- Specific model version bug (e.g., Imagen 3.0)

### Solution

```typescript
// Use built-in retry and mode selection
try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: mySchema,
    prompt: 'Generate data',
    mode: 'json', // Force JSON mode (GPT-4 supports this)
    maxRetries: 3, // Retry on invalid JSON
  });
} catch (error) {
  // Fall back to a different model
  console.error('GPT-4 failed, trying Claude...');
  const result2 = await generateObject({
    model: anthropic('claude-3-5-sonnet-20241022'),
    schema: mySchema,
    prompt: 'Generate data',
  });
}
```

### Prevention

- Use `mode: 'json'` when available
- Prefer GPT-4/Claude for structured output
- Implement retry logic
- Validate responses
- Have fallback models

### Resources

- GitHub Issue: #4302 (Imagen 3.0 Invalid JSON)

---
## For More Errors

See the complete error reference (28 total error types):
https://ai-sdk.dev/docs/reference/ai-sdk-errors

---

**Last Updated:** 2025-10-21
522
references/v5-breaking-changes.md
Normal file
# AI SDK v4 → v5 Migration Guide

Complete guide to breaking changes from AI SDK v4 to v5.

---

## Overview

AI SDK v5 introduced **extensive breaking changes** to improve consistency, type safety, and functionality. This guide covers all critical changes with before/after examples.

**Migration Effort:** Medium-High (2-8 hours depending on codebase size)

**Automated Migration Available:**
```bash
npx ai migrate
```

---
## Core API Changes

### 1. Parameter Renames

**Change:** `maxTokens` → `maxOutputTokens`, `providerMetadata` → `providerOptions`

**Before (v4):**
```typescript
const result = await generateText({
  model: openai.chat('gpt-4'),
  maxTokens: 500,
  providerMetadata: {
    openai: { user: 'user-123' }
  },
  prompt: 'Hello',
});
```

**After (v5):**
```typescript
const result = await generateText({
  model: openai('gpt-4'),
  maxOutputTokens: 500,
  providerOptions: {
    openai: { user: 'user-123' }
  },
  prompt: 'Hello',
});
```

**Why:** `maxOutputTokens` makes clear that the limit applies to generated tokens, not prompt tokens. `providerOptions` better reflects that it carries provider-specific configuration.

---
### 2. Tool Definitions

**Change:** `parameters` → `inputSchema`, tool properties renamed

**Before (v4):**
```typescript
const tools = {
  weather: {
    description: 'Get weather',
    parameters: z.object({
      location: z.string(),
    }),
    execute: async (args) => {
      return { temp: 72, location: args.location };
    },
  },
};

// In tool call result:
console.log(toolCall.args); // { location: "SF" }
console.log(toolCall.result); // { temp: 72 }
```

**After (v5):**
```typescript
import { tool } from 'ai';

const tools = {
  weather: tool({
    description: 'Get weather',
    inputSchema: z.object({
      location: z.string(),
    }),
    execute: async ({ location }) => {
      return { temp: 72, location };
    },
  }),
};

// In tool call result:
console.log(toolCall.input); // { location: "SF" }
console.log(toolCall.output); // { temp: 72 }
```

**Why:** `inputSchema` clarifies that it's a Zod schema for the tool's input. `input`/`output` are clearer than `args`/`result`.

---
### 3. Message Types

**Change:** `CoreMessage` → `ModelMessage`, `Message` → `UIMessage`

**Before (v4):**
```typescript
import { CoreMessage, convertToCoreMessages } from 'ai';

const messages: CoreMessage[] = [
  { role: 'user', content: 'Hello' },
];

const converted = convertToCoreMessages(uiMessages);
```

**After (v5):**
```typescript
import { ModelMessage, convertToModelMessages } from 'ai';

const messages: ModelMessage[] = [
  { role: 'user', content: 'Hello' },
];

const converted = convertToModelMessages(uiMessages);
```

**Why:** `ModelMessage` better reflects that these messages are sent to the model; `UIMessage` is for the UI hooks.

---
### 4. Tool Error Handling

**Change:** `ToolExecutionError` removed; errors now appear as content parts

**Before (v4):**
```typescript
import { ToolExecutionError } from 'ai';

const tools = {
  risky: {
    execute: async (args) => {
      throw new ToolExecutionError({
        message: 'API failed',
        cause: originalError,
      });
    },
  },
};

// Error would stop execution
```

**After (v5):**
```typescript
const tools = {
  risky: tool({
    execute: async (input) => {
      // Just throw regular errors
      throw new Error('API failed');
    },
  }),
};

// Error appears as a tool-error content part
// Model can see the error and retry or handle it
```

**Why:** Enables automated retry in multi-step scenarios. The model can see and respond to errors.

---

### 5. Multi-Step Execution

**Change:** `maxSteps` → `stopWhen` with conditions

**Before (v4):**
```typescript
const result = await generateText({
  model: openai.chat('gpt-4'),
  tools: { /* ... */ },
  maxSteps: 5,
  experimental_continueSteps: true,
  prompt: 'Complex task',
});
```

**After (v5):**
```typescript
import { stepCountIs, hasToolCall } from 'ai';

const result = await generateText({
  model: openai('gpt-4'),
  tools: { /* ... */ },
  stopWhen: stepCountIs(5),
  prompt: 'Complex task',
});

// Or stop on a specific tool call:
stopWhen: hasToolCall('finalize')

// Or combine conditions (stop when any is met):
stopWhen: [stepCountIs(5), hasToolCall('finish')]
```

**Why:** More flexible control over when multi-step execution stops. `experimental_continueSteps` is no longer needed.

---

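Under the hood a stop condition is a predicate over the steps accumulated so far, which means custom conditions can be tested in isolation. The step shape below is simplified for illustration (the real step results carry more fields, and the exact callback signature should be checked against the SDK docs):

```typescript
// Simplified step shape for illustration only.
type StepLike = { toolCalls: { toolName: string }[] };

// Stop once 5 steps have run, or as soon as a `finish` tool call appears.
const shouldStop = ({ steps }: { steps: StepLike[] }): boolean =>
  steps.length >= 5 ||
  steps.some((step) => step.toolCalls.some((call) => call.toolName === 'finish'));

console.log(shouldStop({ steps: [{ toolCalls: [] }] })); // false
console.log(shouldStop({
  steps: [{ toolCalls: [] }, { toolCalls: [{ toolName: 'finish' }] }],
})); // true
```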
### 6. Message Structure

**Change:** Simple `content` string → `parts` array

**Before (v4):**
```typescript
const message = {
  role: 'user',
  content: 'Hello',
};

// Tool calls embedded in message differently
```

**After (v5):**
```typescript
const message = {
  role: 'user',
  content: [
    { type: 'text', text: 'Hello' },
  ],
};

// Tool calls as parts:
const messageWithTool = {
  role: 'assistant',
  content: [
    { type: 'text', text: 'Let me check...' },
    {
      type: 'tool-call',
      toolCallId: '123',
      toolName: 'weather',
      args: { location: 'SF' },
    },
  ],
};
```

**Part Types:**
- `text`: Text content
- `file`: File attachments
- `reasoning`: Extended thinking (Claude)
- `tool-call`: Tool invocation
- `tool-result`: Tool result
- `tool-error`: Tool error (new in v5)

**Why:** Unified structure for all content types. Enables richer message formats.

---

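Since all content now lives in a parts array, code that previously read `message.content` as a string needs a small extraction step. A sketch with a simplified part union (the real types carry more variants than shown here):

```typescript
// Simplified part union for illustration.
type Part =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; toolCallId: string; toolName: string; args: unknown };

// Concatenate only the text parts of a message.
function messageText(parts: Part[]): string {
  return parts
    .filter((part): part is Extract<Part, { type: 'text' }> => part.type === 'text')
    .map((part) => part.text)
    .join('');
}

console.log(messageText([
  { type: 'text', text: 'Let me check...' },
  { type: 'tool-call', toolCallId: '123', toolName: 'weather', args: { location: 'SF' } },
])); // "Let me check..."
```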
### 7. Streaming Architecture

**Change:** Single chunk format → start/delta/end lifecycle

**Before (v4):**
```typescript
stream.on('chunk', (chunk) => {
  console.log(chunk.text);
});
```

**After (v5):**
```typescript
for await (const part of stream.fullStream) {
  if (part.type === 'text-delta') {
    console.log(part.textDelta);
  } else if (part.type === 'finish') {
    console.log('Stream finished:', part.finishReason);
  }
}

// Or use the simplified textStream:
for await (const text of stream.textStream) {
  console.log(text);
}
```

**Stream Event Types:**
- `text-delta`: Text chunk
- `tool-call-delta`: Tool call chunk
- `tool-result`: Tool result
- `finish`: Stream complete
- `error`: Stream error

**Why:** Better structure for concurrent streaming and metadata.

---

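Stream-handling code can be exercised against a mock async iterable, which is useful when migrating handlers without burning API calls. Part shapes here are simplified for illustration:

```typescript
// Simplified part shapes for illustration.
type StreamPart =
  | { type: 'text-delta'; textDelta: string }
  | { type: 'finish'; finishReason: string };

// Mock stream standing in for result.fullStream.
async function* mockFullStream(): AsyncGenerator<StreamPart> {
  yield { type: 'text-delta', textDelta: 'Hel' };
  yield { type: 'text-delta', textDelta: 'lo' };
  yield { type: 'finish', finishReason: 'stop' };
}

// Accumulate text deltas the same way the loop above does.
async function collectText(stream: AsyncIterable<StreamPart>): Promise<string> {
  let text = '';
  for await (const part of stream) {
    if (part.type === 'text-delta') text += part.textDelta;
  }
  return text;
}

collectText(mockFullStream()).then((text) => console.log(text)); // "Hello"
```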
### 8. Tool Streaming

**Change:** Enabled by default

**Before (v4):**
```typescript
const result = await generateText({
  model: openai.chat('gpt-4'),
  tools: { /* ... */ },
  toolCallStreaming: true, // Opt-in
});
```

**After (v5):**
```typescript
const result = await generateText({
  model: openai('gpt-4'),
  tools: { /* ... */ },
  // Tool streaming enabled by default
});
```

**Why:** Better UX. Tools stream by default for real-time feedback.

---

### 9. Package Reorganization

**Change:** Separate packages for RSC and React

**Before (v4):**
```typescript
import { streamUI } from 'ai/rsc';
import { useChat } from 'ai/react';
import { LangChainAdapter } from 'ai';
```

**After (v5):**
```typescript
import { streamUI } from '@ai-sdk/rsc';
import { useChat } from '@ai-sdk/react';
import { LangChainAdapter } from '@ai-sdk/langchain';
```

**Install:**
```bash
npm install @ai-sdk/rsc @ai-sdk/react @ai-sdk/langchain
```

**Why:** Cleaner package structure. Easier to tree-shake unused functionality.

---

## UI Hook Changes (See ai-sdk-ui Skill)

Brief summary (detailed in the `ai-sdk-ui` skill):

1. **useChat Input Management:** No longer managed by the hook
2. **useChat Actions:** `append()` → `sendMessage()`
3. **useChat Props:** `initialMessages` → `messages` (controlled)
4. **StreamData Removed:** Replaced by message streams

See the `ai-sdk-ui` skill for the complete UI migration guide.

---

## Provider-Specific Changes

### OpenAI

**Change:** Default API changed

**Before (v4):**
```typescript
const model = openai.chat('gpt-4'); // Uses Chat Completions API
```

**After (v5):**
```typescript
const model = openai('gpt-4'); // Uses Responses API
// strictSchemas: true → strictJsonSchema: true
```

**Why:** The Responses API is newer and has better features.

---

### Google

**Change:** Search grounding moved to a tool

**Before (v4):**
```typescript
const model = google.generativeAI('gemini-pro', {
  googleSearchRetrieval: true,
});
```

**After (v5):**
```typescript
import { google, googleSearchRetrieval } from '@ai-sdk/google';

const result = await generateText({
  model: google('gemini-pro'),
  tools: {
    search: googleSearchRetrieval(),
  },
  prompt: 'Search for...',
});
```

**Why:** More flexible. Search is now a tool like any other.

---

## Migration Checklist

- [ ] Update package versions (`ai@^5.0.76`, `@ai-sdk/openai@^2.0.53`, etc.)
- [ ] Run automated migration: `npx ai migrate`
- [ ] Review automated changes
- [ ] Update all `maxTokens` → `maxOutputTokens`
- [ ] Update `providerMetadata` → `providerOptions`
- [ ] Convert tool `parameters` → `inputSchema`
- [ ] Update tool properties: `args` → `input`, `result` → `output`
- [ ] Replace `maxSteps: n` with `stopWhen: stepCountIs(n)`
- [ ] Update message types: `CoreMessage` → `ModelMessage`
- [ ] Remove `ToolExecutionError` handling (just throw regular errors)
- [ ] Update package imports (`ai/rsc` → `@ai-sdk/rsc`)
- [ ] Test streaming behavior
- [ ] Update TypeScript types
- [ ] Test tool calling
- [ ] Test multi-step execution
- [ ] Check for message structure changes in your code
- [ ] Update any custom error handling
- [ ] Test with real API calls

---

## Common Migration Errors

### Error: "maxTokens is not a valid parameter"

**Solution:** Change to `maxOutputTokens`

### Error: "ToolExecutionError is not exported from 'ai'"

**Solution:** Remove `ToolExecutionError`; just throw regular errors

### Error: "Cannot find module 'ai/rsc'"

**Solution:** Install and import from `@ai-sdk/rsc`
```bash
npm install @ai-sdk/rsc
```
```typescript
import { streamUI } from '@ai-sdk/rsc';
```

### Error: "model.chat is not a function"

**Solution:** Remove the `.chat()` call
```typescript
// Before: openai.chat('gpt-4')
// After: openai('gpt-4')
```

### Error: "maxSteps is not a valid parameter"

**Solution:** Use `stopWhen: stepCountIs(n)`

---

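For the purely mechanical renames above, a naive text rewrite can catch stragglers in files the codemod missed. This is an illustration only, not a substitute for `npx ai migrate` (which handles the structural changes too):

```typescript
// Mechanical v4 → v5 renames as regex pairs (illustrative subset).
const renames: Array<[RegExp, string]> = [
  [/\bmaxTokens\b/g, 'maxOutputTokens'],
  [/\bproviderMetadata\b/g, 'providerOptions'],
  [/\bCoreMessage\b/g, 'ModelMessage'],
  [/\bconvertToCoreMessages\b/g, 'convertToModelMessages'],
];

// Apply each rename in order to a source string.
function applyRenames(source: string): string {
  return renames.reduce((text, [from, to]) => text.replace(from, to), source);
}

console.log(applyRenames('generateText({ maxTokens: 100 })'));
// "generateText({ maxOutputTokens: 100 })"
```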
## Testing After Migration

```typescript
import { generateText, streamText, generateObject, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Test basic generation
const test1 = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello',
});
console.log('✅ Basic generation:', test1.text);

// Test streaming
const test2 = streamText({
  model: openai('gpt-4'),
  prompt: 'Hello',
});
for await (const chunk of test2.textStream) {
  process.stdout.write(chunk);
}
console.log('\n✅ Streaming works');

// Test structured output
const test3 = await generateObject({
  model: openai('gpt-4'),
  schema: z.object({ name: z.string() }),
  prompt: 'Generate a person',
});
console.log('✅ Structured output:', test3.object);

// Test tools
const test4 = await generateText({
  model: openai('gpt-4'),
  tools: {
    test: tool({
      description: 'Test tool',
      inputSchema: z.object({ value: z.string() }),
      execute: async ({ value }) => ({ result: value }),
    }),
  },
  prompt: 'Use the test tool with value "hello"',
});
console.log('✅ Tools work');
```

---

## Resources

- **Official Migration Guide:** https://ai-sdk.dev/docs/migration-guides/migration-guide-5-0
- **Automated Migration:** `npx ai migrate`
- **GitHub Discussions:** https://github.com/vercel/ai/discussions
- **v5 Release Blog:** https://vercel.com/blog/ai-sdk-5

---

**Last Updated:** 2025-10-21
66
scripts/check-versions.sh
Executable file
@@ -0,0 +1,66 @@
#!/bin/bash
# Check installed AI SDK Core package versions against latest
# Usage: ./scripts/check-versions.sh

echo "==================================="
echo " AI SDK Core - Version Checker"
echo "==================================="
echo ""

packages=(
  "ai"
  "@ai-sdk/openai"
  "@ai-sdk/anthropic"
  "@ai-sdk/google"
  "workers-ai-provider"
  "zod"
)

echo "Checking package versions..."
echo ""

for package in "${packages[@]}"; do
  echo "📦 $package"

  # Get installed version
  installed=$(npm list "$package" --depth=0 2>/dev/null | grep "$package" | awk -F@ '{print $NF}')

  if [ -z "$installed" ]; then
    echo "  ❌ Not installed"
  else
    echo "  ✅ Installed: $installed"
  fi

  # Get latest version
  latest=$(npm view "$package" version 2>/dev/null)

  if [ -z "$latest" ]; then
    echo "  ⚠️  Could not fetch latest version"
  else
    echo "  📌 Latest: $latest"

    # Compare versions
    if [ "$installed" = "$latest" ]; then
      echo "  ✨ Up to date!"
    elif [ -n "$installed" ]; then
      echo "  ⬆️  Update available"
    fi
  fi

  echo ""
done

echo "==================================="
echo " Recommended Versions (AI SDK v5)"
echo "==================================="
echo ""
echo "ai: ^5.0.76"
echo "@ai-sdk/openai: ^2.0.53"
echo "@ai-sdk/anthropic: ^2.0.0"
echo "@ai-sdk/google: ^2.0.0"
echo "workers-ai-provider: ^2.0.0"
echo "zod: ^3.23.8"
echo ""
echo "To update all packages:"
echo "npm install ai@latest @ai-sdk/openai@latest @ai-sdk/anthropic@latest @ai-sdk/google@latest workers-ai-provider@latest zod@latest"
echo ""
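The script above treats any version mismatch as an available update, including when the installed version is newer. A numeric comparison (sketched in TypeScript to match the other templates; no prerelease handling) distinguishes older from newer:

```typescript
// Compare two x.y.z version strings numerically (no prerelease handling).
// Returns -1 if a < b, 0 if equal, 1 if a > b.
function compareVersions(a: string, b: string): number {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (diff !== 0) return Math.sign(diff);
  }
  return 0;
}

console.log(compareVersions('5.0.76', '5.0.80')); // -1 → update available
console.log(compareVersions('5.0.76', '5.0.76')); // 0  → up to date
```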
86
templates/agent-with-tools.ts
Normal file
@@ -0,0 +1,86 @@
// Agent class with multiple tools
// AI SDK Core - Agent class for multi-step execution

import { Agent, tool } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

// Create agent with tools
const weatherAgent = new Agent({
  model: anthropic('claude-sonnet-4-5'),
  system: 'You are a weather assistant. Always provide temperature in the user\'s preferred unit.',
  tools: {
    getWeather: tool({
      description: 'Get current weather for a location',
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        console.log(`[Tool] Getting weather for ${location}...`);
        // Simulate API call
        return {
          location,
          temperature: 72,
          condition: 'sunny',
          humidity: 65,
          unit: 'fahrenheit',
        };
      },
    }),

    convertTemp: tool({
      description: 'Convert temperature between Fahrenheit and Celsius',
      inputSchema: z.object({
        fahrenheit: z.number(),
      }),
      execute: async ({ fahrenheit }) => {
        console.log(`[Tool] Converting ${fahrenheit}°F to Celsius...`);
        const celsius = Math.round(((fahrenheit - 32) * 5 / 9) * 10) / 10;
        return { celsius };
      },
    }),

    getAirQuality: tool({
      description: 'Get air quality index for a location',
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        console.log(`[Tool] Getting air quality for ${location}...`);
        // Simulate API call
        return {
          location,
          aqi: 42,
          level: 'good',
          pollutants: {
            pm25: 8,
            pm10: 15,
            o3: 35,
          },
        };
      },
    }),
  },
});

async function main() {
  console.log('Starting agent conversation...\n');

  const result = await weatherAgent.run({
    messages: [
      {
        role: 'user',
        content: 'What is the weather in San Francisco? Tell me in Celsius and include air quality.',
      },
    ],
  });

  console.log('\n--- Agent Response ---');
  console.log(result.text);

  console.log('\n--- Execution Summary ---');
  console.log('Total steps:', result.steps.length);
  console.log('Tools used:', result.toolCalls?.map(tc => tc.toolName).join(', ') || 'none');
}

main().catch(console.error);
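The rounding in `convertTemp` is the kind of arithmetic that is easy to get subtly wrong; extracted as a standalone function it can be sanity-checked directly:

```typescript
// Same formula as the convertTemp tool: convert and round to one decimal place.
const toCelsius = (fahrenheit: number): number =>
  Math.round(((fahrenheit - 32) * 5 / 9) * 10) / 10;

console.log(toCelsius(72));  // 22.2
console.log(toCelsius(32));  // 0
console.log(toCelsius(212)); // 100
```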
77
templates/anthropic-setup.ts
Normal file
@@ -0,0 +1,77 @@
// Anthropic provider configuration
// AI SDK Core - Anthropic (Claude) setup and usage

import { generateText } from 'ai';
import { anthropic, createAnthropic } from '@ai-sdk/anthropic';

async function main() {
  console.log('=== Anthropic (Claude) Provider Setup ===\n');

  // Method 1: Use environment variable (recommended)
  // ANTHROPIC_API_KEY=sk-ant-...
  const model1 = anthropic('claude-sonnet-4-5');

  // Method 2: Explicit API key via a custom provider instance
  const anthropicWithKey = createAnthropic({
    apiKey: process.env.ANTHROPIC_API_KEY,
  });
  const model2 = anthropicWithKey('claude-sonnet-4-5');

  // Available models (Claude 4.x family - current)
  const models = {
    sonnet45: anthropic('claude-sonnet-4-5'), // Latest, recommended
    opus4: anthropic('claude-opus-4-0'),      // Highest intelligence
    haiku45: anthropic('claude-haiku-4-5'),   // Fastest
  };

  // Legacy models (Claude 3.x - deprecated, use Claude 4.x instead)
  // const legacyModels = {
  //   sonnet35: anthropic('claude-3-5-sonnet-20241022'),
  //   opus3: anthropic('claude-3-opus-20240229'),
  //   haiku3: anthropic('claude-3-haiku-20240307'),
  // };

  // Example: Generate text with Claude
  console.log('Generating text with Claude Sonnet 4.5...\n');

  const result = await generateText({
    model: models.sonnet45,
    prompt: 'Explain what makes Claude different from other AI assistants in 2 sentences.',
    maxOutputTokens: 150,
  });

  console.log('Response:', result.text);
  console.log('\nUsage:');
  console.log('- Input tokens:', result.usage.inputTokens);
  console.log('- Output tokens:', result.usage.outputTokens);
  console.log('- Total tokens:', result.usage.totalTokens);

  // Example: Long context handling
  console.log('\n=== Long Context Example ===\n');

  const longContextResult = await generateText({
    model: models.sonnet45,
    messages: [
      {
        role: 'user',
        content: 'I will give you a long document to analyze. Here it is: ' + 'Lorem ipsum '.repeat(1000),
      },
      {
        role: 'user',
        content: 'Now summarize the key points.',
      },
    ],
    maxOutputTokens: 200,
  });

  console.log('Long context summary:', longContextResult.text);

  // Model selection guide
  console.log('\n=== Model Selection Guide ===');
  console.log('- Claude Sonnet 4.5: Latest model, best balance (recommended)');
  console.log('- Claude Opus 4.0: Highest intelligence for complex reasoning');
  console.log('- Claude Haiku 4.5: Fastest and most cost-effective');
  console.log('\nAll Claude 4.x models support extended context windows');
  console.log('Note: Claude 3.x models deprecated in 2025, use Claude 4.x instead');
}

main().catch(console.error);
119
templates/cloudflare-worker-integration.ts
Normal file
@@ -0,0 +1,119 @@
// Cloudflare Workers with workers-ai-provider
// AI SDK Core - Cloudflare Workers AI integration

import { Hono } from 'hono';
import { generateText, streamText } from 'ai';
import { createWorkersAI } from 'workers-ai-provider';

// Environment interface for Workers AI binding
interface Env {
  AI: Ai;
}

const app = new Hono<{ Bindings: Env }>();

// Example 1: Basic text generation
app.post('/chat', async (c) => {
  // IMPORTANT: Create provider inside handler to avoid startup overhead
  const workersai = createWorkersAI({ binding: c.env.AI });

  const { message } = await c.req.json();

  const result = await generateText({
    model: workersai('@cf/meta/llama-3.1-8b-instruct'),
    prompt: message,
    maxOutputTokens: 500,
  });

  return c.json({ response: result.text });
});

// Example 2: Streaming response
app.post('/chat/stream', async (c) => {
  const workersai = createWorkersAI({ binding: c.env.AI });

  const { message } = await c.req.json();

  const stream = streamText({
    model: workersai('@cf/meta/llama-3.1-8b-instruct'),
    prompt: message,
  });

  // Return stream to client (v4's toDataStreamResponse was renamed in v5)
  return stream.toTextStreamResponse();
});

// Example 3: Structured output
app.post('/extract', async (c) => {
  const workersai = createWorkersAI({ binding: c.env.AI });

  const { generateObject } = await import('ai');
  const { z } = await import('zod');

  const { text } = await c.req.json();

  const result = await generateObject({
    model: workersai('@cf/meta/llama-3.1-8b-instruct'),
    schema: z.object({
      summary: z.string(),
      keyPoints: z.array(z.string()),
    }),
    prompt: `Extract key information from: ${text}`,
  });

  return c.json(result.object);
});

// Example 4: Health check
app.get('/health', (c) => {
  return c.json({ status: 'ok', ai: 'ready' });
});

export default app;

/*
 * wrangler.jsonc configuration:
 *
 * {
 *   "name": "ai-sdk-worker",
 *   "compatibility_date": "2025-10-21",
 *   "main": "src/index.ts",
 *   "ai": {
 *     "binding": "AI"
 *   }
 * }
 */

/*
 * IMPORTANT NOTES:
 *
 * 1. Startup Optimization:
 *    - Move `createWorkersAI` inside handlers (not top-level)
 *    - Avoid importing complex Zod schemas at top level
 *    - Monitor startup time (must be <400ms)
 *
 * 2. Available Models:
 *    - @cf/meta/llama-3.1-8b-instruct (recommended)
 *    - @cf/meta/llama-3.1-70b-instruct
 *    - @cf/mistral/mistral-7b-instruct-v0.1
 *    - See: https://developers.cloudflare.com/workers-ai/models/
 *
 * 3. When to use workers-ai-provider:
 *    - Multi-provider scenarios (OpenAI + Workers AI)
 *    - Using AI SDK UI hooks
 *    - Need consistent API across providers
 *
 * 4. When to use native binding:
 *    - Cloudflare-only deployment
 *    - Maximum performance
 *    - See: cloudflare-workers-ai skill
 *
 * 5. Testing:
 *    npx wrangler dev
 *    curl -X POST http://localhost:8787/chat \
 *      -H "Content-Type: application/json" \
 *      -d '{"message": "Hello!"}'
 *
 * 6. Deployment:
 *    npx wrangler deploy
 */
37
templates/generate-object-zod.ts
Normal file
@@ -0,0 +1,37 @@
// Structured output with Zod schema validation
// AI SDK Core - generateObject() with Zod

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Define Zod schema
const PersonSchema = z.object({
  name: z.string().describe('Person full name'),
  age: z.number().describe('Person age in years'),
  role: z.enum(['engineer', 'designer', 'manager', 'other']).describe('Job role'),
  skills: z.array(z.string()).describe('List of technical skills'),
  experience: z.object({
    years: z.number(),
    companies: z.array(z.string()),
  }),
});

async function main() {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: PersonSchema,
    prompt: 'Generate a profile for a senior software engineer with 8 years of experience.',
  });

  console.log('Generated object:');
  console.log(JSON.stringify(result.object, null, 2));

  // TypeScript knows the exact type
  console.log('\nAccessing typed properties:');
  console.log('Name:', result.object.name);
  console.log('Skills:', result.object.skills.join(', '));
  console.log('Years of experience:', result.object.experience.years);
}

main().catch(console.error);
20
templates/generate-text-basic.ts
Normal file
@@ -0,0 +1,20 @@
// Simple text generation with OpenAI
// AI SDK Core - generateText() basic example

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function main() {
  const result = await generateText({
    model: openai('gpt-4-turbo'),
    prompt: 'What is TypeScript? Explain in 2 sentences.',
    maxOutputTokens: 100,
    temperature: 0.7,
  });

  console.log('Generated text:', result.text);
  console.log('Tokens used:', result.usage.totalTokens);
  console.log('Finish reason:', result.finishReason);
}

main().catch(console.error);
87
templates/google-setup.ts
Normal file
@@ -0,0 +1,87 @@
// Google provider configuration
// AI SDK Core - Google (Gemini) setup and usage

import { generateText } from 'ai';
import { google, createGoogleGenerativeAI } from '@ai-sdk/google';

async function main() {
  console.log('=== Google (Gemini) Provider Setup ===\n');

  // Method 1: Use environment variable (recommended)
  // GOOGLE_GENERATIVE_AI_API_KEY=...
  const model1 = google('gemini-2.5-pro');

  // Method 2: Explicit API key via a custom provider instance
  const googleWithKey = createGoogleGenerativeAI({
    apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
  });
  const model2 = googleWithKey('gemini-2.5-pro');

  // Available models
  const models = {
    pro: google('gemini-2.5-pro'),              // Best for reasoning
    flash: google('gemini-2.5-flash'),          // Fast and efficient
    flashLite: google('gemini-2.5-flash-lite'), // Ultra-fast (if available)
  };

  // Example: Generate text with Gemini
  console.log('Generating text with Gemini 2.5 Pro...\n');

  const result = await generateText({
    model: models.pro,
    prompt: 'Explain what makes Gemini good at multimodal tasks in 2 sentences.',
    maxOutputTokens: 150,
  });

  console.log('Response:', result.text);
  console.log('\nUsage:');
  console.log('- Input tokens:', result.usage.inputTokens);
  console.log('- Output tokens:', result.usage.outputTokens);
  console.log('- Total tokens:', result.usage.totalTokens);

  // Example: Structured output with Gemini
  console.log('\n=== Structured Output Example ===\n');

  const { generateObject } = await import('ai');
  const { z } = await import('zod');

  const structuredResult = await generateObject({
    model: models.pro,
    schema: z.object({
      title: z.string(),
      summary: z.string(),
      keyPoints: z.array(z.string()),
    }),
    prompt: 'Summarize the benefits of using Gemini AI.',
  });

  console.log('Structured output:');
  console.log(JSON.stringify(structuredResult.object, null, 2));

  // Error handling example
  console.log('\n=== Error Handling ===\n');

  try {
    const result2 = await generateText({
      model: google('gemini-2.5-pro'),
      prompt: 'Hello',
    });
    console.log('Success:', result2.text);
  } catch (error: any) {
    if (error.message?.includes('SAFETY')) {
      console.error('Error: Content filtered by safety settings');
    } else if (error.message?.includes('QUOTA_EXCEEDED')) {
      console.error('Error: API quota exceeded');
    } else {
      console.error('Error:', error.message);
    }
  }

  // Model selection guide
  console.log('\n=== Model Selection Guide ===');
  console.log('- Gemini 2.5 Pro: Best for complex reasoning and analysis');
  console.log('- Gemini 2.5 Flash: Fast and cost-effective for most tasks');
  console.log('- Gemini 2.5 Flash Lite: Ultra-fast for simple tasks');
  console.log('\nGemini has generous free tier limits and excels at multimodal tasks');
}

main().catch(console.error);
112
templates/multi-step-execution.ts
Normal file
@@ -0,0 +1,112 @@
// Multi-step execution with stopWhen conditions
// AI SDK Core - Control multi-step workflows

import { generateText, tool, stepCountIs, hasToolCall } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

async function example1_stepCount() {
  console.log('=== Example 1: Stop after N steps ===\n');

  const result = await generateText({
    model: openai('gpt-4'),
    tools: {
      research: tool({
        description: 'Research a topic',
        inputSchema: z.object({ topic: z.string() }),
        execute: async ({ topic }) => {
          console.log(`[Tool] Researching ${topic}...`);
          return { info: `Research data about ${topic}` };
        },
      }),
      analyze: tool({
        description: 'Analyze research data',
        inputSchema: z.object({ data: z.string() }),
        execute: async ({ data }) => {
          console.log(`[Tool] Analyzing data...`);
          return { analysis: `Analysis of ${data}` };
        },
      }),
    },
    prompt: 'Research TypeScript and analyze the findings.',
    stopWhen: stepCountIs(3), // Stop after at most 3 steps
  });

  console.log('\nResult:', result.text);
  console.log('Steps taken:', result.steps.length);
}

async function example2_specificTool() {
  console.log('\n=== Example 2: Stop when specific tool called ===\n');

  const result = await generateText({
    model: openai('gpt-4'),
    tools: {
      search: tool({
        description: 'Search for information',
        inputSchema: z.object({ query: z.string() }),
        execute: async ({ query }) => {
          console.log(`[Tool] Searching for: ${query}`);
          return { results: `Search results for ${query}` };
        },
      }),
      summarize: tool({
        description: 'Create final summary',
        inputSchema: z.object({ content: z.string() }),
        execute: async ({ content }) => {
          console.log(`[Tool] Creating summary...`);
          return { summary: `Summary of ${content}` };
        },
      }),
    },
    prompt: 'Search for information about AI and create a summary.',
    stopWhen: hasToolCall('summarize'), // Stop once summarize is called
  });

  console.log('\nResult:', result.text);
  console.log('Final tool called:', result.toolCalls?.[result.toolCalls.length - 1]?.toolName);
}

async function example3_customCondition() {
  console.log('\n=== Example 3: Custom stop condition ===\n');

  const result = await generateText({
    model: openai('gpt-4'),
    tools: {
      calculate: tool({
        description: 'Perform calculation',
        inputSchema: z.object({ expression: z.string() }),
        execute: async ({ expression }) => {
          console.log(`[Tool] Calculating: ${expression}`);
          return { result: 42 };
        },
      }),
      finish: tool({
        description: 'Mark task as complete',
        inputSchema: z.object({ status: z.string() }),
        execute: async ({ status }) => {
          console.log(`[Tool] Finishing with status: ${status}`);
          return { done: true };
        },
      }),
    },
    prompt: 'Solve a math problem and finish.',
    // Custom condition: receives the steps taken so far, returns true to stop.
    // Stop if more than 5 steps were taken, OR the 'finish' tool was called.
    stopWhen: ({ steps }) =>
      steps.length > 5 ||
      steps.some((step) => step.toolCalls.some((call) => call.toolName === 'finish')),
  });

  console.log('\nResult:', result.text);
  console.log('Stopped at step:', result.steps.length);
}

async function main() {
  await example1_stepCount();
  await example2_specificTool();
  await example3_customCondition();
}

main().catch(console.error);
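The custom condition in example 3 can be pulled out into a plain predicate and unit-tested without calling a model. A minimal sketch, assuming each step result exposes a `toolCalls` array with `toolName` fields (the shape the SDK passes to `stopWhen`); `StepLike` is an illustrative type, not part of the SDK:

```typescript
// Standalone sketch of a custom stop predicate. `StepLike` models only the
// fields the predicate reads; the real SDK passes richer step results.
type StepLike = { toolCalls: { toolName: string }[] };

function shouldStop(steps: StepLike[]): boolean {
  // Stop after more than 5 steps, or as soon as the 'finish' tool is called.
  return (
    steps.length > 5 ||
    steps.some((step) => step.toolCalls.some((call) => call.toolName === 'finish'))
  );
}

console.log(shouldStop([{ toolCalls: [{ toolName: 'calculate' }] }])); // false
console.log(shouldStop([{ toolCalls: [{ toolName: 'finish' }] }])); // true
```

Keeping the predicate pure like this makes the stop logic trivial to verify before wiring it into `generateText`.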
150
templates/nextjs-server-action.ts
Normal file
@@ -0,0 +1,150 @@
// Next.js Server Action with AI SDK
// AI SDK Core - Server Actions for Next.js App Router

'use server';

import { generateObject, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Example 1: Simple text generation
export async function generateStory(theme: string) {
  const result = await generateText({
    model: openai('gpt-4-turbo'),
    prompt: `Write a short story about: ${theme}`,
    maxOutputTokens: 500,
  });

  return result.text;
}

// Example 2: Structured output (recipe generation)
export async function generateRecipe(ingredients: string[]) {
  const RecipeSchema = z.object({
    name: z.string(),
    description: z.string(),
    ingredients: z.array(
      z.object({
        name: z.string(),
        amount: z.string(),
      })
    ),
    instructions: z.array(z.string()),
    cookingTime: z.number().describe('Cooking time in minutes'),
    servings: z.number(),
  });

  const result = await generateObject({
    model: openai('gpt-4'),
    schema: RecipeSchema,
    prompt: `Create a recipe using these ingredients: ${ingredients.join(', ')}`,
  });

  return result.object;
}

// Example 3: Data extraction
export async function extractContactInfo(text: string) {
  const ContactSchema = z.object({
    name: z.string().optional(),
    email: z.string().email().optional(),
    phone: z.string().optional(),
    company: z.string().optional(),
  });

  const result = await generateObject({
    model: openai('gpt-4'),
    schema: ContactSchema,
    prompt: `Extract contact information from this text: ${text}`,
  });

  return result.object;
}

// Example 4: Error handling in a Server Action
export async function generateWithErrorHandling(prompt: string) {
  try {
    const result = await generateText({
      model: openai('gpt-4-turbo'),
      prompt,
      maxOutputTokens: 200,
    });

    return { success: true, data: result.text };
  } catch (error: any) {
    console.error('AI generation error:', error);

    return {
      success: false,
      error: 'Failed to generate response. Please try again.',
    };
  }
}

/*
 * Usage in Client Component:
 *
 * 'use client';
 *
 * import { useState } from 'react';
 * import { generateStory, generateRecipe } from './actions';
 *
 * export default function AIForm() {
 *   const [result, setResult] = useState('');
 *   const [loading, setLoading] = useState(false);
 *
 *   async function handleGenerateStory(formData: FormData) {
 *     setLoading(true);
 *     const theme = formData.get('theme') as string;
 *     const story = await generateStory(theme);
 *     setResult(story);
 *     setLoading(false);
 *   }
 *
 *   async function handleGenerateRecipe(formData: FormData) {
 *     setLoading(true);
 *     const ingredients = (formData.get('ingredients') as string).split(',');
 *     const recipe = await generateRecipe(ingredients);
 *     setResult(JSON.stringify(recipe, null, 2));
 *     setLoading(false);
 *   }
 *
 *   return (
 *     <div>
 *       <form action={handleGenerateStory}>
 *         <input name="theme" placeholder="Story theme" required />
 *         <button disabled={loading}>Generate Story</button>
 *       </form>
 *
 *       <form action={handleGenerateRecipe}>
 *         <input name="ingredients" placeholder="flour, eggs, sugar" required />
 *         <button disabled={loading}>Generate Recipe</button>
 *       </form>
 *
 *       {result && <pre>{result}</pre>}
 *     </div>
 *   );
 * }
 */

/*
 * File Structure:
 *
 * app/
 * ├── actions.ts       # This file (Server Actions)
 * ├── page.tsx         # Client component using actions
 * └── api/
 *     └── chat/
 *         └── route.ts # Alternative: API Route for streaming
 *
 * Note: Server Actions are recommended for mutations and non-streaming AI calls.
 * For streaming, use API Routes with streamText().toUIMessageStreamResponse()
 */

/*
 * Environment Variables (.env.local):
 *
 * OPENAI_API_KEY=sk-...
 * ANTHROPIC_API_KEY=sk-ant-...
 * GOOGLE_GENERATIVE_AI_API_KEY=...
 */
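The file-structure note recommends an API Route for streaming. A minimal sketch of such a route handler, assuming the `/api/chat` path and the v5 UI-message helpers; the route path and lack of auth/error handling are illustrative simplifications, not a production recipe:

```typescript
// app/api/chat/route.ts — streaming alternative to the Server Actions above.
import { streamText, convertToModelMessages, type UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages: convertToModelMessages(messages),
  });

  // Streams UI message chunks that the useChat() hook can consume.
  return result.toUIMessageStreamResponse();
}
```

Unlike Server Actions, this returns tokens to the client as they arrive instead of waiting for the full completion.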
81
templates/openai-setup.ts
Normal file
@@ -0,0 +1,81 @@
// OpenAI provider configuration
// AI SDK Core - OpenAI setup and usage

import { generateText } from 'ai';
import { openai, createOpenAI } from '@ai-sdk/openai';

async function main() {
  console.log('=== OpenAI Provider Setup ===\n');

  // Method 1: Use environment variable (recommended)
  // OPENAI_API_KEY=sk-...
  const model1 = openai('gpt-4-turbo');

  // Method 2: Explicit API key via a custom provider instance
  const customOpenAI = createOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  });
  const model2 = customOpenAI('gpt-4');

  // Available models (latest)
  const models = {
    gpt51: openai('gpt-5.1'),            // Latest flagship model (Nov 2025)
    gpt5Pro: openai('gpt-5-pro'),        // Advanced reasoning
    gpt41: openai('gpt-4.1'),            // Latest GPT-4 series
    o3: openai('o3'),                    // Reasoning model
    gpt4Turbo: openai('gpt-4-turbo'),    // Previous generation (still excellent)
    gpt35Turbo: openai('gpt-3.5-turbo'), // Fast, cost-effective
  };

  // Older models (still functional)
  // const olderModels = {
  //   gpt5: openai('gpt-5'), // Superseded by gpt-5.1
  //   gpt4: openai('gpt-4'), // Use gpt-4-turbo instead
  // };

  // Example: Generate text with GPT-4 Turbo
  console.log('Generating text with GPT-4 Turbo...\n');

  const result = await generateText({
    model: models.gpt4Turbo,
    prompt: 'Explain the difference between GPT-3.5 and GPT-4 in one sentence.',
    maxOutputTokens: 100,
  });

  console.log('Response:', result.text);
  console.log('\nUsage:');
  console.log('- Input tokens:', result.usage.inputTokens);
  console.log('- Output tokens:', result.usage.outputTokens);
  console.log('- Total tokens:', result.usage.totalTokens);

  // Example: Error handling
  console.log('\n=== Error Handling ===\n');

  try {
    const result2 = await generateText({
      model: openai('gpt-4-turbo'),
      prompt: 'Hello',
    });
    console.log('Success:', result2.text);
  } catch (error: any) {
    if (error.statusCode === 401) {
      console.error('Error: Invalid API key');
    } else if (error.statusCode === 429) {
      console.error('Error: Rate limit exceeded');
    } else if (error.statusCode >= 500) {
      console.error('Error: OpenAI server issue');
    } else {
      console.error('Error:', error.message);
    }
  }

  // Model selection guide
  console.log('\n=== Model Selection Guide ===');
  console.log('- gpt-5.1: Latest flagship model (November 2025)');
  console.log('- gpt-5-pro: Advanced reasoning and complex tasks');
  console.log('- o3: Specialized reasoning model');
  console.log('- gpt-4.1: Latest GPT-4 series, excellent quality');
  console.log('- gpt-4-turbo: Previous generation, still very capable');
  console.log('- gpt-3.5-turbo: Fast and cost-effective for simple tasks');
}

main().catch(console.error);
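The status-code branching in the try/catch above can be factored into a plain function and tested without hitting the API. A minimal sketch; the status-to-message mapping mirrors the template, and reading `statusCode` off the caught error matches the `AI_APICallError` shape:

```typescript
// Map an HTTP status code (as found on AI_APICallError.statusCode) to a
// human-readable message, falling back to the raw error message.
function classifyApiError(statusCode: number | undefined, message: string): string {
  if (statusCode === 401) return 'Invalid API key';
  if (statusCode === 429) return 'Rate limit exceeded';
  if (statusCode !== undefined && statusCode >= 500) return 'OpenAI server issue';
  return message;
}

console.log(classifyApiError(429, 'Too Many Requests')); // Rate limit exceeded
console.log(classifyApiError(undefined, 'network timeout')); // network timeout
```

Centralizing the mapping keeps the catch block short and makes the retry/report decision easy to exercise in unit tests.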
42
templates/package.json
Normal file
@@ -0,0 +1,42 @@
{
  "name": "ai-sdk-core-example",
  "version": "1.0.0",
  "type": "module",
  "description": "AI SDK Core examples - Backend AI with generateText, streamText, generateObject, and tools",
  "scripts": {
    "dev": "tsx watch src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js",
    "type-check": "tsc --noEmit"
  },
  "dependencies": {
    "ai": "^5.0.95",
    "@ai-sdk/openai": "^2.0.68",
    "@ai-sdk/anthropic": "^2.0.45",
    "@ai-sdk/google": "^2.0.38",
    "workers-ai-provider": "^2.0.0",
    "zod": "^3.23.8"
  },
  "devDependencies": {
    "@types/node": "^20.11.0",
    "tsx": "^4.7.0",
    "typescript": "^5.3.3"
  },
  "keywords": [
    "ai",
    "ai-sdk",
    "vercel",
    "openai",
    "anthropic",
    "google",
    "gemini",
    "claude",
    "gpt-4",
    "llm",
    "text-generation",
    "structured-output",
    "zod"
  ],
  "author": "",
  "license": "MIT"
}
52
templates/stream-object-zod.ts
Normal file
@@ -0,0 +1,52 @@
// Streaming structured output with partial updates
// AI SDK Core - streamObject() with Zod

import { streamObject } from 'ai';
import { google } from '@ai-sdk/google';
import { z } from 'zod';

// Define schema for RPG characters
const CharacterSchema = z.object({
  characters: z.array(
    z.object({
      name: z.string(),
      class: z.enum(['warrior', 'mage', 'rogue', 'cleric']),
      level: z.number(),
      stats: z.object({
        hp: z.number(),
        mana: z.number(),
        strength: z.number(),
        intelligence: z.number(),
      }),
      inventory: z.array(z.string()),
    })
  ),
});

async function main() {
  const stream = streamObject({
    model: google('gemini-2.5-pro'),
    schema: CharacterSchema,
    prompt: 'Generate 3 diverse RPG characters with complete stats and starting inventory.',
  });

  console.log('Streaming structured object (partial updates):');
  console.log('---\n');

  // Stream partial object updates
  for await (const partialObject of stream.partialObjectStream) {
    console.clear(); // Clear console for visual effect
    console.log('Current partial object:');
    console.log(JSON.stringify(partialObject, null, 2));
  }

  // Get the final, schema-validated object (resolves when streaming completes)
  const finalObject = await stream.object;

  console.log('\n---');
  console.log('Final complete object:');
  console.log(JSON.stringify(finalObject, null, 2));
  console.log('\nCharacter count:', finalObject.characters.length);
}

main().catch(console.error);
39
templates/stream-text-chat.ts
Normal file
@@ -0,0 +1,39 @@
// Streaming chat with messages
// AI SDK Core - streamText() with chat messages

import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

async function main() {
  const stream = streamText({
    model: anthropic('claude-sonnet-4-5'),
    messages: [
      {
        role: 'system',
        content: 'You are a helpful assistant that writes concise responses.',
      },
      {
        role: 'user',
        content: 'Tell me a short story about AI and humanity working together.',
      },
    ],
    maxOutputTokens: 500,
  });

  console.log('Streaming response:');
  console.log('---');

  // Stream text chunks to console
  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }

  console.log('\n---');

  // Get final metadata (these promises resolve once the stream finishes)
  const usage = await stream.usage;
  console.log('\nTokens used:', usage.totalTokens);
  console.log('Finish reason:', await stream.finishReason);
}

main().catch(console.error);
75
templates/tools-basic.ts
Normal file
@@ -0,0 +1,75 @@
// Basic tool calling example
// AI SDK Core - Tool calling with generateText()

import { generateText, stepCountIs, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

async function main() {
  const result = await generateText({
    model: openai('gpt-4'),
    tools: {
      weather: tool({
        description: 'Get the current weather for a location',
        inputSchema: z.object({
          location: z.string().describe('City name, e.g. "San Francisco"'),
          unit: z.enum(['celsius', 'fahrenheit']).optional().describe('Temperature unit'),
        }),
        execute: async ({ location, unit = 'fahrenheit' }) => {
          // Simulate API call to weather service
          console.log(`[Tool] Fetching weather for ${location}...`);

          // In production, call a real weather API here
          const mockWeather = {
            location,
            temperature: unit === 'celsius' ? 22 : 72,
            condition: 'sunny',
            humidity: 65,
            unit,
          };

          return mockWeather;
        },
      }),
      convertTemperature: tool({
        description: 'Convert temperature between Celsius and Fahrenheit',
        inputSchema: z.object({
          value: z.number(),
          from: z.enum(['celsius', 'fahrenheit']),
          to: z.enum(['celsius', 'fahrenheit']),
        }),
        execute: async ({ value, from, to }) => {
          console.log(`[Tool] Converting ${value}°${from} to ${to}...`);

          if (from === to) return { value, unit: to };

          let result: number;
          if (from === 'celsius' && to === 'fahrenheit') {
            result = (value * 9) / 5 + 32;
          } else {
            result = ((value - 32) * 5) / 9;
          }

          return { value: Math.round(result * 10) / 10, unit: to };
        },
      }),
    },
    prompt: 'What is the weather in Tokyo? Please tell me in Celsius.',
    maxOutputTokens: 200,
    // Allow multiple steps so the model can answer using the tool results
    // (the default stops after one step, before any final text is generated).
    stopWhen: stepCountIs(5),
  });

  console.log('\n--- AI Response ---');
  console.log(result.text);

  console.log('\n--- Tool Calls ---');
  console.log('Number of tool calls:', result.toolCalls?.length || 0);
  result.toolCalls?.forEach((call, i) => {
    console.log(`\n${i + 1}. ${call.toolName}`);
    console.log('   Input:', JSON.stringify(call.input));
  });

  console.log('\n--- Tool Results ---');
  result.toolResults?.forEach((toolResult, i) => {
    console.log(`${i + 1}. ${toolResult.toolName}:`, JSON.stringify(toolResult.output));
  });
}

main().catch(console.error);
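The arithmetic inside the `convertTemperature` tool is worth keeping as a plain function so it can be unit-tested without a model call. A minimal sketch of the same conversion and round-to-one-decimal logic:

```typescript
type Unit = 'celsius' | 'fahrenheit';

// Same formulas as the convertTemperature tool: C→F is value * 9/5 + 32,
// F→C is (value - 32) * 5/9, rounded to one decimal place.
function convert(value: number, from: Unit, to: Unit): number {
  if (from === to) return value;
  const raw = from === 'celsius' ? (value * 9) / 5 + 32 : ((value - 32) * 5) / 9;
  return Math.round(raw * 10) / 10;
}

console.log(convert(22, 'celsius', 'fahrenheit')); // 71.6
console.log(convert(72, 'fahrenheit', 'celsius')); // 22.2
```

Separating pure logic from the `tool({ execute })` wrapper keeps tool definitions thin and the business logic independently testable.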