Initial commit

Zhongwei Li
2025-11-30 08:41:51 +08:00
commit 43ed648a28
43 changed files with 8444 additions and 0 deletions


@@ -0,0 +1,11 @@
{
"name": "ai-elements",
"description": "Intelligent documentation system for AI Elements component library. Activate automatically when working with AI-native applications or when AI Elements component names are mentioned (Message, Conversation, Reasoning, Canvas, etc.). Provides context-aware documentation retrieval - fetches examples for implementation queries, component references for API lookups, and smart multi-page fetching for complex tasks.",
"version": "1.0.0",
"author": {
"name": "Nathan Onn"
},
"skills": [
"./"
]
}

INDEX.md Normal file

@@ -0,0 +1,429 @@
# AI Elements Documentation Index
This index provides a searchable reference for all AI Elements components, examples, and documentation. Use this to quickly locate the right documentation file based on keywords, categories, or related components.
---
## Examples
### Chatbot
**File:** `docs/examples/chatbot.md`
**Description:** Complete tutorial for building a chatbot with reasoning, sources, model picker, and file attachments
**Keywords:** chat, conversation, message, response, reasoning, sources, prompt-input, chatbot, tutorial, complete-example
**Category:** Tutorial
**Installation:** Multiple components
**Related:** message, conversation, response, reasoning, sources, prompt-input, loader, suggestion
**Common Use:** Building complete AI chat interfaces with advanced features
### v0 Clone
**File:** `docs/examples/v0.md`
**Description:** Create a v0 clone using the v0 Platform API with the WebPreview component
**Keywords:** v0, web-preview, artifact, code-generation, clone, tutorial
**Category:** Tutorial
**Installation:** Multiple components
**Related:** web-preview, artifact, code-block, prompt-input, conversation
**Common Use:** Building code generation and preview interfaces
### Workflow Visualization
**File:** `docs/examples/workflow.md`
**Description:** Build workflow visualizations with React Flow, custom nodes, and animated edges
**Keywords:** workflow, react-flow, canvas, node, edge, visualization, tutorial
**Category:** Tutorial
**Installation:** Multiple components (requires React Flow)
**Related:** canvas, node, edge, connection, toolbar, panel, controls
**Common Use:** Creating AI agent workflow visualizations and node-based interfaces
---
## Components - Chat & Messaging
### Message
**File:** `docs/components/message.md`
**Description:** Core message component with role-based styling for chat interfaces
**Keywords:** message, chat, role, user, assistant, system, avatar, conversation-display
**Category:** Chat & Messaging
**Installation:** `npx ai-elements add message`
**Related:** conversation, response, actions, loader
**Common Use:** Displaying individual chat messages in conversation interfaces
### Conversation
**File:** `docs/components/conversation.md`
**Description:** Chat conversation container with auto-scroll functionality
**Keywords:** conversation, chat, container, auto-scroll, messages, chat-container
**Category:** Chat & Messaging
**Installation:** `npx ai-elements add conversation`
**Related:** message, response, loader, prompt-input
**Common Use:** Building scrollable chat interfaces with auto-scroll to bottom
### Response
**File:** `docs/components/response.md`
**Description:** Markdown response renderer using Streamdown for AI-generated content
**Keywords:** response, markdown, streamdown, ai-content, render, formatting
**Category:** Chat & Messaging
**Installation:** `npx ai-elements add response`
**Related:** message, conversation, code-block
**Common Use:** Rendering AI-generated markdown responses with proper formatting
### Loader
**File:** `docs/components/loader.md`
**Description:** Loading indicators for AI responses with various animation styles
**Keywords:** loader, loading, spinner, animation, pending, waiting
**Category:** Chat & Messaging
**Installation:** `npx ai-elements add loader`
**Related:** conversation, message, prompt-input
**Common Use:** Showing loading state while AI generates responses
### Prompt Input
**File:** `docs/components/prompt-input.md`
**Description:** Rich input component with file attachments, model picker, and action buttons
**Keywords:** prompt, input, textarea, file-attachment, model-picker, submit, chat-input
**Category:** Chat & Messaging
**Installation:** `npx ai-elements add prompt-input`
**Related:** conversation, message, suggestion
**Common Use:** Building rich chat input interfaces with file uploads and model selection
### Suggestion
**File:** `docs/components/suggestion.md`
**Description:** Horizontal row of clickable suggestion chips for quick interactions
**Keywords:** suggestion, chips, quick-reply, buttons, prompts, suggestions
**Category:** Chat & Messaging
**Installation:** `npx ai-elements add suggestion`
**Related:** prompt-input, message, conversation
**Common Use:** Providing quick action suggestions and starter prompts
### Actions
**File:** `docs/components/actions.md`
**Description:** Action buttons for message interactions (copy, retry, regenerate, etc.)
**Keywords:** actions, buttons, copy, retry, regenerate, message-actions, toolbar
**Category:** Chat & Messaging
**Installation:** `npx ai-elements add actions`
**Related:** message, conversation, response
**Common Use:** Adding copy, retry, and regenerate functionality to messages
---
## Components - AI-Specific Features
### Tool
**File:** `docs/components/tool.md`
**Description:** Display AI tool invocations with input/output visualization
**Keywords:** tool, function-call, invocation, input, output, ai-tools, tool-use
**Category:** AI-Specific Features
**Installation:** `npx ai-elements add tool`
**Related:** message, response, artifact
**Common Use:** Showing AI tool/function calls and their results
### Reasoning
**File:** `docs/components/reasoning.md`
**Description:** Collapsible component for displaying AI reasoning and chain-of-thought
**Keywords:** reasoning, chain-of-thought, thinking, collapsible, ai-reasoning, deepseek
**Category:** AI-Specific Features
**Installation:** `npx ai-elements add reasoning`
**Related:** chain-of-thought, plan, task, message
**Common Use:** Displaying AI reasoning process in collapsible format
### Chain of Thought
**File:** `docs/components/chain-of-thought.md`
**Description:** Display AI reasoning process step-by-step
**Keywords:** chain-of-thought, reasoning, steps, thinking, process, ai-reasoning
**Category:** AI-Specific Features
**Installation:** `npx ai-elements add chain-of-thought`
**Related:** reasoning, plan, task
**Common Use:** Showing step-by-step AI reasoning and thought process
### Plan
**File:** `docs/components/plan.md`
**Description:** Multi-step plan display with progress tracking and status indicators
**Keywords:** plan, steps, progress, status, multi-step, planning, task-list
**Category:** AI-Specific Features
**Installation:** `npx ai-elements add plan`
**Related:** task, queue, chain-of-thought, reasoning
**Common Use:** Displaying AI-generated plans with progress tracking
### Task
**File:** `docs/components/task.md`
**Description:** Collapsible task lists with file references and progress tracking
**Keywords:** task, checklist, progress, files, collapsible, todo, task-tracking
**Category:** AI-Specific Features
**Installation:** `npx ai-elements add task`
**Related:** plan, queue, chain-of-thought
**Common Use:** Showing AI task execution with file references
### Queue
**File:** `docs/components/queue.md`
**Description:** Task queue visualization for managing AI workflows
**Keywords:** queue, tasks, workflow, sequence, pending, processing
**Category:** AI-Specific Features
**Installation:** `npx ai-elements add queue`
**Related:** task, plan, workflow
**Common Use:** Visualizing queued AI tasks and workflow sequences
### Artifact
**File:** `docs/components/artifact.md`
**Description:** Container for displaying generated artifacts like code, documents, or previews
**Keywords:** artifact, container, generated-content, code, document, preview, output
**Category:** AI-Specific Features
**Installation:** `npx ai-elements add artifact`
**Related:** code-block, web-preview, tool
**Common Use:** Displaying AI-generated artifacts and outputs
---
## Components - Workflow & Canvas
### Canvas
**File:** `docs/components/canvas.md`
**Description:** React Flow canvas wrapper for workflow visualizations
**Keywords:** canvas, react-flow, workflow, visualization, nodes, edges, graph
**Category:** Workflow & Canvas
**Installation:** `npx ai-elements add canvas` (requires React Flow)
**Related:** node, edge, connection, toolbar, panel, controls
**Common Use:** Creating node-based workflow visualizations
### Node
**File:** `docs/components/node.md`
**Description:** Custom workflow nodes with headers, content, and footers
**Keywords:** node, workflow, custom-node, react-flow, graph-node
**Category:** Workflow & Canvas
**Installation:** `npx ai-elements add node`
**Related:** canvas, edge, toolbar, connection
**Common Use:** Building custom nodes for workflow visualizations
### Edge
**File:** `docs/components/edge.md`
**Description:** Animated and temporary edge types for workflow connections
**Keywords:** edge, connection, animated, temporary, react-flow, workflow-edge
**Category:** Workflow & Canvas
**Installation:** `npx ai-elements add edge`
**Related:** canvas, node, connection
**Common Use:** Creating animated connections between workflow nodes
### Connection
**File:** `docs/components/connection.md`
**Description:** Custom connection lines for React Flow graphs
**Keywords:** connection, line, react-flow, connecting, workflow
**Category:** Workflow & Canvas
**Installation:** `npx ai-elements add connection`
**Related:** canvas, node, edge
**Common Use:** Customizing connection appearance in workflows
### Toolbar
**File:** `docs/components/toolbar.md`
**Description:** Node toolbar for React Flow with action buttons
**Keywords:** toolbar, actions, node-toolbar, react-flow, buttons
**Category:** Workflow & Canvas
**Installation:** `npx ai-elements add toolbar`
**Related:** canvas, node, panel
**Common Use:** Adding action toolbars to workflow nodes
### Panel
**File:** `docs/components/panel.md`
**Description:** Positioned panels for canvas overlays and controls
**Keywords:** panel, overlay, positioned, react-flow, controls
**Category:** Workflow & Canvas
**Installation:** `npx ai-elements add panel`
**Related:** canvas, controls, toolbar
**Common Use:** Creating overlay panels on workflow canvases
### Controls
**File:** `docs/components/controls.md`
**Description:** Canvas control buttons for zoom, fit view, and navigation
**Keywords:** controls, zoom, fit-view, navigation, react-flow, canvas-controls
**Category:** Workflow & Canvas
**Installation:** `npx ai-elements add controls`
**Related:** canvas, panel
**Common Use:** Adding zoom and navigation controls to canvas
---
## Components - UI Enhancements
### Code Block
**File:** `docs/components/code-block.md`
**Description:** Syntax-highlighted code blocks with copy functionality
**Keywords:** code-block, syntax-highlighting, copy, code, programming, snippet
**Category:** UI Enhancements
**Installation:** `npx ai-elements add code-block`
**Related:** response, artifact, tool
**Common Use:** Displaying formatted code with syntax highlighting
### Image
**File:** `docs/components/image.md`
**Description:** Image display component with error handling
**Keywords:** image, picture, display, error-handling, media
**Category:** UI Enhancements
**Installation:** `npx ai-elements add image`
**Related:** message, response, artifact
**Common Use:** Displaying images with graceful error handling
### Sources
**File:** `docs/components/sources.md`
**Description:** Collapsible source and citation list for AI responses
**Keywords:** sources, citations, references, collapsible, rag, retrieval
**Category:** UI Enhancements
**Installation:** `npx ai-elements add sources`
**Related:** inline-citation, message, response
**Common Use:** Showing sources and citations for AI-generated content
### Inline Citation
**File:** `docs/components/inline-citation.md`
**Description:** Inline source citations with hover previews
**Keywords:** citation, inline, hover, preview, reference, sources
**Category:** UI Enhancements
**Installation:** `npx ai-elements add inline-citation`
**Related:** sources, response, message
**Common Use:** Adding inline citations with hover previews to text
### Web Preview
**File:** `docs/components/web-preview.md`
**Description:** Iframe component for displaying web previews
**Keywords:** web-preview, iframe, preview, web, html, render
**Category:** UI Enhancements
**Installation:** `npx ai-elements add web-preview`
**Related:** artifact, code-block
**Common Use:** Previewing generated HTML and web content
### Shimmer
**File:** `docs/components/shimmer.md`
**Description:** Animated shimmer effect for loading states
**Keywords:** shimmer, loading, skeleton, animation, placeholder
**Category:** UI Enhancements
**Installation:** `npx ai-elements add shimmer`
**Related:** loader, message
**Common Use:** Creating skeleton loading states with shimmer effect
### Open in Chat
**File:** `docs/components/open-in-chat.md`
**Description:** Button to open content in chat interface
**Keywords:** open-in-chat, button, navigation, chat, action
**Category:** UI Enhancements
**Installation:** `npx ai-elements add open-in-chat`
**Related:** conversation, message, prompt-input
**Common Use:** Adding buttons to open content in chat interface
### Context
**File:** `docs/components/context.md`
**Description:** Context menu system for conversations
**Keywords:** context-menu, menu, right-click, actions, conversation
**Category:** UI Enhancements
**Installation:** `npx ai-elements add context`
**Related:** message, conversation, actions
**Common Use:** Adding context menus to conversation elements
### Confirmation
**File:** `docs/components/confirmation.md`
**Description:** User approval flow for sensitive AI operations
**Keywords:** confirmation, approval, dialog, user-confirmation, sensitive, permission
**Category:** UI Enhancements
**Installation:** `npx ai-elements add confirmation`
**Related:** message, tool, actions
**Common Use:** Requesting user approval for sensitive AI actions
### Branch
**File:** `docs/components/branch.md`
**Description:** Multi-version message branches with navigation
**Keywords:** branch, versions, multi-version, navigation, conversation-branch, alternatives
**Category:** UI Enhancements
**Installation:** `npx ai-elements add branch`
**Related:** message, conversation, actions
**Common Use:** Managing multiple conversation branches and alternatives
---
## General Documentation
### README
**File:** `docs/README.md`
**Description:** Overview of AI Elements library with component categories and installation methods
**Keywords:** overview, introduction, component-list, categories, getting-started
**Related:** introduction, usage
### Introduction
**File:** `docs/introduction.md`
**Description:** Getting started guide with installation, prerequisites, and library overview
**Keywords:** introduction, installation, setup, prerequisites, getting-started, quickstart
**Related:** README, usage, troubleshooting
### Usage
**File:** `docs/usage.md`
**Description:** Implementation patterns and customization examples for AI Elements
**Keywords:** usage, patterns, customization, implementation, examples, how-to
**Related:** introduction, troubleshooting
### Troubleshooting
**File:** `docs/troubleshooting.md`
**Description:** Common issues and solutions for AI Elements
**Keywords:** troubleshooting, issues, problems, solutions, errors, debugging, help
**Related:** introduction, usage
---
## Component Categories Summary
### By Use Case
**Building Chat Interfaces:**
- Message, Conversation, Response, Loader, Prompt Input, Suggestion, Actions
**Displaying AI Reasoning:**
- Reasoning, Chain of Thought, Plan, Task, Tool, Queue
**Creating Workflows:**
- Canvas, Node, Edge, Connection, Toolbar, Panel, Controls
**Enhancing Content:**
- Code Block, Image, Sources, Inline Citation, Web Preview, Artifact, Shimmer
**User Interactions:**
- Confirmation, Actions, Suggestion, Open in Chat, Context, Branch
---
## Common Component Pairings
These components are frequently used together:
- **Chat Interface:** Message + Conversation + Response + Prompt Input + Loader
- **AI Reasoning Display:** Reasoning + Chain of Thought + Plan + Task
- **Workflow Visualization:** Canvas + Node + Edge + Toolbar + Controls
- **Code Generation:** Artifact + Code Block + Web Preview + Response
- **Citations & Sources:** Sources + Inline Citation + Response
- **Rich Messages:** Message + Response + Code Block + Tool + Reasoning
- **Complete Chatbot:** See chatbot example for full integration
---
## Prerequisites
All AI Elements components require:
- Node.js 18 or later
- Next.js project
- AI SDK installed
- shadcn/ui installed
**Additional for Workflow Components:**
- React Flow library (for Canvas, Node, Edge, Connection, Toolbar, Panel, Controls)
---
## Installation Quick Reference
**Single component:**
```bash
npx ai-elements add [component-name]
```
**Multiple components:**
```bash
npx ai-elements add message conversation response
```
**All components:**
```bash
npx ai-elements add @ai-elements/all
```
**Using shadcn CLI:**
```bash
npx shadcn@latest add @ai-elements/[component-name]
```

README.md Normal file

@@ -0,0 +1,3 @@
# ai-elements
Intelligent documentation system for the AI Elements component library. Activate automatically when working with AI-native applications or when AI Elements component names are mentioned (Message, Conversation, Reasoning, Canvas, etc.). Provides context-aware documentation retrieval: fetches examples for implementation queries, component references for API lookups, and smart multi-page fetching for complex tasks.

SKILL.md Normal file

@@ -0,0 +1,446 @@
---
name: ai-elements
description: Intelligent documentation system for the AI Elements component library. Activate automatically when working with AI-native applications or when AI Elements component names are mentioned (Message, Conversation, Reasoning, Canvas, etc.). Provides context-aware documentation retrieval - fetches examples for implementation queries, component references for API lookups, and smart multi-page fetching for complex tasks.
---
# AI Elements Documentation
## Overview
Provide intelligent documentation retrieval for the AI Elements component library when building AI-native applications. AI Elements is a component library built on shadcn/ui that provides pre-built components like conversations, messages, prompts, workflows, and more, all designed to integrate with the AI SDK.
This skill automatically fetches relevant documentation from the bundled markdown files and provides context-aware routing to ensure users get the right information format for their query.
## Activation
### Automatically Activate When:
1. **AI Elements Component Names Mentioned:**
- Chat & Messaging: Message, Conversation, Response, Loader, Prompt Input, Suggestion, Actions
- AI-Specific: Tool, Reasoning, Chain of Thought, Plan, Task, Queue, Artifact
- Workflow: Canvas, Node, Edge, Connection, Toolbar, Panel, Controls
- UI Enhancements: Code Block, Image, Sources, Inline Citation, Confirmation, etc.
2. **AI Elements Library Terms:**
- "ai-elements", "AI Elements library"
- "ai-native app", "AI native application"
- "shadcn AI components"
- "@ai-elements/..." (component installation syntax)
3. **File Context Detection:**
- Imports from `@/components/ai-elements/`
- Component usage matching AI Elements component names
- AI Elements commands in package.json
4. **AI-Native Development Queries:**
- Building chat interfaces with AI responses
- Creating workflow visualizations for AI agents
- Displaying AI reasoning or chain-of-thought
- Implementing AI artifact containers
### Do NOT Activate For:
- Generic chat/messaging apps without AI context
- General React Flow usage without AI workflow context
- Standard UI components unless specifically AI Elements components
- Generic "conversation" mentions without AI/component context
## Core Capabilities
### 1. Context-Aware Documentation Routing
Route queries intelligently based on intent to provide the most useful documentation format:
#### Route 1: Implementation Queries → Examples + Components
**Triggers:**
- "how to build...", "create a...", "implement a..."
- "building a chatbot", "creating a workflow visualization"
- "I want to make...", "I need to develop..."
**Action:**
1. Identify the most relevant example from INDEX.md
2. Read the example file using Read tool: `.claude/skills/ai-elements/docs/examples/[example].md`
3. Identify 2-3 key components mentioned in the example
4. Read those component files: `.claude/skills/ai-elements/docs/components/[component].md`
5. Present with integration pattern explanation
**Example Query:** "How do I build a chatbot with reasoning?"
**Fetch:**
- `docs/examples/chatbot.md`
- `docs/components/reasoning.md`
- `docs/components/conversation.md`
#### Route 2: API Reference Queries → Component Only
**Triggers:**
- "what is...", "what does... do", "what props..."
- "how does X component work"
- "show me the API for..."
- "what properties does... accept"
**Action:**
1. Look up component in INDEX.md
2. Read single component file: `.claude/skills/ai-elements/docs/components/[component].md`
3. Suggest 2-3 category-related components from INDEX.md
**Example Query:** "What props does Message accept?"
**Fetch:**
- `docs/components/message.md`
**Suggest:** Conversation, Response, Actions (same category)
#### Route 3: General Guidance Queries → Overview Docs
**Triggers:**
- "getting started with AI Elements"
- "how to install...", "setup..."
- "troubleshooting...", "common issues..."
- "best practices for..."
**Action:**
1. Read appropriate general documentation:
- `docs/introduction.md` for getting started
- `docs/usage.md` for implementation patterns
- `docs/troubleshooting.md` for issues
- `docs/README.md` for overview
**Example Query:** "I'm new to AI Elements, how do I get started?"
**Fetch:**
- `docs/introduction.md`
- `docs/usage.md`
#### Route 4: Category/Use Case Queries → README + Components
**Triggers:**
- "components for building chat interfaces"
- "workflow visualization components"
- "what components help with..."
- "show me all... components"
**Action:**
1. Read `docs/README.md` (contains categorized component lists)
2. Optionally read 1-2 key components in that category
3. List all relevant components with brief descriptions from INDEX.md
**Example Query:** "What components help with workflow visualization?"
**Fetch:**
- `docs/README.md`
- Optionally: `docs/components/canvas.md`
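The four routes above can be sketched as a small intent classifier. This is a minimal sketch, not part of AI Elements: the trigger phrases come from the route descriptions, while the function and type names are illustrative.

```typescript
// Illustrative sketch of Routes 1-4; names are hypothetical.
type Route = "implementation" | "api-reference" | "general-guidance" | "category";

// Trigger phrases taken from the route descriptions above.
const TRIGGERS: Record<Route, string[]> = {
  implementation: ["how to build", "how do i build", "create a", "implement a", "i want to make", "i need to develop"],
  "api-reference": ["what props", "what is", "what does", "show me the api", "what properties"],
  "general-guidance": ["getting started", "get started", "how to install", "setup", "troubleshooting", "common issues", "best practices"],
  category: ["components for", "what components help", "show me all"],
};

// Check routes from most to least specific so that, e.g., "what
// components help with..." routes to a category lookup rather than
// matching the generic "what" triggers of an API query.
const PRIORITY: Route[] = ["category", "implementation", "general-guidance", "api-reference"];

function classifyQuery(query: string): Route {
  const q = query.toLowerCase();
  for (const route of PRIORITY) {
    if (TRIGGERS[route].some((t) => q.includes(t))) return route;
  }
  // Default to a single-component lookup when no trigger matches.
  return "api-reference";
}
```

The priority ordering is a design choice for this sketch; in practice the routing is a judgment call informed by the full query context, not just keyword matching.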
### 2. Smart Multi-Page Fetching
**Fetch Multiple Pages (2-4 total) When:**
- Implementation queries ("how to build...")
- Query mentions multiple components ("chat with reasoning")
- Component docs reference other required components
- Related component sets (Canvas + Node + Edge, Message + Conversation + Response)
**Fetch Single Page When:**
- Simple reference queries ("what props...")
- Troubleshooting lookups
- General overview requests
**Stop Fetching When:**
- Already fetched 3-4 pages
- Query is answered sufficiently
- Additional pages would be tangential
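The fetch budget above reduces to a simple predicate; this helper is a sketch of the rule, not an actual API:

```typescript
// Page-budget rule from the fetch guidelines above (illustrative).
const MAX_PAGES = 4;

function shouldFetchMore(pagesFetched: number, queryAnswered: boolean): boolean {
  // Stop at the 3-4 page budget, or as soon as the query is answered.
  return pagesFetched < MAX_PAGES && !queryAnswered;
}
```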
### 3. Category-Aware Suggestions
After fetching documentation, always suggest 2-3 related components using these category groupings:
**Chat & Messaging:**
- Message, Conversation, Response, Loader, Prompt Input, Suggestion, Actions
**AI-Specific Features:**
- Tool, Reasoning, Chain of Thought, Plan, Task, Queue, Artifact
**Workflow & Canvas:**
- Canvas, Node, Edge, Connection, Toolbar, Panel, Controls
**UI Enhancements:**
- Code Block, Image, Sources, Inline Citation, Web Preview, Shimmer, Open in Chat, Confirmation, Context, Branch
**Common Pairings:**
- Message → Conversation, Response, Actions
- Reasoning → Chain of Thought, Plan, Task
- Canvas → Node, Edge, Toolbar, Panel
- Prompt Input → Suggestion, Confirmation
- Tool → Artifact, Code Block
- Sources → Inline Citation
Suggest components from:
1. Same category (highest priority)
2. Commonly used together
3. Next implementation step
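As a sketch, the common pairings above can be encoded as a lookup table. The component slugs come from the pairing list; the map and helper themselves are hypothetical, not an AI Elements API:

```typescript
// Common pairings from the list above, keyed by component slug.
const PAIRINGS: Record<string, string[]> = {
  message: ["conversation", "response", "actions"],
  reasoning: ["chain-of-thought", "plan", "task"],
  canvas: ["node", "edge", "toolbar", "panel"],
  "prompt-input": ["suggestion", "confirmation"],
  tool: ["artifact", "code-block"],
  sources: ["inline-citation"],
};

// Suggest 2-3 related components, preferring documented pairings.
function suggestRelated(component: string, max = 3): string[] {
  return (PAIRINGS[component] ?? []).slice(0, max);
}
```

A fuller version would fall back to same-category components from INDEX.md when a slug has no documented pairing.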
### 4. Installation Command Generation
Always include installation commands in responses:
**Single component:**
```
**Installation:**
`npx ai-elements add [component-name]`
```
**Multiple components:**
```
**Installation:**
`npx ai-elements add message conversation response`
```
**All components:**
```
**Installation (all components):**
`npx ai-elements add @ai-elements/all`
```
**For workflow components (Canvas, Node, Edge, etc.):**
```
**Installation:**
`npx ai-elements add canvas` (requires React Flow)
```
## Response Formats
### Single-Page Response Format
```markdown
I've fetched the [Component Name] documentation from AI Elements.
**Installation:**
`npx ai-elements add [component-name]`
[Full component documentation content from file]
---
**Related Components:**
- **[Component 1]** - [Brief description of relationship]
- **[Component 2]** - [Brief description of relationship]
- **[Component 3]** - [Brief description of relationship]
```
### Multi-Page Response Format
```markdown
This requires understanding multiple components. I've fetched:
---
## [Example/Component 1 Name]
**Installation:**
`npx ai-elements add [component-name]`
[Full content from file]
---
## [Component 2 Name]
**Installation:**
`npx ai-elements add [component-name]`
[Full content from file]
---
## [Component 3 Name] (if needed)
**Installation:**
`npx ai-elements add [component-name]`
[Full content from file]
---
**Integration Pattern:**
[2-3 sentences explaining how these components work together, referencing examples from the documentation]
**Related Resources:**
- **[Additional Component]** - [Description]
- **[Tutorial/Example]** - [Description]
```
## Implementation Guidance Scope
### ✅ DO Provide:
1. **Pattern Explanations from Examples**
- "The chatbot example shows using Conversation with auto-scroll"
- "The workflow example demonstrates connecting Canvas + Node + Edge"
2. **Component Integration Patterns**
- "Typically used together: Message inside Conversation with Response for AI output"
- "Canvas requires Node and Edge components for complete workflows"
3. **Best Practices from Docs**
- "The docs recommend using Loader during AI response generation"
- "Prompt Input includes built-in file attachment handling"
4. **Common Pitfall Warnings**
- "Note: Canvas requires React Flow to be installed separately"
- "Response component uses Streamdown for markdown rendering"
5. **Installation Guidance**
- "Install all chat components: `npx ai-elements add message conversation response`"
- "Prerequisites: Next.js with AI SDK and shadcn/ui"
### ❌ DON'T Provide:
1. **Complete Custom Implementations** - Reference examples instead of generating new code
2. **Customization Beyond Docs** - Only suggest what's documented
3. **Debugging User Code** - Can reference troubleshooting docs only
4. **Undocumented Features** - Only describe documented capabilities
5. **Framework Integration** (unless documented) - Stick to documented Next.js + AI SDK patterns
## Documentation Access
### File Structure
All documentation is bundled in the skill at:
```
.claude/skills/ai-elements/
├── SKILL.md (this file)
├── INDEX.md (searchable component reference)
└── docs/
├── README.md (overview and categories)
├── introduction.md (getting started)
├── usage.md (implementation patterns)
├── troubleshooting.md (common issues)
├── components/ (31 component files)
│ ├── message.md
│ ├── conversation.md
│ ├── reasoning.md
│ └── ...
└── examples/ (3 tutorial files)
├── chatbot.md
├── v0.md
└── workflow.md
```
### How to Access Documentation
1. **Search INDEX.md** for component names, keywords, or categories
2. **Read files** using the Read tool with these paths (relative to the project root):
- Components: `.claude/skills/ai-elements/docs/components/[name].md`
- Examples: `.claude/skills/ai-elements/docs/examples/[name].md`
- General: `.claude/skills/ai-elements/docs/[name].md`
3. **Extract metadata** from INDEX.md entries (keywords, related components, categories)
4. **Generate suggestions** using category groupings and common pairings
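The path conventions above can be sketched as a small resolver; the helper is hypothetical, but the directory layout matches the file structure shown earlier:

```typescript
// Resolves an INDEX.md entry to its bundled doc path (illustrative).
type DocKind = "component" | "example" | "general";

const SKILL_ROOT = ".claude/skills/ai-elements";

// Directory per doc kind, following the bundled file structure.
const DOC_DIRS: Record<DocKind, string> = {
  component: "docs/components",
  example: "docs/examples",
  general: "docs",
};

function docPath(kind: DocKind, name: string): string {
  return `${SKILL_ROOT}/${DOC_DIRS[kind]}/${name}.md`;
}
```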
## Query Interpretation Examples
### Example 1: Building a Feature
**Query:** "How do I build a chat interface with AI responses?"
**Routing:** Implementation query → Examples + Components
**Process:**
1. Search INDEX.md for "chat", "conversation", "chatbot"
2. Identify relevant example: chatbot.md
3. Read `.claude/skills/ai-elements/docs/examples/chatbot.md`
4. Extract key components mentioned: Conversation, Message, Response
5. Read those component files
6. Format multi-page response with integration pattern
**Response includes:**
- Installation commands for all components
- Full chatbot example
- Component API references
- Integration pattern explanation
### Example 2: API Lookup
**Query:** "What props does the Reasoning component accept?"
**Routing:** API reference query → Component only
**Process:**
1. Look up "reasoning" in INDEX.md
2. Read `.claude/skills/ai-elements/docs/components/reasoning.md`
3. Extract category from INDEX.md: "AI-Specific Features"
4. Find related components: Chain of Thought, Plan, Task
**Response includes:**
- Installation: `npx ai-elements add reasoning`
- Full Reasoning component documentation
- Props table and usage examples
- Suggestions: Chain of Thought, Plan, Task
### Example 3: Category Exploration
**Query:** "What components help with workflow visualization?"
**Routing:** Category query → README + key component
**Process:**
1. Search INDEX.md for "workflow" category
2. Read `.claude/skills/ai-elements/docs/README.md`
3. Extract Workflow & Canvas category section
4. Optionally read Canvas component as primary example
**Response includes:**
- List of all workflow components
- Brief descriptions from README
- Installation command for workflow set
- Suggestion to check workflow example
### Example 4: Getting Started
**Query:** "I'm new to AI Elements, how do I get started?"
**Routing:** General guidance → Overview docs
**Process:**
1. Read `.claude/skills/ai-elements/docs/introduction.md`
2. Read `.claude/skills/ai-elements/docs/usage.md`
**Response includes:**
- Installation instructions
- Prerequisites checklist
- Basic usage patterns
- Suggestions to explore chatbot/v0/workflow examples
## Important Notes
### Prerequisites
Always mention when relevant:
- Node.js 18 or later
- Next.js project
- AI SDK installed
- shadcn/ui installed
- React Flow (for workflow components only)
### Component Customization
Components are installed as source files into the user's codebase (typically `@/components/ai-elements/`), making them fully customizable. Reference `usage.md` for customization guidance.
### AI SDK Integration
All components are designed for use with the AI SDK. Reference integration patterns with `useChat`, `useCompletion`, and other AI SDK hooks when showing examples.
### React Flow Dependency
Canvas, Node, Edge, Connection, Toolbar, Panel, and Controls components require React Flow to be installed separately. Always note this dependency when fetching workflow component documentation.
## Workflow Summary
**For every query:**
1. **Identify intent** → Implementation, API reference, category, or general guidance
2. **Route appropriately** → Determine which files to fetch
3. **Search INDEX.md** → Find relevant components, keywords, categories
4. **Read files** → Use Read tool to access documentation
5. **Format response** → Include installation, content, integration pattern, suggestions
6. **Suggest related** → Use category groupings and common pairings
**Keep responses:**
- Focused on documented information
- Formatted with clear installation commands
- Enriched with relevant component suggestions
- Practical with integration patterns from examples
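The routing workflow above can be sketched as a small dispatcher. The keyword heuristics and file paths below are illustrative assumptions for demonstration, not part of the skill itself:

```typescript
// Illustrative sketch of the query-routing workflow above. The keyword
// heuristics and file paths are assumptions, not part of the skill.
type Intent = "implementation" | "api-reference" | "category" | "general";

// Step 1: identify intent from the query text.
function identifyIntent(query: string): Intent {
  const q = query.toLowerCase();
  if (/\b(get started|getting started|new to)\b/.test(q)) return "general";
  if (/\b(how do i|build|implement|create)\b/.test(q)) return "implementation";
  if (/\b(props|api|reference)\b/.test(q)) return "api-reference";
  if (/\bcomponents\b/.test(q)) return "category";
  return "general";
}

// Step 2: route the intent to the documentation files to read first.
function filesFor(intent: Intent): string[] {
  switch (intent) {
    case "implementation":
      return ["INDEX.md", "docs/examples/chatbot.md"];
    case "api-reference":
      return ["INDEX.md"];
    case "category":
      return ["INDEX.md", "docs/README.md"];
    case "general":
      return ["docs/introduction.md", "docs/usage.md"];
  }
}
```

A real router would also consult INDEX.md keywords before choosing files; this sketch only shows the intent-to-files shape of the workflow.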

---
`docs/README.md`
# AI Elements Documentation
This folder contains comprehensive documentation for AI Elements, a component library built on top of shadcn/ui for building AI-native applications.
## Introduction
[AI Elements](https://www.npmjs.com/package/ai-elements) is a component library and custom registry that helps you build AI-native applications faster. It provides pre-built components like conversations, messages, prompts, workflows, and more, all designed to integrate seamlessly with the [AI SDK](https://ai-sdk.dev/).
For more information, see [Introduction](introduction.md).
## Getting Started
### Installation
**Using AI Elements CLI:**
```bash
npx ai-elements@latest
```
**Using shadcn CLI:**
```bash
npx shadcn@latest add @ai-elements/all
```
### Prerequisites
- Node.js version 18 or later
- A Next.js project with the AI SDK installed
- shadcn/ui installed in your project
For detailed installation instructions, see [Introduction](introduction.md).
## Components
### Chat & Messaging
- [**Message**](components/message.md) - Core message component with role-based styling for chat interfaces
- [**Conversation**](components/conversation.md) - Chat conversation container with auto-scroll functionality
- [**Response**](components/response.md) - Markdown response renderer using Streamdown for AI-generated content
- [**Loader**](components/loader.md) - Loading indicators for AI responses with various animation styles
### Input & Prompts
- [**Prompt Input**](components/prompt-input.md) - Rich input component with file attachments, model picker, and action buttons
- [**Suggestion**](components/suggestion.md) - Horizontal row of clickable suggestion chips for quick interactions
### AI-Specific Features
- [**Tool**](components/tool.md) - Display AI tool invocations with input/output visualization
- [**Reasoning**](components/reasoning.md) - Collapsible component for displaying AI reasoning and chain-of-thought
- [**Chain of Thought**](components/chain-of-thought.md) - Display AI reasoning process step-by-step
- [**Plan**](components/plan.md) - Multi-step plan display with progress tracking and status indicators
- [**Task**](components/task.md) - Collapsible task lists with file references and progress tracking
- [**Queue**](components/queue.md) - Task queue visualization for managing AI workflows
- [**Artifact**](components/artifact.md) - Container for displaying generated artifacts like code, documents, or previews
### Workflow & Canvas
- [**Canvas**](components/canvas.md) - React Flow canvas wrapper for workflow visualizations
- [**Node**](components/node.md) - Custom workflow nodes with headers, content, and footers
- [**Edge**](components/edge.md) - Animated and temporary edge types for workflow connections
- [**Connection**](components/connection.md) - Custom connection lines for React Flow graphs
- [**Toolbar**](components/toolbar.md) - Node toolbar for React Flow with action buttons
- [**Panel**](components/panel.md) - Positioned panels for canvas overlays and controls
- [**Controls**](components/controls.md) - Canvas control buttons for zoom, fit view, and navigation
### UI Enhancements
- [**Actions**](components/actions.md) - Action buttons for message interactions (copy, retry, regenerate, etc.)
- [**Confirmation**](components/confirmation.md) - User approval flow for sensitive AI operations
- [**Sources**](components/sources.md) - Collapsible source and citation list for AI responses
- [**Inline Citation**](components/inline-citation.md) - Inline source citations with hover previews
- [**Code Block**](components/code-block.md) - Syntax-highlighted code blocks with copy functionality
- [**Image**](components/image.md) - Image display component with error handling
- [**Shimmer**](components/shimmer.md) - Animated shimmer effect for loading states
- [**Web Preview**](components/web-preview.md) - Iframe component for displaying web previews
- [**Open in Chat**](components/open-in-chat.md) - Button to open content in chat interface
- [**Context**](components/context.md) - Context menu system for conversations
- [**Branch**](components/branch.md) - Multi-version message branches with navigation
## Examples
Complete tutorials demonstrating how to build AI applications with AI Elements:
- [**Chatbot**](examples/chatbot.md) - Build a complete chatbot with reasoning, sources, model picker, and file attachments
- [**v0 Clone**](examples/v0.md) - Create a v0 clone using the v0 Platform API with WebPreview component
- [**Workflow Visualization**](examples/workflow.md) - Build workflow visualizations with React Flow, custom nodes, and animated edges
## Documentation
- [**Usage**](usage.md) - Learn how to use AI Elements components in your application
- [**Troubleshooting**](troubleshooting.md) - Common issues and solutions for AI Elements
## Component Categories
### By Use Case
**Building Chat Interfaces:**
- Message, Conversation, Response, Loader, Prompt Input, Suggestion, Actions
**Displaying AI Reasoning:**
- Reasoning, Chain of Thought, Plan, Task, Tool
**Creating Workflows:**
- Canvas, Node, Edge, Connection, Toolbar, Panel, Controls
**Enhancing Content:**
- Code Block, Image, Sources, Inline Citation, Web Preview, Artifact
**User Interactions:**
- Confirmation, Actions, Suggestion, Open in Chat, Context, Branch
## Installation Methods
You can install individual components or all components at once:
**Install a single component:**
```bash
npx ai-elements@latest add message
```
**Install all components:**
```bash
npx ai-elements@latest add @ai-elements/all
```
**Using shadcn CLI:**
```bash
npx shadcn@latest add @ai-elements/[component-name]
```
## Customization
All AI Elements components are installed directly into your codebase (typically in `@/components/ai-elements/`), making them fully customizable. You can modify styles, add features, or adapt components to your specific needs.
For customization examples, see [Usage](usage.md).
## Support
If you encounter any issues, check the [Troubleshooting](troubleshooting.md) guide or open an issue on the [GitHub repository](https://github.com/vercel/ai-elements/issues).

---
`docs/components/actions.md`
# Actions
URL: /components/actions
---
title: Actions
description: A row of composable action buttons for AI responses, including retry, like, dislike, copy, share, and custom actions.
path: elements/components/actions
---
The `Actions` component provides a flexible row of action buttons for AI responses with common actions like retry, like, dislike, copy, and share.
<Preview path="actions" />
## Installation
<ElementsInstaller path="actions" />
## Usage
```tsx
import { Actions, Action } from "@/components/ai-elements/actions";
import { ThumbsUpIcon } from "lucide-react";
```
```tsx
<Actions className="mt-2">
<Action label="Like">
<ThumbsUpIcon className="size-4" />
</Action>
</Actions>
```
## Usage with AI SDK
Build a simple chat UI where the user can copy or regenerate the most recent message.
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { useState } from "react";
import { Actions, Action } from "@/components/ai-elements/actions";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { Conversation, ConversationContent, ConversationScrollButton } from "@/components/ai-elements/conversation";
import { PromptInput, PromptInputTextarea, PromptInputSubmit } from "@/components/ai-elements/prompt-input";
import { Response } from "@/components/ai-elements/response";
import { RefreshCcwIcon, CopyIcon } from "lucide-react";
import { useChat } from "@ai-sdk/react";
import { Fragment } from "react";
const ActionsDemo = () => {
  const [input, setInput] = useState("");
  const { messages, sendMessage, status, regenerate } = useChat();
  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (input.trim()) {
      sendMessage({ text: input });
      setInput("");
    }
  };
  return (
    <div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
      <div className="flex flex-col h-full">
        <Conversation>
          <ConversationContent>
            {messages.map((message, messageIndex) => (
              <Fragment key={message.id}>
                {message.parts.map((part, i) => {
                  switch (part.type) {
                    case "text": {
                      const isLastMessage = messageIndex === messages.length - 1;
                      return (
                        <Fragment key={`${message.id}-${i}`}>
                          <Message from={message.role}>
                            <MessageContent>
                              <Response>{part.text}</Response>
                            </MessageContent>
                          </Message>
                          {message.role === "assistant" && isLastMessage && (
                            <Actions>
                              <Action onClick={() => regenerate()} label="Retry">
                                <RefreshCcwIcon className="size-3" />
                              </Action>
                              <Action onClick={() => navigator.clipboard.writeText(part.text)} label="Copy">
                                <CopyIcon className="size-3" />
                              </Action>
                            </Actions>
                          )}
                        </Fragment>
                      );
                    }
                    default:
                      return null;
                  }
                })}
              </Fragment>
            ))}
          </ConversationContent>
          <ConversationScrollButton />
        </Conversation>
        <PromptInput onSubmit={handleSubmit} className="mt-4 w-full max-w-2xl mx-auto relative">
          <PromptInputTextarea
            value={input}
            placeholder="Say something..."
            onChange={(e) => setInput(e.currentTarget.value)}
            className="pr-12"
          />
          <PromptInputSubmit
            status={status === "streaming" ? "streaming" : "ready"}
            disabled={!input.trim()}
            className="absolute bottom-1 right-1"
          />
        </PromptInput>
      </div>
    </div>
  );
};
export default ActionsDemo;
```
Add the following route to your backend:
```tsx title="api/chat/route.ts"
import { streamText, UIMessage, convertToModelMessages } from "ai";
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: "openai/gpt-4o",
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
## Features
- Row of composable action buttons with consistent styling
- Support for custom actions with tooltips
- State management for toggle actions (like, dislike, favorite)
- Keyboard accessible with proper ARIA labels
- Clipboard and Web Share API integration
- TypeScript support with proper type definitions
- Consistent with design system styling
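The toggle-state behavior noted above (like and dislike are mutually exclusive) lives in the parent component, since `Action` itself is stateless. A minimal sketch, assuming a simple feedback value; the names here are illustrative, not library exports:

```typescript
// Hypothetical parent-owned state for mutually exclusive like/dislike
// toggles; `Action` buttons themselves hold no state.
type Feedback = "like" | "dislike" | null;

// Pressing the active choice clears it; pressing the other replaces it.
function toggleFeedback(current: Feedback, pressed: "like" | "dislike"): Feedback {
  return current === pressed ? null : pressed;
}
```

You would typically pair this with `useState` and wire it into the buttons, e.g. `onClick={() => setFeedback((f) => toggleFeedback(f, "like"))}`.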
## Examples
<Preview path="actions-hover" />
## Props
### `<Actions />`
<TypeTable
type={{
'...props': {
description: 'HTML attributes to spread to the root div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<Action />`
<TypeTable
type={{
tooltip: {
description: 'Optional tooltip text shown on hover.',
type: 'string',
},
label: {
description:
'Accessible label for screen readers. Also used as fallback if tooltip is not provided.',
type: 'string',
},
'...props': {
description:
'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>

---
`docs/components/artifact.md`
# Artifact
URL: /components/artifact
---
title: Artifact
description: A container component for displaying generated content like code, documents, or other outputs with built-in actions.
path: elements/components/artifact
---
The `Artifact` component provides a structured container for displaying generated content like code, documents, or other outputs with built-in header actions.
<Preview path="artifact" />
## Installation
<ElementsInstaller path="artifact" />
## Usage
```tsx
import {
Artifact,
ArtifactAction,
ArtifactActions,
ArtifactContent,
ArtifactDescription,
ArtifactHeader,
ArtifactTitle,
} from "@/components/ai-elements/artifact";
```
```tsx
<Artifact>
<ArtifactHeader>
<div>
<ArtifactTitle>Dijkstra's Algorithm Implementation</ArtifactTitle>
<ArtifactDescription>Updated 1 minute ago</ArtifactDescription>
</div>
<ArtifactActions>
<ArtifactAction icon={CopyIcon} label="Copy" tooltip="Copy to clipboard" />
</ArtifactActions>
</ArtifactHeader>
<ArtifactContent>{/* Your content here */}</ArtifactContent>
</Artifact>
```
## Features
- Structured container with header and content areas
- Built-in header with title and description support
- Flexible action buttons with tooltips
- Customizable styling for all subcomponents
- Support for close buttons and action groups
- Clean, modern design with border and shadow
- Responsive layout that adapts to content
- TypeScript support with proper type definitions
- Composable architecture for maximum flexibility
## Examples
### With Code Display
<Preview path="artifact" />
## Props
### `<Artifact />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying div element.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<ArtifactHeader />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying div element.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<ArtifactTitle />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying paragraph element.',
type: 'React.HTMLAttributes<HTMLParagraphElement>',
},
}}
/>
### `<ArtifactDescription />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying paragraph element.',
type: 'React.HTMLAttributes<HTMLParagraphElement>',
},
}}
/>
### `<ArtifactActions />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying div element.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<ArtifactAction />`
<TypeTable
type={{
tooltip: {
description: 'Tooltip text to display on hover.',
type: 'string',
},
label: {
description: 'Screen reader label for the action button.',
type: 'string',
},
icon: {
description: 'Lucide icon component to display in the button.',
type: 'LucideIcon',
},
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>
### `<ArtifactClose />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>
### `<ArtifactContent />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying div element.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>

---
`docs/components/branch.md`
# Branch
URL: /components/branch
---
title: Branch
description: Manages multiple versions of AI messages, allowing users to navigate between different response branches.
path: elements/components/branch
---
The `Branch` component manages multiple versions of AI messages, allowing users to navigate between different response branches. It provides a clean, modern interface with customizable themes and keyboard-accessible navigation buttons.
<Preview path="branch" />
## Installation
<ElementsInstaller path="branch" />
## Usage
```tsx
import { Branch, BranchMessages, BranchNext, BranchPage, BranchPrevious, BranchSelector } from "@/components/ai-elements/branch";
```
```tsx
<Branch defaultBranch={0}>
<BranchMessages>
<Message from="user">
<MessageContent>Hello</MessageContent>
</Message>
<Message from="user">
<MessageContent>Hi!</MessageContent>
</Message>
</BranchMessages>
<BranchSelector from="user">
<BranchPrevious />
<BranchPage />
<BranchNext />
</BranchSelector>
</Branch>
```
## Usage with AI SDK
<Callout>
Branching is an advanced use case that you can implement yourself to suit your
application's needs. While the AI SDK does not provide built-in support for
branching, you have full flexibility to design and manage multiple response
paths as required.
</Callout>
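Since branching is left to the application, a minimal, framework-free sketch of the bookkeeping `Branch` expects might look like the following. All names here are assumptions for illustration; each assistant turn keeps an array of alternative messages plus the selected index:

```typescript
// Hypothetical per-turn branch bookkeeping: alternatives plus selection.
type BranchState = { branches: string[]; index: number };

// A regeneration appends a new alternative and selects it.
function addBranch(state: BranchState, message: string): BranchState {
  return { branches: [...state.branches, message], index: state.branches.length };
}

// Next/previous wrap around, matching the "1 of 3" counter navigation.
function nextBranch(state: BranchState): BranchState {
  return { ...state, index: (state.index + 1) % state.branches.length };
}

function previousBranch(state: BranchState): BranchState {
  return {
    ...state,
    index: (state.index - 1 + state.branches.length) % state.branches.length,
  };
}

// Label shown by BranchPage, e.g. "2 of 3".
function counterLabel(state: BranchState): string {
  return `${state.index + 1} of ${state.branches.length}`;
}
```

In a React app, `state.index` would feed `defaultBranch`, and `onBranchChange` would write the selected index back into your store.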
## Features
- Context-based state management for multiple message branches
- Navigation controls for moving between branches (previous/next)
- Uses CSS to prevent re-rendering of branches when switching
- Branch counter showing current position (e.g., "1 of 3")
- Automatic branch tracking and synchronization
- Branch change callbacks via `onBranchChange`
- Responsive design with mobile-friendly controls
- Clean, modern styling with customizable themes
- Keyboard-accessible navigation buttons
## Props
### `<Branch />`
<TypeTable
type={{
defaultBranch: {
description: 'The index of the branch to show by default.',
type: 'number',
default: '0',
},
onBranchChange: {
description: 'Callback fired when the branch changes.',
type: '(branchIndex: number) => void',
},
'...props': {
description: 'Any other props are spread to the root div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<BranchMessages />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the root div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<BranchSelector />`
<TypeTable
type={{
from: {
description: 'Aligns the selector for user, assistant or system messages.',
type: 'UIMessage["role"]',
},
'...props': {
description: 'Any other props are spread to the selector container.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<BranchPrevious />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>
### `<BranchNext />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>
### `<BranchPage />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying span element.',
type: 'React.HTMLAttributes<HTMLSpanElement>',
},
}}
/>

---
`docs/components/canvas.md`
# Canvas
URL: /components/canvas
---
title: Canvas
description: A React Flow-based canvas component for building interactive node-based interfaces.
path: elements/components/canvas
---
The `Canvas` component provides a React Flow-based canvas for building interactive node-based interfaces. It comes pre-configured with sensible defaults for AI applications, including panning, zooming, and selection behaviors.
<Callout>
The Canvas component is designed to be used with the [Node](/elements/components/node) and [Edge](/elements/components/edge) components. See the [Workflow](/elements/examples/workflow) demo for a full example.
</Callout>
## Installation
<ElementsInstaller path="canvas" />
## Usage
```tsx
import { Canvas } from "@/components/ai-elements/canvas";
```
```tsx
<Canvas nodes={nodes} edges={edges} nodeTypes={nodeTypes} edgeTypes={edgeTypes} />
```
## Features
- Pre-configured React Flow canvas with AI-optimized defaults
- Pan on scroll enabled for intuitive navigation
- Selection on drag for multi-node operations
- Customizable background color using CSS variables
- Delete key support (Backspace and Delete keys)
- Auto-fit view to show all nodes
- Disabled double-click zoom for better UX
- Disabled pan on drag to prevent accidental canvas movement
- Fully compatible with React Flow props and API
## Props
### `<Canvas />`
<TypeTable
type={{
children: {
description: 'Child components like Background, Controls, or MiniMap.',
type: 'ReactNode',
},
'...props': {
description: 'Any other React Flow props like nodes, edges, nodeTypes, edgeTypes, onNodesChange, etc.',
type: 'ReactFlowProps',
},
}}
/>

---
`docs/components/chain-of-thought.md`
# Chain of Thought
URL: /components/chain-of-thought
---
title: Chain of Thought
description: A collapsible component that visualizes AI reasoning steps with support for search results, images, and step-by-step progress indicators.
path: elements/components/chain-of-thought
---
The `ChainOfThought` component provides a visual representation of an AI's reasoning process, showing step-by-step thinking with support for search results, images, and progress indicators. It helps users understand how AI arrives at conclusions.
<Preview path="chain-of-thought" />
## Installation
<ElementsInstaller path="chain-of-thought" />
## Usage
```tsx
import {
ChainOfThought,
ChainOfThoughtContent,
ChainOfThoughtHeader,
ChainOfThoughtImage,
ChainOfThoughtSearchResult,
ChainOfThoughtSearchResults,
ChainOfThoughtStep,
} from "@/components/ai-elements/chain-of-thought";
```
```tsx
<ChainOfThought defaultOpen>
<ChainOfThoughtHeader />
<ChainOfThoughtContent>
<ChainOfThoughtStep icon={SearchIcon} label="Searching for information" status="complete">
<ChainOfThoughtSearchResults>
<ChainOfThoughtSearchResult>Result 1</ChainOfThoughtSearchResult>
</ChainOfThoughtSearchResults>
</ChainOfThoughtStep>
</ChainOfThoughtContent>
</ChainOfThought>
```
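While streaming reasoning steps, the `status` values used above are typically derived from how far the stream has progressed. A hedged sketch, assuming you track a current step index yourself (the function name is an assumption, not an export):

```typescript
// Derive a ChainOfThoughtStep status from stream progress: earlier steps
// are complete, the current one is active, later ones are pending.
type StepStatus = "complete" | "active" | "pending";

function statusFor(stepIndex: number, currentStep: number, done: boolean): StepStatus {
  if (done || stepIndex < currentStep) return "complete";
  if (stepIndex === currentStep) return "active";
  return "pending";
}
```

Each rendered step would then receive `status={statusFor(i, currentStep, streamFinished)}`.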
## Features
- Collapsible interface with smooth animations powered by Radix UI
- Step-by-step visualization of AI reasoning process
- Support for different step statuses (complete, active, pending)
- Built-in search results display with badge styling
- Image support with captions for visual content
- Custom icons for different step types
- Context-aware components using React Context API
- Fully typed with TypeScript
- Accessible with keyboard navigation support
- Responsive design that adapts to different screen sizes
- Smooth fade and slide animations for content transitions
- Composable architecture for flexible customization
## Props
### `<ChainOfThought />`
<TypeTable
type={{
open: {
description: 'Controlled open state of the collapsible.',
type: 'boolean',
},
defaultOpen: {
description: 'Default open state when uncontrolled.',
type: 'boolean',
default: 'false',
},
onOpenChange: {
description: 'Callback when the open state changes.',
type: '(open: boolean) => void',
},
'...props': {
description: 'Any other props are spread to the root div element.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<ChainOfThoughtHeader />`
<TypeTable
type={{
children: {
description: 'Custom header text.',
type: 'React.ReactNode',
default: '"Chain of Thought"',
},
'...props': {
description: 'Any other props are spread to the CollapsibleTrigger component.',
type: 'React.ComponentProps<typeof CollapsibleTrigger>',
},
}}
/>
### `<ChainOfThoughtStep />`
<TypeTable
type={{
icon: {
description: 'Icon to display for the step.',
type: 'LucideIcon',
default: 'DotIcon',
},
label: {
description: 'The main text label for the step.',
type: 'string',
},
description: {
description: 'Optional description text shown below the label.',
type: 'string',
},
status: {
description: 'Visual status of the step.',
type: '"complete" | "active" | "pending"',
default: '"complete"',
},
'...props': {
description: 'Any other props are spread to the root div element.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<ChainOfThoughtSearchResults />`
<TypeTable
type={{
'...props': {
description: 'Any props are spread to the container div element.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<ChainOfThoughtSearchResult />`
<TypeTable
type={{
'...props': {
description: 'Any props are spread to the Badge component.',
type: 'React.ComponentProps<typeof Badge>',
},
}}
/>
### `<ChainOfThoughtContent />`
<TypeTable
type={{
'...props': {
description: 'Any props are spread to the CollapsibleContent component.',
type: 'React.ComponentProps<typeof CollapsibleContent>',
},
}}
/>
### `<ChainOfThoughtImage />`
<TypeTable
type={{
caption: {
description: 'Optional caption text displayed below the image.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the container div element.',
type: 'React.ComponentProps<"div">',
},
}}
/>

---
`docs/components/code-block.md`
# Code Block
URL: /components/code-block
---
title: Code Block
description: Provides syntax highlighting, line numbers, and copy to clipboard functionality for code blocks.
path: elements/components/code-block
---
The `CodeBlock` component provides syntax highlighting, line numbers, and copy to clipboard functionality for code blocks.
<Preview path="code-block" />
## Installation
<ElementsInstaller path="code-block" />
## Usage
```tsx
import { CodeBlock, CodeBlockCopyButton } from "@/components/ai-elements/code-block";
```
```tsx
<CodeBlock code={"console.log('hello world')"} language="jsx">
<CodeBlockCopyButton onCopy={() => console.log("Copied code to clipboard")} onError={() => console.error("Failed to copy code to clipboard")} />
</CodeBlock>
```
## Usage with AI SDK
Build a simple code generation tool using the [`experimental_useObject`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/use-object) hook.
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { experimental_useObject as useObject } from "@ai-sdk/react";
import { codeBlockSchema } from "@/app/api/codegen/route";
import { PromptInput, PromptInputTextarea, PromptInputSubmit } from "@/components/ai-elements/prompt-input";
import { CodeBlock, CodeBlockCopyButton } from "@/components/ai-elements/code-block";
import { useState } from "react";
export default function Page() {
  const [input, setInput] = useState("");
  const { object, submit, isLoading } = useObject({
    api: "/api/codegen",
    schema: codeBlockSchema,
  });
  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (input.trim()) {
      submit(input);
    }
  };
  return (
    <div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
      <div className="flex flex-col h-full">
        <div className="flex-1 overflow-auto mb-4">
          {object?.code && object?.language && (
            <CodeBlock code={object.code} language={object.language} showLineNumbers={true}>
              <CodeBlockCopyButton />
            </CodeBlock>
          )}
        </div>
        <PromptInput onSubmit={handleSubmit} className="mt-4 w-full max-w-2xl mx-auto relative">
          <PromptInputTextarea
            value={input}
            placeholder="Generate a React todolist component"
            onChange={(e) => setInput(e.currentTarget.value)}
            className="pr-12"
          />
          <PromptInputSubmit status={isLoading ? "streaming" : "ready"} disabled={!input.trim()} className="absolute bottom-1 right-1" />
        </PromptInput>
      </div>
    </div>
  );
}
</div>
</div>
);
}
```
Add the following route to your backend:
```tsx title="api/codegen/route.ts"
import { streamObject } from "ai";
import { z } from "zod";
export const codeBlockSchema = z.object({
language: z.string(),
filename: z.string(),
code: z.string(),
});
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const context = await req.json();
const result = streamObject({
model: "openai/gpt-4o",
schema: codeBlockSchema,
    prompt: `You are a helpful coding assistant. Only generate code, with no markdown formatting, backticks, or extra text.` + context,
});
return result.toTextStreamResponse();
}
```
## Features
- Syntax highlighting with react-syntax-highlighter
- Line numbers (optional)
- Copy to clipboard functionality
- Automatic light/dark theme switching
- Customizable styles
- Accessible design
## Examples
### Dark Mode
To use the `CodeBlock` component in dark mode, you can wrap it in a `div` with the `dark` class.
<Preview path="code-block-dark" />
## Props
### `<CodeBlock />`
<TypeTable
type={{
code: {
description: 'The code content to display.',
type: 'string',
},
language: {
description: 'The programming language for syntax highlighting.',
type: 'string',
},
showLineNumbers: {
description: 'Whether to show line numbers.',
type: 'boolean',
default: 'false',
},
children: {
description: 'Child elements (like CodeBlockCopyButton) positioned in the top-right corner.',
type: 'React.ReactNode',
},
className: {
description: 'Additional CSS classes to apply to the root container.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the root div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<CodeBlockCopyButton />`
<TypeTable
type={{
onCopy: {
description: 'Callback fired after a successful copy.',
type: '() => void',
},
onError: {
description: 'Callback fired if copying fails.',
type: '(error: Error) => void',
},
timeout: {
description: 'How long to show the copied state (ms).',
type: 'number',
default: '2000',
},
children: {
description: 'Custom content for the button. Defaults to copy/check icons.',
type: 'React.ReactNode',
},
className: {
description: 'Additional CSS classes to apply to the button.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>

---
`docs/components/confirmation.md`
# Confirmation
URL: /components/confirmation
---
title: Confirmation
description: An alert-based component for managing tool execution approval workflows with request, accept, and reject states.
path: elements/components/confirmation
---
The `Confirmation` component provides a flexible system for displaying tool approval requests and their outcomes. Perfect for showing users when AI tools require approval before execution, and displaying the approval status afterward.
<Preview path="confirmation" />
## Installation
<ElementsInstaller path="confirmation" />
## Usage
```tsx
import {
Confirmation,
ConfirmationContent,
ConfirmationRequest,
ConfirmationAccepted,
ConfirmationRejected,
ConfirmationActions,
ConfirmationAction,
} from "@/components/ai-elements/confirmation";
```
```tsx
<Confirmation approval={{ id: "tool-1" }} state="approval-requested">
<ConfirmationContent>
<ConfirmationRequest>This tool wants to access your file system. Do you approve?</ConfirmationRequest>
<ConfirmationAccepted>
<CheckIcon className="size-4" />
<span>Approved</span>
</ConfirmationAccepted>
<ConfirmationRejected>
<XIcon className="size-4" />
<span>Rejected</span>
</ConfirmationRejected>
</ConfirmationContent>
<ConfirmationActions>
<ConfirmationAction variant="outline" onClick={handleReject}>
Reject
</ConfirmationAction>
<ConfirmationAction variant="default" onClick={handleApprove}>
Approve
</ConfirmationAction>
</ConfirmationActions>
</Confirmation>
```
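The component shows exactly one of its sections depending on the tool part's `state`. A sketch of that resolution logic; apart from `"approval-requested"`, which appears above, the state names here are assumptions about the AI SDK tool-part lifecycle, and the helper itself is an illustration, not an export:

```typescript
// Hypothetical resolution of which Confirmation section renders for a
// given tool-part state plus the user's approval response.
type ConfirmationSection = "request" | "accepted" | "rejected" | "hidden";

function sectionFor(state: string, approved?: boolean): ConfirmationSection {
  if (state === "approval-requested") return "request"; // show action buttons
  if (state === "approval-responded" || state === "output-available") {
    return approved ? "accepted" : "rejected"; // show the outcome banner
  }
  return "hidden"; // e.g. input still streaming
}
```

In the markup above, `ConfirmationRequest`, `ConfirmationAccepted`, and `ConfirmationRejected` correspond to the `"request"`, `"accepted"`, and `"rejected"` outcomes respectively.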
## Usage with AI SDK
Build a chat UI with tool approval workflow where dangerous tools require user confirmation before execution.
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport, type ToolUIPart } from "ai";
import { useState } from "react";
import { CheckIcon, XIcon } from "lucide-react";
import { Button } from "@/components/ui/button";
import {
Confirmation,
ConfirmationContent,
ConfirmationRequest,
ConfirmationAccepted,
ConfirmationRejected,
ConfirmationActions,
ConfirmationAction,
} from "@/components/ai-elements/confirmation";
import { Response } from "@/components/ai-elements/response";
type DeleteFileInput = {
filePath: string;
confirm: boolean;
};
type DeleteFileToolUIPart = ToolUIPart<{
delete_file: {
input: DeleteFileInput;
output: { success: boolean; message: string };
};
}>;
const Example = () => {
const { messages, sendMessage, status, respondToConfirmationRequest } = useChat({
transport: new DefaultChatTransport({
api: "/api/chat",
}),
});
const handleDeleteFile = () => {
sendMessage({ text: "Delete the file at /tmp/example.txt" });
};
const latestMessage = messages[messages.length - 1];
const deleteTool = latestMessage?.parts?.find((part) => part.type === "tool-delete_file") as DeleteFileToolUIPart | undefined;
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full space-y-4">
<Button onClick={handleDeleteFile} disabled={status !== "ready"}>
Delete Example File
</Button>
{deleteTool?.approval && (
<Confirmation approval={deleteTool.approval} state={deleteTool.state}>
<ConfirmationContent>
<ConfirmationRequest>
This tool wants to delete: <code>{deleteTool.input?.filePath}</code>
<br />
Do you approve this action?
</ConfirmationRequest>
<ConfirmationAccepted>
<CheckIcon className="size-4" />
<span>You approved this tool execution</span>
</ConfirmationAccepted>
<ConfirmationRejected>
<XIcon className="size-4" />
<span>You rejected this tool execution</span>
</ConfirmationRejected>
</ConfirmationContent>
<ConfirmationActions>
<ConfirmationAction
variant="outline"
onClick={() =>
respondToConfirmationRequest({
approvalId: deleteTool.approval!.id,
approved: false,
})
}
>
Reject
</ConfirmationAction>
<ConfirmationAction
variant="default"
onClick={() =>
respondToConfirmationRequest({
approvalId: deleteTool.approval!.id,
approved: true,
})
}
>
Approve
</ConfirmationAction>
</ConfirmationActions>
</Confirmation>
)}
{deleteTool?.output && (
<Response>{deleteTool.output.success ? deleteTool.output.message : `Error: ${deleteTool.output.message}`}</Response>
)}
</div>
</div>
);
};
export default Example;
```
Add the following route to your backend:
```ts title="app/api/chat/route.tsx"
import { streamText, UIMessage, convertToModelMessages } from "ai";
import { z } from "zod";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: "openai/gpt-4o",
    messages: convertToModelMessages(messages),
    tools: {
      delete_file: {
        description: "Delete a file from the file system",
        parameters: z.object({
          filePath: z.string().describe("The path to the file to delete"),
          confirm: z.boolean().default(false).describe("Confirmation that the user wants to delete the file"),
        }),
        requireApproval: true, // Enable approval workflow
        execute: async ({ filePath, confirm }) => {
          if (!confirm) {
            return {
              success: false,
              message: "Deletion not confirmed",
            };
          }

          // Simulate file deletion
          await new Promise((resolve) => setTimeout(resolve, 500));

          return {
            success: true,
            message: `Successfully deleted ${filePath}`,
          };
        },
      },
    },
  });

  return result.toUIMessageStreamResponse();
}
```
## Features
- Context-based state management for approval workflow
- Conditional rendering based on approval state
- Support for approval-requested, approval-responded, output-denied, and output-available states
- Built on shadcn/ui Alert and Button components
- TypeScript support with comprehensive type definitions
- Customizable styling with Tailwind CSS
- Keyboard navigation and accessibility support
- Theme-aware with automatic dark mode support
## Examples
### Approval Request State
Shows the approval request with action buttons when state is `approval-requested`.
<Preview path="confirmation-request" />
### Approved State
Shows the accepted status when user approves and state is `approval-responded` or `output-available`.
<Preview path="confirmation-accepted" />
### Rejected State
Shows the rejected status when user rejects and state is `output-denied`.
<Preview path="confirmation-rejected" />
## Props
### `<Confirmation />`
<TypeTable
type={{
approval: {
description: 'The approval object containing the approval ID and status. If not provided or undefined, the component will not render.',
type: 'ToolUIPart["approval"]',
},
state: {
description: 'The current state of the tool (input-streaming, input-available, approval-requested, approval-responded, output-denied, or output-available). Will not render for input-streaming or input-available states.',
type: 'ToolUIPart["state"]',
},
className: {
description: 'Additional CSS classes to apply to the Alert component.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the Alert component.',
type: 'React.ComponentProps<typeof Alert>',
},
}}
/>
### `<ConfirmationContent />`
<TypeTable
type={{
className: {
description: 'Additional CSS classes to apply to the AlertDescription component.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the AlertDescription component.',
type: 'React.ComponentProps<typeof AlertDescription>',
},
}}
/>
### `<ConfirmationRequest />`
<TypeTable
type={{
children: {
description: 'The content to display when approval is requested. Only renders when state is "approval-requested".',
type: 'React.ReactNode',
},
}}
/>
### `<ConfirmationAccepted />`
<TypeTable
type={{
children: {
description: 'The content to display when approval is accepted. Only renders when approval.approved is true and state is "approval-responded", "output-denied", or "output-available".',
type: 'React.ReactNode',
},
}}
/>
### `<ConfirmationRejected />`
<TypeTable
type={{
children: {
description: 'The content to display when approval is rejected. Only renders when approval.approved is false and state is "approval-responded", "output-denied", or "output-available".',
type: 'React.ReactNode',
},
}}
/>
### `<ConfirmationActions />`
<TypeTable
type={{
className: {
description: 'Additional CSS classes to apply to the actions container.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the div element. Only renders when state is "approval-requested".',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<ConfirmationAction />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the Button component. Styled with h-8 px-3 text-sm classes by default.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>

# Connection
URL: /components/connection
---
title: Connection
description: A custom connection line component for React Flow-based canvases with animated bezier curve styling.
path: elements/components/connection
---
The `Connection` component provides a styled connection line for React Flow canvases. It renders an animated bezier curve with a circle indicator at the target end, using consistent theming through CSS variables.
<Callout>
The Connection component is designed to be used with the [Canvas](/elements/components/canvas) component. See the [Workflow](/elements/examples/workflow) demo for a full example.
</Callout>
## Installation
<ElementsInstaller path="connection" />
## Usage
```tsx
import { Connection } from "@/components/ai-elements/connection";
```
```tsx
<ReactFlow connectionLineComponent={Connection} />
```
## Features
- Smooth bezier curve animation for connection lines
- Visual indicator circle at the target position
- Theme-aware styling using CSS variables
- Cubic bezier curve calculation for natural flow
- Lightweight implementation with minimal props
- Full TypeScript support with React Flow types
- Compatible with React Flow's connection system
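For intuition, the path such a connection line draws can be sketched from the four coordinate props below. The horizontal control-point placement here is an illustrative assumption, not the component's exact calculation:

```typescript
// Hypothetical sketch: build an SVG cubic bezier path between two handles.
function connectionPath(fromX: number, fromY: number, toX: number, toY: number): string {
  const midX = (fromX + toX) / 2; // control points at the horizontal midpoint give a smooth S-curve
  return `M ${fromX},${fromY} C ${midX},${fromY} ${midX},${toY} ${toX},${toY}`;
}
```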
## Props
### `<Connection />`
<TypeTable
type={{
fromX: {
description: 'The x-coordinate of the connection start point.',
type: 'number',
},
fromY: {
description: 'The y-coordinate of the connection start point.',
type: 'number',
},
toX: {
description: 'The x-coordinate of the connection end point.',
type: 'number',
},
toY: {
description: 'The y-coordinate of the connection end point.',
type: 'number',
},
}}
/>

# Context
URL: /components/context
---
title: Context
description: A compound component system for displaying AI model context window usage, token consumption, and cost estimation.
path: elements/components/context
---
The `Context` component provides a comprehensive view of AI model usage through a compound component system. It displays context window utilization, token consumption breakdown (input, output, reasoning, cache), and cost estimation in an interactive hover card interface.
<Preview path="context" />
## Installation
<ElementsInstaller path="context" />
## Usage
```tsx
import {
  Context,
  ContextTrigger,
  ContextContent,
  ContextContentHeader,
  ContextContentBody,
  ContextContentFooter,
  ContextInputUsage,
  ContextOutputUsage,
  ContextReasoningUsage,
  ContextCacheUsage,
} from "@/components/ai-elements/context";
```
```tsx
<Context
  maxTokens={128000}
  usedTokens={40000}
  usage={{
    inputTokens: 32000,
    outputTokens: 8000,
    totalTokens: 40000,
    cachedInputTokens: 0,
    reasoningTokens: 0,
  }}
  modelId="openai:gpt-4"
>
  <ContextTrigger />
  <ContextContent>
    <ContextContentHeader />
    <ContextContentBody>
      <ContextInputUsage />
      <ContextOutputUsage />
      <ContextReasoningUsage />
      <ContextCacheUsage />
    </ContextContentBody>
    <ContextContentFooter />
  </ContextContent>
</Context>
```
## Features
- **Compound Component Architecture**: Flexible composition of context display elements
- **Visual Progress Indicator**: Circular SVG progress ring showing context usage percentage
- **Token Breakdown**: Detailed view of input, output, reasoning, and cached tokens
- **Cost Estimation**: Real-time cost calculation using the `tokenlens` library
- **Intelligent Formatting**: Automatic token count formatting (K, M, B suffixes)
- **Interactive Hover Card**: Detailed information revealed on hover
- **Context Provider Pattern**: Clean data flow through React Context API
- **TypeScript Support**: Full type definitions for all components
- **Accessible Design**: Proper ARIA labels and semantic HTML
- **Theme Integration**: Uses currentColor for automatic theme adaptation
## Props
### `<Context />`
<TypeTable
type={{
maxTokens: {
description: 'The total context window size in tokens.',
type: 'number',
},
usedTokens: {
description: 'The number of tokens currently used.',
type: 'number',
},
usage: {
description: 'Detailed token usage breakdown from the AI SDK (input, output, reasoning, cached tokens).',
type: 'LanguageModelUsage',
},
modelId: {
description: 'Model identifier for cost calculation (e.g., "openai:gpt-4", "anthropic:claude-3-opus").',
type: 'ModelId',
},
'...props': {
description: 'Any other props are spread to the HoverCard component.',
type: 'ComponentProps<HoverCard>',
},
}}
/>
### `<ContextTrigger />`
<TypeTable
type={{
children: {
description: 'Custom trigger element. If not provided, renders a default button with percentage and icon.',
type: 'React.ReactNode',
},
'...props': {
description: 'Props spread to the default button element.',
type: 'ComponentProps<Button>',
},
}}
/>
### `<ContextContent />`
<TypeTable
type={{
className: {
description: 'Additional CSS classes for the hover card content.',
type: 'string',
},
'...props': {
description: 'Props spread to the HoverCardContent component.',
type: 'ComponentProps<HoverCardContent>',
},
}}
/>
### `<ContextContentHeader />`
<TypeTable
type={{
children: {
description: 'Custom header content. If not provided, renders percentage and token count with progress bar.',
type: 'React.ReactNode',
},
'...props': {
description: 'Props spread to the header div element.',
type: 'ComponentProps<div>',
},
}}
/>
### `<ContextContentBody />`
<TypeTable
type={{
children: {
description: 'Body content, typically containing usage breakdown components.',
type: 'React.ReactNode',
},
'...props': {
description: 'Props spread to the body div element.',
type: 'ComponentProps<div>',
},
}}
/>
### `<ContextContentFooter />`
<TypeTable
type={{
children: {
description: 'Custom footer content. If not provided, renders total cost when modelId is provided.',
type: 'React.ReactNode',
},
'...props': {
description: 'Props spread to the footer div element.',
type: 'ComponentProps<div>',
},
}}
/>
### Usage Components
All usage components (`ContextInputUsage`, `ContextOutputUsage`, `ContextReasoningUsage`, `ContextCacheUsage`) share the same props:
<TypeTable
type={{
children: {
description: 'Custom content. If not provided, renders token count and cost for the respective usage type.',
type: 'React.ReactNode',
},
className: {
description: 'Additional CSS classes.',
type: 'string',
},
'...props': {
description: 'Props spread to the div element.',
type: 'ComponentProps<div>',
},
}}
/>
## Component Architecture
The Context component uses a compound component pattern with React Context for data sharing:
1. **`<Context>`** - Root provider component that holds all context data
2. **`<ContextTrigger>`** - Interactive trigger element (default: button with percentage)
3. **`<ContextContent>`** - Hover card content container
4. **`<ContextContentHeader>`** - Header section with progress visualization
5. **`<ContextContentBody>`** - Body section for usage breakdowns
6. **`<ContextContentFooter>`** - Footer section for total cost
7. **Usage Components** - Individual token usage displays (Input, Output, Reasoning, Cache)
## Token Formatting
The component uses `Intl.NumberFormat` with compact notation for automatic formatting:
- Under 1,000: Shows exact count (e.g., "842")
- 1,000+: Shows with K suffix (e.g., "32K")
- 1,000,000+: Shows with M suffix (e.g., "1.5M")
- 1,000,000,000+: Shows with B suffix (e.g., "2.1B")
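A minimal sketch of this behavior with `Intl.NumberFormat` (the exact options the component uses may differ):

```typescript
// Compact token formatting matching the thresholds above.
const formatTokens = (tokens: number): string =>
  new Intl.NumberFormat("en", { notation: "compact", maximumFractionDigits: 1 }).format(tokens);

formatTokens(842); // "842"
formatTokens(32_000); // "32K"
```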
## Cost Calculation
When a `modelId` is provided, the component automatically calculates costs using the `tokenlens` library:
- **Input tokens**: Cost based on model's input pricing
- **Output tokens**: Cost based on model's output pricing
- **Reasoning tokens**: Special pricing for reasoning-capable models
- **Cached tokens**: Reduced pricing for cached input tokens
- **Total cost**: Sum of all token type costs
Costs are formatted using `Intl.NumberFormat` with USD currency.
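The roll-up can be sketched as below. The rates are hypothetical examples only — real per-model pricing comes from `tokenlens`, whose API is not shown here:

```typescript
type Usage = {
  inputTokens: number;
  outputTokens: number;
  reasoningTokens?: number;
  cachedInputTokens?: number;
};

// USD per 1M tokens; illustrative numbers, not real model pricing.
type Rates = { input: number; output: number; reasoning?: number; cachedInput?: number };

function estimateCost(usage: Usage, rates: Rates): number {
  const per = (tokens: number | undefined, rate: number | undefined) => ((tokens ?? 0) / 1_000_000) * (rate ?? 0);
  return (
    per(usage.inputTokens, rates.input) +
    per(usage.outputTokens, rates.output) +
    per(usage.reasoningTokens, rates.reasoning) +
    per(usage.cachedInputTokens, rates.cachedInput)
  );
}

// Total cost formatted with Intl.NumberFormat in USD, as the component does.
const formatUSD = (cost: number) => new Intl.NumberFormat("en-US", { style: "currency", currency: "USD" }).format(cost);
```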
## Styling
The component uses Tailwind CSS classes and follows your design system:
- Progress indicator uses `currentColor` for theme adaptation
- Hover card has customizable width and padding
- Footer has a secondary background for visual separation
- All text sizes use the `text-xs` class for consistency
- Muted foreground colors for secondary information

# Controls
URL: /components/controls
---
title: Controls
description: A styled controls component for React Flow-based canvases with zoom and fit view functionality.
path: elements/components/controls
---
The `Controls` component provides interactive zoom and fit view controls for React Flow canvases. It includes a modern, themed design with backdrop blur and card styling.
<Callout>
The Controls component is designed to be used with the [Canvas](/elements/components/canvas) component. See the [Workflow](/elements/examples/workflow) demo for a full example.
</Callout>
## Installation
<ElementsInstaller path="controls" />
## Usage
```tsx
import { Controls } from "@/components/ai-elements/controls";
```
```tsx
<ReactFlow>
  <Controls />
</ReactFlow>
```
## Features
- Zoom in/out controls
- Fit view button to center and scale content
- Rounded pill design with backdrop blur
- Theme-aware card background
- Subtle drop shadow for depth
- Full TypeScript support
- Compatible with all React Flow control features
## Props
### `<Controls />`
<TypeTable
type={{
className: {
description: 'Additional CSS classes to apply to the controls.',
type: 'string',
},
'...props': {
description: 'Any other props from @xyflow/react Controls component (showZoom, showFitView, showInteractive, position, etc.).',
type: 'ComponentProps<typeof Controls>',
},
}}
/>

# Conversation
URL: /components/conversation
---
title: Conversation
description: Wraps messages and automatically scrolls to the bottom, with a scroll button that appears when the view is not at the bottom.
path: elements/components/conversation
---
The `Conversation` component wraps messages and automatically scrolls to the bottom as new messages arrive. It also includes a scroll button that appears when the view is not at the bottom.
<Preview path="conversation" className="p-0" />
## Installation
<ElementsInstaller path="conversation" />
## Usage
```tsx
import { Conversation, ConversationContent, ConversationEmptyState, ConversationScrollButton } from "@/components/ai-elements/conversation";
```
```tsx
<Conversation className="relative w-full" style={{ height: "500px" }}>
  <ConversationContent>
    {messages.length === 0 ? (
      <ConversationEmptyState
        icon={<MessageSquare className="size-12" />}
        title="No messages yet"
        description="Start a conversation to see messages here"
      />
    ) : (
      messages.map((message) => (
        <Message from={message.from} key={message.id}>
          <MessageContent>{message.content}</MessageContent>
        </Message>
      ))
    )}
  </ConversationContent>
  <ConversationScrollButton />
</Conversation>
```
## Usage with AI SDK
Build a simple conversational UI with `Conversation` and [`PromptInput`](/elements/components/prompt-input):
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { Conversation, ConversationContent, ConversationEmptyState, ConversationScrollButton } from "@/components/ai-elements/conversation";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { Input, PromptInputTextarea, PromptInputSubmit } from "@/components/ai-elements/prompt-input";
import { MessageSquare } from "lucide-react";
import { useState } from "react";
import { useChat } from "@ai-sdk/react";
import { Response } from "@/components/ai-elements/response";
const ConversationDemo = () => {
const [input, setInput] = useState("");
const { messages, sendMessage, status } = useChat();
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (input.trim()) {
sendMessage({ text: input });
setInput("");
}
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<Conversation>
<ConversationContent>
{messages.length === 0 ? (
<ConversationEmptyState
icon={<MessageSquare className="size-12" />}
title="Start a conversation"
description="Type a message below to begin chatting"
/>
) : (
messages.map((message) => (
<Message from={message.role} key={message.id}>
<MessageContent>
{message.parts.map((part, i) => {
switch (part.type) {
case "text": // we don't use any reasoning or tool calls in this example
return <Response key={`${message.id}-${i}`}>{part.text}</Response>;
default:
return null;
}
})}
</MessageContent>
</Message>
))
)}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
<Input onSubmit={handleSubmit} className="mt-4 w-full max-w-2xl mx-auto relative">
<PromptInputTextarea
value={input}
placeholder="Say something..."
onChange={(e) => setInput(e.currentTarget.value)}
className="pr-12"
/>
<PromptInputSubmit
status={status === "streaming" ? "streaming" : "ready"}
disabled={!input.trim()}
className="absolute bottom-1 right-1"
/>
</Input>
</div>
</div>
);
};
export default ConversationDemo;
```
Add the following route to your backend:
```tsx title="api/chat/route.ts"
import { streamText, UIMessage, convertToModelMessages } from "ai";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: "openai/gpt-4o",
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```
## Features
- Automatic scrolling to the bottom when new messages are added
- Smooth scrolling behavior with configurable animation
- Scroll button that appears when not at the bottom
- Responsive design with customizable padding and spacing
- Flexible content layout with consistent message spacing
- Accessible with proper ARIA roles for screen readers
- Customizable styling through className prop
- Support for any number of child message components
## Props
### `<Conversation />`
<TypeTable
type={{
contextRef: {
description: 'Optional ref to access the StickToBottom context object.',
type: 'React.Ref<StickToBottomContext>',
},
instance: {
description: 'Optional instance for controlling the StickToBottom component.',
type: 'StickToBottomInstance',
},
children: {
description: 'Render prop or ReactNode for custom rendering with context.',
type: '((context: StickToBottomContext) => ReactNode) | ReactNode',
},
'...props': {
description: 'Any other props are spread to the root div.',
type: 'Omit<React.HTMLAttributes<HTMLDivElement>, "children">',
},
}}
/>
### `<ConversationContent />`
<TypeTable
type={{
children: {
description: 'Render prop or ReactNode for custom rendering with context.',
type: '((context: StickToBottomContext) => ReactNode) | ReactNode',
},
'...props': {
description: 'Any other props are spread to the root div.',
type: 'Omit<React.HTMLAttributes<HTMLDivElement>, "children">',
},
}}
/>
### `<ConversationEmptyState />`
<TypeTable
type={{
title: {
description: 'The title text to display.',
type: 'string',
default: '"No messages yet"',
},
description: {
description: 'The description text to display.',
type: 'string',
default: '"Start a conversation to see messages here"',
},
icon: {
description: 'Optional icon to display above the text.',
type: 'React.ReactNode',
},
children: {
description: 'Optional additional content to render below the text.',
type: 'React.ReactNode',
},
'...props': {
description: 'Any other props are spread to the root div.',
type: 'ComponentProps<"div">',
},
}}
/>
### `<ConversationScrollButton />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'ComponentProps<typeof Button>',
},
}}
/>

# Edge
URL: /components/edge
---
title: Edge
description: Customizable edge components for React Flow canvases with animated and temporary states.
path: elements/components/edge
---
The `Edge` component provides two pre-styled edge types for React Flow canvases: `Temporary` for dashed temporary connections and `Animated` for connections with animated indicators.
<Callout>
The Edge component is designed to be used with the [Canvas](/elements/components/canvas) component. See the [Workflow](/elements/examples/workflow) demo for a full example.
</Callout>
## Installation
<ElementsInstaller path="edge" />
## Usage
```tsx
import { Edge } from "@/components/ai-elements/edge";
```
```tsx
const edgeTypes = {
  temporary: Edge.Temporary,
  animated: Edge.Animated,
};
<Canvas nodes={nodes} edges={edges} edgeTypes={edgeTypes} />;
```
## Features
- Two distinct edge types: Temporary and Animated
- Temporary edges use dashed lines with ring color
- Animated edges include a moving circle indicator
- Automatic handle position calculation
- Smart offset calculation based on handle type and position
- Uses Bezier curves for smooth, natural-looking connections
- Fully compatible with React Flow's edge system
- Type-safe implementation with TypeScript
## Edge Types
### `Edge.Temporary`
A dashed edge style for temporary or preview connections. Uses a simple Bezier path with a dashed stroke pattern.
### `Edge.Animated`
A solid edge with an animated circle that moves along the path. The animation repeats indefinitely with a 2-second duration, providing visual feedback for active connections.
## Props
Both edge types accept standard React Flow `EdgeProps`:
<TypeTable
type={{
id: {
description: 'Unique identifier for the edge.',
type: 'string',
},
source: {
description: 'ID of the source node.',
type: 'string',
},
target: {
description: 'ID of the target node.',
type: 'string',
},
sourceX: {
description: 'X coordinate of the source handle (Temporary only).',
type: 'number',
},
sourceY: {
description: 'Y coordinate of the source handle (Temporary only).',
type: 'number',
},
targetX: {
description: 'X coordinate of the target handle (Temporary only).',
type: 'number',
},
targetY: {
description: 'Y coordinate of the target handle (Temporary only).',
type: 'number',
},
sourcePosition: {
description: 'Position of the source handle (Left, Right, Top, Bottom).',
type: 'Position',
},
targetPosition: {
description: 'Position of the target handle (Left, Right, Top, Bottom).',
type: 'Position',
},
markerEnd: {
description: 'SVG marker ID for the edge end (Animated only).',
type: 'string',
},
style: {
description: 'Custom styles for the edge (Animated only).',
type: 'React.CSSProperties',
},
}}
/>

# Image
URL: /components/image
---
title: Image
description: Displays AI-generated images from the AI SDK.
path: elements/components/image
---
The `Image` component displays AI-generated images from the AI SDK. It accepts an [`Experimental_GeneratedImage`](/docs/reference/ai-sdk-core/generate-image) object from the AI SDK's `generateImage` function and automatically renders it as an image.
<Preview path="image" />
## Installation
<ElementsInstaller path="image" />
## Usage
```tsx
import { Image } from "@/components/ai-elements/image";
```
```tsx
<Image
  base64="valid base64 string"
  mediaType="image/jpeg"
  uint8Array={new Uint8Array([])}
  alt="Example generated image"
  className="h-[150px] aspect-square border"
/>
```
## Usage with AI SDK
Build a simple app allowing a user to generate an image given a prompt.
Install the `@ai-sdk/openai` package:
```package-install
npm i @ai-sdk/openai
```
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { Image } from "@/components/ai-elements/image";
import { Input, PromptInputTextarea, PromptInputSubmit } from "@/components/ai-elements/prompt-input";
import { useState } from "react";
import { Loader } from "@/components/ai-elements/loader";
const ImageDemo = () => {
const [prompt, setPrompt] = useState("A futuristic cityscape at sunset");
const [imageData, setImageData] = useState<any>(null);
const [isLoading, setIsLoading] = useState(false);
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
if (!prompt.trim()) return;
setPrompt("");
setIsLoading(true);
try {
const response = await fetch("/api/image", {
method: "POST",
body: JSON.stringify({ prompt: prompt.trim() }),
});
const data = await response.json();
setImageData(data);
} catch (error) {
console.error("Error generating image:", error);
} finally {
setIsLoading(false);
}
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<div className="flex-1 overflow-y-auto p-4">
{imageData && (
<div className="flex justify-center">
<Image {...imageData} alt="Generated image" className="h-[300px] aspect-square border rounded-lg" />
</div>
)}
{isLoading && <Loader />}
</div>
<Input onSubmit={handleSubmit} className="mt-4 w-full max-w-2xl mx-auto relative">
<PromptInputTextarea
value={prompt}
placeholder="Describe the image you want to generate..."
onChange={(e) => setPrompt(e.currentTarget.value)}
className="pr-12"
/>
<PromptInputSubmit status={isLoading ? "submitted" : "ready"} disabled={!prompt.trim()} className="absolute bottom-1 right-1" />
</Input>
</div>
</div>
);
};
export default ImageDemo;
```
Add the following route to your backend:
```ts title="app/api/image/route.ts"
import { openai } from "@ai-sdk/openai";
import { experimental_generateImage } from "ai";

export async function POST(req: Request) {
  const { prompt }: { prompt: string } = await req.json();

  const { image } = await experimental_generateImage({
    model: openai.image("dall-e-3"),
    prompt: prompt,
    size: "1024x1024",
  });

  return Response.json({
    base64: image.base64,
    uint8Array: image.uint8Array,
    mediaType: image.mediaType,
  });
}
```
## Features
- Accepts `Experimental_GeneratedImage` objects directly from the AI SDK
- Automatically creates proper data URLs from base64-encoded image data
- Supports all standard HTML image attributes
- Responsive by default with `max-w-full h-auto` styling
- Customizable with additional CSS classes
- Includes proper TypeScript types for AI SDK compatibility
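The data-URL construction is simple enough to sketch directly (an illustration of the behavior, not the component's source):

```typescript
// Turn base64 image data plus a media type into an <img> src value.
function toDataUrl(mediaType: string, base64: string): string {
  return `data:${mediaType};base64,${base64}`;
}

toDataUrl("image/png", "iVBORw0KGgo="); // "data:image/png;base64,iVBORw0KGgo="
```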
## Props
### `<Image />`
<TypeTable
type={{
alt: {
description: 'Alternative text for the image.',
type: 'string',
},
className: {
description: 'Additional CSS classes to apply to the image.',
type: 'string',
},
'...props': {
description: 'The image data to display, as returned by the AI SDK.',
type: 'Experimental_GeneratedImage',
},
}}
/>

# Inline Citation
URL: /components/inline-citation
---
title: Inline Citation
description: A hoverable citation component that displays source information and quotes inline with text, perfect for AI-generated content with references.
path: elements/components/inline-citation
---
The `InlineCitation` component provides a way to display citations inline with text content, similar to academic papers or research documents. It consists of a citation pill that shows detailed source information on hover, making it perfect for AI-generated content that needs to reference sources.
<Preview path="inline-citation" />
## Installation
<ElementsInstaller path="inline-citation" />
## Usage
```tsx
import {
  InlineCitation,
  InlineCitationCard,
  InlineCitationCardBody,
  InlineCitationCardTrigger,
  InlineCitationCarousel,
  InlineCitationCarouselContent,
  InlineCitationCarouselItem,
  InlineCitationCarouselHeader,
  InlineCitationCarouselIndex,
  InlineCitationSource,
  InlineCitationText,
} from "@/components/ai-elements/inline-citation";
```
```tsx
<InlineCitation>
  <InlineCitationText>{citation.text}</InlineCitationText>
  <InlineCitationCard>
    <InlineCitationCardTrigger sources={citation.sources.map((source) => source.url)} />
    <InlineCitationCardBody>
      <InlineCitationCarousel>
        <InlineCitationCarouselHeader>
          <InlineCitationCarouselIndex />
        </InlineCitationCarouselHeader>
        <InlineCitationCarouselContent>
          <InlineCitationCarouselItem>
            <InlineCitationSource title="AI SDK" url="https://ai-sdk.dev" description="The AI Toolkit for TypeScript" />
          </InlineCitationCarouselItem>
        </InlineCitationCarouselContent>
      </InlineCitationCarousel>
    </InlineCitationCardBody>
  </InlineCitationCard>
</InlineCitation>
```
## Usage with AI SDK
Build citations for AI-generated content using the [`experimental_useObject`](/docs/reference/ai-sdk-ui/use-object) hook.
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { experimental_useObject as useObject } from "@ai-sdk/react";
import {
InlineCitation,
InlineCitationText,
InlineCitationCard,
InlineCitationCardTrigger,
InlineCitationCardBody,
InlineCitationCarousel,
InlineCitationCarouselContent,
InlineCitationCarouselItem,
InlineCitationCarouselHeader,
InlineCitationCarouselIndex,
InlineCitationCarouselPrev,
InlineCitationCarouselNext,
InlineCitationSource,
InlineCitationQuote,
} from "@/components/ai-elements/inline-citation";
import { Button } from "@/components/ui/button";
import { citationSchema } from "@/app/api/citation/route";
const CitationDemo = () => {
const { object, submit, isLoading } = useObject({
api: "/api/citation",
schema: citationSchema,
});
const handleSubmit = (topic: string) => {
submit({ prompt: topic });
};
return (
<div className="max-w-4xl mx-auto p-6 space-y-6">
<div className="flex gap-2 mb-6">
<Button onClick={() => handleSubmit("artificial intelligence")} disabled={isLoading} variant="outline">
Generate AI Content
</Button>
<Button onClick={() => handleSubmit("climate change")} disabled={isLoading} variant="outline">
Generate Climate Content
</Button>
</div>
{isLoading && !object && <div className="text-muted-foreground">Generating content with citations...</div>}
{object?.content && (
<div className="prose prose-sm max-w-none">
<p className="leading-relaxed">
{object.content.split(/(\[\d+\])/).map((part, index) => {
const citationMatch = part.match(/\[(\d+)\]/);
if (citationMatch) {
const citationNumber = citationMatch[1];
const citation = object.citations?.find((c: any) => c.number === citationNumber);
if (citation) {
return (
<InlineCitation key={index}>
<InlineCitationCard>
<InlineCitationCardTrigger sources={[citation.url]} />
<InlineCitationCardBody>
<InlineCitationCarousel>
<InlineCitationCarouselHeader>
<InlineCitationCarouselPrev />
<InlineCitationCarouselNext />
<InlineCitationCarouselIndex />
</InlineCitationCarouselHeader>
<InlineCitationCarouselContent>
<InlineCitationCarouselItem>
<InlineCitationSource
title={citation.title}
url={citation.url}
description={citation.description}
/>
{citation.quote && <InlineCitationQuote>{citation.quote}</InlineCitationQuote>}
</InlineCitationCarouselItem>
</InlineCitationCarouselContent>
</InlineCitationCarousel>
</InlineCitationCardBody>
</InlineCitationCard>
</InlineCitation>
);
}
}
return part;
})}
</p>
</div>
)}
</div>
);
};
export default CitationDemo;
```
Add the following route to your backend:
```ts title="app/api/citation/route.ts"
import { streamObject } from "ai";
import { z } from "zod";
export const citationSchema = z.object({
content: z.string(),
citations: z.array(
z.object({
number: z.string(),
title: z.string(),
url: z.string(),
description: z.string().optional(),
quote: z.string().optional(),
})
),
});
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { prompt } = await req.json();
const result = streamObject({
model: "openai/gpt-4o",
schema: citationSchema,
prompt: `Generate a well-researched paragraph about ${prompt} with proper citations.
Include:
- A comprehensive paragraph with inline citations marked as [1], [2], etc.
- 2-3 citations with realistic source information
- Each citation should have a title, URL, and optional description/quote
- Make the content informative and the sources credible
Format citations as numbered references within the text.`,
});
return result.toTextStreamResponse();
}
```
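The demo splits the generated text on numbered markers such as `[1]` before matching each marker to a citation. That parsing step can be sketched in isolation (the helper names here are illustrative, not part of the component):

```typescript
// Split AI-generated text on numbered citation markers like "[1]".
// Because the regex group is parenthesized, String.prototype.split
// keeps the markers as their own entries in the result.
const splitCitations = (content: string): string[] =>
  content.split(/(\[\d+\])/).filter(Boolean);

// Extract the citation number from a marker, or return null for plain text.
const citationNumber = (part: string): string | null => {
  const match = part.match(/^\[(\d+)\]$/);
  return match ? match[1] : null;
};

const parts = splitCitations("AI adoption is accelerating [1] across industries [2].");
// parts: ["AI adoption is accelerating ", "[1]", " across industries ", "[2]", "."]
```

Each non-marker entry is rendered as plain text, while each marker entry is replaced with an `InlineCitation` wrapping the matching source.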
## Features
- Hover interaction to reveal detailed citation information
- **Carousel navigation** for multiple citations with prev/next controls
- **Live index tracking** showing current slide position (e.g., "1/5")
- Support for source titles, URLs, and descriptions
- Optional quote blocks for relevant excerpts
- Composable architecture for flexible citation formats
- Accessible design with proper keyboard navigation
- Seamless integration with AI-generated content
- Clean visual design that doesn't disrupt reading flow
- Smart badge display showing source hostname and count
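The trigger badge described above derives its label from the `sources` array passed to `InlineCitationCardTrigger`. A minimal sketch of one possible hostname-plus-count format (the exact format the component renders may differ):

```typescript
// Derive a badge label from source URLs: hostname of the first source,
// plus a "+N" suffix when there are additional sources.
const badgeLabel = (sources: string[]): string => {
  if (sources.length === 0) return "unknown";
  const host = new URL(sources[0]).hostname;
  return sources.length > 1 ? `${host} +${sources.length - 1}` : host;
};
```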
## Props
### `<InlineCitation />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the root span element.',
type: 'React.ComponentProps<"span">',
},
}}
/>
### `<InlineCitationText />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying span element.',
type: 'React.ComponentProps<"span">',
},
}}
/>
### `<InlineCitationCard />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the HoverCard component.',
type: 'React.ComponentProps<typeof HoverCard>',
},
}}
/>
### `<InlineCitationCardTrigger />`
<TypeTable
type={{
sources: {
description: 'Array of source URLs. The length determines the number displayed in the badge.',
type: 'string[]',
},
'...props': {
description: 'Any other props are spread to the underlying button element.',
type: 'React.ComponentProps<"button">',
},
}}
/>
### `<InlineCitationCardBody />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying div.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<InlineCitationCarousel />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying Carousel component.',
type: 'React.ComponentProps<typeof Carousel>',
},
}}
/>
### `<InlineCitationCarouselContent />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying CarouselContent component.',
type: 'React.ComponentProps<typeof CarouselContent>',
},
}}
/>
### `<InlineCitationCarouselItem />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying div.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<InlineCitationCarouselHeader />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying div.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<InlineCitationCarouselIndex />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying div. Children will override the default index display.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<InlineCitationCarouselPrev />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying CarouselPrevious component.',
type: 'React.ComponentProps<typeof CarouselPrevious>',
},
}}
/>
### `<InlineCitationCarouselNext />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying CarouselNext component.',
type: 'React.ComponentProps<typeof CarouselNext>',
},
}}
/>
### `<InlineCitationSource />`
<TypeTable
type={{
title: {
description: 'The title of the source.',
type: 'string',
},
url: {
description: 'The URL of the source.',
type: 'string',
},
description: {
description: 'A brief description of the source.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the underlying div.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<InlineCitationQuote />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying blockquote element.',
type: 'React.ComponentProps<"blockquote">',
},
}}
/>

# Loader
URL: /components/loader
---
title: Loader
description: A spinning loader component for indicating loading states in AI applications.
path: elements/components/loader
---
The `Loader` component provides a spinning animation to indicate loading states in your AI applications. It includes both a customizable wrapper component and the underlying icon for flexible usage.
<Preview path="loader" />
## Installation
<ElementsInstaller path="loader" />
## Usage
```tsx
import { Loader } from "@/components/ai-elements/loader";
```
```tsx
<Loader />
```
## Usage with AI SDK
Build a simple chat app that displays a loader before the response starts streaming by using `status === "submitted"`.
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { Conversation, ConversationContent, ConversationScrollButton } from "@/components/ai-elements/conversation";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { PromptInput as Input, PromptInputTextarea, PromptInputSubmit, type PromptInputMessage } from "@/components/ai-elements/prompt-input";
import { Loader } from "@/components/ai-elements/loader";
import { useState } from "react";
import { useChat } from "@ai-sdk/react";
const LoaderDemo = () => {
const [input, setInput] = useState("");
const { messages, sendMessage, status } = useChat();
const handleSubmit = (message: PromptInputMessage) => {
if (message.text?.trim()) {
sendMessage({ text: message.text });
setInput("");
}
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<Conversation>
<ConversationContent>
{messages.map((message) => (
<Message from={message.role} key={message.id}>
<MessageContent>
{message.parts.map((part, i) => {
switch (part.type) {
case "text":
return <div key={`${message.id}-${i}`}>{part.text}</div>;
default:
return null;
}
})}
</MessageContent>
</Message>
))}
{status === "submitted" && <Loader />}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
<Input onSubmit={handleSubmit} className="mt-4 w-full max-w-2xl mx-auto relative">
<PromptInputTextarea
value={input}
placeholder="Say something..."
onChange={(e) => setInput(e.currentTarget.value)}
className="pr-12"
/>
<PromptInputSubmit
status={status === "streaming" ? "streaming" : "ready"}
disabled={!input.trim()}
className="absolute bottom-1 right-1"
/>
</Input>
</div>
</div>
);
};
export default LoaderDemo;
```
Add the following route to your backend:
```ts title="app/api/chat/route.ts"
import { streamText, UIMessage, convertToModelMessages } from "ai";
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: "openai/gpt-4o",
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
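The visibility rule in the demo reduces to a pure function of the chat status reported by `useChat`: show the loader only after submission and before the first streamed token arrives. A sketch:

```typescript
// Chat lifecycle states reported by useChat (per the AI SDK).
type ChatStatus = "ready" | "submitted" | "streaming" | "error";

// Show the loader only between submission and the first streamed token:
// once status flips to "streaming", the message content takes over.
const shouldShowLoader = (status: ChatStatus): boolean =>
  status === "submitted";
```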
## Features
- Clean, modern spinning animation using CSS animations
- Configurable size with the `size` prop
- Customizable styling with CSS classes
- Built-in `animate-spin` animation with proper centering
- Exports both `AILoader` wrapper and `LoaderIcon` for flexible usage
- Supports all standard HTML div attributes
- TypeScript support with proper type definitions
- Optimized SVG icon with multiple opacity levels for smooth animation
- Uses `currentColor` for proper theme integration
- Responsive and accessible design
## Examples
### Different Sizes
<Preview path="loader-sizes" />
### Custom Styling
<Preview path="loader-custom" />
## Props
### `<Loader />`
<TypeTable
type={{
size: {
description: 'The size (width and height) of the loader in pixels.',
type: 'number',
default: '16',
},
'...props': {
description: 'Any other props are spread to the root div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>

# Message
URL: /components/message
---
title: Message
description: Displays a chat interface message from either a user or an AI.
path: elements/components/message
---
The `Message` component displays a chat interface message from either a user or an AI. It includes an avatar, a name, and a message content.
<Preview path="message" />
## Installation
<ElementsInstaller path="message" />
## Usage
```tsx
import { Message, MessageContent } from "@/components/ai-elements/message";
```
```tsx
// Default contained variant
<Message from="user">
<MessageContent>Hi there!</MessageContent>
</Message>
// Flat variant for a minimalist look
<Message from="assistant">
<MessageContent variant="flat">Hello! How can I help you today?</MessageContent>
</Message>
```
## Usage with AI SDK
Render messages in a list with `useChat`.
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { useChat } from "@ai-sdk/react";
import { Response } from "@/components/ai-elements/response";
const MessageDemo = () => {
const { messages } = useChat();
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
{messages.map((message) => (
<Message from={message.role} key={message.id}>
<MessageContent>
{message.parts.map((part, i) => {
switch (part.type) {
case "text": // we don't use any reasoning or tool calls in this example
return <Response key={`${message.id}-${i}`}>{part.text}</Response>;
default:
return null;
}
})}
</MessageContent>
</Message>
))}
</div>
</div>
);
};
export default MessageDemo;
```
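The `switch` on `part.type` above can be factored into a small helper. A sketch, using a simplified stand-in for the AI SDK's message-part union:

```typescript
// Minimal stand-in for the AI SDK's message part union (assumed shape).
type MessagePart =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string };

// Collect only the text parts of a message, in order, skipping
// reasoning, tool calls, and any other part types.
const textParts = (parts: MessagePart[]): string[] =>
  parts
    .filter((p): p is Extract<MessagePart, { type: "text" }> => p.type === "text")
    .map((p) => p.text);

const rendered = textParts([
  { type: "reasoning", text: "thinking..." },
  { type: "text", text: "Hello!" },
  { type: "text", text: "How can I help?" },
]);
```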
## Features
- Displays messages from both the user and AI assistant with distinct styling.
- Two visual variants: **contained** (default) and **flat** for different design preferences.
- Includes avatar images for message senders with fallback initials.
- Shows the sender's name through avatar fallbacks.
- Automatically aligns user and assistant messages on opposite sides.
- Uses different background colors for user and assistant messages.
- Accepts any React node as message content.
## Variants
### Contained (default)
The **contained** variant provides distinct visual separation with colored backgrounds:
- User messages appear with primary background color and are right-aligned
- Assistant messages have secondary background color and are left-aligned
- Both message types have padding and rounded corners
### Flat
The **flat** variant offers a minimalist design that matches modern AI interfaces like ChatGPT and Gemini:
- User messages use softer secondary colors with subtle borders
- Assistant messages display full-width without background or padding
- Creates a cleaner, more streamlined conversation appearance
## Notes
Always render the `MessageContent` first, then the `MessageAvatar`. The `Message` component is a wrapper that determines the alignment of the message.
## Examples
### Render Markdown
We can use the [`Response`](/elements/components/response) component to render markdown content.
<Preview path="message-markdown" />
### Flat Variant
The flat variant provides a minimalist design that matches modern AI interfaces.
<Preview path="message-flat" />
## Props
### `<Message />`
<TypeTable
type={{
from: {
description:
'The role of the message sender ("user", "assistant", or "system").',
type: 'UIMessage["role"]',
},
'...props': {
description: 'Any other props are spread to the root div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<MessageContent />`
<TypeTable
type={{
variant: {
description: 'Visual style variant. "contained" (default) shows colored backgrounds, "flat" provides a minimalist design.',
type: '"contained" | "flat"',
default: '"contained"',
},
'...props': {
description: 'Any other props are spread to the content div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<MessageAvatar />`
<TypeTable
type={{
src: {
description: 'The URL of the avatar image.',
type: 'string',
},
name: {
description:
'The name to use for the avatar fallback (first 2 letters shown if image is missing).',
type: 'string',
},
'...props': {
description:
'Any other props are spread to the underlying Avatar component.',
type: 'React.ComponentProps<typeof Avatar>',
},
}}
/>

# Node
URL: /components/node
---
title: Node
description: A composable node component for React Flow-based canvases with Card-based styling.
path: elements/components/node
---
The `Node` component provides a composable, Card-based node for React Flow canvases. It includes support for connection handles, structured layouts, and consistent styling using shadcn/ui components.
<Callout>
The Node component is designed to be used with the [Canvas](/elements/components/canvas) component. See the [Workflow](/elements/examples/workflow) demo for a full example.
</Callout>
## Installation
<ElementsInstaller path="node" />
## Usage
```tsx
import { Node, NodeHeader, NodeTitle, NodeDescription, NodeAction, NodeContent, NodeFooter } from "@/components/ai-elements/node";
```
```tsx
<Node handles={{ target: true, source: true }}>
<NodeHeader>
<NodeTitle>Node Title</NodeTitle>
<NodeDescription>Optional description</NodeDescription>
<NodeAction>
<Button>Action</Button>
</NodeAction>
</NodeHeader>
<NodeContent>Main content goes here</NodeContent>
<NodeFooter>Footer content</NodeFooter>
</Node>
```
## Features
- Built on shadcn/ui Card components for consistent styling
- Automatic handle placement (left for target, right for source)
- Composable sub-components (Header, Title, Description, Action, Content, Footer)
- Semantic structure for organizing node information
- Pre-styled sections with borders and backgrounds
- Responsive sizing with fixed small width
- Full TypeScript support with proper type definitions
- Compatible with React Flow's node system
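The handle-placement rule above can be expressed as a small pure function; the shapes here are illustrative, not React Flow's actual `Handle` API:

```typescript
// Configuration accepted by the Node component's `handles` prop.
type HandleConfig = { target: boolean; source: boolean };
type Placement = { type: "target" | "source"; position: "left" | "right" };

// Targets (incoming edges) attach on the left edge,
// sources (outgoing edges) on the right.
const placeHandles = (handles: HandleConfig): Placement[] => {
  const placements: Placement[] = [];
  if (handles.target) placements.push({ type: "target", position: "left" });
  if (handles.source) placements.push({ type: "source", position: "right" });
  return placements;
};
```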
## Props
### `<Node />`
<TypeTable
type={{
handles: {
description: 'Configuration for connection handles. Target renders on the left, source on the right.',
type: '{ target: boolean; source: boolean; }',
},
className: {
description: 'Additional CSS classes to apply to the node.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the underlying Card component.',
type: 'ComponentProps<typeof Card>',
},
}}
/>
### `<NodeHeader />`
<TypeTable
type={{
className: {
description: 'Additional CSS classes to apply to the header.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the underlying CardHeader component.',
type: 'ComponentProps<typeof CardHeader>',
},
}}
/>
### `<NodeTitle />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying CardTitle component.',
type: 'ComponentProps<typeof CardTitle>',
},
}}
/>
### `<NodeDescription />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying CardDescription component.',
type: 'ComponentProps<typeof CardDescription>',
},
}}
/>
### `<NodeAction />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying CardAction component.',
type: 'ComponentProps<typeof CardAction>',
},
}}
/>
### `<NodeContent />`
<TypeTable
type={{
className: {
description: 'Additional CSS classes to apply to the content.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the underlying CardContent component.',
type: 'ComponentProps<typeof CardContent>',
},
}}
/>
### `<NodeFooter />`
<TypeTable
type={{
className: {
description: 'Additional CSS classes to apply to the footer.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the underlying CardFooter component.',
type: 'ComponentProps<typeof CardFooter>',
},
}}
/>

# Open In Chat
URL: /components/open-in-chat
---
title: Open In Chat
description: A dropdown menu for opening queries in various AI chat platforms including ChatGPT, Claude, T3, Scira, and v0.
path: elements/components/open-in-chat
---
The `OpenIn` component provides a dropdown menu that allows users to open queries in different AI chat platforms with a single click.
<Preview path="open-in-chat" />
## Installation
<ElementsInstaller path="open-in-chat" />
## Usage
```tsx
import {
OpenIn,
OpenInChatGPT,
OpenInClaude,
OpenInContent,
OpenInCursor,
OpenInScira,
OpenInT3,
OpenInTrigger,
OpenInv0,
} from "@/components/ai-elements/open-in-chat";
```
```tsx
<OpenIn query="How can I implement authentication in Next.js?">
<OpenInTrigger />
<OpenInContent>
<OpenInChatGPT />
<OpenInClaude />
<OpenInT3 />
<OpenInScira />
<OpenInv0 />
<OpenInCursor />
</OpenInContent>
</OpenIn>
```
## Features
- Pre-configured links to popular AI chat platforms
- Context-based query passing for cleaner API
- Customizable dropdown trigger button
- Automatic URL parameter encoding for queries
- Support for ChatGPT, Claude, T3 Chat, Scira AI, v0, and Cursor
- Branded icons for each platform
- TypeScript support with proper type definitions
- Accessible dropdown menu with keyboard navigation
- External link indicators for clarity
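Under the hood, each platform item builds a link from the shared query; the URL patterns below are hypothetical stand-ins (the component ships its own endpoints), but they illustrate the automatic `encodeURIComponent` step:

```typescript
// Hypothetical query-URL builders for two platforms, for illustration only.
const platforms: Record<string, (q: string) => string> = {
  chatgpt: (q) => `https://chatgpt.com/?q=${encodeURIComponent(q)}`,
  claude: (q) => `https://claude.ai/new?q=${encodeURIComponent(q)}`,
};

const url = platforms.chatgpt("How can I implement authentication in Next.js?");
```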
## Supported Platforms
- **ChatGPT** - Opens query in OpenAI's ChatGPT with search hints
- **Claude** - Opens query in Anthropic's Claude AI
- **T3 Chat** - Opens query in T3 Chat platform
- **Scira AI** - Opens query in Scira's AI assistant
- **v0** - Opens query in Vercel's v0 platform
- **Cursor** - Opens query in Cursor AI editor
## Props
### `<OpenIn />`
<TypeTable
type={{
query: {
description: 'The query text to be sent to all AI platforms.',
type: 'string',
},
'...props': {
description: 'Props to spread to the underlying radix-ui DropdownMenu component.',
type: 'React.ComponentProps<typeof DropdownMenu>',
},
}}
/>
### `<OpenInTrigger />`
<TypeTable
type={{
children: {
description: 'Custom trigger button.',
type: 'React.ReactNode',
default: '"Open in chat" button with chevron icon',
},
'...props': {
description: 'Props to spread to the underlying DropdownMenuTrigger component.',
type: 'React.ComponentProps<typeof DropdownMenuTrigger>',
},
}}
/>
### `<OpenInContent />`
<TypeTable
type={{
className: {
description: 'Additional CSS classes to apply to the dropdown content.',
type: 'string',
},
'...props': {
description: 'Props to spread to the underlying DropdownMenuContent component.',
type: 'React.ComponentProps<typeof DropdownMenuContent>',
},
}}
/>
### `<OpenInChatGPT />`, `<OpenInClaude />`, `<OpenInT3 />`, `<OpenInScira />`, `<OpenInv0 />`, `<OpenInCursor />`
<TypeTable
type={{
'...props': {
description: 'Props to spread to the underlying DropdownMenuItem component. The query is automatically provided via context from the parent OpenIn component.',
type: 'React.ComponentProps<typeof DropdownMenuItem>',
},
}}
/>
### `<OpenInItem />`, `<OpenInLabel />`, `<OpenInSeparator />`
Additional composable components for custom dropdown menu items, labels, and separators that follow the same props pattern as their underlying radix-ui counterparts.

# Panel
URL: /components/panel
---
title: Panel
description: A styled panel component for React Flow-based canvases to position custom UI elements.
path: elements/components/panel
---
The `Panel` component provides a positioned container for custom UI elements on React Flow canvases. It includes modern card styling with backdrop blur and flexible positioning options.
<Callout>
The Panel component is designed to be used with the [Canvas](/elements/components/canvas) component. See the [Workflow](/elements/examples/workflow) demo for a full example.
</Callout>
## Installation
<ElementsInstaller path="panel" />
## Usage
```tsx
import { Panel } from "@/components/ai-elements/panel";
```
```tsx
<ReactFlow>
<Panel position="top-left">
<Button>Custom Action</Button>
</Panel>
</ReactFlow>
```
## Features
- Flexible positioning (top-left, top-right, bottom-left, bottom-right, top-center, bottom-center)
- Rounded pill design with backdrop blur
- Theme-aware card background
- Flexbox layout for easy content alignment
- Subtle drop shadow for depth
- Full TypeScript support
- Compatible with React Flow's panel system
## Props
### `<Panel />`
<TypeTable
type={{
position: {
description: 'Position of the panel on the canvas.',
type: "'top-left' | 'top-center' | 'top-right' | 'bottom-left' | 'bottom-center' | 'bottom-right'",
},
className: {
description: 'Additional CSS classes to apply to the panel.',
type: 'string',
},
'...props': {
description: 'Any other props from @xyflow/react Panel component.',
type: 'ComponentProps<typeof Panel>',
},
}}
/>

# Plan
URL: /components/plan
---
title: Plan
description: A collapsible plan component for displaying AI-generated execution plans with streaming support and shimmer animations.
path: elements/components/plan
---
The `Plan` component provides a flexible system for displaying AI-generated execution plans with collapsible content. Perfect for showing multi-step workflows, task breakdowns, and implementation strategies with support for streaming content and loading states.
<Preview path="plan" />
## Installation
<ElementsInstaller path="plan" />
## Usage
```tsx
import { Plan, PlanAction, PlanContent, PlanDescription, PlanFooter, PlanHeader, PlanTitle, PlanTrigger } from "@/components/ai-elements/plan";
```
```tsx
<Plan defaultOpen={false}>
<PlanHeader>
<div>
<PlanTitle>Implement new feature</PlanTitle>
<PlanDescription>Add authentication system with JWT tokens and refresh logic.</PlanDescription>
</div>
<PlanTrigger />
</PlanHeader>
<PlanContent>
<div className="space-y-4 text-sm">
<div>
<h3 className="mb-2 font-semibold">Overview</h3>
<p>This plan outlines the implementation strategy...</p>
</div>
</div>
</PlanContent>
<PlanFooter>
<Button>Execute Plan</Button>
</PlanFooter>
</Plan>
```
## Features
- Collapsible content with smooth animations
- Streaming support with shimmer loading states
- Built on shadcn/ui Card and Collapsible components
- TypeScript support with comprehensive type definitions
- Customizable styling with Tailwind CSS
- Responsive design with mobile-friendly interactions
- Keyboard navigation and accessibility support
- Theme-aware with automatic dark mode support
- Context-based state management for streaming
## Props
### `<Plan />`
<TypeTable
type={{
isStreaming: {
description: 'Whether content is currently streaming. Enables shimmer animations on title and description.',
type: 'boolean',
default: 'false',
},
defaultOpen: {
description: 'Whether the plan is expanded by default.',
type: 'boolean',
},
'...props': {
description: 'Any other props are spread to the Collapsible component.',
type: 'React.ComponentProps<typeof Collapsible>',
},
}}
/>
### `<PlanHeader />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CardHeader component.',
type: 'React.ComponentProps<typeof CardHeader>',
},
}}
/>
### `<PlanTitle />`
<TypeTable
type={{
children: {
description: 'The title text. Displays with shimmer animation when isStreaming is true.',
type: 'string',
},
'...props': {
description: 'Any other props (except children) are spread to the CardTitle component.',
type: 'Omit<React.ComponentProps<typeof CardTitle>, "children">',
},
}}
/>
### `<PlanDescription />`
<TypeTable
type={{
children: {
description: 'The description text. Displays with shimmer animation when isStreaming is true.',
type: 'string',
},
'...props': {
description: 'Any other props (except children) are spread to the CardDescription component.',
type: 'Omit<React.ComponentProps<typeof CardDescription>, "children">',
},
}}
/>
### `<PlanTrigger />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CollapsibleTrigger component. Renders as a Button with chevron icon.',
type: 'React.ComponentProps<typeof CollapsibleTrigger>',
},
}}
/>
### `<PlanContent />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CardContent component.',
type: 'React.ComponentProps<typeof CardContent>',
},
}}
/>
### `<PlanFooter />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the div element.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<PlanAction />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CardAction component.',
type: 'React.ComponentProps<typeof CardAction>',
},
}}
/>

# Prompt Input
URL: /components/prompt-input
---
title: Prompt Input
description: Allows a user to send a message with file attachments to a large language model. It includes a textarea, file upload capabilities, a submit button, and a dropdown for selecting the model.
path: elements/components/prompt-input
---
The `PromptInput` component allows a user to send a message with file attachments to a large language model. It includes a textarea, file upload capabilities, a submit button, and a dropdown for selecting the model.
<Preview path="prompt-input" />
## Installation
<ElementsInstaller path="prompt-input" />
## Usage
```tsx
import {
PromptInput,
PromptInputActionAddAttachments,
PromptInputActionMenu,
PromptInputActionMenuContent,
PromptInputActionMenuItem,
PromptInputActionMenuTrigger,
PromptInputAttachment,
PromptInputAttachments,
PromptInputBody,
PromptInputButton,
PromptInputFooter,
PromptInputHeader,
PromptInputModelSelect,
PromptInputModelSelectContent,
PromptInputModelSelectItem,
PromptInputModelSelectTrigger,
PromptInputModelSelectValue,
PromptInputProvider,
PromptInputSpeechButton,
PromptInputSubmit,
PromptInputTextarea,
PromptInputTools,
usePromptInputAttachments,
} from "@/components/ai-elements/prompt-input";
```
```tsx
import { GlobeIcon } from "lucide-react";
<PromptInput onSubmit={() => {}} className="mt-4 relative">
<PromptInputHeader>
<PromptInputAttachments>{(attachment) => <PromptInputAttachment data={attachment} />}</PromptInputAttachments>
</PromptInputHeader>
<PromptInputBody>
<PromptInputTextarea onChange={(e) => {}} value={""} />
</PromptInputBody>
<PromptInputFooter>
<PromptInputTools>
<PromptInputActionMenu>
<PromptInputActionMenuTrigger />
<PromptInputActionMenuContent>
<PromptInputActionAddAttachments />
</PromptInputActionMenuContent>
</PromptInputActionMenu>
<PromptInputSpeechButton />
<PromptInputButton>
<GlobeIcon size={16} />
<span>Search</span>
</PromptInputButton>
<PromptInputModelSelect onValueChange={(value) => {}} value="gpt-4o">
<PromptInputModelSelectTrigger>
<PromptInputModelSelectValue />
</PromptInputModelSelectTrigger>
<PromptInputModelSelectContent>
<PromptInputModelSelectItem value="gpt-4o">GPT-4o</PromptInputModelSelectItem>
<PromptInputModelSelectItem value="claude-opus-4-20250514">Claude 4 Opus</PromptInputModelSelectItem>
</PromptInputModelSelectContent>
</PromptInputModelSelect>
</PromptInputTools>
<PromptInputSubmit disabled={false} status={"ready"} />
</PromptInputFooter>
</PromptInput>;
```
## Usage with AI SDK
Build a fully functional chat app using `PromptInput`, [`Conversation`](/elements/components/conversation) with a model picker:
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import {
PromptInput,
PromptInputActionAddAttachments,
PromptInputActionMenu,
PromptInputActionMenuContent,
PromptInputActionMenuTrigger,
PromptInputAttachment,
PromptInputAttachments,
PromptInputBody,
PromptInputButton,
PromptInputFooter,
PromptInputHeader,
type PromptInputMessage,
PromptInputModelSelect,
PromptInputModelSelectContent,
PromptInputModelSelectItem,
PromptInputModelSelectTrigger,
PromptInputModelSelectValue,
PromptInputSpeechButton,
PromptInputSubmit,
PromptInputTextarea,
PromptInputTools,
} from "@/components/ai-elements/prompt-input";
import { GlobeIcon } from "lucide-react";
import { useRef, useState } from "react";
import { useChat } from "@ai-sdk/react";
import { Conversation, ConversationContent, ConversationScrollButton } from "@/components/ai-elements/conversation";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { Response } from "@/components/ai-elements/response";
const models = [
{ id: "gpt-4o", name: "GPT-4o" },
{ id: "claude-opus-4-20250514", name: "Claude 4 Opus" },
];
const InputDemo = () => {
const [text, setText] = useState<string>("");
const [model, setModel] = useState<string>(models[0].id);
const [useWebSearch, setUseWebSearch] = useState<boolean>(false);
const textareaRef = useRef<HTMLTextAreaElement>(null);
const { messages, status, sendMessage } = useChat();
const handleSubmit = (message: PromptInputMessage) => {
const hasText = Boolean(message.text);
const hasAttachments = Boolean(message.files?.length);
if (!(hasText || hasAttachments)) {
return;
}
sendMessage(
{
text: message.text || "Sent with attachments",
files: message.files,
},
{
body: {
model: model,
webSearch: useWebSearch,
},
}
);
setText("");
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<Conversation>
<ConversationContent>
{messages.map((message) => (
<Message from={message.role} key={message.id}>
<MessageContent>
{message.parts.map((part, i) => {
switch (part.type) {
case "text":
return <Response key={`${message.id}-${i}`}>{part.text}</Response>;
default:
return null;
}
})}
</MessageContent>
</Message>
))}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
<PromptInput onSubmit={handleSubmit} className="mt-4" globalDrop multiple>
<PromptInputHeader>
<PromptInputAttachments>{(attachment) => <PromptInputAttachment data={attachment} />}</PromptInputAttachments>
</PromptInputHeader>
<PromptInputBody>
<PromptInputTextarea onChange={(e) => setText(e.target.value)} ref={textareaRef} value={text} />
</PromptInputBody>
<PromptInputFooter>
<PromptInputTools>
<PromptInputActionMenu>
<PromptInputActionMenuTrigger />
<PromptInputActionMenuContent>
<PromptInputActionAddAttachments />
</PromptInputActionMenuContent>
</PromptInputActionMenu>
<PromptInputSpeechButton onTranscriptionChange={setText} textareaRef={textareaRef} />
<PromptInputButton onClick={() => setUseWebSearch(!useWebSearch)} variant={useWebSearch ? "default" : "ghost"}>
<GlobeIcon size={16} />
<span>Search</span>
</PromptInputButton>
<PromptInputModelSelect
onValueChange={(value) => {
setModel(value);
}}
value={model}
>
<PromptInputModelSelectTrigger>
<PromptInputModelSelectValue />
</PromptInputModelSelectTrigger>
<PromptInputModelSelectContent>
{models.map((model) => (
<PromptInputModelSelectItem key={model.id} value={model.id}>
{model.name}
</PromptInputModelSelectItem>
))}
</PromptInputModelSelectContent>
</PromptInputModelSelect>
</PromptInputTools>
<PromptInputSubmit disabled={!text.trim() && status !== "streaming"} status={status} />
</PromptInputFooter>
</PromptInput>
</div>
</div>
);
};
export default InputDemo;
```
Add the following route to your backend:
```ts title="app/api/chat/route.ts"
import { streamText, UIMessage, convertToModelMessages } from "ai";
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const {
model,
messages,
webSearch,
}: {
messages: UIMessage[];
model: string;
webSearch?: boolean;
} = await req.json();
const result = streamText({
model: webSearch ? "perplexity/sonar" : model,
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
## Features
- Auto-resizing textarea that adjusts height based on content
- File attachment support with drag-and-drop
- Image preview for image attachments
- Configurable file constraints (max files, max size, accepted types)
- Automatic submit button icons based on status
- Support for keyboard shortcuts (Enter to submit, Shift+Enter for new line)
- Customizable min/max height for the textarea
- Flexible toolbar with support for custom actions and tools
- Built-in model selection dropdown
- Built-in native speech recognition button (Web Speech API)
- Optional provider for lifted state management
- Form automatically resets on submit
- Responsive design with mobile-friendly controls
- Clean, modern styling with customizable themes
- Form-based submission handling
- Hidden file input sync for native form posts
- Global document drop support (opt-in)
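The file constraints above (`maxFiles`, `maxFileSize`, `accept`) surface through `onError` with a machine-readable `code`. The following is a minimal sketch of that validation logic; the helper name and shape are assumptions for illustration, not the component's actual internals:

```typescript
type FileLike = { name: string; size: number; type: string };
type FileError = { code: "max_files" | "max_file_size" | "accept"; message: string };

// Validate a candidate file list against PromptInput-style constraints,
// producing the same error codes the onError prop documents.
function validateFiles(
  files: FileLike[],
  opts: { accept?: string; maxFiles?: number; maxFileSize?: number },
): FileError[] {
  const errors: FileError[] = [];
  if (opts.maxFiles !== undefined && files.length > opts.maxFiles) {
    errors.push({ code: "max_files", message: `At most ${opts.maxFiles} files allowed.` });
  }
  for (const file of files) {
    if (opts.maxFileSize !== undefined && file.size > opts.maxFileSize) {
      errors.push({ code: "max_file_size", message: `${file.name} exceeds ${opts.maxFileSize} bytes.` });
    }
    // Support simple wildcards like "image/*" as well as exact MIME types.
    const accepted =
      !opts.accept ||
      opts.accept.split(",").some((pattern) => {
        const p = pattern.trim();
        return p.endsWith("/*") ? file.type.startsWith(p.slice(0, -1)) : file.type === p;
      });
    if (!accepted) {
      errors.push({ code: "accept", message: `${file.name} is not an accepted type.` });
    }
  }
  return errors;
}
```

Wiring this into `onError` lets you show a toast or inline message keyed on `code` rather than parsing the message string.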
## Examples
### Cursor style
<Preview path="prompt-input-cursor" />
## Props
### `<PromptInput />`
<TypeTable
type={{
onSubmit: {
description: 'Handler called when the form is submitted with message text and files.',
type: '(message: PromptInputMessage, event: FormEvent) => void',
},
accept: {
description: 'File types to accept (e.g., "image/*"). Leave undefined for any.',
type: 'string',
},
multiple: {
description: 'Whether to allow multiple file selection.',
type: 'boolean',
},
globalDrop: {
description: 'When true, accepts file drops anywhere on the document.',
type: 'boolean',
},
syncHiddenInput: {
description: 'Render a hidden input with given name for native form posts.',
type: 'boolean',
},
maxFiles: {
description: 'Maximum number of files allowed.',
type: 'number',
},
maxFileSize: {
description: 'Maximum file size in bytes.',
type: 'number',
},
onError: {
description: 'Handler for file validation errors.',
type: '(err: { code: "max_files" | "max_file_size" | "accept", message: string }) => void',
},
'...props': {
description: 'Any other props are spread to the root form element.',
type: 'React.HTMLAttributes<HTMLFormElement>',
},
}}
/>
### `<PromptInputTextarea />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying Textarea component.',
type: 'React.ComponentProps<typeof Textarea>',
},
}}
/>
### `<PromptInputFooter />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the toolbar div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<PromptInputTools />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the tools div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<PromptInputButton />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>
### `<PromptInputSubmit />`
<TypeTable
type={{
status: {
description: 'Current chat status to determine button icon (submitted, streaming, error).',
type: 'ChatStatus',
},
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>
### `<PromptInputModelSelect />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying Select component.',
type: 'React.ComponentProps<typeof Select>',
},
}}
/>
### `<PromptInputModelSelectTrigger />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying SelectTrigger component.',
type: 'React.ComponentProps<typeof SelectTrigger>',
},
}}
/>
### `<PromptInputModelSelectContent />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying SelectContent component.',
type: 'React.ComponentProps<typeof SelectContent>',
},
}}
/>
### `<PromptInputModelSelectItem />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying SelectItem component.',
type: 'React.ComponentProps<typeof SelectItem>',
},
}}
/>
### `<PromptInputModelSelectValue />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying SelectValue component.',
type: 'React.ComponentProps<typeof SelectValue>',
},
}}
/>
### `<PromptInputBody />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the body div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<PromptInputAttachments />`
<TypeTable
type={{
children: {
description: 'Render function for each attachment.',
type: '(attachment: FileUIPart & { id: string }) => React.ReactNode',
},
'...props': {
description: 'Any other props are spread to the attachments container.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<PromptInputAttachment />`
<TypeTable
type={{
data: {
description: 'The attachment data to display.',
type: 'FileUIPart & { id: string }',
},
'...props': {
description: 'Any other props are spread to the attachment div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<PromptInputActionMenu />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying DropdownMenu component.',
type: 'React.ComponentProps<typeof DropdownMenu>',
},
}}
/>
### `<PromptInputActionMenuTrigger />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying Button component.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>
### `<PromptInputActionMenuContent />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying DropdownMenuContent component.',
type: 'React.ComponentProps<typeof DropdownMenuContent>',
},
}}
/>
### `<PromptInputActionMenuItem />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying DropdownMenuItem component.',
type: 'React.ComponentProps<typeof DropdownMenuItem>',
},
}}
/>
### `<PromptInputActionAddAttachments />`
<TypeTable
type={{
label: {
description: 'Label for the menu item.',
type: 'string',
default: '"Add photos or files"',
},
'...props': {
description: 'Any other props are spread to the underlying DropdownMenuItem component.',
type: 'React.ComponentProps<typeof DropdownMenuItem>',
},
}}
/>
### `<PromptInputProvider />`
<TypeTable
type={{
initialInput: {
description: 'Initial text input value.',
type: 'string',
},
children: {
description: 'Child components that will have access to the provider context.',
type: 'React.ReactNode',
},
}}
/>
Optional global provider that lifts PromptInput state out of the component. When used, it lets you read and control the input state from anywhere within the provider tree. If not used, PromptInput stays fully self-managed.
### `<PromptInputSpeechButton />`
<TypeTable
type={{
textareaRef: {
description: 'Reference to the textarea element to insert transcribed text.',
type: 'RefObject<HTMLTextAreaElement | null>',
},
onTranscriptionChange: {
description: 'Callback fired when transcription text changes.',
type: '(text: string) => void',
},
'...props': {
description: 'Any other props are spread to the underlying PromptInputButton component.',
type: 'React.ComponentProps<typeof PromptInputButton>',
},
}}
/>
Built-in button component that provides native speech recognition using the Web Speech API. The button will be disabled if speech recognition is not supported in the browser. Displays a microphone icon and pulses while actively listening.
## Hooks
### `usePromptInputAttachments`
Access and manage file attachments within a PromptInput context.
```tsx
const attachments = usePromptInputAttachments();
// Available methods:
attachments.files; // Array of current attachments
attachments.add(files); // Add new files
attachments.remove(id); // Remove an attachment by ID
attachments.clear(); // Clear all attachments
attachments.openFileDialog(); // Open file selection dialog
```
### `usePromptInputController`
Access the full PromptInput controller from a PromptInputProvider. Only available when using the provider.
```tsx
const controller = usePromptInputController();
// Available methods:
controller.textInput.value; // Current text input value
controller.textInput.setInput(value); // Set text input value
controller.textInput.clear(); // Clear text input
controller.attachments; // Same as usePromptInputAttachments
```
### `useProviderAttachments`
Access attachments context from a PromptInputProvider. Only available when using the provider.
```tsx
const attachments = useProviderAttachments();
// Same interface as usePromptInputAttachments
```
## Props (continued)
### `<PromptInputHeader />`
<TypeTable
type={{
'...props': {
description: 'Any other props (except align) are spread to the InputGroupAddon component.',
type: 'Omit<React.ComponentProps<typeof InputGroupAddon>, "align">',
},
}}
/>
### `<PromptInputHoverCard />`
<TypeTable
type={{
openDelay: {
description: 'Delay in milliseconds before opening.',
type: 'number',
default: '0',
},
closeDelay: {
description: 'Delay in milliseconds before closing.',
type: 'number',
default: '0',
},
'...props': {
description: 'Any other props are spread to the HoverCard component.',
type: 'React.ComponentProps<typeof HoverCard>',
},
}}
/>
### `<PromptInputHoverCardTrigger />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the HoverCardTrigger component.',
type: 'React.ComponentProps<typeof HoverCardTrigger>',
},
}}
/>
### `<PromptInputHoverCardContent />`
<TypeTable
type={{
align: {
description: 'Alignment of the hover card content.',
type: '"start" | "center" | "end"',
default: '"start"',
},
'...props': {
description: 'Any other props are spread to the HoverCardContent component.',
type: 'React.ComponentProps<typeof HoverCardContent>',
},
}}
/>
### `<PromptInputTabsList />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the div element.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<PromptInputTab />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the div element.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<PromptInputTabLabel />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the h3 element.',
type: 'React.HTMLAttributes<HTMLHeadingElement>',
},
}}
/>
### `<PromptInputTabBody />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the div element.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<PromptInputTabItem />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the div element.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<PromptInputCommand />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the Command component.',
type: 'React.ComponentProps<typeof Command>',
},
}}
/>
### `<PromptInputCommandInput />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CommandInput component.',
type: 'React.ComponentProps<typeof CommandInput>',
},
}}
/>
### `<PromptInputCommandList />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CommandList component.',
type: 'React.ComponentProps<typeof CommandList>',
},
}}
/>
### `<PromptInputCommandEmpty />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CommandEmpty component.',
type: 'React.ComponentProps<typeof CommandEmpty>',
},
}}
/>
### `<PromptInputCommandGroup />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CommandGroup component.',
type: 'React.ComponentProps<typeof CommandGroup>',
},
}}
/>
### `<PromptInputCommandItem />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CommandItem component.',
type: 'React.ComponentProps<typeof CommandItem>',
},
}}
/>
### `<PromptInputCommandSeparator />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CommandSeparator component.',
type: 'React.ComponentProps<typeof CommandSeparator>',
},
}}
/>

# Queue
URL: /components/queue
---
title: Queue
description: A comprehensive queue component system for displaying message lists, todos, and collapsible task sections in AI applications.
path: elements/components/queue
---
The `Queue` component provides a flexible system for displaying lists of messages, todos, attachments, and collapsible sections. Perfect for showing AI workflow progress, pending tasks, message history, or any structured list of items in your application.
<Preview path="queue" />
## Installation
<ElementsInstaller path="queue" />
## Usage
```tsx
import {
Queue,
QueueSection,
QueueSectionTrigger,
QueueSectionLabel,
QueueSectionContent,
QueueList,
QueueItem,
QueueItemIndicator,
QueueItemContent,
} from "@/components/ai-elements/queue";
```
```tsx
<Queue>
<QueueSection>
<QueueSectionTrigger>
<QueueSectionLabel count={3} label="Tasks" />
</QueueSectionTrigger>
<QueueSectionContent>
<QueueList>
<QueueItem>
<QueueItemIndicator />
<QueueItemContent>Analyze user requirements</QueueItemContent>
</QueueItem>
</QueueList>
</QueueSectionContent>
</QueueSection>
</Queue>
```
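Queue sections are typically driven by a task array: the `count` passed to `QueueSectionLabel` and the `completed` flags on items can be derived with a small helper. The task shape below is a hypothetical example for illustration, not an API of the component:

```typescript
type Task = { title: string; completed: boolean };

// Split tasks into the two groups a Queue commonly renders:
// a pending section (whose length feeds QueueSectionLabel's count prop)
// and a completed section (whose items set completed on the indicator/content).
function groupTasks(tasks: Task[]): { pending: Task[]; completed: Task[] } {
  return {
    pending: tasks.filter((t) => !t.completed),
    completed: tasks.filter((t) => t.completed),
  };
}

const { pending, completed } = groupTasks([
  { title: "Analyze user requirements", completed: true },
  { title: "Draft implementation plan", completed: false },
  { title: "Write tests", completed: false },
]);
```

Mapping `pending` and `completed` to `<QueueItem>` elements keeps the render tree a pure function of your task data.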
## Features
- Flexible component system with composable parts
- Collapsible sections with smooth animations
- Support for completed/pending state indicators
- Built-in scroll area for long lists
- Attachment display with images and file indicators
- Hover-revealed action buttons for queue items
- TypeScript support with comprehensive type definitions
- Customizable styling with Tailwind CSS
- Responsive design with mobile-friendly interactions
- Keyboard navigation and accessibility support
- Theme-aware with automatic dark mode support
## Examples
### With PromptInput
<Preview path="queue-prompt-input" />
## Props
### `<Queue />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the root div.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<QueueSection />`
<TypeTable
type={{
defaultOpen: {
description: 'Whether the section is open by default.',
type: 'boolean',
default: 'true',
},
'...props': {
description: 'Any other props are spread to the Collapsible component.',
type: 'React.ComponentProps<typeof Collapsible>',
},
}}
/>
### `<QueueSectionTrigger />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the button element.',
type: 'React.ComponentProps<"button">',
},
}}
/>
### `<QueueSectionLabel />`
<TypeTable
type={{
label: {
description: 'The label text to display.',
type: 'string',
},
count: {
description: 'The count to display before the label.',
type: 'number',
},
icon: {
description: 'An optional icon to display before the count.',
type: 'React.ReactNode',
},
'...props': {
description: 'Any other props are spread to the span element.',
type: 'React.ComponentProps<"span">',
},
}}
/>
### `<QueueSectionContent />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CollapsibleContent component.',
type: 'React.ComponentProps<typeof CollapsibleContent>',
},
}}
/>
### `<QueueList />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the ScrollArea component.',
type: 'React.ComponentProps<typeof ScrollArea>',
},
}}
/>
### `<QueueItem />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the li element.',
type: 'React.ComponentProps<"li">',
},
}}
/>
### `<QueueItemIndicator />`
<TypeTable
type={{
completed: {
description: 'Whether the item is completed. Affects the indicator styling.',
type: 'boolean',
default: 'false',
},
'...props': {
description: 'Any other props are spread to the span element.',
type: 'React.ComponentProps<"span">',
},
}}
/>
### `<QueueItemContent />`
<TypeTable
type={{
completed: {
description: 'Whether the item is completed. Affects text styling with strikethrough and opacity.',
type: 'boolean',
default: 'false',
},
'...props': {
description: 'Any other props are spread to the span element.',
type: 'React.ComponentProps<"span">',
},
}}
/>
### `<QueueItemDescription />`
<TypeTable
type={{
completed: {
description: 'Whether the item is completed. Affects text styling.',
type: 'boolean',
default: 'false',
},
'...props': {
description: 'Any other props are spread to the div element.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<QueueItemActions />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the div element.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<QueueItemAction />`
<TypeTable
type={{
'...props': {
description: 'Any other props (except variant and size) are spread to the Button component.',
type: 'Omit<React.ComponentProps<typeof Button>, "variant" | "size">',
},
}}
/>
### `<QueueItemAttachment />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the div element.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<QueueItemImage />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the img element.',
type: 'React.ComponentProps<"img">',
},
}}
/>
### `<QueueItemFile />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the span element.',
type: 'React.ComponentProps<"span">',
},
}}
/>

# Reasoning
URL: /components/reasoning
---
title: Reasoning
description: A collapsible component that displays AI reasoning content, automatically opening during streaming and closing when finished.
path: elements/components/reasoning
---
The `Reasoning` component displays AI reasoning content, automatically opening during streaming and closing when finished.
<Preview path="reasoning" />
## Installation
<ElementsInstaller path="reasoning" />
## Usage
```tsx
import { Reasoning, ReasoningContent, ReasoningTrigger } from "@/components/ai-elements/reasoning";
```
```tsx
<Reasoning className="w-full" isStreaming={false}>
<ReasoningTrigger />
<ReasoningContent>I need to compute the square of 2.</ReasoningContent>
</Reasoning>
```
## Usage with AI SDK
Build a chatbot with reasoning using DeepSeek R1.
Add the following component to your frontend:
```tsx title="app/page.tsx"
'use client';
import {
Reasoning,
ReasoningContent,
ReasoningTrigger,
} from '@/components/ai-elements/reasoning';
import {
Conversation,
ConversationContent,
ConversationScrollButton,
} from '@/components/ai-elements/conversation';
import {
PromptInput,
PromptInputTextarea,
PromptInputSubmit,
} from '@/components/ai-elements/prompt-input';
import { Loader } from '@/components/ai-elements/loader';
import { Message, MessageContent } from '@/components/ai-elements/message';
import { useState } from 'react';
import { useChat } from '@ai-sdk/react';
import { Response } from '@/components/ai-elements/response';
const ReasoningDemo = () => {
const [input, setInput] = useState('');
const { messages, sendMessage, status } = useChat();
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
sendMessage({ text: input });
setInput('');
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<Conversation>
<ConversationContent>
{messages.map((message) => (
<Message from={message.role} key={message.id}>
<MessageContent>
{message.parts.map((part, i) => {
switch (part.type) {
case 'text':
return (
<Response key={`${message.id}-${i}`}>
{part.text}
</Response>
);
case 'reasoning':
return (
<Reasoning
key={`${message.id}-${i}`}
className="w-full"
isStreaming={status === 'streaming' && i === message.parts.length - 1 && message.id === messages.at(-1)?.id}
>
<ReasoningTrigger />
<ReasoningContent>{part.text}</ReasoningContent>
</Reasoning>
);
}
})}
</MessageContent>
</Message>
))}
{status === 'submitted' && <Loader />}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
<PromptInput
onSubmit={handleSubmit}
className="mt-4 w-full max-w-2xl mx-auto relative"
>
<PromptInputTextarea
value={input}
placeholder="Say something..."
onChange={(e) => setInput(e.currentTarget.value)}
className="pr-12"
/>
<PromptInputSubmit
status={status === 'streaming' ? 'streaming' : 'ready'}
disabled={!input.trim()}
className="absolute bottom-1 right-1"
/>
</PromptInput>
</div>
</div>
);
};
export default ReasoningDemo;
```
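The `isStreaming` expression in the component above is dense; it can be factored into a named helper for readability. This is a sketch assuming the same inputs the JSX already uses:

```typescript
type ChatStatus = "submitted" | "streaming" | "ready" | "error";

// A reasoning part should auto-open only while it is actively streaming:
// the chat is streaming, the part is the last part of its message,
// and that message is the latest message in the conversation.
function isReasoningStreaming(
  status: ChatStatus,
  partIndex: number,
  partCount: number,
  messageId: string,
  lastMessageId: string | undefined,
): boolean {
  return status === "streaming" && partIndex === partCount - 1 && messageId === lastMessageId;
}
```

In the JSX this becomes `isStreaming={isReasoningStreaming(status, i, message.parts.length, message.id, messages.at(-1)?.id)}`.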
Add the following route to your backend:
```ts title="app/api/chat/route.ts"
import { streamText, UIMessage, convertToModelMessages } from "ai";
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: "deepseek/deepseek-r1",
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse({
sendReasoning: true,
});
}
```
## Features
- Automatically opens when streaming content and closes when finished
- Manual toggle control for user interaction
- Smooth animations and transitions powered by Radix UI
- Visual streaming indicator with pulsing animation
- Composable architecture with separate trigger and content components
- Built with accessibility in mind including keyboard navigation
- Responsive design that works across different screen sizes
- Seamlessly integrates with both light and dark themes
- Built on top of shadcn/ui Collapsible primitives
- TypeScript support with proper type definitions
## Props
### `<Reasoning />`
<TypeTable
type={{
isStreaming: {
description: 'Whether the reasoning is currently streaming (auto-opens and closes the panel).',
type: 'boolean',
},
'...props': {
description: 'Any other props are spread to the underlying Collapsible component.',
type: 'React.ComponentProps<typeof Collapsible>',
},
}}
/>
### `<ReasoningTrigger />`
<TypeTable
type={{
title: {
description: 'Optional title to display in the trigger.',
type: 'string',
default: '"Reasoning"',
},
'...props': {
description: 'Any other props are spread to the underlying CollapsibleTrigger component.',
type: 'React.ComponentProps<typeof CollapsibleTrigger>',
},
}}
/>
### `<ReasoningContent />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying CollapsibleContent component.',
type: 'React.ComponentProps<typeof CollapsibleContent>',
},
}}
/>

# Response
URL: /components/response
---
title: Response
description: A component that renders a Markdown response from a large language model.
path: elements/components/response
---
The `Response` component renders a Markdown response from a large language model. It uses [Streamdown](https://streamdown.ai/) under the hood to render the markdown.
<Preview path="response" />
## Installation
<ElementsInstaller path="response" />
<Callout label={false} type="warning">
**Important:** After adding the component, you'll need to add the following to your `globals.css` file:
```css
@source "../node_modules/streamdown/dist/index.js";
```
This is **required** for the Response component to work properly. Without this import, the Streamdown styles will not be applied to your project. See [Streamdown's documentation](https://streamdown.ai/) for more details.
</Callout>
## Usage
```tsx
import { Response } from "@/components/ai-elements/response";
```
```tsx
<Response>**Hi there.** I am an AI model designed to help you.</Response>
```
## Usage with AI SDK
Populate a markdown response with messages from [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { Conversation, ConversationContent, ConversationScrollButton } from "@/components/ai-elements/conversation";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { useChat } from "@ai-sdk/react";
import { Response } from "@/components/ai-elements/response";
const ResponseDemo = () => {
const { messages } = useChat();
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<Conversation>
<ConversationContent>
{messages.map((message) => (
<Message from={message.role} key={message.id}>
<MessageContent>
{message.parts.map((part, i) => {
switch (part.type) {
case "text": // we don't use any reasoning or tool calls in this example
return <Response key={`${message.id}-${i}`}>{part.text}</Response>;
default:
return null;
}
})}
</MessageContent>
</Message>
))}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
</div>
</div>
);
};
export default ResponseDemo;
```
## Features
- Renders markdown content with support for paragraphs, links, and code blocks
- Supports GFM features like tables, task lists, and strikethrough text via remark-gfm
- Supports rendering Math Equations via rehype-katex
- **Smart streaming support** - automatically completes incomplete formatting during real-time text streaming
- Code blocks are rendered with syntax highlighting for various programming languages
- Code blocks include a button to easily copy code to clipboard
- Adapts to different screen sizes while maintaining readability
- Seamlessly integrates with both light and dark themes
- Customizable appearance through className props and Tailwind CSS utilities
- Built with accessibility in mind for all users
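The smart streaming support works by repairing unterminated markdown before rendering, so a half-received code block does not flash as literal backticks. A minimal sketch of one such repair, closing a dangling code fence, is shown below; Streamdown's actual parser handles many more cases (lists, emphasis, links), so treat this as an illustration only:

```typescript
// Close an unterminated ``` code fence so a partial stream still renders cleanly.
function closeDanglingFence(markdown: string): string {
  // Count fence markers at line starts; an odd count means the last block is open.
  const fenceCount = (markdown.match(/^```/gm) ?? []).length;
  return fenceCount % 2 === 1 ? `${markdown}\n\`\`\`` : markdown;
}
```

The `parseIncompleteMarkdown` prop toggles this behavior; set it to `false` when you render only complete, non-streamed content.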
## Props
### `<Response />`
<TypeTable
type={{
children: {
description: 'The markdown content to render.',
type: 'string',
},
parseIncompleteMarkdown: {
description: 'Whether to parse and fix incomplete markdown syntax (e.g., unclosed code blocks or lists).',
type: 'boolean',
default: 'true',
},
className: {
description: 'CSS class names to apply to the wrapper div element.',
type: 'string',
},
components: {
description: 'Custom React components to use for rendering markdown elements (e.g., custom heading, paragraph, code block components).',
type: 'object',
},
allowedImagePrefixes: {
description: 'Array of allowed URL prefixes for images. Use ["*"] to allow all images.',
type: 'string[]',
default: '["*"]',
},
allowedLinkPrefixes: {
description: 'Array of allowed URL prefixes for links. Use ["*"] to allow all links.',
type: 'string[]',
default: '["*"]',
},
defaultOrigin: {
description: 'Default origin to use for relative URLs in links and images.',
type: 'string',
},
rehypePlugins: {
description: 'Array of rehype plugins to use for processing HTML. Includes KaTeX for math rendering by default.',
type: 'array',
default: '[rehypeKatex]',
},
remarkPlugins: {
description: 'Array of remark plugins to use for processing markdown. Includes GitHub Flavored Markdown and math support by default.',
type: 'array',
default: '[remarkGfm, remarkMath]',
},
'...props': {
description: 'Any other props are spread to the root div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>

# Shimmer
URL: /components/shimmer
---
title: Shimmer
description: An animated text shimmer component for creating eye-catching loading states and progressive reveal effects.
path: elements/components/shimmer
---
The `Shimmer` component provides an animated shimmer effect that sweeps across text, perfect for indicating loading states, progressive reveals, or drawing attention to dynamic content in AI applications.
<Preview path="shimmer" />
## Installation
<ElementsInstaller path="shimmer" />
## Usage
```tsx
import { Shimmer } from "@/components/ai-elements/shimmer";
```
```tsx
<Shimmer>Loading your response...</Shimmer>
```
## Features
- Smooth animated shimmer effect using CSS gradients and Framer Motion
- Customizable animation duration and spread
- Polymorphic component - render as any HTML element via the `as` prop
- Automatic spread calculation based on text length
- Theme-aware styling using CSS custom properties
- Infinite looping animation with linear easing
- TypeScript support with proper type definitions
- Memoized for optimal performance
- Responsive and accessible design
- Uses `text-transparent` with background-clip for crisp text rendering
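The automatic spread calculation scales the gradient with the text length. A plausible sketch of that behavior, based on the `spread` prop's description (the exact formula used internally is an assumption):

```typescript
// Gradient spread grows with text length, scaled by the spread multiplier,
// so longer labels get a proportionally wider shimmer sweep.
function calculateSpread(text: string, spread = 2): number {
  return text.length * spread;
}

calculateSpread("Loading..."); // 20 with the default multiplier
calculateSpread("Thinking", 3); // 24
```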
## Examples
### Different Durations
<Preview path="shimmer-duration" />
### Custom Elements
<Preview path="shimmer-elements" />
## Props
### `<Shimmer />`
<TypeTable
type={{
children: {
description: 'The text content to apply the shimmer effect to.',
type: 'string',
},
as: {
description: 'The HTML element or React component to render.',
type: 'ElementType',
default: '"p"',
},
className: {
description: 'Additional CSS classes to apply to the component.',
type: 'string',
},
duration: {
description: 'The duration of the shimmer animation in seconds.',
type: 'number',
default: '2',
},
spread: {
description: 'The spread multiplier for the shimmer gradient, multiplied by text length.',
type: 'number',
default: '2',
},
}}
/>

# Sources
URL: /components/sources
---
title: Sources
description: A component that allows a user to view the sources or citations used to generate a response.
path: elements/components/sources
---
The `Sources` component allows a user to view the sources or citations used to generate a response.
<Preview path="sources" />
## Installation
<ElementsInstaller path="sources" />
## Usage
```tsx
import { Source, Sources, SourcesContent, SourcesTrigger } from "@/components/ai-elements/sources";
```
```tsx
<Sources>
<SourcesTrigger count={1} />
<SourcesContent>
<Source href="https://ai-sdk.dev" title="AI SDK" />
</SourcesContent>
</Sources>
```
## Usage with AI SDK
Build a simple web search agent with Perplexity Sonar.
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { useChat } from "@ai-sdk/react";
import { Source, Sources, SourcesContent, SourcesTrigger } from "@/components/ai-elements/sources";
import { PromptInput, PromptInputTextarea, PromptInputSubmit } from "@/components/ai-elements/prompt-input";
import { Conversation, ConversationContent, ConversationScrollButton } from "@/components/ai-elements/conversation";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { Response } from "@/components/ai-elements/response";
import { useState } from "react";
import { DefaultChatTransport } from "ai";
const SourceDemo = () => {
const [input, setInput] = useState("");
const { messages, sendMessage, status } = useChat({
transport: new DefaultChatTransport({
api: "/api/sources",
}),
});
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (input.trim()) {
sendMessage({ text: input });
setInput("");
}
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<div className="flex-1 overflow-auto mb-4">
<Conversation>
<ConversationContent>
{messages.map((message) => (
<div key={message.id}>
{message.role === "assistant" && (
<Sources>
<SourcesTrigger count={message.parts.filter((part) => part.type === "source-url").length} />
{message.parts.map((part, i) => {
switch (part.type) {
case "source-url":
return (
<SourcesContent key={`${message.id}-${i}`}>
<Source key={`${message.id}-${i}`} href={part.url} title={part.url} />
</SourcesContent>
);
}
})}
</Sources>
)}
<Message from={message.role} key={message.id}>
<MessageContent>
{message.parts.map((part, i) => {
switch (part.type) {
case "text":
return <Response key={`${message.id}-${i}`}>{part.text}</Response>;
default:
return null;
}
})}
</MessageContent>
</Message>
</div>
))}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
</div>
<PromptInput onSubmit={handleSubmit} className="mt-4 w-full max-w-2xl mx-auto relative">
<PromptInputTextarea
value={input}
placeholder="Ask a question and search the..."
onChange={(e) => setInput(e.currentTarget.value)}
className="pr-12"
/>
<PromptInputSubmit
status={status === "streaming" ? "streaming" : "ready"}
disabled={!input.trim()}
className="absolute bottom-1 right-1"
/>
</PromptInput>
</div>
</div>
);
};
export default SourceDemo;
```
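The `count` passed to `SourcesTrigger` above filters message parts by type inline; extracted as a helper, the intent is easier to see. This is a sketch using the same part shape the JSX iterates over:

```typescript
type MessagePart = { type: string; url?: string; text?: string };

// Count the source-url parts of a message, as used for SourcesTrigger's count prop.
function countSourceParts(parts: MessagePart[]): number {
  return parts.filter((part) => part.type === "source-url").length;
}
```

In the JSX this becomes `<SourcesTrigger count={countSourceParts(message.parts)} />`.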
Add the following route to your backend:
```ts title="app/api/sources/route.ts"
import { convertToModelMessages, streamText, UIMessage } from "ai";
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: "perplexity/sonar",
system: "You are a helpful assistant. Keep your responses short (< 100 words) unless you are asked for more details. ALWAYS USE SEARCH.",
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse({
sendSources: true,
});
}
```
## Features
- Collapsible component that allows a user to view the sources or citations used to generate a response
- Customizable trigger and content components
- Support for custom sources or citations
- Responsive design with mobile-friendly controls
- Clean, modern styling with customizable themes
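Both `SourcesTrigger`'s `count` and the rendered list derive from the same filter over `message.parts`. As a sketch of that selection logic (using a simplified part shape; `getSourceParts` is a hypothetical helper, not part of the library):

```ts
// Hypothetical helper: select the source-url parts of a message.
// The part shape is simplified for illustration.
type Part = { type: string; url?: string };

function getSourceParts(parts: Part[]): Part[] {
  return parts.filter((part) => part.type === "source-url");
}

const parts: Part[] = [
  { type: "text" },
  { type: "source-url", url: "https://example.com/a" },
  { type: "source-url", url: "https://example.com/b" },
];

// Pass sources.length to <SourcesTrigger count={...} /> and map the
// array to <Source /> elements inside <SourcesContent>.
const sources = getSourceParts(parts);
```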
## Examples
### Custom rendering
<Preview path="sources-custom" />
## Props
### `<Sources />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the root div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<SourcesTrigger />`
<TypeTable
type={{
count: {
description: 'The number of sources to display in the trigger.',
type: 'number',
},
'...props': {
description: 'Any other props are spread to the trigger button.',
type: 'React.ButtonHTMLAttributes<HTMLButtonElement>',
},
}}
/>
### `<SourcesContent />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the content container.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<Source />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the anchor element.',
type: 'React.AnchorHTMLAttributes<HTMLAnchorElement>',
},
}}
/>

# Suggestion
URL: /components/suggestion
---
title: Suggestion
description: A suggestion component that displays a horizontal row of clickable suggestions for user interaction.
path: elements/components/suggestion
---
The `Suggestion` component displays a horizontal row of clickable suggestions for user interaction.
<Preview path="suggestion" />
## Installation
<ElementsInstaller path="suggestion" />
## Usage
```tsx
import { Suggestion, Suggestions } from "@/components/ai-elements/suggestion";
```
```tsx
<Suggestions>
<Suggestion suggestion="What are the latest trends in AI?" />
</Suggestions>
```
## Usage with AI SDK
Build a simple input with suggestions users can click to send a message to the LLM.
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { Input, PromptInputTextarea, PromptInputSubmit } from "@/components/ai-elements/prompt-input";
import { Suggestion, Suggestions } from "@/components/ai-elements/suggestion";
import { useState } from "react";
import { useChat } from "@ai-sdk/react";
const suggestions = ["Can you explain how to play tennis?", "What is the weather in Tokyo?", "How do I make a really good fish taco?"];
const SuggestionDemo = () => {
const [input, setInput] = useState("");
const { sendMessage, status } = useChat();
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (input.trim()) {
sendMessage({ text: input });
setInput("");
}
};
const handleSuggestionClick = (suggestion: string) => {
sendMessage({ text: suggestion });
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<div className="flex flex-col gap-4">
<Suggestions>
{suggestions.map((suggestion) => (
<Suggestion key={suggestion} onClick={handleSuggestionClick} suggestion={suggestion} />
))}
</Suggestions>
<Input onSubmit={handleSubmit} className="mt-4 w-full max-w-2xl mx-auto relative">
<PromptInputTextarea
value={input}
placeholder="Say something..."
onChange={(e) => setInput(e.currentTarget.value)}
className="pr-12"
/>
<PromptInputSubmit
status={status === "streaming" ? "streaming" : "ready"}
disabled={!input.trim()}
className="absolute bottom-1 right-1"
/>
</Input>
</div>
</div>
</div>
);
};
export default SuggestionDemo;
```
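In the demo above, a typed submission and a suggestion click both end in the same `sendMessage({ text })` call; only the empty-input guard differs. That shared shape can be sketched as a small pure function (`buildPayload` is a hypothetical helper, not part of AI Elements):

```ts
// Hypothetical helper: both the form submit and a Suggestion click
// reduce to the same sendMessage payload; null means "do not send".
function buildPayload(text: string): { text: string } | null {
  const trimmed = text.trim();
  return trimmed ? { text: trimmed } : null;
}
```

A suggestion string always passes the guard, so `handleSuggestionClick` can call `sendMessage` directly.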
## Features
- Horizontal row of clickable suggestion buttons
- Customizable styling with variant and size options
- Flexible layout that wraps suggestions on smaller screens
- onClick callback that emits the selected suggestion string
- Support for both individual suggestions and suggestion lists
- Clean, modern styling with hover effects
- Responsive design with mobile-friendly touch targets
- TypeScript support with proper type definitions
## Examples
### Usage with AI Input
<Preview path="suggestion-input" />
## Props
### `<Suggestions />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying ScrollArea component.',
type: 'React.ComponentProps<typeof ScrollArea>',
},
}}
/>
### `<Suggestion />`
<TypeTable
type={{
suggestion: {
description: 'The suggestion string to display and emit on click.',
type: 'string',
},
onClick: {
description: 'Callback fired when the suggestion is clicked.',
type: '(suggestion: string) => void',
},
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'Omit<React.ComponentProps<typeof Button>, "onClick">',
},
}}
/>

# Task
URL: /components/task
---
title: Task
description: A collapsible task list component for displaying AI workflow progress, with status indicators and optional descriptions.
path: elements/components/task
---
The `Task` component provides a structured way to display task lists or workflow progress with collapsible details, status indicators, and progress tracking. It consists of a main `Task` container with `TaskTrigger` for the clickable header and `TaskContent` for the collapsible content area.
<Preview path="task" />
## Installation
<ElementsInstaller path="task" />
## Usage
```tsx
import { Task, TaskContent, TaskItem, TaskItemFile, TaskTrigger } from "@/components/ai-elements/task";
```
```tsx
<Task className="w-full">
<TaskTrigger title="Found project files" />
<TaskContent>
<TaskItem>
Read <TaskItemFile>index.md</TaskItemFile>
</TaskItem>
</TaskContent>
</Task>
```
## Usage with AI SDK
Build a mock async programming agent using [`experimental_useObject`](/docs/reference/ai-sdk-ui/use-object).
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { experimental_useObject as useObject } from "@ai-sdk/react";
import { Task, TaskItem, TaskItemFile, TaskTrigger, TaskContent } from "@/components/ai-elements/task";
import { Button } from "@/components/ui/button";
import { tasksSchema } from "@/app/api/agent/route";
import { SiReact, SiTypescript, SiJavascript, SiCss, SiHtml5, SiJson, SiMarkdown } from "@icons-pack/react-simple-icons";
const iconMap = {
react: { component: SiReact, color: "#149ECA" },
typescript: { component: SiTypescript, color: "#3178C6" },
javascript: { component: SiJavascript, color: "#F7DF1E" },
css: { component: SiCss, color: "#1572B6" },
html: { component: SiHtml5, color: "#E34F26" },
json: { component: SiJson, color: "#000000" },
markdown: { component: SiMarkdown, color: "#000000" },
};
const TaskDemo = () => {
const { object, submit, isLoading } = useObject({
api: "/api/agent",
schema: tasksSchema,
});
const handleSubmit = (taskType: string) => {
submit({ prompt: taskType });
};
const renderTaskItem = (item: any, index: number) => {
if (item?.type === "file" && item.file) {
const iconInfo = iconMap[item.file.icon as keyof typeof iconMap];
if (iconInfo) {
const IconComponent = iconInfo.component;
return (
<span className="inline-flex items-center gap-1" key={index}>
{item.text}
<TaskItemFile>
<IconComponent color={item.file.color || iconInfo.color} className="size-4" />
<span>{item.file.name}</span>
</TaskItemFile>
</span>
);
}
}
return item?.text || "";
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<div className="flex gap-2 mb-6 flex-wrap">
<Button onClick={() => handleSubmit("React component development")} disabled={isLoading} variant="outline">
React Development
</Button>
</div>
<div className="flex-1 overflow-auto space-y-4">
{isLoading && !object && <div className="text-muted-foreground">Generating tasks...</div>}
{object?.tasks?.map((task: any, taskIndex: number) => (
<Task key={taskIndex} defaultOpen={taskIndex === 0}>
<TaskTrigger title={task.title || "Loading..."} />
<TaskContent>
{task.items?.map((item: any, itemIndex: number) => (
<TaskItem key={itemIndex}>{renderTaskItem(item, itemIndex)}</TaskItem>
))}
</TaskContent>
</Task>
))}
</div>
</div>
</div>
);
};
export default TaskDemo;
```
Add the following route to your backend:
```ts title="app/api/agent/route.ts"
import { streamObject } from "ai";
import { z } from "zod";
export const taskItemSchema = z.object({
type: z.enum(["text", "file"]),
text: z.string(),
file: z
.object({
name: z.string(),
icon: z.string(),
color: z.string().optional(),
})
.optional(),
});
export const taskSchema = z.object({
title: z.string(),
items: z.array(taskItemSchema),
status: z.enum(["pending", "in_progress", "completed"]),
});
export const tasksSchema = z.object({
tasks: z.array(taskSchema),
});
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { prompt } = await req.json();
const result = streamObject({
model: "openai/gpt-4o",
schema: tasksSchema,
prompt: `You are an AI assistant that generates realistic development task workflows. Generate a set of tasks that would occur during ${prompt}.
Each task should have:
- A descriptive title
- Multiple task items showing the progression
- Some items should be plain text, others should reference files
- Use realistic file names and appropriate file types
- Status should progress from pending to in_progress to completed
For file items, use these icon types: 'react', 'typescript', 'javascript', 'css', 'html', 'json', 'markdown'
Generate 3-4 tasks total, with 4-6 items each.`,
});
return result.toTextStreamResponse();
}
```
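The client's `renderTaskItem` branches on the streamed item shape defined by `taskItemSchema`: `"file"` items carry a `file` object, `"text"` items do not. The same branching, as a DOM-free sketch (`describeTaskItem` is a hypothetical helper for illustration):

```ts
// DOM-free sketch of the renderTaskItem branching, mirroring
// taskItemSchema: "file" items carry a file object, "text" items do not.
type TaskItem = {
  type: "text" | "file";
  text: string;
  file?: { name: string; icon: string };
};

function describeTaskItem(item: TaskItem): string {
  if (item.type === "file" && item.file) {
    return `${item.text} ${item.file.name}`;
  }
  return item.text;
}
```

The React version renders the file branch as a `TaskItemFile` with an icon instead of concatenating strings.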
## Features
- Visual icons for pending, in-progress, completed, and error states
- Expandable content for task descriptions and additional information
- Built-in progress counter showing completed vs total tasks
- Optional progressive reveal of tasks with customizable timing
- Support for custom content within task items
- Full type safety with proper TypeScript definitions
- Keyboard navigation and screen reader support
## Props
### `<Task />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the root Collapsible component.',
type: 'React.ComponentProps<typeof Collapsible>',
},
}}
/>
### `<TaskTrigger />`
<TypeTable
type={{
title: {
description: 'The title of the task that will be displayed in the trigger.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the CollapsibleTrigger component.',
type: 'React.ComponentProps<typeof CollapsibleTrigger>',
},
}}
/>
### `<TaskContent />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CollapsibleContent component.',
type: 'React.ComponentProps<typeof CollapsibleContent>',
},
}}
/>
### `<TaskItem />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying div.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<TaskItemFile />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying div.',
type: 'React.ComponentProps<"div">',
},
}}
/>

# Tool
URL: /components/tool
---
title: Tool
description: A collapsible component for displaying tool invocation details in AI chatbot interfaces.
path: elements/components/tool
---
The `Tool` component displays a collapsible interface for showing/hiding tool details. It is designed to take the `ToolUIPart` type from the AI SDK and display it in a collapsible interface.
<Preview path="tool" />
## Installation
<ElementsInstaller path="tool" />
## Usage
```tsx
import { Tool, ToolContent, ToolHeader, ToolOutput, ToolInput } from "@/components/ai-elements/tool";
```
```tsx
<Tool>
<ToolHeader type="tool-call" state={"output-available" as const} />
<ToolContent>
<ToolInput input="Input to tool call" />
<ToolOutput errorText="Error" output="Output from tool call" />
</ToolContent>
</Tool>
```
## Usage with AI SDK
Build a simple stateful weather app that renders the last message in a tool using [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport, type ToolUIPart } from "ai";
import { Button } from "@/components/ui/button";
import { Response } from "@/components/ai-elements/response";
import { Tool, ToolContent, ToolHeader, ToolInput, ToolOutput } from "@/components/ai-elements/tool";
type WeatherToolInput = {
location: string;
units: "celsius" | "fahrenheit";
};
type WeatherToolOutput = {
location: string;
temperature: string;
conditions: string;
humidity: string;
windSpeed: string;
lastUpdated: string;
};
type WeatherToolUIPart = ToolUIPart<{
fetch_weather_data: {
input: WeatherToolInput;
output: WeatherToolOutput;
};
}>;
const Example = () => {
const { messages, sendMessage, status } = useChat({
transport: new DefaultChatTransport({
api: "/api/weather",
}),
});
const handleWeatherClick = () => {
sendMessage({ text: "Get weather data for San Francisco in fahrenheit" });
};
const latestMessage = messages[messages.length - 1];
const weatherTool = latestMessage?.parts?.find((part) => part.type === "tool-fetch_weather_data") as WeatherToolUIPart | undefined;
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<div className="space-y-4">
<Button onClick={handleWeatherClick} disabled={status !== "ready"}>
Get Weather for San Francisco
</Button>
{weatherTool && (
<Tool defaultOpen={true}>
<ToolHeader type="tool-fetch_weather_data" state={weatherTool.state} />
<ToolContent>
<ToolInput input={weatherTool.input} />
<ToolOutput
output={<Response>{formatWeatherResult(weatherTool.output)}</Response>}
errorText={weatherTool.errorText}
/>
</ToolContent>
</Tool>
)}
</div>
</div>
</div>
);
};
function formatWeatherResult(result: WeatherToolOutput): string {
return `**Weather for ${result.location}**
**Temperature:** ${result.temperature}
**Conditions:** ${result.conditions}
**Humidity:** ${result.humidity}
**Wind Speed:** ${result.windSpeed}
*Last updated: ${result.lastUpdated}*`;
}
export default Example;
```
Add the following route to your backend:
```ts title="app/api/weather/route.tsx"
import { streamText, UIMessage, convertToModelMessages } from "ai";
import { z } from "zod";
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: "openai/gpt-4o",
messages: convertToModelMessages(messages),
tools: {
fetch_weather_data: {
description: "Fetch weather information for a specific location",
inputSchema: z.object({
location: z.string().describe("The city or location to get weather for"),
units: z.enum(["celsius", "fahrenheit"]).default("celsius").describe("Temperature units"),
}),
execute: async ({ location, units }) => {
await new Promise((resolve) => setTimeout(resolve, 1500));
const temp = units === "celsius" ? Math.floor(Math.random() * 35) + 5 : Math.floor(Math.random() * 63) + 41;
return {
location,
temperature: `${temp}°${units === "celsius" ? "C" : "F"}`,
conditions: "Sunny",
humidity: `12%`,
windSpeed: `35 ${units === "celsius" ? "km/h" : "mph"}`,
lastUpdated: new Date().toLocaleString(),
};
},
},
},
});
return result.toUIMessageStreamResponse();
}
```
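The client locates the invocation by part type: the AI SDK exposes each tool call as a message part whose `type` is the tool name prefixed with `tool-` (here, `"tool-fetch_weather_data"`). That lookup can be sketched as (`isToolPart` is a hypothetical helper):

```ts
// Hypothetical helper: match a message part to a named tool. The AI SDK
// types tool parts as `tool-${toolName}`.
function isToolPart(part: { type: string }, toolName: string): boolean {
  return part.type === `tool-${toolName}`;
}

const parts = [{ type: "text" }, { type: "tool-fetch_weather_data" }];
const weatherPart = parts.find((p) => isToolPart(p, "fetch_weather_data"));
```

This is what the demo's `latestMessage?.parts?.find(...)` does inline before casting to `WeatherToolUIPart`.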
## Features
- Collapsible interface for showing/hiding tool details
- Visual status indicators with icons and badges
- Support for multiple tool execution states (pending, running, completed, error)
- Formatted parameter display with JSON syntax highlighting
- Result and error handling with appropriate styling
- Composable structure for flexible layouts
- Accessible keyboard navigation and screen reader support
- Consistent styling that matches your design system
- Auto-opens completed tools by default for better UX
## Examples
### Input Streaming (Pending)
Shows a tool in its initial state while parameters are being processed.
<Preview path="tool-input-streaming" />
### Input Available (Running)
Shows a tool that's actively executing with its parameters.
<Preview path="tool-input-available" />
### Output Available (Completed)
Shows a completed tool with successful results. Opens by default to show the results. In this instance, the output is a JSON object, so we can use the `CodeBlock` component to display it.
<Preview path="tool-output-available" />
### Output Error
Shows a tool that encountered an error during execution. Opens by default to display the error.
<Preview path="tool-output-error" />
## Props
### `<Tool />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the root Collapsible component.',
type: 'React.ComponentProps<typeof Collapsible>',
},
}}
/>
### `<ToolHeader />`
<TypeTable
type={{
type: {
description: 'The type/name of the tool.',
type: 'ToolUIPart["type"]',
},
state: {
description: 'The current state of the tool (input-streaming, input-available, output-available, or output-error).',
type: 'ToolUIPart["state"]',
},
className: {
description: 'Additional CSS classes to apply to the header.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the CollapsibleTrigger.',
type: 'React.ComponentProps<typeof CollapsibleTrigger>',
},
}}
/>
### `<ToolContent />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the CollapsibleContent.',
type: 'React.ComponentProps<typeof CollapsibleContent>',
},
}}
/>
### `<ToolInput />`
<TypeTable
type={{
input: {
description: 'The input parameters passed to the tool, displayed as formatted JSON.',
type: 'ToolUIPart["input"]',
},
'...props': {
description: 'Any other props are spread to the underlying div.',
type: 'React.ComponentProps<"div">',
},
}}
/>
### `<ToolOutput />`
<TypeTable
type={{
output: {
description: 'The output/result of the tool execution.',
type: 'React.ReactNode',
},
errorText: {
description: 'An error message if the tool execution failed.',
type: 'ToolUIPart["errorText"]',
},
'...props': {
description: 'Any other props are spread to the underlying div.',
type: 'React.ComponentProps<"div">',
},
}}
/>

# Toolbar
URL: /components/toolbar
---
title: Toolbar
description: A styled toolbar component for React Flow nodes with flexible positioning and custom actions.
path: elements/components/toolbar
---
The `Toolbar` component provides a positioned toolbar that attaches to nodes in React Flow canvases. It features modern card styling with backdrop blur and flexbox layout for action buttons and controls.
<Callout>
The Toolbar component is designed to be used with the [Node](/elements/components/node) component. See the [Workflow](/elements/examples/workflow) demo for a full example.
</Callout>
## Installation
<ElementsInstaller path="toolbar" />
## Usage
```tsx
import { Toolbar } from "@/components/ai-elements/toolbar";
```
```tsx
import { Node, NodeContent } from "@/components/ai-elements/node";
import { Toolbar } from "@/components/ai-elements/toolbar";
import { Button } from "@/components/ui/button";
const CustomNode = () => (
<Node>
<NodeContent>...</NodeContent>
<Toolbar>
<Button size="sm" variant="ghost">
Edit
</Button>
<Button size="sm" variant="ghost">
Delete
</Button>
</Toolbar>
</Node>
);
```
## Features
- Attaches to any React Flow node
- Bottom positioning by default
- Rounded card design with border
- Theme-aware background styling
- Flexbox layout with gap spacing
- Full TypeScript support
- Compatible with all React Flow NodeToolbar features
## Props
### `<Toolbar />`
<TypeTable
type={{
className: {
description: 'Additional CSS classes to apply to the toolbar.',
type: 'string',
},
'...props': {
description: 'Any other props from @xyflow/react NodeToolbar component (position, offset, isVisible, etc.).',
type: 'ComponentProps<typeof NodeToolbar>',
},
}}
/>

# Web Preview
URL: /components/web-preview
---
title: Web Preview
description: A composable component for previewing the result of a generated UI, with support for live examples and code display.
path: elements/components/web-preview
---
The `WebPreview` component provides a flexible way to showcase the result of a generated UI component, along with its source code. It is designed for documentation and demo purposes, allowing users to interact with live examples and view the underlying implementation.
<Preview path="web-preview" />
## Installation
<ElementsInstaller path="web-preview" />
## Usage
```tsx
import { WebPreview, WebPreviewNavigation, WebPreviewUrl, WebPreviewBody } from "@/components/ai-elements/web-preview";
```
```tsx
<WebPreview defaultUrl="https://ai-sdk.dev" style={{ height: "400px" }}>
<WebPreviewNavigation>
<WebPreviewUrl src="https://ai-sdk.dev" />
</WebPreviewNavigation>
<WebPreviewBody src="https://ai-sdk.dev" />
</WebPreview>
```
## Usage with AI SDK
Build a simple v0 clone using the [v0 Platform API](https://v0.dev/docs/api/platform).
Install the `v0-sdk` package:
```package-install
npm i v0-sdk
```
Add the following component to your frontend:
```tsx title="app/page.tsx"
"use client";
import { WebPreview, WebPreviewBody, WebPreviewNavigation, WebPreviewUrl } from "@/components/ai-elements/web-preview";
import { useState } from "react";
import { Input, PromptInputTextarea, PromptInputSubmit } from "@/components/ai-elements/prompt-input";
import { Loader } from "@/components/ai-elements/loader";
const WebPreviewDemo = () => {
const [previewUrl, setPreviewUrl] = useState("");
const [prompt, setPrompt] = useState("");
const [isGenerating, setIsGenerating] = useState(false);
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
if (!prompt.trim()) return;
setPrompt("");
setIsGenerating(true);
try {
const response = await fetch("/api/v0", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ prompt }),
});
const data = await response.json();
setPreviewUrl(data.demo || "/");
console.log("Generation finished:", data);
} catch (error) {
console.error("Generation failed:", error);
} finally {
setIsGenerating(false);
}
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full rounded-lg border h-[600px]">
<div className="flex flex-col h-full">
<div className="flex-1 mb-4">
{isGenerating ? (
<div className="flex flex-col items-center justify-center h-full">
<Loader />
<p className="mt-4 text-muted-foreground">Generating app, this may take a few seconds...</p>
</div>
) : previewUrl ? (
<WebPreview defaultUrl={previewUrl}>
<WebPreviewNavigation>
<WebPreviewUrl />
</WebPreviewNavigation>
<WebPreviewBody src={previewUrl} />
</WebPreview>
) : (
<div className="flex items-center justify-center h-full text-muted-foreground">Your generated app will appear here</div>
)}
</div>
<Input onSubmit={handleSubmit} className="w-full max-w-2xl mx-auto relative">
<PromptInputTextarea
value={prompt}
placeholder="Describe the app you want to build..."
onChange={(e) => setPrompt(e.currentTarget.value)}
className="pr-12 min-h-[60px]"
/>
<PromptInputSubmit
status={isGenerating ? "streaming" : "ready"}
disabled={!prompt.trim()}
className="absolute bottom-1 right-1"
/>
</Input>
</div>
</div>
);
};
export default WebPreviewDemo;
```
Add the following route to your backend:
```ts title="app/api/v0/route.ts"
import { v0 } from "v0-sdk";
export async function POST(req: Request) {
const { prompt }: { prompt: string } = await req.json();
const result = await v0.chats.create({
system: "You are an expert coder",
message: prompt,
modelConfiguration: {
modelId: "v0-1.5-sm",
imageGenerations: false,
thinking: false,
},
});
return Response.json({
demo: result.demo,
webUrl: result.webUrl,
});
}
```
## Features
- Live preview of UI components
- Composable architecture with dedicated sub-components
- Responsive design modes (Desktop, Tablet, Mobile)
- Navigation controls with back/forward functionality
- URL input and example selector
- Full screen mode support
- Console logging with timestamps
- Context-based state management
- Consistent styling with the design system
- Easy integration into documentation pages
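The console panel consumes entries shaped like the `logs` prop documented below. A sketch of producing and displaying such entries (`formatLog` is a hypothetical display helper, not part of the library):

```ts
// Entries matching the WebPreviewConsole `logs` prop shape.
type ConsoleLog = {
  level: "log" | "warn" | "error";
  message: string;
  timestamp: Date;
};

// Hypothetical helper: render one entry as a display string.
function formatLog(log: ConsoleLog): string {
  return `[${log.level.toUpperCase()}] ${log.message}`;
}

const logs: ConsoleLog[] = [
  { level: "log", message: "preview loaded", timestamp: new Date(0) },
  { level: "error", message: "fetch failed", timestamp: new Date(0) },
];
```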
## Props
### `<WebPreview />`
<TypeTable
type={{
defaultUrl: {
description: 'The initial URL to load in the preview.',
type: 'string',
default: '""',
},
onUrlChange: {
description: 'Callback fired when the URL changes.',
type: '(url: string) => void',
},
'...props': {
description: 'Any other props are spread to the root div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<WebPreviewNavigation />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the navigation container.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>
### `<WebPreviewNavigationButton />`
<TypeTable
type={{
tooltip: {
description: 'Tooltip text to display on hover.',
type: 'string',
},
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Button component.',
type: 'React.ComponentProps<typeof Button>',
},
}}
/>
### `<WebPreviewUrl />`
<TypeTable
type={{
'...props': {
description: 'Any other props are spread to the underlying shadcn/ui Input component.',
type: 'React.ComponentProps<typeof Input>',
},
}}
/>
### `<WebPreviewBody />`
<TypeTable
type={{
loading: {
description: 'Optional loading indicator to display over the preview.',
type: 'React.ReactNode',
},
'...props': {
description: 'Any other props are spread to the underlying iframe.',
type: 'React.IframeHTMLAttributes<HTMLIFrameElement>',
},
}}
/>
### `<WebPreviewConsole />`
<TypeTable
type={{
logs: {
description: 'Console log entries to display in the console panel.',
type: 'Array<{ level: "log" | "warn" | "error"; message: string; timestamp: Date }>',
},
'...props': {
description: 'Any other props are spread to the root div.',
type: 'React.HTMLAttributes<HTMLDivElement>',
},
}}
/>

# Chatbot
URL: /examples/chatbot
---
title: Chatbot
description: An example of how to use the AI Elements to build a chatbot.
---
An example of how to use the AI Elements to build a chatbot.
<Preview path="chatbot" type="block" className="p-0" />
## Tutorial
Let's walk through how to build a chatbot using AI Elements and AI SDK. Our example will include reasoning, web search with citations, and a model picker.
### Setup
First, set up a new Next.js repo and cd into it by running the following command (make sure you choose to use Tailwind during the project setup):
```bash title="Terminal"
npx create-next-app@latest ai-chatbot && cd ai-chatbot
```
Run the following command to install AI Elements. This will also set up shadcn/ui if you haven't already configured it:
```bash title="Terminal"
npx ai-elements@latest
```
Now, install the AI SDK dependencies:
```package-install
npm i ai @ai-sdk/react zod
```
In order to use the providers, let's configure an AI Gateway API key. Create a `.env.local` in your root directory and navigate [here](https://vercel.com/d?to=%2F%5Bteam%5D%2F%7E%2Fai%2Fapi-keys&title=Get%20your%20AI%20Gateway%20key) to create a token, then paste it in your `.env.local`.
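For reference, assuming the default environment variable name the AI Gateway provider reads, your `.env.local` would contain a single line:

```bash title=".env.local"
AI_GATEWAY_API_KEY=your-api-key-here
```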
We're now ready to start building our app!
### Client
In your `app/page.tsx`, replace the code with the file below.
Here, we use the `PromptInput` component with its compound components to build a rich input experience with file attachments, model picker, and action menu. The input component uses the new `PromptInputMessage` type for handling both text and file attachments.
The whole chat lives in a `Conversation`. We switch on `message.parts` and render the respective part within `Message`, `Reasoning`, and `Sources`. We also use `status` from `useChat` to stream reasoning tokens, as well as render `Loader`.
```tsx title="app/page.tsx"
"use client";
import { Conversation, ConversationContent, ConversationScrollButton } from "@/components/ai-elements/conversation";
import { Message, MessageContent } from "@/components/ai-elements/message";
import {
PromptInput,
PromptInputActionAddAttachments,
PromptInputActionMenu,
PromptInputActionMenuContent,
PromptInputActionMenuTrigger,
PromptInputAttachment,
PromptInputAttachments,
PromptInputBody,
PromptInputButton,
PromptInputHeader,
type PromptInputMessage,
PromptInputModelSelect,
PromptInputModelSelectContent,
PromptInputModelSelectItem,
PromptInputModelSelectTrigger,
PromptInputModelSelectValue,
PromptInputSubmit,
PromptInputTextarea,
PromptInputFooter,
PromptInputTools,
} from "@/components/ai-elements/prompt-input";
import { Action, Actions } from "@/components/ai-elements/actions";
import { Fragment, useState } from "react";
import { useChat } from "@ai-sdk/react";
import { Response } from "@/components/ai-elements/response";
import { CopyIcon, GlobeIcon, RefreshCcwIcon } from "lucide-react";
import { Source, Sources, SourcesContent, SourcesTrigger } from "@/components/ai-elements/sources";
import { Reasoning, ReasoningContent, ReasoningTrigger } from "@/components/ai-elements/reasoning";
import { Loader } from "@/components/ai-elements/loader";
const models = [
{
name: "GPT 4o",
value: "openai/gpt-4o",
},
{
name: "Deepseek R1",
value: "deepseek/deepseek-r1",
},
];
const ChatBotDemo = () => {
const [input, setInput] = useState("");
const [model, setModel] = useState<string>(models[0].value);
const [webSearch, setWebSearch] = useState(false);
const { messages, sendMessage, status, regenerate } = useChat();
const handleSubmit = (message: PromptInputMessage) => {
const hasText = Boolean(message.text);
const hasAttachments = Boolean(message.files?.length);
if (!(hasText || hasAttachments)) {
return;
}
sendMessage(
{
text: message.text || "Sent with attachments",
files: message.files,
},
{
body: {
model: model,
webSearch: webSearch,
},
}
);
setInput("");
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full h-screen">
<div className="flex flex-col h-full">
<Conversation className="h-full">
<ConversationContent>
{messages.map((message) => (
<div key={message.id}>
{message.role === "assistant" && message.parts.filter((part) => part.type === "source-url").length > 0 && (
<Sources>
<SourcesTrigger count={message.parts.filter((part) => part.type === "source-url").length} />
{message.parts
.filter((part) => part.type === "source-url")
.map((part, i) => (
<SourcesContent key={`${message.id}-${i}`}>
<Source href={part.url} title={part.url} />
</SourcesContent>
))}
</Sources>
)}
{message.parts.map((part, i) => {
switch (part.type) {
case "text":
return (
<Fragment key={`${message.id}-${i}`}>
<Message from={message.role}>
<MessageContent>
<Response>{part.text}</Response>
</MessageContent>
</Message>
{message.role === "assistant" && message.id === messages.at(-1)?.id && (
<Actions className="mt-2">
<Action onClick={() => regenerate()} label="Retry">
<RefreshCcwIcon className="size-3" />
</Action>
<Action onClick={() => navigator.clipboard.writeText(part.text)} label="Copy">
<CopyIcon className="size-3" />
</Action>
</Actions>
)}
</Fragment>
);
case "reasoning":
return (
<Reasoning
key={`${message.id}-${i}`}
className="w-full"
isStreaming={
status === "streaming" && i === message.parts.length - 1 && message.id === messages.at(-1)?.id
}
>
<ReasoningTrigger />
<ReasoningContent>{part.text}</ReasoningContent>
</Reasoning>
);
default:
return null;
}
})}
</div>
))}
{status === "submitted" && <Loader />}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
<PromptInput onSubmit={handleSubmit} className="mt-4" globalDrop multiple>
<PromptInputHeader>
<PromptInputAttachments>{(attachment) => <PromptInputAttachment data={attachment} />}</PromptInputAttachments>
</PromptInputHeader>
<PromptInputBody>
<PromptInputTextarea onChange={(e) => setInput(e.target.value)} value={input} />
</PromptInputBody>
<PromptInputFooter>
<PromptInputTools>
<PromptInputActionMenu>
<PromptInputActionMenuTrigger />
<PromptInputActionMenuContent>
<PromptInputActionAddAttachments />
</PromptInputActionMenuContent>
</PromptInputActionMenu>
<PromptInputButton variant={webSearch ? "default" : "ghost"} onClick={() => setWebSearch(!webSearch)}>
<GlobeIcon size={16} />
<span>Search</span>
</PromptInputButton>
<PromptInputModelSelect
onValueChange={(value) => {
setModel(value);
}}
value={model}
>
<PromptInputModelSelectTrigger>
<PromptInputModelSelectValue />
</PromptInputModelSelectTrigger>
<PromptInputModelSelectContent>
{models.map((model) => (
<PromptInputModelSelectItem key={model.value} value={model.value}>
{model.name}
</PromptInputModelSelectItem>
))}
</PromptInputModelSelectContent>
</PromptInputModelSelect>
</PromptInputTools>
<PromptInputSubmit disabled={!input && status !== "streaming"} status={status} />
</PromptInputFooter>
</PromptInput>
</div>
</div>
);
};
export default ChatBotDemo;
```
### Server
Create a new route handler `app/api/chat/route.ts` and paste in the following code. We're using `perplexity/sonar` for web search because the model returns search results by default. We also pass `sendSources` and `sendReasoning` to `toUIMessageStreamResponse` so that sources and reasoning are streamed back to the frontend as message parts. The handler also accepts file attachments from the client.
```ts title="app/api/chat/route.ts"
import { streamText, UIMessage, convertToModelMessages } from "ai";
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const {
messages,
model,
webSearch,
}: {
messages: UIMessage[];
model: string;
webSearch: boolean;
} = await req.json();
const result = streamText({
model: webSearch ? "perplexity/sonar" : model,
messages: convertToModelMessages(messages),
system: "You are a helpful assistant that can answer questions and help with tasks",
});
// send sources and reasoning back to the client
return result.toUIMessageStreamResponse({
sendSources: true,
sendReasoning: true,
});
}
```
You now have a working chatbot app with file attachment support! The chatbot can handle both text and file inputs through the action menu. Feel free to explore other components like [`Tool`](/elements/components/tool) or [`Task`](/elements/components/task) to extend your app, or view the other examples.
docs/examples/v0.md
# v0 clone
URL: /examples/v0
---
title: v0 clone
description: An example of how to use the AI Elements to build a v0 clone.
---
An example of how to use the AI Elements to build a v0 clone.
## Tutorial
Let's walk through how to build a v0 clone using AI Elements and the [v0 Platform API](https://v0.dev/docs/api/platform).
### Setup
First, set up a new Next.js repo and cd into it by running the following command (make sure you choose to use Tailwind in the project setup):
```bash title="Terminal"
npx create-next-app@latest v0-clone && cd v0-clone
```
Run the following command to install shadcn/ui and AI Elements.
```bash title="Terminal"
npx shadcn@latest init && npx ai-elements@latest
```
Now, install the v0 sdk:
```package-install
npm i v0-sdk
```
In order to call the v0 Platform API, let's configure a v0 API key. Create a `.env.local` file in your root directory, navigate to your [v0 account settings](https://v0.dev/chat/settings/keys) to create a token, then paste it into your `.env.local` as `V0_API_KEY`.
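For reference, the file needs just this one line (the value shown is a placeholder):

```bash title=".env.local"
V0_API_KEY=your_v0_api_key_here
```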
We're now ready to start building our app!
### Client
In your `app/page.tsx`, replace the code with the file below.
Here, we use `Conversation` to wrap the conversation code, and the `WebPreview` component to render the URL returned from the v0 API.
```tsx title="app/page.tsx"
"use client";
import { useState } from "react";
import { PromptInput, type PromptInputMessage, PromptInputSubmit, PromptInputTextarea } from "@/components/ai-elements/prompt-input";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { Conversation, ConversationContent } from "@/components/ai-elements/conversation";
import { WebPreview, WebPreviewNavigation, WebPreviewUrl, WebPreviewBody } from "@/components/ai-elements/web-preview";
import { Loader } from "@/components/ai-elements/loader";
import { Suggestions, Suggestion } from "@/components/ai-elements/suggestion";
interface Chat {
id: string;
demo: string;
}
export default function Home() {
const [message, setMessage] = useState("");
const [currentChat, setCurrentChat] = useState<Chat | null>(null);
const [isLoading, setIsLoading] = useState(false);
const [chatHistory, setChatHistory] = useState<
Array<{
type: "user" | "assistant";
content: string;
}>
>([]);
const handleSendMessage = async (promptMessage: PromptInputMessage) => {
const hasText = Boolean(promptMessage.text);
const hasAttachments = Boolean(promptMessage.files?.length);
if (!(hasText || hasAttachments) || isLoading) return;
const userMessage = promptMessage.text?.trim() || "Sent with attachments";
setMessage("");
setIsLoading(true);
setChatHistory((prev) => [...prev, { type: "user", content: userMessage }]);
try {
const response = await fetch("/api/chat", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
message: userMessage,
chatId: currentChat?.id,
}),
});
if (!response.ok) {
throw new Error("Failed to create chat");
}
const chat: Chat = await response.json();
setCurrentChat(chat);
setChatHistory((prev) => [
...prev,
{
type: "assistant",
content: "Generated new app preview. Check the preview panel!",
},
]);
} catch (error) {
console.error("Error:", error);
setChatHistory((prev) => [
...prev,
{
type: "assistant",
content: "Sorry, there was an error creating your app. Please try again.",
},
]);
} finally {
setIsLoading(false);
}
};
return (
<div className="h-screen flex">
{/* Chat Panel */}
<div className="w-1/2 flex flex-col border-r">
{/* Header */}
<div className="border-b p-3 h-14 flex items-center justify-between">
<h1 className="text-lg font-semibold">v0 Clone</h1>
</div>
<div className="flex-1 overflow-y-auto p-4 space-y-4">
{chatHistory.length === 0 ? (
<div className="text-center font-semibold mt-8">
<p className="text-3xl mt-4">What can we build together?</p>
</div>
) : (
<>
<Conversation>
<ConversationContent>
{chatHistory.map((msg, index) => (
<Message from={msg.type} key={index}>
<MessageContent>{msg.content}</MessageContent>
</Message>
))}
</ConversationContent>
</Conversation>
{isLoading && (
<Message from="assistant">
<MessageContent>
<div className="flex items-center gap-2">
<Loader />
Creating your app...
</div>
</MessageContent>
</Message>
)}
</>
)}
</div>
{/* Input */}
<div className="border-t p-4">
{!currentChat && (
<Suggestions>
<Suggestion
onClick={() => setMessage("Create a responsive navbar with Tailwind CSS")}
suggestion="Create a responsive navbar with Tailwind CSS"
/>
<Suggestion onClick={() => setMessage("Build a todo app with React")} suggestion="Build a todo app with React" />
<Suggestion
onClick={() => setMessage("Make a landing page for a coffee shop")}
suggestion="Make a landing page for a coffee shop"
/>
</Suggestions>
)}
<div className="flex gap-2">
<PromptInput onSubmit={handleSendMessage} className="mt-4 w-full max-w-2xl mx-auto relative">
<PromptInputTextarea onChange={(e) => setMessage(e.target.value)} value={message} className="pr-12 min-h-[60px]" />
<PromptInputSubmit className="absolute bottom-1 right-1" disabled={!message} status={isLoading ? "streaming" : "ready"} />
</PromptInput>
</div>
</div>
</div>
{/* Preview Panel */}
<div className="w-1/2 flex flex-col">
<WebPreview>
<WebPreviewNavigation>
<WebPreviewUrl readOnly placeholder="Your app here..." value={currentChat?.demo} />
</WebPreviewNavigation>
<WebPreviewBody src={currentChat?.demo} />
</WebPreview>
</div>
</div>
);
}
```
We'll also edit the base component `components/ai-elements/web-preview.tsx` so that it better matches our theme.
```tsx title="components/ai-elements/web-preview.tsx" highlight="5,24"
return (
<WebPreviewContext.Provider value={contextValue}>
<div
className={cn(
'flex size-full flex-col bg-card', // remove rounded-lg border
className,
)}
{...props}
>
{children}
</div>
</WebPreviewContext.Provider>
);
};
export type WebPreviewNavigationProps = ComponentProps<'div'>;
export const WebPreviewNavigation = ({
className,
children,
...props
}: WebPreviewNavigationProps) => (
<div
className={cn('flex items-center gap-1 border-b p-2 h-14', className)} // add h-14
{...props}
>
{children}
</div>
);
```
### Server
Create a new route handler `app/api/chat/route.ts` and paste in the following code. We use the v0 SDK to manage chats.
```ts title="app/api/chat/route.ts"
import { NextRequest, NextResponse } from "next/server";
import { v0 } from "v0-sdk";
export async function POST(request: NextRequest) {
try {
const { message, chatId } = await request.json();
if (!message) {
return NextResponse.json({ error: "Message is required" }, { status: 400 });
}
let chat;
if (chatId) {
// continue existing chat
chat = await v0.chats.sendMessage({
chatId: chatId,
message,
});
} else {
// create new chat
chat = await v0.chats.create({
message,
});
}
return NextResponse.json({
id: chat.id,
demo: chat.demo,
});
} catch (error) {
console.error("V0 API Error:", error);
return NextResponse.json({ error: "Failed to process request" }, { status: 500 });
}
}
```
To start your server, run `npm run dev`, navigate to `localhost:3000`, and try building an app!
You now have a working v0 clone you can build off of! Feel free to explore the [v0 Platform API](https://v0.dev/docs/api/platform) and components like [`Reasoning`](/elements/components/reasoning) and [`Task`](/elements/components/task) to extend your app, or view the other examples.
docs/examples/workflow.md
# Workflow
URL: /examples/workflow
---
title: Workflow
description: An example of how to use the AI Elements to build a workflow visualization.
---
An example of how to use the AI Elements to build a workflow visualization with interactive nodes and animated connections, built with [React Flow](https://reactflow.dev/).
<Preview path="workflow" type="block" className="p-0" />
## Tutorial
Let's walk through how to build a workflow visualization using AI Elements. Our example will include custom nodes with headers, content, and footers, along with animated and temporary edge types.
### Setup
First, set up a new Next.js repo and cd into it by running the following command (make sure you choose to use Tailwind in the project setup):
```bash title="Terminal"
npx create-next-app@latest ai-workflow && cd ai-workflow
```
Run the following command to install AI Elements. This will also set up shadcn/ui if you haven't already configured it:
```bash title="Terminal"
npx ai-elements@latest
```
Now, install the required dependencies:
```package-install
npm i @xyflow/react
```
We're now ready to start building our workflow!
### Client
Let's build the workflow visualization step by step. We'll create the component structure, define our nodes and edges, and configure the canvas.
#### Import the components
First, import the necessary AI Elements components in your `app/page.tsx`:
```tsx title="app/page.tsx"
"use client";
import { Canvas } from "@/components/ai-elements/canvas";
import { Connection } from "@/components/ai-elements/connection";
import { Controls } from "@/components/ai-elements/controls";
import { Edge } from "@/components/ai-elements/edge";
import { Node, NodeContent, NodeDescription, NodeFooter, NodeHeader, NodeTitle } from "@/components/ai-elements/node";
import { Panel } from "@/components/ai-elements/panel";
import { Toolbar } from "@/components/ai-elements/toolbar";
import { Button } from "@/components/ui/button";
```
#### Define node IDs
Create a constant object to manage node identifiers. This makes it easier to reference nodes when creating edges:
```tsx title="app/page.tsx"
const nodeIds = {
start: "start",
process1: "process1",
process2: "process2",
decision: "decision",
output1: "output1",
output2: "output2",
};
```
#### Create mock nodes
Define the nodes array with position, type, and data for each node in your workflow:
```tsx title="app/page.tsx"
const nodes = [
{
id: nodeIds.start,
type: "workflow",
position: { x: 0, y: 0 },
data: {
label: "Start",
description: "Initialize workflow",
handles: { target: false, source: true },
content: "Triggered by user action at 09:30 AM",
footer: "Status: Ready",
},
},
{
id: nodeIds.process1,
type: "workflow",
position: { x: 500, y: 0 },
data: {
label: "Process Data",
description: "Transform input",
handles: { target: true, source: true },
content: "Validating 1,234 records and applying business rules",
footer: "Duration: ~2.5s",
},
},
{
id: nodeIds.decision,
type: "workflow",
position: { x: 1000, y: 0 },
data: {
label: "Decision Point",
description: "Route based on conditions",
handles: { target: true, source: true },
content: "Evaluating: data.status === 'valid' && data.score > 0.8",
footer: "Confidence: 94%",
},
},
{
id: nodeIds.output1,
type: "workflow",
position: { x: 1500, y: -300 },
data: {
label: "Success Path",
description: "Handle success case",
handles: { target: true, source: true },
content: "1,156 records passed validation (93.7%)",
footer: "Next: Send to production",
},
},
{
id: nodeIds.output2,
type: "workflow",
position: { x: 1500, y: 300 },
data: {
label: "Error Path",
description: "Handle error case",
handles: { target: true, source: true },
content: "78 records failed validation (6.3%)",
footer: "Next: Queue for review",
},
},
{
id: nodeIds.process2,
type: "workflow",
position: { x: 2000, y: 0 },
data: {
label: "Complete",
description: "Finalize workflow",
handles: { target: true, source: false },
content: "All records processed and routed successfully",
footer: "Total time: 4.2s",
},
},
];
```
#### Create mock edges
Define the connections between nodes. Use `animated` for active paths and `temporary` for conditional or error paths:
```tsx title="app/page.tsx"
const edges = [
{
id: "edge1",
source: nodeIds.start,
target: nodeIds.process1,
type: "animated",
},
{
id: "edge2",
source: nodeIds.process1,
target: nodeIds.decision,
type: "animated",
},
{
id: "edge3",
source: nodeIds.decision,
target: nodeIds.output1,
type: "animated",
},
{
id: "edge4",
source: nodeIds.decision,
target: nodeIds.output2,
type: "temporary",
},
{
id: "edge5",
source: nodeIds.output1,
target: nodeIds.process2,
type: "animated",
},
{
id: "edge6",
source: nodeIds.output2,
target: nodeIds.process2,
type: "temporary",
},
];
```
#### Create the node types
Define custom node rendering using the compound Node components:
```tsx title="app/page.tsx"
const nodeTypes = {
workflow: ({
data,
}: {
data: {
label: string;
description: string;
handles: { target: boolean; source: boolean };
content: string;
footer: string;
};
}) => (
<Node handles={data.handles}>
<NodeHeader>
<NodeTitle>{data.label}</NodeTitle>
<NodeDescription>{data.description}</NodeDescription>
</NodeHeader>
<NodeContent>
<p className="text-sm">{data.content}</p>
</NodeContent>
<NodeFooter>
<p className="text-muted-foreground text-xs">{data.footer}</p>
</NodeFooter>
<Toolbar>
<Button size="sm" variant="ghost">
Edit
</Button>
<Button size="sm" variant="ghost">
Delete
</Button>
</Toolbar>
</Node>
),
};
```
#### Create the edge types
Map the edge type names to the Edge components:
```tsx title="app/page.tsx"
const edgeTypes = {
animated: Edge.Animated,
temporary: Edge.Temporary,
};
```
#### Build the main component
Finally, create the main component that renders the Canvas with all nodes, edges, controls, and custom UI panels:
```tsx title="app/page.tsx"
const App = () => (
<Canvas edges={edges} edgeTypes={edgeTypes} fitView nodes={nodes} nodeTypes={nodeTypes} connectionLineComponent={Connection}>
<Controls />
<Panel position="top-left">
<Button size="sm" variant="secondary">
Export
</Button>
</Panel>
</Canvas>
);
export default App;
```
### Key Features
The workflow visualization demonstrates several powerful features:
- **Custom Node Components**: Each node uses the compound components (`NodeHeader`, `NodeTitle`, `NodeDescription`, `NodeContent`, `NodeFooter`) for consistent, structured layouts.
- **Node Toolbars**: The `Toolbar` component attaches contextual actions (like Edit and Delete buttons) to individual nodes, appearing when hovering or selecting them.
- **Handle Configuration**: Nodes can have source and/or target handles, controlling which connections are possible.
- **Multiple Edge Types**: The `animated` type shows active data flow, while `temporary` indicates conditional or error paths.
- **Custom Connection Lines**: The `Connection` component provides styled bezier curves when dragging new connections between nodes.
- **Interactive Controls**: The `Controls` component adds zoom in/out and fit view buttons with a modern, themed design.
- **Custom UI Panels**: The `Panel` component allows you to position custom UI elements (like buttons, filters, or legends) anywhere on the canvas.
- **Automatic Layout**: The `Canvas` component auto-fits the view and provides pan/zoom controls out of the box.
You now have a working workflow visualization! Feel free to explore dynamic workflows by connecting this to AI-generated process flows, or extend it with interactive editing capabilities using React Flow's built-in features.
docs/introduction.md
# Introduction
What is AI Elements and why you should use it.
[AI Elements](https://www.npmjs.com/package/ai-elements) is a component library and custom registry built on top of [shadcn/ui](https://ui.shadcn.com/) to help you build AI-native applications faster. It provides pre-built components like conversations, messages and more.
Installing AI Elements is straightforward and can be done in a couple of ways. You can use the dedicated CLI command for the fastest setup, or integrate via the standard shadcn/ui CLI if you've already adopted shadcn's workflow.
**Using AI Elements CLI:**
```bash
npx ai-elements@latest
```
**Using shadcn CLI:**
```bash
npx shadcn@latest add @ai-elements/all
```
## Quick Start
Here are some basic examples of what you can achieve using components from AI Elements.
## Prerequisites
Before installing AI Elements, make sure your environment meets the following requirements:
- [Node.js](https://nodejs.org/en/download/), version 18 or later
- A [Next.js](https://nextjs.org/) project with the [AI SDK](https://ai-sdk.dev/) installed.
- [shadcn/ui](https://ui.shadcn.com/) installed in your project. If you don't have it installed, running any install command will automatically install it for you.
- We also highly recommend using the [AI Gateway](https://vercel.com/docs/ai-gateway) and adding `AI_GATEWAY_API_KEY` to your `.env.local` so you don't have to manage an API key for every provider. AI Gateway also includes $5 of usage per month so you can experiment with models. You can obtain an API key [here](https://vercel.com/d?to=%2F%5Bteam%5D%2F%7E%2Fai%2Fapi-keys&title=Get%20your%20AI%20Gateway%20key).
## Installing Components
You can install AI Elements components using either the AI Elements CLI or the shadcn/ui CLI. Both achieve the same result: adding the selected component's code and any needed dependencies to your project.
The CLI will download the component's code and integrate it into your project's directory (usually under your components folder). By default, AI Elements components are added to the `@/components/ai-elements/` directory (or whatever folder you've configured in your shadcn components settings).
After running the command, you should see a confirmation in your terminal that the files were added. You can then proceed to use the component in your code.
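For example, to add a single component (the `message` component is used here for illustration; swap in whichever component you need), either CLI works:

```bash title="Terminal"
# Using the AI Elements CLI
npx ai-elements@latest add message

# Or using the shadcn CLI with the AI Elements registry namespace
npx shadcn@latest add @ai-elements/message
```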
docs/troubleshooting.md
# Troubleshooting
URL: /troubleshooting
---
title: Troubleshooting
description: What to do if you run into issues with AI Elements.
---
## Why are my components not styled?
Make sure your project is configured correctly for shadcn/ui in Tailwind 4 - this means having a `globals.css` file that imports Tailwind and includes the shadcn/ui base styles.
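A minimal sketch of what that `globals.css` looks like under Tailwind 4 (your generated file will define many more CSS variables; the values below are illustrative):

```css title="app/globals.css"
/* Tailwind 4 uses a CSS import instead of @tailwind directives */
@import "tailwindcss";

/* shadcn/ui base styles: theme tokens as CSS variables */
:root {
  --background: oklch(1 0 0);
  --foreground: oklch(0.145 0 0);
}

.dark {
  --background: oklch(0.145 0 0);
  --foreground: oklch(0.985 0 0);
}
```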
## I ran the AI Elements CLI but nothing was added to my project
Double-check that:
- Your current working directory is the root of your project (where `package.json` lives).
- Your components.json file (if using shadcn-style config) is set up correctly.
- You're using the latest version of the AI Elements CLI:
```bash title="Terminal"
npx ai-elements@latest
```
If all else fails, feel free to open an [issue on GitHub](https://github.com/vercel/ai-elements/issues).
## Theme switching doesn't work — my app stays in light mode
Ensure your app uses the same data-theme system that shadcn/ui and AI Elements expect. The default implementation toggles a `data-theme` attribute on the `<html>` element, so make sure your Tailwind configuration uses `class` or `data-` selectors accordingly.
## The component imports fail with “module not found”
Check that the file exists. If it does, make sure your `tsconfig.json` has a proper `paths` alias for `@/`, for example:
```json title="tsconfig.json"
{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"@/*": ["./*"]
}
}
}
```
## My AI coding assistant can't access AI Elements components
1. Verify your config file syntax is valid JSON.
2. Check that the file path is correct for your AI tool.
3. Restart your coding assistant after making changes.
4. Ensure you have a stable internet connection.
## Still stuck?
If none of these answers help, open an [issue on GitHub](https://github.com/vercel/ai-elements/issues) and someone will be happy to assist.
docs/usage.md
# Usage
URL: /usage
---
title: Usage
description: Learn how to use AI Elements components in your application.
---
Once an AI Elements component is installed, you can import it and use it in your application like any other React component. The components are added as part of your codebase (not hidden in a library), so the usage feels very natural.
## Example
After installing AI Elements components, you can use them in your application like any other React component. For example:
```tsx title="conversation.tsx"
"use client";
import { Message, MessageAvatar, MessageContent } from "@/components/ai-elements/message";
import { useChat } from "@ai-sdk/react";
import { Response } from "@/components/ai-elements/response";
const Example = () => {
const { messages } = useChat();
return (
<>
{messages.map(({ role, parts }, index) => (
<Message from={role} key={index}>
<MessageContent>
{parts.map((part, i) => {
switch (part.type) {
case "text":
return <Response key={`${role}-${i}`}>{part.text}</Response>;
}
})}
</MessageContent>
</Message>
))}
</>
);
};
export default Example;
```
In the example above, we import the `Message` component from our AI Elements directory and include it in our JSX. Then, we compose the component with the `MessageContent` and `Response` subcomponents. You can style or configure the component just as you would if you wrote it yourself. Since the code lives in your project, you can even open the component file to see how it works or make custom modifications.
## Extensibility
All AI Elements components take as many primitive attributes as possible. For example, the `Message` component extends `HTMLAttributes<HTMLDivElement>`, so you can pass any props that a `div` supports. This makes it easy to extend the component with your own styles or functionality.
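For instance, because the remaining props are spread onto the underlying `div`, you can pass any standard `div` attributes straight through (a sketch; the `data-testid` and event handler here are illustrative, not part of the component's API):

```tsx
import { Message, MessageContent } from "@/components/ai-elements/message";

const CustomMessage = () => (
  <Message
    from="user"
    data-testid="user-message"
    onMouseEnter={() => console.log("hovered")}
  >
    <MessageContent className="shadow-md">Hello!</MessageContent>
  </Message>
);

export default CustomMessage;
```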
## Customization
<Callout>
If you re-install AI Elements by rerunning `npx ai-elements@latest`, the CLI
will ask before overwriting the file so you can save any custom changes you
made.
</Callout>
After installation, no additional setup is needed. The component's styles (Tailwind CSS classes) and scripts are already integrated. You can start using the component in your app immediately.
For example, if you'd like to remove the rounding on `Message`, you can go to `components/ai-elements/message.tsx` and remove `rounded-lg` as follows:
```tsx title="components/ai-elements/message.tsx" highlight="8"
export const MessageContent = ({ children, className, ...props }: MessageContentProps) => (
<div
className={cn(
"flex flex-col gap-2 text-sm text-foreground",
"group-[.is-user]:bg-primary group-[.is-user]:text-primary-foreground group-[.is-user]:px-4 group-[.is-user]:py-3",
className
)}
{...props}
>
<div className="is-user:dark">{children}</div>
</div>
);
```
plugin.lock.json
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:nathanonn/claude-skills-ai-elements:.claude/skills/ai-elements",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "7d030fd30c48a45c75bf469eb975b0f7edb3af9e",
"treeHash": "f2f4ffd9314b251059fd74dea6652b3490c90a991361d0bd1b3cda10ef0c42ed",
"generatedAt": "2025-11-28T10:27:14.144668Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "ai-elements",
"description": "Intelligent documentation system for AI Elements component library. Activate automatically when working with AI-native applications or when AI Elements component names are mentioned (Message, Conversation, Reasoning, Canvas, etc.). Provides context-aware documentation retrieval - fetches examples for implementation queries, component references for API lookups, and smart multi-page fetching for complex tasks.",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "ed7d2e19d0978474f3fa0095420b26c89a59076e06926185fb76267a36ffcc9d"
},
{
"path": "INDEX.md",
"sha256": "2d6828fa8e6e25b0359024c3da40600603742a5819a7e66bbc41bb415bacb1cd"
},
{
"path": "SKILL.md",
"sha256": "2b6de3e030bbf105de002fc6c48c0c1e2ad8034369f639fe69d7032efd0fdc9f"
},
{
"path": "docs/troubleshooting.md",
"sha256": "0ddf956bdb648fc6104c87aa033cc52d370bae2a553ba7579b7a930a41633833"
},
{
"path": "docs/usage.md",
"sha256": "02f9f2e3e11f584e3201ea81e730486a8ddab5b5dbd7e6e8438256bb7d100a55"
},
{
"path": "docs/introduction.md",
"sha256": "16515c6907a0525a08e228cd19a3f5b387ba80f6520b014bdd84b75a9e4dfc9a"
},
{
"path": "docs/README.md",
"sha256": "b5566ab781a0d3a127369dd50eff2f8a615c8f3969525207306c8474b6c58609"
},
{
"path": "docs/components/loader.md",
"sha256": "ace9c0b025d0bd3d6ddf7b7ebefec9594baee252b4307a12e5ae6396e85fdc80"
},
{
"path": "docs/components/reasoning.md",
"sha256": "f2462cc6c81486817eaf22b0d2915ebc32970a0880e022778fe9526898b6535d"
},
{
"path": "docs/components/shimmer.md",
"sha256": "a2f438ab93c1c948e08fa417ed18b78dd33a52552b9537095853a82ee6ac609a"
},
{
"path": "docs/components/connection.md",
"sha256": "927e4f71149123fea9e5645985ed5ec84609dbd5c6cb564ba3961a34ca84a7bd"
},
{
"path": "docs/components/context.md",
"sha256": "24f698a43a200c25de4828c1993957c4521c9997b5ef2187862546096e6944b3"
},
{
"path": "docs/components/toolbar.md",
"sha256": "998e0cb0e3592b4e3f9cbc3ddc26c4dec402592103ba7edbd5c978503be4b1b2"
},
{
"path": "docs/components/confirmation.md",
"sha256": "76ccbb7ceced08bc56cb0475de5264cc8d19d5f0c133086fdecf365b8408766a"
},
{
"path": "docs/components/sources.md",
"sha256": "d7abca244a726610bda90c186978bf989dc3806e16b5b04b0f9c33adffc396bc"
},
{
"path": "docs/components/image.md",
"sha256": "f3dac88e6ac68be5d0a3f9dd0092839c0017337001deae481cfdca0236199cd9"
},
{
"path": "docs/components/tool.md",
"sha256": "580acc494d11ebe6b666b55a9c1149b50935459cddb637deed8312e7b69616c8"
},
{
"path": "docs/components/canvas.md",
"sha256": "57321127910b0904836432448b4dfb855b08aa1d2bdaf0bbe8e0808ed31aea48"
},
{
"path": "docs/components/open-in-chat.md",
"sha256": "a91dc3ee88b5135ae514c25607c3ea6ad3c7eb7db7476be49519d8aa3a7dd674"
},
{
"path": "docs/components/node.md",
"sha256": "ec4b6c7fb56d6ee6c54cb578e2db373d656db452ae6ec129fe5bd2922e881ee1"
},
{
"path": "docs/components/inline-citation.md",
"sha256": "502c9fba74b040a62948b0dea56117d40bc04da2ecfb315c2f6e32d43cb1f57a"
},
{
"path": "docs/components/panel.md",
"sha256": "5c3b46c78449daca0c7c1f4847e197135060356b11fd462bba7279c44bcc5de6"
},
{
"path": "docs/components/artifact.md",
"sha256": "3a0d68c578be1a7235a6b83f3c4a59a84346c7a2341b1e6b856de81357ada7bd"
},
{
"path": "docs/components/edge.md",
"sha256": "a00e77190f9250f500fa0abbbbb4a3e82de71adc6742b03e9b871c015121c7e1"
},
{
"path": "docs/components/suggestion.md",
"sha256": "3f32c03134f49afe5e53f1f32bb5ff1ad7f4ba373585c308081b6f1fe615937b"
},
{
"path": "docs/components/web-preview.md",
"sha256": "ad8a3de557ba2ad01f16662dc694f0557668251432165a9d5b2843fa1c2f5647"
},
{
"path": "docs/components/queue.md",
"sha256": "5fdb75ef4cc056e90e6a7c42224ec93803962142d80a0ac15e665b023f3edf87"
},
{
"path": "docs/components/prompt-input.md",
"sha256": "e94c472cb9a6982869fba8456eec5224b51d291fb66e199de37647aab92d6f2b"
},
{
"path": "docs/components/task.md",
"sha256": "6b463d93dd3d0410809005f2390860edc079abd101e68a870cf165c97119f9c1"
},
{
"path": "docs/components/plan.md",
"sha256": "1037c18149040bb1b260c11b0ff005999f5e1fee3a8d992371f7276a7b0a08e5"
},
{
"path": "docs/components/controls.md",
"sha256": "3b42590b86fcb32f573cab3c50fcbba3f782e48ed52b41c827bdf897e0623d58"
},
{
"path": "docs/components/branch.md",
"sha256": "9a79132a5da752ae522a07235974294cd74c28bdf2a5ce85e69bdb9509e4e5f1"
},
{
"path": "docs/components/message.md",
"sha256": "eeb2905da86f0c1d558d175998f687726bf69ae223783e67242959fe33bb674f"
},
{
"path": "docs/components/response.md",
"sha256": "3d3776ff5fc8a871eb6e5d8767eff978d9b5a52294c786dca696671c46acc166"
},
{
"path": "docs/components/actions.md",
"sha256": "666547878c08b764d813f993ca2aee96ed2cf8e4f2221a80c6cbcf319f1b52bd"
},
{
"path": "docs/components/code-block.md",
"sha256": "49c2c964541fb6df7220012e45435722fe56e41d0532688728451367597035ad"
},
{
"path": "docs/components/conversation.md",
"sha256": "5165121f0ba8a3e9ae2ecc9db7e260742cf968ecf3f109d3293abe2a15569c0d"
},
{
"path": "docs/components/chain-of-thought.md",
"sha256": "503e481f1d416d85f4d2259cbd270d8846b57ebf826cc95fdf0a6e506aa8e35c"
},
{
"path": "docs/examples/workflow.md",
"sha256": "624875da3cbd3f53b95f2f14a8082eba2b9060c03e261114751af77bae025e48"
},
{
"path": "docs/examples/chatbot.md",
"sha256": "5c4b957ce9731aa60e0a1f51c85a4eee57e5dcc905df7de577d64d276b78dd0f"
},
{
"path": "docs/examples/v0.md",
"sha256": "9aa7b2f26588fb6fe6ecc9f58494f3ebc8c1062756ebcbb94435d9dd5d837d79"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "ca3970f1ffb5d6f74a068bff1263d7232f497987b7782659f9e8f0ea8959f964"
}
],
"dirSha256": "f2f4ffd9314b251059fd74dea6652b3490c90a991361d0bd1b3cda10ef0c42ed"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}