Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:41:51 +08:00
commit 43ed648a28
43 changed files with 8444 additions and 0 deletions

docs/examples/chatbot.md
# Chatbot
URL: /examples/chatbot
---
title: Chatbot
description: An example of how to use the AI Elements to build a chatbot.
---
<Preview path="chatbot" type="block" className="p-0" />
## Tutorial
Let's walk through how to build a chatbot using AI Elements and AI SDK. Our example will include reasoning, web search with citations, and a model picker.
### Setup
First, set up a new Next.js repo and cd into it by running the following command (make sure you choose to use Tailwind in the project setup):
```bash title="Terminal"
npx create-next-app@latest ai-chatbot && cd ai-chatbot
```
Run the following command to install AI Elements. This will also set up shadcn/ui if you haven't already configured it:
```bash title="Terminal"
npx ai-elements@latest
```
Now, install the AI SDK dependencies:
```package-install
npm i ai @ai-sdk/react zod
```
In order to use the providers, let's configure an AI Gateway API key. Create a `.env.local` in your root directory and navigate [here](https://vercel.com/d?to=%2F%5Bteam%5D%2F%7E%2Fai%2Fapi-keys&title=Get%20your%20AI%20Gateway%20key) to create a token, then paste it in your `.env.local`.
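For reference, your `.env.local` would then look something like this. The variable name `AI_GATEWAY_API_KEY` is the default the AI SDK's gateway provider reads; the value shown is a placeholder:

```bash title=".env.local"
# Placeholder — replace with the token you created above
AI_GATEWAY_API_KEY=your-ai-gateway-key
```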
We're now ready to start building our app!
### Client
In your `app/page.tsx`, replace the code with the file below.
Here, we use the `PromptInput` component with its compound components to build a rich input experience with file attachments, model picker, and action menu. The input component uses the new `PromptInputMessage` type for handling both text and file attachments.
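As a rough sketch (not the library's actual type declaration — the shape below is inferred from how the message is used in `handleSubmit`), a `PromptInputMessage` carries optional text plus an optional list of file attachments:

```typescript
// Hypothetical shape of PromptInputMessage, inferred from usage —
// consult the generated prompt-input component for the real definition.
type PromptInputMessage = {
  text?: string;
  files?: Array<{ url: string; mediaType?: string; filename?: string }>;
};

// Mirrors the submit guard: only send when there is text or at least one file.
function hasContent(message: PromptInputMessage): boolean {
  return Boolean(message.text) || Boolean(message.files?.length);
}
```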
The whole chat lives in a `Conversation`. We map over `message.parts`, switch on each part's `type`, and render it with `Message`, `Reasoning`, or `Sources`. We also use `status` from `useChat` to stream reasoning tokens and to show the `Loader`.
```tsx title="app/page.tsx"
"use client";
import { Conversation, ConversationContent, ConversationScrollButton } from "@/components/ai-elements/conversation";
import { Message, MessageContent } from "@/components/ai-elements/message";
import {
PromptInput,
PromptInputActionAddAttachments,
PromptInputActionMenu,
PromptInputActionMenuContent,
PromptInputActionMenuTrigger,
PromptInputAttachment,
PromptInputAttachments,
PromptInputBody,
PromptInputButton,
PromptInputHeader,
type PromptInputMessage,
PromptInputModelSelect,
PromptInputModelSelectContent,
PromptInputModelSelectItem,
PromptInputModelSelectTrigger,
PromptInputModelSelectValue,
PromptInputSubmit,
PromptInputTextarea,
PromptInputFooter,
PromptInputTools,
} from "@/components/ai-elements/prompt-input";
import { Action, Actions } from "@/components/ai-elements/actions";
import { Fragment, useState } from "react";
import { useChat } from "@ai-sdk/react";
import { Response } from "@/components/ai-elements/response";
import { CopyIcon, GlobeIcon, RefreshCcwIcon } from "lucide-react";
import { Source, Sources, SourcesContent, SourcesTrigger } from "@/components/ai-elements/sources";
import { Reasoning, ReasoningContent, ReasoningTrigger } from "@/components/ai-elements/reasoning";
import { Loader } from "@/components/ai-elements/loader";
const models = [
{
name: "GPT 4o",
value: "openai/gpt-4o",
},
{
name: "Deepseek R1",
value: "deepseek/deepseek-r1",
},
];
const ChatBotDemo = () => {
const [input, setInput] = useState("");
const [model, setModel] = useState<string>(models[0].value);
const [webSearch, setWebSearch] = useState(false);
const { messages, sendMessage, status, regenerate } = useChat();
const handleSubmit = (message: PromptInputMessage) => {
const hasText = Boolean(message.text);
const hasAttachments = Boolean(message.files?.length);
if (!(hasText || hasAttachments)) {
return;
}
sendMessage(
{
text: message.text || "Sent with attachments",
files: message.files,
},
{
body: {
model: model,
webSearch: webSearch,
},
}
);
setInput("");
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full h-screen">
<div className="flex flex-col h-full">
<Conversation className="h-full">
<ConversationContent>
{messages.map((message) => (
<div key={message.id}>
{message.role === "assistant" && message.parts.filter((part) => part.type === "source-url").length > 0 && (
<Sources>
<SourcesTrigger count={message.parts.filter((part) => part.type === "source-url").length} />
<SourcesContent>
{message.parts
.filter((part) => part.type === "source-url")
.map((part, i) => (
<Source key={`${message.id}-${i}`} href={part.url} title={part.url} />
))}
</SourcesContent>
</Sources>
)}
{message.parts.map((part, i) => {
switch (part.type) {
case "text":
return (
<Fragment key={`${message.id}-${i}`}>
<Message from={message.role}>
<MessageContent>
<Response>{part.text}</Response>
</MessageContent>
</Message>
{message.role === "assistant" && message.id === messages.at(-1)?.id && (
<Actions className="mt-2">
<Action onClick={() => regenerate()} label="Retry">
<RefreshCcwIcon className="size-3" />
</Action>
<Action onClick={() => navigator.clipboard.writeText(part.text)} label="Copy">
<CopyIcon className="size-3" />
</Action>
</Actions>
)}
</Fragment>
);
case "reasoning":
return (
<Reasoning
key={`${message.id}-${i}`}
className="w-full"
isStreaming={
status === "streaming" && i === message.parts.length - 1 && message.id === messages.at(-1)?.id
}
>
<ReasoningTrigger />
<ReasoningContent>{part.text}</ReasoningContent>
</Reasoning>
);
default:
return null;
}
})}
</div>
))}
{status === "submitted" && <Loader />}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
<PromptInput onSubmit={handleSubmit} className="mt-4" globalDrop multiple>
<PromptInputHeader>
<PromptInputAttachments>{(attachment) => <PromptInputAttachment data={attachment} />}</PromptInputAttachments>
</PromptInputHeader>
<PromptInputBody>
<PromptInputTextarea onChange={(e) => setInput(e.target.value)} value={input} />
</PromptInputBody>
<PromptInputFooter>
<PromptInputTools>
<PromptInputActionMenu>
<PromptInputActionMenuTrigger />
<PromptInputActionMenuContent>
<PromptInputActionAddAttachments />
</PromptInputActionMenuContent>
</PromptInputActionMenu>
<PromptInputButton variant={webSearch ? "default" : "ghost"} onClick={() => setWebSearch(!webSearch)}>
<GlobeIcon size={16} />
<span>Search</span>
</PromptInputButton>
<PromptInputModelSelect
onValueChange={(value) => {
setModel(value);
}}
value={model}
>
<PromptInputModelSelectTrigger>
<PromptInputModelSelectValue />
</PromptInputModelSelectTrigger>
<PromptInputModelSelectContent>
{models.map((model) => (
<PromptInputModelSelectItem key={model.value} value={model.value}>
{model.name}
</PromptInputModelSelectItem>
))}
</PromptInputModelSelectContent>
</PromptInputModelSelect>
</PromptInputTools>
<PromptInputSubmit disabled={!input} status={status} />
</PromptInputFooter>
</PromptInput>
</div>
</div>
);
};
export default ChatBotDemo;
```
### Server
Create a new route handler `app/api/chat/route.ts` and paste in the following code. We're using `perplexity/sonar` for web search because that model returns search results by default. We also pass `sendSources` and `sendReasoning` to `toUIMessageStreamResponse` so that sources and reasoning arrive as parts on the frontend. The handler also accepts file attachments from the client.
```ts title="app/api/chat/route.ts"
import { streamText, UIMessage, convertToModelMessages } from "ai";
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const {
messages,
model,
webSearch,
}: {
messages: UIMessage[];
model: string;
webSearch: boolean;
} = await req.json();
const result = streamText({
model: webSearch ? "perplexity/sonar" : model,
messages: convertToModelMessages(messages),
system: "You are a helpful assistant that can answer questions and help with tasks",
});
// send sources and reasoning back to the client
return result.toUIMessageStreamResponse({
sendSources: true,
sendReasoning: true,
});
}
```
You now have a working chatbot app with file attachment support! The chatbot can handle both text and file inputs through the action menu. Feel free to explore other components like [`Tool`](/elements/components/tool) or [`Task`](/elements/components/task) to extend your app, or view the other examples.

docs/examples/v0.md
# v0 clone
URL: /examples/v0
---
title: v0 clone
description: An example of how to use the AI Elements to build a v0 clone.
---
## Tutorial
Let's walk through how to build a v0 clone using AI Elements and the [v0 Platform API](https://v0.dev/docs/api/platform).
### Setup
First, set up a new Next.js repo and cd into it by running the following command (make sure you choose to use Tailwind in the project setup):
```bash title="Terminal"
npx create-next-app@latest v0-clone && cd v0-clone
```
Run the following command to install shadcn/ui and AI Elements.
```bash title="Terminal"
npx shadcn@latest init && npx ai-elements@latest
```
Now, install the v0 SDK:
```package-install
npm i v0-sdk
```
In order to use the providers, let's configure a v0 API key. Create a `.env.local` in your root directory and navigate to your [v0 account settings](https://v0.dev/chat/settings/keys) to create a token, then paste it in your `.env.local` as `V0_API_KEY`.
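Your `.env.local` would then contain just the key (placeholder value shown):

```bash title=".env.local"
# Placeholder — replace with the token from your v0 account settings
V0_API_KEY=your-v0-api-key
```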
We're now ready to start building our app!
### Client
In your `app/page.tsx`, replace the code with the file below.
Here, we use `Conversation` to wrap the conversation code, and the `WebPreview` component to render the URL returned from the v0 API.
```tsx title="app/page.tsx"
"use client";
import { useState } from "react";
import { PromptInput, type PromptInputMessage, PromptInputSubmit, PromptInputTextarea } from "@/components/ai-elements/prompt-input";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { Conversation, ConversationContent } from "@/components/ai-elements/conversation";
import { WebPreview, WebPreviewNavigation, WebPreviewUrl, WebPreviewBody } from "@/components/ai-elements/web-preview";
import { Loader } from "@/components/ai-elements/loader";
import { Suggestions, Suggestion } from "@/components/ai-elements/suggestion";
interface Chat {
id: string;
demo: string;
}
export default function Home() {
const [message, setMessage] = useState("");
const [currentChat, setCurrentChat] = useState<Chat | null>(null);
const [isLoading, setIsLoading] = useState(false);
const [chatHistory, setChatHistory] = useState<
Array<{
type: "user" | "assistant";
content: string;
}>
>([]);
const handleSendMessage = async (promptMessage: PromptInputMessage) => {
const hasText = Boolean(promptMessage.text);
const hasAttachments = Boolean(promptMessage.files?.length);
if (!(hasText || hasAttachments) || isLoading) return;
const userMessage = promptMessage.text?.trim() || "Sent with attachments";
setMessage("");
setIsLoading(true);
setChatHistory((prev) => [...prev, { type: "user", content: userMessage }]);
try {
const response = await fetch("/api/chat", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
message: userMessage,
chatId: currentChat?.id,
}),
});
if (!response.ok) {
throw new Error("Failed to create chat");
}
const chat: Chat = await response.json();
setCurrentChat(chat);
setChatHistory((prev) => [
...prev,
{
type: "assistant",
content: "Generated new app preview. Check the preview panel!",
},
]);
} catch (error) {
console.error("Error:", error);
setChatHistory((prev) => [
...prev,
{
type: "assistant",
content: "Sorry, there was an error creating your app. Please try again.",
},
]);
} finally {
setIsLoading(false);
}
};
return (
<div className="h-screen flex">
{/* Chat Panel */}
<div className="w-1/2 flex flex-col border-r">
{/* Header */}
<div className="border-b p-3 h-14 flex items-center justify-between">
<h1 className="text-lg font-semibold">v0 Clone</h1>
</div>
<div className="flex-1 overflow-y-auto p-4 space-y-4">
{chatHistory.length === 0 ? (
<div className="text-center font-semibold mt-8">
<p className="text-3xl mt-4">What can we build together?</p>
</div>
) : (
<>
<Conversation>
<ConversationContent>
{chatHistory.map((msg, index) => (
<Message from={msg.type} key={index}>
<MessageContent>{msg.content}</MessageContent>
</Message>
))}
</ConversationContent>
</Conversation>
{isLoading && (
<Message from="assistant">
<MessageContent>
<div className="flex items-center gap-2">
<Loader />
Creating your app...
</div>
</MessageContent>
</Message>
)}
</>
)}
</div>
{/* Input */}
<div className="border-t p-4">
{!currentChat && (
<Suggestions>
<Suggestion
onClick={() => setMessage("Create a responsive navbar with Tailwind CSS")}
suggestion="Create a responsive navbar with Tailwind CSS"
/>
<Suggestion onClick={() => setMessage("Build a todo app with React")} suggestion="Build a todo app with React" />
<Suggestion
onClick={() => setMessage("Make a landing page for a coffee shop")}
suggestion="Make a landing page for a coffee shop"
/>
</Suggestions>
)}
<div className="flex gap-2">
<PromptInput onSubmit={handleSendMessage} className="mt-4 w-full max-w-2xl mx-auto relative">
<PromptInputTextarea onChange={(e) => setMessage(e.target.value)} value={message} className="pr-12 min-h-[60px]" />
<PromptInputSubmit className="absolute bottom-1 right-1" disabled={!message} status={isLoading ? "streaming" : "ready"} />
</PromptInput>
</div>
</div>
</div>
{/* Preview Panel */}
<div className="w-1/2 flex flex-col">
<WebPreview>
<WebPreviewNavigation>
<WebPreviewUrl readOnly placeholder="Your app here..." value={currentChat?.demo} />
</WebPreviewNavigation>
<WebPreviewBody src={currentChat?.demo} />
</WebPreview>
</div>
</div>
);
}
```
In this case, we'll also edit the base component `components/ai-elements/web-preview.tsx` in order to best match with our theme.
```tsx title="components/ai-elements/web-preview.tsx" highlight="5,24"
return (
<WebPreviewContext.Provider value={contextValue}>
<div
className={cn(
'flex size-full flex-col bg-card', // remove rounded-lg border
className,
)}
{...props}
>
{children}
</div>
</WebPreviewContext.Provider>
);
};
export type WebPreviewNavigationProps = ComponentProps<'div'>;
export const WebPreviewNavigation = ({
className,
children,
...props
}: WebPreviewNavigationProps) => (
<div
className={cn('flex items-center gap-1 border-b p-2 h-14', className)} // add h-14
{...props}
>
{children}
</div>
);
```
### Server
Create a new route handler `app/api/chat/route.ts` and paste in the following code. We use the v0 SDK to manage chats.
```ts title="app/api/chat/route.ts"
import { NextRequest, NextResponse } from "next/server";
import { v0 } from "v0-sdk";
export async function POST(request: NextRequest) {
try {
const { message, chatId } = await request.json();
if (!message) {
return NextResponse.json({ error: "Message is required" }, { status: 400 });
}
let chat;
if (chatId) {
// continue existing chat
chat = await v0.chats.sendMessage({
chatId: chatId,
message,
});
} else {
// create new chat
chat = await v0.chats.create({
message,
});
}
return NextResponse.json({
id: chat.id,
demo: chat.demo,
});
} catch (error) {
console.error("V0 API Error:", error);
return NextResponse.json({ error: "Failed to process request" }, { status: 500 });
}
}
```
To start your dev server, run `npm run dev`, navigate to `localhost:3000`, and try building an app!
You now have a working v0 clone you can build off of! Feel free to explore the [v0 Platform API](https://v0.dev/docs/api/platform) and components like [`Reasoning`](/elements/components/reasoning) and [`Task`](/elements/components/task) to extend your app, or view the other examples.

docs/examples/workflow.md
# Workflow
URL: /examples/workflow
---
title: Workflow
description: An example of how to use the AI Elements to build a workflow visualization.
---
An example of how to use the AI Elements to build a workflow visualization with interactive nodes and animated connections, built with [React Flow](https://reactflow.dev/).
<Preview path="workflow" type="block" className="p-0" />
## Tutorial
Let's walk through how to build a workflow visualization using AI Elements. Our example will include custom nodes with headers, content, and footers, along with animated and temporary edge types.
### Setup
First, set up a new Next.js repo and cd into it by running the following command (make sure you choose to use Tailwind in the project setup):
```bash title="Terminal"
npx create-next-app@latest ai-workflow && cd ai-workflow
```
Run the following command to install AI Elements. This will also set up shadcn/ui if you haven't already configured it:
```bash title="Terminal"
npx ai-elements@latest
```
Now, install the required dependencies:
```package-install
npm i @xyflow/react
```
We're now ready to start building our workflow!
### Client
Let's build the workflow visualization step by step. We'll create the component structure, define our nodes and edges, and configure the canvas.
#### Import the components
First, import the necessary AI Elements components in your `app/page.tsx`:
```tsx title="app/page.tsx"
"use client";
import { Canvas } from "@/components/ai-elements/canvas";
import { Connection } from "@/components/ai-elements/connection";
import { Controls } from "@/components/ai-elements/controls";
import { Edge } from "@/components/ai-elements/edge";
import { Node, NodeContent, NodeDescription, NodeFooter, NodeHeader, NodeTitle } from "@/components/ai-elements/node";
import { Panel } from "@/components/ai-elements/panel";
import { Toolbar } from "@/components/ai-elements/toolbar";
import { Button } from "@/components/ui/button";
```
#### Define node IDs
Create a constant object to manage node identifiers. This makes it easier to reference nodes when creating edges:
```tsx title="app/page.tsx"
const nodeIds = {
start: "start",
process1: "process1",
process2: "process2",
decision: "decision",
output1: "output1",
output2: "output2",
};
```
#### Create mock nodes
Define the nodes array with position, type, and data for each node in your workflow:
```tsx title="app/page.tsx"
const nodes = [
{
id: nodeIds.start,
type: "workflow",
position: { x: 0, y: 0 },
data: {
label: "Start",
description: "Initialize workflow",
handles: { target: false, source: true },
content: "Triggered by user action at 09:30 AM",
footer: "Status: Ready",
},
},
{
id: nodeIds.process1,
type: "workflow",
position: { x: 500, y: 0 },
data: {
label: "Process Data",
description: "Transform input",
handles: { target: true, source: true },
content: "Validating 1,234 records and applying business rules",
footer: "Duration: ~2.5s",
},
},
{
id: nodeIds.decision,
type: "workflow",
position: { x: 1000, y: 0 },
data: {
label: "Decision Point",
description: "Route based on conditions",
handles: { target: true, source: true },
content: "Evaluating: data.status === 'valid' && data.score > 0.8",
footer: "Confidence: 94%",
},
},
{
id: nodeIds.output1,
type: "workflow",
position: { x: 1500, y: -300 },
data: {
label: "Success Path",
description: "Handle success case",
handles: { target: true, source: true },
content: "1,156 records passed validation (93.7%)",
footer: "Next: Send to production",
},
},
{
id: nodeIds.output2,
type: "workflow",
position: { x: 1500, y: 300 },
data: {
label: "Error Path",
description: "Handle error case",
handles: { target: true, source: true },
content: "78 records failed validation (6.3%)",
footer: "Next: Queue for review",
},
},
{
id: nodeIds.process2,
type: "workflow",
position: { x: 2000, y: 0 },
data: {
label: "Complete",
description: "Finalize workflow",
handles: { target: true, source: false },
content: "All records processed and routed successfully",
footer: "Total time: 4.2s",
},
},
];
```
#### Create mock edges
Define the connections between nodes. Use `animated` for active paths and `temporary` for conditional or error paths:
```tsx title="app/page.tsx"
const edges = [
{
id: "edge1",
source: nodeIds.start,
target: nodeIds.process1,
type: "animated",
},
{
id: "edge2",
source: nodeIds.process1,
target: nodeIds.decision,
type: "animated",
},
{
id: "edge3",
source: nodeIds.decision,
target: nodeIds.output1,
type: "animated",
},
{
id: "edge4",
source: nodeIds.decision,
target: nodeIds.output2,
type: "temporary",
},
{
id: "edge5",
source: nodeIds.output1,
target: nodeIds.process2,
type: "animated",
},
{
id: "edge6",
source: nodeIds.output2,
target: nodeIds.process2,
type: "temporary",
},
];
```
#### Create the node types
Define custom node rendering using the compound Node components:
```tsx title="app/page.tsx"
const nodeTypes = {
workflow: ({
data,
}: {
data: {
label: string;
description: string;
handles: { target: boolean; source: boolean };
content: string;
footer: string;
};
}) => (
<Node handles={data.handles}>
<NodeHeader>
<NodeTitle>{data.label}</NodeTitle>
<NodeDescription>{data.description}</NodeDescription>
</NodeHeader>
<NodeContent>
<p className="text-sm">{data.content}</p>
</NodeContent>
<NodeFooter>
<p className="text-muted-foreground text-xs">{data.footer}</p>
</NodeFooter>
<Toolbar>
<Button size="sm" variant="ghost">
Edit
</Button>
<Button size="sm" variant="ghost">
Delete
</Button>
</Toolbar>
</Node>
),
};
```
#### Create the edge types
Map the edge type names to the Edge components:
```tsx title="app/page.tsx"
const edgeTypes = {
animated: Edge.Animated,
temporary: Edge.Temporary,
};
```
#### Build the main component
Finally, create the main component that renders the Canvas with all nodes, edges, controls, and custom UI panels:
```tsx title="app/page.tsx"
const App = () => (
<Canvas edges={edges} edgeTypes={edgeTypes} fitView nodes={nodes} nodeTypes={nodeTypes} connectionLineComponent={Connection}>
<Controls />
<Panel position="top-left">
<Button size="sm" variant="secondary">
Export
</Button>
</Panel>
</Canvas>
);
export default App;
```
### Key Features
The workflow visualization demonstrates several powerful features:
- **Custom Node Components**: Each node uses the compound components (`NodeHeader`, `NodeTitle`, `NodeDescription`, `NodeContent`, `NodeFooter`) for consistent, structured layouts.
- **Node Toolbars**: The `Toolbar` component attaches contextual actions (like Edit and Delete buttons) to individual nodes, appearing when hovering or selecting them.
- **Handle Configuration**: Nodes can have source and/or target handles, controlling which connections are possible.
- **Multiple Edge Types**: The `animated` type shows active data flow, while `temporary` indicates conditional or error paths.
- **Custom Connection Lines**: The `Connection` component provides styled bezier curves when dragging new connections between nodes.
- **Interactive Controls**: The `Controls` component adds zoom in/out and fit view buttons with a modern, themed design.
- **Custom UI Panels**: The `Panel` component allows you to position custom UI elements (like buttons, filters, or legends) anywhere on the canvas.
- **Automatic Layout**: The `Canvas` component auto-fits the view and provides pan/zoom controls out of the box.
You now have a working workflow visualization! Feel free to explore dynamic workflows by connecting this to AI-generated process flows, or extend it with interactive editing capabilities using React Flow's built-in features.