Initial commit

.claude-plugin/plugin.json (new file)
@@ -0,0 +1,13 @@
{
  "name": "smart-home-services",
  "description": "Collection of smart home integration skills including Home Assistant automation and Ollama local AI capabilities",
  "version": "0.0.0-2025.11.28",
  "author": {
    "name": "Paulus Schoutsen",
    "email": "balloob@gmail.com"
  },
  "skills": [
    "./skills/home-assistant",
    "./skills/ollama"
  ]
}

README.md (new file)
@@ -0,0 +1,3 @@
# smart-home-services

Collection of smart home integration skills including Home Assistant automation and Ollama local AI capabilities

plugin.lock.json (new file)
@@ -0,0 +1,64 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:balloob/llm-skills:smart-home-services",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "2c5a0b85c0b59fb1bd9d97e829f590a036a020fe",
    "treeHash": "2660c900a5e8061f8f284fab47582133927216c3312b7f69e3afc114657db8ac",
    "generatedAt": "2025-11-28T10:14:06.927789Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "smart-home-services",
    "description": "Collection of smart home integration skills including Home Assistant automation and Ollama local AI capabilities"
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "a4d4b417828d26a924421fb0b333b9c63d7e0fe5be5968e41fd5a579c51d90d6"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "683155e7b95fd03f75ed73bd463ddcf54742cfc4397262a9f97004ee9d1ce5e0"
      },
      {
        "path": "skills/home-assistant/SKILL.md",
        "sha256": "0f9fbc6e15a77f5eae55982a11d71eb0099dd36a6ee09e540cb5aa0457c00f68"
      },
      {
        "path": "skills/home-assistant/references/python_api.md",
        "sha256": "2b5581aca5b9e6d4bed10832f841c055df9e2f19d398027f3fb4868410e53eba"
      },
      {
        "path": "skills/home-assistant/references/node_api.md",
        "sha256": "90c295adcf7e5cf3fe320f94faa1363ffeba27e76f339584add0d13b66eda2a8"
      },
      {
        "path": "skills/ollama/SKILL.md",
        "sha256": "943ce247c81b36b988bd2e072c9074e2373cd4c5403fb90708455bbcb2379bcc"
      },
      {
        "path": "skills/ollama/references/nodejs_api.md",
        "sha256": "9b09530383283f612311266cb8e6713360c4c59185ae80361fe83e06821240b8"
      },
      {
        "path": "skills/ollama/references/python_api.md",
        "sha256": "4906bfeefcc7fe7046b670ebdca3d972fc91d5002a457b8c5c555429356fcf61"
      }
    ],
    "dirSha256": "2660c900a5e8061f8f284fab47582133927216c3312b7f69e3afc114657db8ac"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}

skills/home-assistant/SKILL.md (new file)
@@ -0,0 +1,282 @@
---
name: home-assistant
description: Use this if the user wants to connect to Home Assistant or leverage Home Assistant in any shape or form inside their project. Guide users integrating Home Assistant into projects for home automation control or data ingestion. Collects and validates connection credentials (URL and Long-Lived Access Token), provides API reference documentation for Python and Node.js implementations, and helps integrate Home Assistant APIs into user projects.
---

# Home Assistant

## Overview

This skill helps users integrate Home Assistant into their projects, whether to control smart home devices or to ingest sensor data and state information. The skill guides users through connection setup, validates credentials, and provides comprehensive API reference documentation for both Python and Node.js.

## When to Use This Skill

Use this skill when users want to:
- Connect their application to Home Assistant
- Control smart home devices (lights, switches, thermostats, etc.)
- Read sensor data or entity states from Home Assistant
- Automate home control based on custom logic
- Build dashboards or monitoring tools using Home Assistant data
- Integrate Home Assistant into existing Python or Node.js projects

## Connection Setup Workflow

### Step 1: Collect Connection Information

Collect two pieces of information from the user:

1. **Home Assistant URL**: The web address where Home Assistant is accessible
2. **Long-Lived Access Token**: Authentication token for API access

### Step 2: Normalize the URL

If the user provides a URL with a path component (e.g., `http://homeassistant.local:8123/lovelace/dashboard`), normalize it by removing everything after the host and port. The base URL should only include the scheme, host, and port (a minimal normalization sketch follows the examples):

- ✓ Correct: `http://homeassistant.local:8123`
- ✗ Incorrect: `http://homeassistant.local:8123/lovelace/dashboard`
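
For reference, this normalization can be done with the Python standard library; the function name `normalize_url` is illustrative, not part of any Home Assistant API:

```python
from urllib.parse import urlparse

def normalize_url(raw_url: str) -> str:
    """Reduce a user-supplied URL to scheme://host[:port]."""
    parts = urlparse(raw_url)
    # urlparse keeps the port inside netloc, so scheme + netloc is enough
    return f"{parts.scheme}://{parts.netloc}"

print(normalize_url("http://homeassistant.local:8123/lovelace/dashboard"))
# -> http://homeassistant.local:8123
```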

### Step 3: Help Users Find Their Token

If users don't know where to find their Long-Lived Access Token, provide these instructions:

1. Log into the Home Assistant web interface
2. Click on the user profile (bottom left, user icon or name)
3. Click on the "Security" tab
4. Scroll down to the "Long-Lived Access Tokens" section
5. Click "Create Token"
6. Give the token a name (e.g., "My Project")
7. Copy the generated token (it will only be shown once)

### Step 4: Validate the Connection

Use curl to test the connection and retrieve Home Assistant configuration information.

```bash
curl -X GET \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  <URL>/api/config
```

Example:
```bash
curl -X GET \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." \
  -H "Content-Type: application/json" \
  http://homeassistant.local:8123/api/config
```

**Success output:**
```json
{
  "location_name": "Home",
  "latitude": 37.7749,
  "longitude": -122.4194,
  "elevation": 0,
  "unit_system": {
    "length": "km",
    "mass": "g",
    "temperature": "°C",
    "volume": "L"
  },
  "time_zone": "America/Los_Angeles",
  "version": "2024.1.0",
  "config_dir": "/config",
  "allowlist_external_dirs": [],
  "allowlist_external_urls": [],
  "components": ["automation", "light", "switch", ...],
  "config_source": "storage"
}
```

**Key information from the response:**
- `version`: Home Assistant version (e.g., "2024.1.0")
- `location_name`: Name of the Home Assistant instance
- `time_zone`: Configured time zone
- `components`: List of loaded components/integrations

**Failure scenarios:**

Authentication failure (401):
```json
{"message": "Invalid authentication"}
```

Connection failure:
```
curl: (7) Failed to connect to homeassistant.local port 8123: Connection refused
```

If authentication fails, verify:
1. The Long-Lived Access Token is correct
2. The token hasn't been deleted or expired
3. The URL is correct (including http/https and port)

### Step 5: Proceed with Implementation

Once the connection is validated, help the user implement their integration based on their programming language and requirements.

## Core Interaction Patterns

**IMPORTANT**: The following WebSocket API commands form the **core** of how users should interact with Home Assistant. They leverage the automation engine and keep scripts minimal by using native Home Assistant syntax.

### Automation Engine Commands (WebSocket API)

These commands require a WebSocket API connection and provide the most powerful and flexible way to interact with Home Assistant:

#### 1. subscribe_trigger - Listen for Specific Events

**Use this when**: You want to be notified when specific conditions occur (state changes, time patterns, webhooks, etc.)

**Command structure**:
```json
{
  "type": "subscribe_trigger",
  "trigger": {
    "platform": "state",
    "entity_id": "binary_sensor.motion_sensor",
    "to": "on"
  },
  "variables": {
    "custom_var": "value"
  }
}
```

**Why use this**: Instead of subscribing to all state changes and filtering, subscribe directly to the triggers you care about. This is more efficient and uses Home Assistant's native trigger syntax.

#### 2. test_condition - Test Conditions Server-Side

**Use this when**: You need to check whether a condition is met without implementing the logic in your script

**Command structure**:
```json
{
  "type": "test_condition",
  "condition": {
    "condition": "numeric_state",
    "entity_id": "sensor.temperature",
    "above": 20
  },
  "variables": {
    "custom_var": "value"
  }
}
```

**Why use this**: Offload condition logic to Home Assistant. Your script stays simple while using Home Assistant's powerful condition engine.

#### 3. execute_script - Execute Multiple Actions

**Use this when**: You need to execute a sequence of actions, including `wait_for_trigger`, delays, service calls, and more

**Command structure**:
```json
{
  "type": "execute_script",
  "sequence": [
    {
      "service": "light.turn_on",
      "target": {"entity_id": "light.living_room"}
    },
    {
      "wait_for_trigger": [
        {
          "platform": "state",
          "entity_id": "binary_sensor.motion",
          "to": "off",
          "for": {"minutes": 5}
        }
      ]
    },
    {
      "service": "light.turn_off",
      "target": {"entity_id": "light.living_room"}
    }
  ],
  "variables": {
    "custom_var": "value"
  }
}
```

**Why use this**:
- Execute complex automation logic using native Home Assistant syntax
- Use `wait_for_trigger` to wait for events
- Chain multiple actions together
- Keep your script minimal - all logic is in HA syntax
- **Getting response data**: To get a response from a service call, store the result in a response variable and set it as the script result

**Example with response data**:
```json
{
  "type": "execute_script",
  "sequence": [
    {
      "service": "weather.get_forecasts",
      "target": {"entity_id": "weather.home"},
      "response_variable": "weather_data"
    },
    {
      "stop": "Done",
      "response_variable": "weather_data"
    }
  ]
}
```

### Essential Registry Information

To understand Home Assistant's information architecture, also use:

- **config/entity_registry/list**: Learn about entities and their unique IDs
- **config/device_registry/list**: Learn about devices and their entities
- **config/area_registry/list**: Understand how spaces are organized
- **config/floor_registry/list**: Multi-floor layout information

### Current state of the home

If the user is building an application that needs to represent the current state of the home, use:

- **subscribe_entities**: Get real-time updates on all entity states (the `home-assistant-js-websocket` library has built-in support for this)

## Implementation Guidance

### Python Projects

For Python-based projects, refer to the Python API reference:

- **File**: `references/python_api.md`
- **Usage**: Load this reference when implementing Python integrations
- **Contains**:
  - **Example code**: Python scripts demonstrating common use cases
  - **Key operations**: Automation engine commands, getting states, calling services, subscribing to events, error handling

### Node.js Projects

For Node.js-based projects, refer to the Node.js API reference:

- **File**: `references/node_api.md`
- **Usage**: Load this reference when implementing Node.js integrations
- **Contains**:
  - WebSocket API examples using the `home-assistant-js-websocket` library

## Best Practices

1. **Error Handling**: Always implement proper error handling for network failures and authentication issues
2. **Connection Testing**: Validate connections before proceeding with implementation
3. **Real-time Updates**: For monitoring scenarios, use WebSocket APIs instead of polling REST endpoints

## Common Integration Patterns

### Data Dashboard
Read sensor states and display them in a custom dashboard or monitoring application.

### Automation Logic
Subscribe to entity state changes and trigger custom actions based on conditions.

### External Triggers
Call Home Assistant services from external events (webhooks, scheduled jobs, user actions).

### Data Export
Retrieve historical data from Home Assistant for analysis or backup purposes, for example via the REST history endpoint sketched below.
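
A minimal sketch of such an export, assuming the REST `/api/history/period/<timestamp>` endpoint and the `httpx` package (both covered in `references/python_api.md`):

```python
# /// script
# dependencies = ["httpx>=0.27.0"]
# ///
from datetime import datetime, timedelta

import httpx

URL = "http://homeassistant.local:8123"  # assumption: your instance URL
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

start = (datetime.now() - timedelta(days=1)).isoformat()
response = httpx.get(
    f"{URL}/api/history/period/{start}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"filter_entity_id": "sensor.temperature"},
)
response.raise_for_status()
# The endpoint returns one list of state objects per requested entity
for states in response.json():
    for state in states:
        print(state["last_changed"], state["state"])
```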

skills/home-assistant/references/node_api.md (new file)
@@ -0,0 +1,565 @@
# Home Assistant Node.js API Reference

This document provides guidance on using the Home Assistant WebSocket API with Node.js using the `home-assistant-js-websocket` library.

**RECOMMENDED APPROACH**: For monitoring entity states, always prefer using `subscribeEntities` from `home-assistant-js-websocket` instead of manually subscribing to `state_changed` events. See [Subscribe to All Entity State Changes](#subscribe-to-all-entity-state-changes).

## Installation

For Node.js 22+, you only need to install the library (WebSocket support is built in):

```bash
npm install home-assistant-js-websocket
```

For older Node.js versions (< 22), also install the `ws` package:

```bash
npm install home-assistant-js-websocket ws
```

## Authentication with Long-Lived Access Token

```javascript
import {
  createConnection,
  createLongLivedTokenAuth,
  subscribeEntities,
  callService,
} from "home-assistant-js-websocket";

const auth = createLongLivedTokenAuth(
  "http://homeassistant.local:8123",
  "YOUR_LONG_LIVED_ACCESS_TOKEN"
);

const connection = await createConnection({ auth });
console.log("Connected to Home Assistant!");
```

## Connection Validation

### Get Home Assistant Configuration and Version

After connecting, you can retrieve configuration information including the Home Assistant version:

```javascript
import { getConfig } from "home-assistant-js-websocket";

const config = await getConfig(connection);
console.log(`Home Assistant Version: ${config.version}`);
console.log(`Location: ${config.location_name}`);
console.log(`Time Zone: ${config.time_zone}`);
console.log(`Components loaded: ${config.components.length}`);
```

The config object contains:
- `version`: Home Assistant version (e.g., "2024.1.0")
- `location_name`: Name of the instance
- `time_zone`: Configured time zone
- `unit_system`: Units for measurements (length, mass, temperature, volume)
- `components`: Array of all loaded integrations
- `latitude`, `longitude`, `elevation`: Location data

## Getting Entity States

### Subscribe to All Entity State Changes

**PREFERRED METHOD**: Use `subscribeEntities` for real-time entity state monitoring.

**Function Signature:**
```typescript
export const subscribeEntities = (
  conn: Connection,
  onChange: (state: HassEntities) => void,
): UnsubscribeFunc => entitiesColl(conn).subscribe(onChange);
```

**Why use subscribeEntities:**
- Automatically maintains a complete, up-to-date map of all entities
- More efficient than manually tracking state_changed events
- Handles entity additions, deletions, and updates automatically
- Provides a clean HassEntities object indexed by entity_id

**Example:**
```javascript
import { subscribeEntities } from "home-assistant-js-websocket";

const unsubscribe = subscribeEntities(connection, (entities) => {
  // Called whenever any entity state changes
  // 'entities' is a complete map of all entity states
  console.log("Entities updated:", entities);

  // Access specific entity by ID
  const light = entities["light.living_room"];
  if (light) {
    console.log(`Light state: ${light.state}`);
    console.log(`Brightness: ${light.attributes.brightness}`);
  }

  // Monitor multiple entities
  const temp = entities["sensor.temperature"];
  const humidity = entities["sensor.humidity"];
  if (temp && humidity) {
    console.log(`Temp: ${temp.state}°C, Humidity: ${humidity.state}%`);
  }
});

// To stop receiving updates
// unsubscribe();
```

### Get Current States Once

```javascript
import { getStates } from "home-assistant-js-websocket";

const states = await getStates(connection);
for (const state of states) {
  console.log(`${state.entity_id}: ${state.state}`);
}
```

### Get Specific Entity State

```javascript
import { getStates } from "home-assistant-js-websocket";

const states = await getStates(connection);
const light = states.find(s => s.entity_id === "light.living_room");
if (light) {
  console.log(`State: ${light.state}`);
  console.log(`Attributes:`, light.attributes);
}
```

## Calling Services

### Turn on a Light

```javascript
await callService(connection, "light", "turn_on", {
  entity_id: "light.living_room",
  brightness: 255,
  rgb_color: [255, 0, 0], // Red
});
```

### Turn off a Switch

```javascript
await callService(connection, "switch", "turn_off", {
  entity_id: "switch.bedroom_fan",
});
```

### Set Thermostat Temperature

```javascript
await callService(connection, "climate", "set_temperature", {
  entity_id: "climate.living_room",
  temperature: 22,
});
```

### Send Notification

```javascript
await callService(connection, "notify", "notify", {
  message: "Hello from Node.js!",
  title: "Notification Title",
});
```

### Common Service Patterns

```javascript
// Light control
await callService(connection, "light", "turn_on", {
  entity_id: "light.bedroom",
  brightness_pct: 50,
});

// Switch control
await callService(connection, "switch", "toggle", {
  entity_id: "switch.living_room_lamp",
});

// Cover control
await callService(connection, "cover", "open_cover", {
  entity_id: "cover.garage_door",
});

// Media player control
await callService(connection, "media_player", "play_media", {
  entity_id: "media_player.living_room",
  media_content_id: "https://example.com/song.mp3",
  media_content_type: "music",
});
```

## Automation Engine Commands (RECOMMENDED)

**IMPORTANT**: These commands form the **core** of how you should interact with Home Assistant. They leverage the automation engine and keep your code minimal by using native Home Assistant syntax.

### subscribe_trigger - Listen for Specific Events

**PREFERRED METHOD** for listening to specific state changes, time patterns, webhooks, etc.

```javascript
// Subscribe to a state trigger
const unsubscribe = await connection.subscribeMessage(
  (message) => {
    console.log("Trigger fired!", message);
    console.log("Variables:", message.variables);
  },
  {
    type: "subscribe_trigger",
    trigger: {
      platform: "state",
      entity_id: "binary_sensor.motion_sensor",
      to: "on"
    },
    variables: {
      custom_var: "value"
    }
  }
);

// Unsubscribe when done
// unsubscribe();
```

**More trigger examples**:

```javascript
// Time pattern trigger
await connection.subscribeMessage(
  (message) => console.log("Every 5 minutes!", message),
  {
    type: "subscribe_trigger",
    trigger: {
      platform: "time_pattern",
      minutes: "/5"
    }
  }
);

// Numeric state trigger
await connection.subscribeMessage(
  (message) => console.log("Temperature above 25°C!", message),
  {
    type: "subscribe_trigger",
    trigger: {
      platform: "numeric_state",
      entity_id: "sensor.temperature",
      above: 25
    }
  }
);

// Template trigger
await connection.subscribeMessage(
  (message) => console.log("Sun is up!", message),
  {
    type: "subscribe_trigger",
    trigger: {
      platform: "template",
      value_template: "{{ states('sun.sun') == 'above_horizon' }}"
    }
  }
);
```

### test_condition - Test Conditions Server-Side

Test conditions without implementing logic in your code:

```javascript
// Test a numeric state condition
const result = await connection.sendMessagePromise({
  type: "test_condition",
  condition: {
    condition: "numeric_state",
    entity_id: "sensor.temperature",
    above: 20
  }
});

if (result.result) {
  console.log("Temperature is above 20°C");
}
```

**More condition examples**:

```javascript
// State condition
const stateResult = await connection.sendMessagePromise({
  type: "test_condition",
  condition: {
    condition: "state",
    entity_id: "light.living_room",
    state: "on"
  }
});

// Time condition
const timeResult = await connection.sendMessagePromise({
  type: "test_condition",
  condition: {
    condition: "time",
    after: "18:00:00",
    before: "23:00:00"
  }
});

// Template condition
const templateResult = await connection.sendMessagePromise({
  type: "test_condition",
  condition: {
    condition: "template",
    value_template: "{{ is_state('sun.sun', 'above_horizon') }}"
  }
});

// And/Or conditions
const combinedResult = await connection.sendMessagePromise({
  type: "test_condition",
  condition: {
    condition: "and",
    conditions: [
      {
        condition: "state",
        entity_id: "binary_sensor.motion",
        state: "on"
      },
      {
        condition: "numeric_state",
        entity_id: "sensor.light_level",
        below: 100
      }
    ]
  }
});
```

### execute_script - Execute Multiple Actions

**MOST POWERFUL METHOD**: Execute sequences of actions using Home Assistant's native syntax.

```javascript
// Simple sequence
const result = await connection.sendMessagePromise({
  type: "execute_script",
  sequence: [
    {
      service: "light.turn_on",
      target: { entity_id: "light.living_room" },
      data: { brightness: 255 }
    },
    {
      delay: { seconds: 5 }
    },
    {
      service: "light.turn_off",
      target: { entity_id: "light.living_room" }
    }
  ]
});
```

**Advanced: Using wait_for_trigger**

```javascript
// Turn on light and wait for motion to stop
const result = await connection.sendMessagePromise({
  type: "execute_script",
  sequence: [
    {
      service: "light.turn_on",
      target: { entity_id: "light.living_room" }
    },
    {
      wait_for_trigger: [
        {
          platform: "state",
          entity_id: "binary_sensor.motion",
          to: "off",
          for: { minutes: 5 }
        }
      ],
      timeout: { hours: 2 }
    },
    {
      service: "light.turn_off",
      target: { entity_id: "light.living_room" }
    }
  ]
});
```

**Getting response data from service calls**:

```javascript
// Call a service and get the response
const result = await connection.sendMessagePromise({
  type: "execute_script",
  sequence: [
    {
      service: "weather.get_forecasts",
      target: { entity_id: "weather.home" },
      data: { type: "daily" },
      response_variable: "weather_data"
    },
    {
      stop: "Done",
      response_variable: "weather_data"
    }
  ]
});

console.log("Weather forecast:", result.response_variable);
```

**Complex automation example**:

```javascript
// Full automation logic in execute_script
const result = await connection.sendMessagePromise({
  type: "execute_script",
  sequence: [
    // Check if it's dark
    {
      condition: "numeric_state",
      entity_id: "sensor.light_level",
      below: 100
    },
    // Turn on lights
    {
      service: "light.turn_on",
      target: { area_id: "living_room" },
      data: { brightness_pct: 50 }
    },
    // Wait for motion to stop for 10 minutes
    {
      wait_for_trigger: [
        {
          platform: "state",
          entity_id: "binary_sensor.motion",
          to: "off",
          for: { minutes: 10 }
        }
      ],
      timeout: { hours: 4 }
    },
    // Turn off lights
    {
      service: "light.turn_off",
      target: { area_id: "living_room" }
    }
  ]
});
```

## Error Handling

```javascript
import {
  createConnection,
  createLongLivedTokenAuth,
  ERR_INVALID_AUTH,
  ERR_CONNECTION_LOST,
} from "home-assistant-js-websocket";

try {
  const auth = createLongLivedTokenAuth(url, token);
  const connection = await createConnection({ auth });

  console.log("Connected successfully!");

} catch (err) {
  if (err === ERR_INVALID_AUTH) {
    console.error("Invalid authentication - check your token");
  } else if (err === ERR_CONNECTION_LOST) {
    console.error("Connection lost - check your URL and network");
  } else {
    console.error("Connection failed:", err);
  }
}
```

## Complete Example

```javascript
import {
  createConnection,
  createLongLivedTokenAuth,
  getConfig,
  subscribeEntities,
  callService,
} from "home-assistant-js-websocket";

async function main() {
  // Connect
  const auth = createLongLivedTokenAuth(
    "http://homeassistant.local:8123",
    "YOUR_TOKEN"
  );

  const connection = await createConnection({ auth });
  console.log("✓ Connected to Home Assistant");

  // Get configuration and version
  const config = await getConfig(connection);
  console.log(`✓ Home Assistant ${config.version}`);
  console.log(`  Location: ${config.location_name}`);

  // Subscribe to entity changes
  subscribeEntities(connection, (entities) => {
    const temp = entities["sensor.living_room_temperature"];
    if (temp) {
      console.log(`Temperature: ${temp.state}°C`);

      // Auto-control based on temperature
      if (parseFloat(temp.state) > 25) {
        callService(connection, "switch", "turn_on", {
          entity_id: "switch.fan",
        });
      }
    }
  });

  // Call a service
  await callService(connection, "light", "turn_on", {
    entity_id: "light.living_room",
  });

  console.log("✓ Light turned on");
}

main().catch(console.error);
```

## Using with Node.js < 22

For older Node.js versions, configure the WebSocket implementation:

```javascript
import ws from "ws";

const connection = await createConnection({
  auth,
  createSocket: (auth) => {
    return new ws(auth.wsUrl, {
      rejectUnauthorized: false, // Only for self-signed certs
    });
  },
});
```

## Official Documentation

For complete library documentation, see:
- https://github.com/home-assistant/home-assistant-js-websocket
- https://developers.home-assistant.io/docs/api/websocket/

skills/home-assistant/references/python_api.md (new file)
@@ -0,0 +1,649 @@
# Home Assistant Python API Reference

This document provides guidance on using the Home Assistant WebSocket and REST APIs with Python.

**RECOMMENDED**: Use the WebSocket API with automation engine commands for all interactions with Home Assistant. This is the most powerful and efficient approach.

## Table of Contents

1. [Authentication](#authentication)
2. [WebSocket API (RECOMMENDED)](#websocket-api-recommended)
   - [Connection Setup](#connection-setup)
   - [subscribe_trigger - Listen for Events](#subscribe_trigger---listen-for-events)
   - [execute_script - Execute Actions](#execute_script---execute-actions)
   - [test_condition - Test Conditions](#test_condition---test-conditions)
   - [subscribe_entities - Monitor All Entities](#subscribe_entities---monitor-all-entities)
3. [REST API (Optional)](#rest-api-optional)
   - [Connection Validation](#connection-validation)
   - [Basic Queries](#basic-queries)
4. [PEP 723 Inline Script Metadata](#pep-723-inline-script-metadata)
5. [Official Documentation](#official-documentation)

## Authentication

All API requests require a Long-Lived Access Token. For WebSocket connections, you authenticate after connecting. For REST API requests, include the token in the Authorization header.

**WebSocket Authentication**: Handled automatically by the connection setup (see below)

**REST API Authentication**:
```python
import httpx

url = "http://homeassistant.local:8123"
token = "YOUR_LONG_LIVED_ACCESS_TOKEN"

headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json"
}
```

## WebSocket API (RECOMMENDED)

**This is the primary way to interact with Home Assistant.** The WebSocket API provides access to the automation engine, allowing you to use native Home Assistant syntax for triggers, conditions, and actions.

### Connection Setup

First, establish a WebSocket connection:

```python
# /// script
# requires-python = ">=3.8"
# dependencies = [
#     "websocket-client>=1.6.0",
# ]
# ///

import websocket
import json
import threading
import time

class HomeAssistantWebSocket:
    def __init__(self, url, token):
        self.url = url.replace("http://", "ws://").replace("https://", "wss://")
        self.token = token
        self.ws = None
        self.msg_id = 1
        self.callbacks = {}
        self.authenticated = False

    def connect(self):
        """Connect to Home Assistant WebSocket API."""
        self.ws = websocket.WebSocketApp(
            f"{self.url}/api/websocket",
            on_message=self._on_message,
            on_open=self._on_open,
            on_error=self._on_error
        )

        # Run WebSocket in background thread
        wst = threading.Thread(target=self.ws.run_forever)
        wst.daemon = True
        wst.start()

        # Wait for authentication
        timeout = 5
        start = time.time()
        while not self.authenticated and time.time() - start < timeout:
            time.sleep(0.1)

    def _on_open(self, ws):
        """Handle WebSocket connection open."""
        print("Connected to Home Assistant")

    def _on_error(self, ws, error):
        """Handle WebSocket errors."""
        print(f"WebSocket error: {error}")

    def _on_message(self, ws, message):
        """Handle incoming messages."""
        data = json.loads(message)

        if data.get("type") == "auth_required":
            # Send authentication
            ws.send(json.dumps({
                "type": "auth",
                "access_token": self.token
            }))
        elif data.get("type") == "auth_ok":
            print("Authenticated successfully")
            self.authenticated = True
        elif data.get("type") == "auth_invalid":
            print("Authentication failed - check your token")
        elif data.get("id") in self.callbacks:
            # Call the registered callback
            self.callbacks[data["id"]](data)

    def send_command(self, command, callback=None):
        """Send a command and optionally register a callback."""
        msg_id = self.msg_id
        self.msg_id += 1

        command["id"] = msg_id
        if callback:
            self.callbacks[msg_id] = callback

        self.ws.send(json.dumps(command))
        return msg_id

# Usage
ha = HomeAssistantWebSocket("http://homeassistant.local:8123", "YOUR_TOKEN")
ha.connect()
```

### subscribe_trigger - Listen for Events

**PREFERRED METHOD** for listening to specific state changes, time patterns, numeric thresholds, and more.

**Why use this**: Instead of filtering all state changes yourself, let Home Assistant's automation engine notify you only when your specific conditions are met.

```python
# Subscribe to motion sensor state change
def on_motion_detected(message):
    print(f"Motion detected! {message}")
    # Your logic here

ha.send_command({
    "type": "subscribe_trigger",
    "trigger": {
        "platform": "state",
        "entity_id": "binary_sensor.motion_sensor",
        "to": "on"
    }
}, on_motion_detected)
```

**More trigger examples**:

```python
# Time pattern - every 5 minutes
ha.send_command({
    "type": "subscribe_trigger",
    "trigger": {
        "platform": "time_pattern",
        "minutes": "/5"
    }
}, lambda msg: print("5 minutes passed"))

# Numeric state - temperature above threshold
ha.send_command({
    "type": "subscribe_trigger",
    "trigger": {
        "platform": "numeric_state",
        "entity_id": "sensor.temperature",
        "above": 25
    }
}, lambda msg: print("Temperature above 25°C!"))

# State change with duration
ha.send_command({
    "type": "subscribe_trigger",
    "trigger": {
        "platform": "state",
        "entity_id": "binary_sensor.motion",
        "to": "off",
        "for": {"minutes": 5}
    }
}, lambda msg: print("No motion for 5 minutes"))

# Template trigger
ha.send_command({
    "type": "subscribe_trigger",
    "trigger": {
        "platform": "template",
        "value_template": "{{ states('sun.sun') == 'above_horizon' }}"
    }
}, lambda msg: print("Sun is up!"))

# Multiple triggers
ha.send_command({
    "type": "subscribe_trigger",
    "trigger": [
        {
            "platform": "state",
            "entity_id": "binary_sensor.door",
            "to": "on"
        },
        {
            "platform": "state",
            "entity_id": "binary_sensor.window",
            "to": "on"
        }
    ]
}, lambda msg: print("Door or window opened!"))
```

### execute_script - Execute Actions

**MOST POWERFUL METHOD**: Execute sequences of actions using Home Assistant's native syntax.

**Why use this**:
- Execute complex automation logic
- Use `wait_for_trigger` to wait for events
- Chain multiple actions together
- Keep your script minimal - all logic is in HA syntax
- Get response data from service calls

```python
def on_complete(message):
    print(f"Script completed: {message}")

# Simple sequence
ha.send_command({
    "type": "execute_script",
    "sequence": [
        {
            "service": "light.turn_on",
            "target": {"entity_id": "light.living_room"},
            "data": {"brightness": 255}
        },
        {
            "delay": {"seconds": 5}
        },
        {
            "service": "light.turn_off",
            "target": {"entity_id": "light.living_room"}
        }
    ]
}, on_complete)
```

**Advanced: Using wait_for_trigger**

```python
# Turn on light when motion detected, turn off after 5 minutes of no motion
ha.send_command({
    "type": "execute_script",
    "sequence": [
        {
            "service": "light.turn_on",
            "target": {"entity_id": "light.living_room"}
        },
        {
            "wait_for_trigger": [
                {
                    "platform": "state",
                    "entity_id": "binary_sensor.motion",
                    "to": "off",
                    "for": {"minutes": 5}
                }
            ],
            "timeout": {"hours": 2}
        },
        {
            "service": "light.turn_off",
            "target": {"entity_id": "light.living_room"}
        }
    ]
}, on_complete)
```

**Getting response data from service calls**:

```python
def on_weather(message):
    weather_data = message.get("result", {}).get("response_variable")
    print(f"Weather forecast: {weather_data}")

ha.send_command({
    "type": "execute_script",
    "sequence": [
        {
            "service": "weather.get_forecasts",
            "target": {"entity_id": "weather.home"},
            "data": {"type": "daily"},
            "response_variable": "weather_data"
        },
        {
            "stop": "Done",
            "response_variable": "weather_data"
        }
    ]
}, on_weather)
```

**Complex automation example**:

```python
# Full automation logic: turn on lights when dark, turn off after no motion
ha.send_command({
    "type": "execute_script",
    "sequence": [
        # Check if it's dark
        {
            "condition": "numeric_state",
            "entity_id": "sensor.light_level",
            "below": 100
        },
        # Turn on lights at 50% brightness
        {
            "service": "light.turn_on",
            "target": {"area_id": "living_room"},
            "data": {"brightness_pct": 50}
        },
        # Wait for motion to stop for 10 minutes
        {
            "wait_for_trigger": [
                {
                    "platform": "state",
                    "entity_id": "binary_sensor.motion",
                    "to": "off",
                    "for": {"minutes": 10}
                }
            ],
            "timeout": {"hours": 4}
        },
        # Turn off lights
        {
            "service": "light.turn_off",
            "target": {"area_id": "living_room"}
        }
    ]
}, on_complete)
```

**Using conditions and choose**:

```python
# Different actions based on time of day
ha.send_command({
    "type": "execute_script",
    "sequence": [
        {
            "choose": [
                {
                    "conditions": {
                        "condition": "time",
                        "after": "06:00:00",
                        "before": "22:00:00"
                    },
                    "sequence": [
                        {
                            "service": "light.turn_on",
                            "target": {"entity_id": "light.living_room"},
                            "data": {"brightness_pct": 100}
                        }
                    ]
                }
            ],
            "default": [
                {
                    "service": "light.turn_on",
                    "target": {"entity_id": "light.living_room"},
                    "data": {"brightness_pct": 20}
                }
            ]
        }
    ]
}, on_complete)
```

### test_condition - Test Conditions

Test conditions server-side without implementing logic in your code.

**Why use this**: Offload condition logic to Home Assistant. Your script stays simple while using HA's powerful condition engine.

```python
def check_result(message):
    if message.get("result", {}).get("result"):
        print("Condition is true")
    else:
        print("Condition is false")

# Numeric state condition
ha.send_command({
    "type": "test_condition",
    "condition": {
        "condition": "numeric_state",
        "entity_id": "sensor.temperature",
        "above": 20
    }
}, check_result)
```

**More condition examples**:

```python
# State condition
ha.send_command({
    "type": "test_condition",
    "condition": {
        "condition": "state",
        "entity_id": "light.living_room",
        "state": "on"
    }
}, check_result)

# Time condition
ha.send_command({
    "type": "test_condition",
    "condition": {
        "condition": "time",
        "after": "18:00:00",
        "before": "23:00:00"
    }
}, check_result)

# Template condition
ha.send_command({
    "type": "test_condition",
    "condition": {
        "condition": "template",
        "value_template": "{{ is_state('sun.sun', 'above_horizon') }}"
    }
}, check_result)

# And/Or conditions
ha.send_command({
    "type": "test_condition",
    "condition": {
        "condition": "and",
        "conditions": [
            {
                "condition": "state",
                "entity_id": "binary_sensor.motion",
                "state": "on"
            },
            {
                "condition": "numeric_state",
                "entity_id": "sensor.light_level",
                "below": 100
            }
        ]
    }
}, check_result)
```

### subscribe_entities - Monitor All Entities

Subscribe to get real-time updates for all entity states. Useful for dashboards or monitoring applications.

```python
def on_entities_update(message):
    # Get the event with updated entities
    if message.get("type") == "event":
        event = message.get("event", {})
        entities = event.get("a", {})  # 'a' contains added/updated entities

        for entity_id, entity_data in entities.items():
            print(f"{entity_id}: {entity_data.get('s')} ({entity_data.get('a', {})})")

ha.send_command({
    "type": "subscribe_entities"
}, on_entities_update)
```

**Note**: For Python, you'll need to manually track the entity state map. For Node.js, `home-assistant-js-websocket` provides a built-in helper that maintains this for you.
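
As a rough illustration of that manual tracking, the sketch below maintains a dict keyed by entity_id from the compressed event payload. The `a`/`c`/`r` keys (added, changed, removed) are assumptions based on the observed `subscribe_entities` event format; verify against your Home Assistant version:

```python
entity_states = {}

def track_entities(message):
    """Maintain a local entity_id -> state map from subscribe_entities events."""
    if message.get("type") != "event":
        return
    event = message.get("event", {})
    # "a": full state objects for added (or initially listed) entities
    for entity_id, data in event.get("a", {}).items():
        entity_states[entity_id] = data
    # "c": per-entity diffs; "+" carries the new values (assumption)
    for entity_id, diff in event.get("c", {}).items():
        entity_states.setdefault(entity_id, {}).update(diff.get("+", {}))
    # "r": entity_ids that were removed
    for entity_id in event.get("r", []):
        entity_states.pop(entity_id, None)

ha.send_command({"type": "subscribe_entities"}, track_entities)
```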

### Registry Information

Get information about devices, areas, and floors:

```python
def on_registry_response(message):
    items = message.get("result", [])
    for item in items:
        print(item)

# Get entity registry
ha.send_command({
    "type": "config/entity_registry/list"
}, on_registry_response)

# Get device registry
ha.send_command({
    "type": "config/device_registry/list"
}, on_registry_response)

# Get area registry
ha.send_command({
    "type": "config/area_registry/list"
}, on_registry_response)

# Get floor registry
ha.send_command({
    "type": "config/floor_registry/list"
}, on_registry_response)
```

## REST API (Optional)

**Note**: For most use cases, prefer the WebSocket API above. Use the REST API only for simple queries or when WebSocket is not available.

### Connection Validation

Validate the connection and get the Home Assistant version:

```python
# /// script
# requires-python = ">=3.8"
# dependencies = [
#     "httpx>=0.27.0",
# ]
# ///

import httpx

url = "http://homeassistant.local:8123"
token = "YOUR_LONG_LIVED_ACCESS_TOKEN"

headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json"
}

# Get configuration and version
response = httpx.get(f"{url}/api/config", headers=headers)
config = response.json()

print(f"Home Assistant Version: {config['version']}")
print(f"Location: {config['location_name']}")
print(f"Time Zone: {config['time_zone']}")
```

### Basic Queries

Simple REST queries for when you don't need real-time updates:

```python
import httpx

# Get all entity states
response = httpx.get(f"{url}/api/states", headers=headers)
states = response.json()
for state in states:
    print(f"{state['entity_id']}: {state['state']}")

# Get specific entity state
entity_id = "light.living_room"
response = httpx.get(f"{url}/api/states/{entity_id}", headers=headers)
state = response.json()
print(f"State: {state['state']}")
print(f"Attributes: {state['attributes']}")

# Call a service (prefer execute_script via WebSocket instead)
response = httpx.post(
    f"{url}/api/services/light/turn_on",
    headers=headers,
    json={
        "entity_id": "light.living_room",
        "brightness": 255
    }
)
print(f"Service called: {response.json()}")

# Get history
from datetime import datetime, timedelta

end_time = datetime.now()
start_time = end_time - timedelta(hours=1)

response = httpx.get(
    f"{url}/api/history/period/{start_time.isoformat()}",
    headers=headers,
    params={"filter_entity_id": "sensor.temperature"}
)
history = response.json()
```

**Error handling with httpx**:

```python
import httpx

try:
    response = httpx.get(f"{url}/api/config", headers=headers, timeout=10.0)
    response.raise_for_status()
    config = response.json()
except httpx.HTTPStatusError as e:
    if e.response.status_code == 401:
        print("Authentication failed - check your token")
    elif e.response.status_code == 404:
        print("Endpoint not found")
    else:
        print(f"HTTP error: {e.response.status_code}")
except httpx.TimeoutException:
    print("Request timed out")
except httpx.RequestError as e:
    print(f"Connection failed: {e}")
```

## PEP 723 Inline Script Metadata

When creating standalone Python scripts for users, always include inline script metadata at the top of the file using the PEP 723 format. This allows tools like `uv` and `pipx` to automatically manage dependencies.

### Format

```python
# /// script
# requires-python = ">=3.8"
# dependencies = [
#     "websocket-client>=1.6.0",
#     "httpx>=0.27.0",
# ]
# ///
```

### Running Scripts

Users can run scripts with PEP 723 metadata using:

```bash
# Using uv (recommended)
uv run script.py

# Using pipx
pipx run script.py

# Traditional approach
pip install websocket-client httpx
python script.py
```

## Official Documentation

For complete API documentation, see:
- https://developers.home-assistant.io/docs/api/websocket/
- https://developers.home-assistant.io/docs/api/rest/

skills/ollama/SKILL.md (new file)
@@ -0,0 +1,450 @@
---
name: ollama
description: Use this if the user wants to connect to Ollama or leverage Ollama in any shape or form inside their project. Guide users integrating Ollama into their projects for local AI inference. Covers installation, connection setup, model management, and API usage for both Python and Node.js. Helps with text generation, chat interfaces, embeddings, streaming responses, and building AI-powered applications using local LLMs.
---

# Ollama

## Overview

This skill helps users integrate Ollama into their projects for running large language models locally. The skill guides users through setup, connection validation, model management, and API integration for both Python and Node.js applications. Ollama provides a simple API for running models like Llama, Mistral, Gemma, and others locally without cloud dependencies.

## When to Use This Skill

Use this skill when users want to:
- Run large language models locally on their machine
- Build AI-powered applications without cloud dependencies
- Implement text generation, chat, or embeddings functionality
- Stream LLM responses in real time
- Create RAG (Retrieval-Augmented Generation) systems
- Integrate local AI capabilities into Python or Node.js projects
- Manage Ollama models (pull, list, delete)
- Validate Ollama connectivity and troubleshoot connection issues

## Installation and Setup

### Step 1: Collect Ollama URL

**IMPORTANT**: Always ask users for their Ollama URL. Do not assume it's running locally.

Ask the user: "What is your Ollama server URL?"

Common scenarios:
- **Local installation**: `http://localhost:11434` (default)
- **Remote server**: `http://192.168.1.100:11434`
- **Custom port**: `http://localhost:8080`
- **Docker**: `http://localhost:11434` (if the port is mapped to 11434)

If the user says they're running Ollama locally or doesn't know the URL, suggest trying `http://localhost:11434`.

### Step 2: Check if Ollama is Installed

Before proceeding, verify that Ollama is installed and running at the provided URL. Users can check by visiting the URL in their browser or running:

```bash
curl <OLLAMA_URL>/api/version
```

If Ollama is not installed, guide users to install it:

**macOS/Linux:**
```bash
curl -fsSL https://ollama.com/install.sh | sh
```

**Windows:**
Download from https://ollama.com/download

**Docker:**
```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

### Step 3: Start Ollama Service

Ensure Ollama is running:

**macOS/Linux:**
```bash
ollama serve
```

**Docker:**
```bash
docker start ollama
```

The service runs at `http://localhost:11434` by default.

### Step 4: Validate Connection

Use the validation script to test connectivity and list available models.

**IMPORTANT**: The script path is relative to the skill directory. When running the script, either:
1. Use the full path from the skill directory (e.g., `/path/to/ollama/scripts/validate_connection.py`)
2. Change to the skill directory first and then run `python scripts/validate_connection.py`

```bash
# Run from the skill directory
cd /path/to/ollama
python scripts/validate_connection.py <OLLAMA_URL>
```

Example with the user's Ollama URL:
```bash
cd /path/to/ollama
python scripts/validate_connection.py http://192.168.1.100:11434
```

The script will:
- Normalize the URL (remove any path components)
- Check if Ollama is accessible
- Display the Ollama version
- List all installed models with sizes
- Provide troubleshooting guidance if the connection fails

**Success output:**
```
✓ Connection successful!
URL: http://localhost:11434
Version: Ollama 0.1.0
Models available: 2

Installed models:
- llama3.2 (4.7 GB)
- mistral (7.2 GB)
```

**Failure output:**
```
✗ Connection failed: Connection refused
URL: http://localhost:11434

Troubleshooting:
1. Ensure Ollama is installed and running
2. Check that the URL is correct
3. Verify Ollama is accessible at the specified URL
4. Try: curl http://localhost:11434/api/version
```
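
The bundled script itself is not reproduced here; a minimal sketch of the same check, assuming Ollama's `/api/version` and `/api/tags` endpoints and using only the standard library:

```python
import json
import sys
from urllib.error import URLError
from urllib.parse import urlparse
from urllib.request import urlopen

def validate(raw_url: str) -> None:
    """Check an Ollama server and list its installed models."""
    parts = urlparse(raw_url)
    base = f"{parts.scheme}://{parts.netloc}"  # drop any path component
    try:
        with urlopen(f"{base}/api/version", timeout=5) as resp:
            version = json.load(resp)["version"]
        with urlopen(f"{base}/api/tags", timeout=5) as resp:
            models = json.load(resp).get("models", [])
    except URLError as err:
        sys.exit(f"✗ Connection failed: {err.reason}\nURL: {base}")
    print(f"✓ Connection successful!\nURL: {base}\nVersion: {version}")
    print(f"Models available: {len(models)}\n\nInstalled models:")
    for model in models:
        print(f"- {model['name']} ({model['size'] / 1e9:.1f} GB)")

validate(sys.argv[1] if len(sys.argv) > 1 else "http://localhost:11434")
```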

## Model Management

### Pulling Models

Help users download models from the Ollama library. Common models include:

- `llama3.2` - Meta's Llama 3.2 (various sizes: 1B, 3B)
- `llama3.1` - Meta's Llama 3.1 (8B, 70B, 405B)
- `mistral` - Mistral 7B
- `phi3` - Microsoft Phi-3
- `gemma2` - Google Gemma 2

Users can pull models using:
```bash
ollama pull llama3.2
```

Or programmatically using the API (examples in the reference docs).

### Listing Models

Guide users to list installed models:
```bash
ollama list
```

Or use the validation script to see models with detailed information.

### Removing Models

Help users delete models to free space:
```bash
ollama rm llama3.2
```

### Model Selection Guidance

Help users choose appropriate models based on their needs:

- **Small models (1-3B)**: Fast, good for simple tasks, lower resource requirements
- **Medium models (7-13B)**: Balanced performance and quality
- **Large models (70B+)**: Best quality, require significant resources

## Implementation Guidance

### Python Projects

For Python-based projects, refer to the Python API reference:

- **File**: `references/python_api.md`
- **Usage**: Load this reference when implementing Python integrations
- **Contains**:
  - REST API examples using `urllib.request` (standard library)
  - Text generation with the Generate API
  - Conversational interfaces with the Chat API
  - **Streaming responses for real-time output (RECOMMENDED)**
  - Embeddings for semantic search
  - Complete RAG system example
  - Error handling patterns
  - PEP 723 inline script metadata for dependencies
- **No dependencies required**: Uses only the Python standard library

**IMPORTANT**: When creating Python scripts for users, include PEP 723 inline script metadata to declare dependencies. See the reference docs for examples.

**DEFAULT TO STREAMING**: When implementing text generation or chat, use streaming responses unless the user explicitly requests non-streaming.

Common Python use cases:
```python
# Streaming text generation (RECOMMENDED)
for token in generate_stream("Explain quantum computing"):
    print(token, end="", flush=True)

# Streaming chat conversation (RECOMMENDED)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"}
]
for token in chat_stream(messages):
    print(token, end="", flush=True)

# Non-streaming (use only when needed)
response = generate("Explain quantum computing")

# Embeddings for semantic search
embedding = get_embeddings("Hello, world!")
```
|
||||
|
||||
### Node.js Projects
|
||||
|
||||
For Node.js-based projects, refer to the Node.js API reference:
|
||||
|
||||
- **File**: `references/nodejs_api.md`
|
||||
- **Usage**: Load this reference when implementing Node.js integrations
|
||||
- **Contains**:
|
||||
- Official `ollama` npm package examples
|
||||
- Alternative Fetch API examples (Node.js 18+)
|
||||
- Text generation and chat APIs
|
||||
- **Streaming with async iterators (RECOMMENDED)**
|
||||
- Embeddings and semantic similarity
|
||||
- Complete RAG system example
|
||||
- Error handling and retry logic
|
||||
- TypeScript support examples
|
||||
|
||||
Installation:
|
||||
```bash
|
||||
npm install ollama
|
||||
```
|
||||
|
||||
**DEFAULT TO STREAMING**: When implementing text generation or chat, use streaming responses unless the user explicitly requests non-streaming.
|
||||
|
||||
Common Node.js use cases:
|
||||
```javascript
|
||||
import { Ollama } from 'ollama';
|
||||
const ollama = new Ollama();
|
||||
|
||||
// Streaming text generation (RECOMMENDED)
|
||||
const stream = await ollama.generate({
|
||||
model: 'llama3.2',
|
||||
prompt: 'Explain quantum computing',
|
||||
stream: true
|
||||
});
|
||||
|
||||
for await (const chunk of stream) {
|
||||
process.stdout.write(chunk.response);
|
||||
}
|
||||
|
||||
// Streaming chat conversation (RECOMMENDED)
|
||||
const chatStream = await ollama.chat({
|
||||
model: 'llama3.2',
|
||||
messages: [
|
||||
{ role: 'system', content: 'You are a helpful assistant.' },
|
||||
{ role: 'user', content: 'What is the capital of France?' }
|
||||
],
|
||||
stream: true
|
||||
});
|
||||
|
||||
for await (const chunk of chatStream) {
|
||||
process.stdout.write(chunk.message.content);
|
||||
}
|
||||
|
||||
// Non-streaming (use only when needed)
|
||||
const response = await ollama.generate({
|
||||
model: 'llama3.2',
|
||||
prompt: 'Explain quantum computing'
|
||||
});
|
||||
|
||||
// Embeddings
|
||||
const embedding = await ollama.embeddings({
|
||||
model: 'llama3.2',
|
||||
prompt: 'Hello, world!'
|
||||
});
|
||||
```
|
||||
|
||||
## Common Integration Patterns
|
||||
|
||||
### Text Generation
|
||||
|
||||
Generate text completions from prompts. Use cases:
|
||||
- Content generation
|
||||
- Code completion
|
||||
- Question answering
|
||||
- Summarization
|
||||
|
||||
Guide users to use the Generate API with appropriate parameters (temperature, top_p, etc.) for their use case.
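
For example (a minimal sketch mirroring `references/python_api.md`; `llama3.2` is a placeholder for whatever model the user selected):

```python
# Streaming generation with explicit sampling options.
import ollama

stream = ollama.generate(
    model="llama3.2",
    prompt="Summarize the benefits of local LLMs in one sentence.",
    stream=True,
    options={"temperature": 0.3, "top_p": 0.9},  # lower temperature = more focused
)
for chunk in stream:
    print(chunk["response"], end="", flush=True)
print()
```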

### Conversational Interfaces

Build chat applications with conversation history. Use cases:
- Chatbots
- Virtual assistants
- Customer support
- Interactive tutorials

Guide users to use the Chat API with message history management, and explain the importance of system prompts for behavior control.
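
For instance, a short sketch of system-prompt control over the streaming Chat API (the model name is a placeholder):

```python
# The system message constrains how the model answers.
import ollama

messages = [
    {"role": "system", "content": "You are terse: answer in one sentence."},
    {"role": "user", "content": "What does a smart home hub do?"},
]
for chunk in ollama.chat(model="llama3.2", messages=messages, stream=True):
    print(chunk["message"]["content"], end="", flush=True)
print()
```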

### Embeddings & Semantic Search

Generate vector embeddings for text. Use cases:
- Semantic search
- Document similarity
- RAG systems
- Recommendation systems

Guide users to use the Embeddings API and implement cosine similarity for comparing embeddings.
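
Cosine similarity itself is only a few lines of standard-library Python; both reference docs include this helper:

```python
# Cosine similarity between two embedding vectors.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
```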

### Streaming Responses

**RECOMMENDED APPROACH**: Always prefer streaming for better user experience.

Stream LLM output token-by-token. Use cases:
- Real-time chat interfaces
- Progressive content generation
- Better user experience for long outputs
- Immediate feedback to users

**When creating code for users, default to the streaming API** unless they specifically request non-streaming responses.

Guide users to (a Python sketch follows the list):
- Enable `stream: true` (Node.js) or `stream=True` (Python) in API calls
- Handle async iteration (Node.js) or generators (Python)
- Display tokens as they arrive for real-time feedback
- Show progress indicators during generation
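
A minimal sketch (Python shown; the Node.js reference has the async-iterator equivalent, and the model name is a placeholder):

```python
# Wrap the stream in a generator so UI code can render tokens as they arrive.
import ollama

def stream_tokens(prompt, model="llama3.2"):
    for chunk in ollama.generate(model=model, prompt=prompt, stream=True):
        yield chunk["response"]

print("Generating: ", end="", flush=True)
for token in stream_tokens("Name three smart home use cases."):
    print(token, end="", flush=True)
print()
```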

### RAG (Retrieval-Augmented Generation)

Combine document retrieval with generation. Use cases:
- Question answering over documents
- Knowledge base chatbots
- Context-aware assistance

Guide users to:
1. Generate embeddings for documents
2. Store embeddings with associated text
3. Search for relevant documents using query embeddings
4. Inject retrieved context into prompts
5. Generate answers with context

Both reference docs provide the embedding and similarity building blocks for RAG; a compressed end-to-end sketch follows.
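
The sketch below assumes an embedding model such as `nomic-embed-text` is installed; the documents and the `llama3.2` model name are placeholders:

```python
# Steps 1-5 in miniature: embed, store, retrieve, inject, generate.
import math
import ollama

def embed(text, model="nomic-embed-text"):
    return ollama.embeddings(model=model, prompt=text)["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

docs = ["Ollama runs LLMs locally.", "Home Assistant automates smart homes."]
index = [(d, embed(d)) for d in docs]                               # steps 1-2

query = "What automates my house?"
q_emb = embed(query)
context = max(index, key=lambda pair: cosine(pair[1], q_emb))[0]    # step 3

prompt = f"Context: {context}\n\nUsing the context, answer: {query}"  # step 4
for chunk in ollama.generate(model="llama3.2", prompt=prompt, stream=True):  # step 5
    print(chunk["response"], end="", flush=True)
print()
```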

## Best Practices

### Security
- Never hardcode sensitive information
- Use environment variables for configuration (see the sketch below)
- Validate and sanitize user inputs before sending them to the LLM
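
A minimal configuration sketch; `OLLAMA_HOST` is the conventional variable name, and the fallback URL is a placeholder:

```python
# Read the Ollama URL from the environment instead of hardcoding it.
import os
import ollama

host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
client = ollama.Client(host=host)
```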

### Performance
- Use streaming for long responses to improve perceived performance
- Cache embeddings for documents that don't change
- Choose appropriate model sizes for your use case
- Consider response time requirements when selecting models

### Error Handling
- Always implement proper error handling for network failures
- Check model availability before making requests
- Provide helpful error messages to users
- Implement retry logic for transient failures (a sketch follows)
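
A simple retry sketch with exponential backoff (the attempt count and delays are illustrative):

```python
# Retry transient failures with exponential backoff.
import time
import ollama

def generate_with_retry(prompt, model="llama3.2", attempts=3):
    for attempt in range(attempts):
        try:
            return ollama.generate(model=model, prompt=prompt)
        except Exception as exc:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            delay = 2 ** attempt
            print(f"Transient error ({exc}); retrying in {delay}s...")
            time.sleep(delay)
```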

### Connection Management
- Validate connections before proceeding with implementation
- Handle connection timeouts gracefully
- For remote Ollama instances, ensure network accessibility
- Use the validation script during development

### Model Management
- Check available disk space before pulling large models
- Keep only the models you actively use
- Inform users about model download sizes
- Provide model selection guidance based on requirements

### Context Management
- For chat applications, manage conversation history to avoid token limits
- Trim old messages when conversations get too long (see the sketch below)
- Consider using summarization for long conversation histories
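
A minimal trimming sketch; `max_messages` is an illustrative knob, not a library setting:

```python
# Keep the system prompt plus only the most recent messages.
def trim_history(messages, max_messages=20):
    if len(messages) <= max_messages:
        return messages
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-(max_messages - len(system)):]
```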

## Troubleshooting

### Connection Issues

If the connection fails:
1. Verify Ollama is installed: `ollama --version`
2. Check if Ollama is running: `curl http://localhost:11434/api/version`
3. Restart the Ollama service: `ollama serve`
4. Check firewall settings for remote connections
5. Verify the URL format (should be `http://host:port` with no path)

### Model Not Found

If a model is not available:
1. List installed models: `ollama list`
2. Pull the required model: `ollama pull model-name`
3. Verify the model name's spelling (names are case-sensitive)

### Out of Memory

If running out of memory:
1. Use a smaller model variant
2. Close other applications
3. Increase system swap space
4. Consider using a machine with more RAM

### Slow Performance

If responses are slow:
1. Use a smaller model
2. Reduce the `num_predict` parameter
3. Check CPU/GPU usage
4. Ensure Ollama is using the GPU if one is available
5. Close other resource-intensive applications

## Resources

### scripts/validate_connection.py
Python script that validates the Ollama connection and lists available models. It normalizes URLs, tests connectivity, displays version information, and provides troubleshooting guidance.

### references/python_api.md
Comprehensive Python API reference with examples for:
- Installation and setup
- Connection verification
- Model selection and listing
- Generate API for text completion
- Chat API for conversations
- Streaming responses
- Embeddings and semantic search
- Semantic similarity (building blocks for RAG)
- Error handling patterns
- Best practices and PEP 723 script metadata

### references/nodejs_api.md
Comprehensive Node.js API reference with examples for:
- Installation using npm and ES module setup
- Official `ollama` package usage
- Model selection and availability checks
- Generate and Chat APIs
- Streaming with async iterators
- Embeddings and semantic similarity
- Error handling
- A complete runnable example script
- Best practices

507
skills/ollama/references/nodejs_api.md
Normal file
@@ -0,0 +1,507 @@

# Ollama Node.js API Reference

This reference provides comprehensive examples for integrating Ollama into Node.js projects using the official `ollama` npm package.

**IMPORTANT**: Always use streaming responses for better user experience.

## Table of Contents

1. [Package Setup](#package-setup)
2. [Installation & Setup](#installation--setup)
3. [Verifying Ollama Connection](#verifying-ollama-connection)
4. [Model Selection](#model-selection)
5. [Generate API (Text Completion)](#generate-api-text-completion)
6. [Chat API (Conversational)](#chat-api-conversational)
7. [Embeddings](#embeddings)
8. [Error Handling](#error-handling)
9. [Best Practices](#best-practices)
10. [Complete Example Script](#complete-example-script)

## Package Setup

### ES Modules (package.json)

When creating Node.js scripts for users, always use ES modules. Create a `package.json` with:

```json
{
  "type": "module",
  "dependencies": {
    "ollama": "^0.5.0"
  }
}
```

This allows using modern `import` syntax instead of `require`.

### Running Scripts

```bash
# Install dependencies
npm install

# Run the script
node script.js
```

## Installation & Setup

### Installation

```bash
npm install ollama
```

### Import

```javascript
import { Ollama } from 'ollama';
```

### Configuration

**IMPORTANT**: Always ask users for their Ollama URL. Do not assume localhost.

```javascript
import { Ollama } from 'ollama';

// Create a client with a custom URL
const ollama = new Ollama({ host: 'http://localhost:11434' });

// Or for a remote Ollama instance
// const ollama = new Ollama({ host: 'http://192.168.1.100:11434' });
```

## Verifying Ollama Connection

### Check Connection (Development)

During development, verify Ollama is running and check available models using curl:

```bash
# Check that Ollama is running and get its version
curl http://localhost:11434/api/version

# List available models
curl http://localhost:11434/api/tags
```

### Check Ollama Connection (Node.js)

```javascript
import { Ollama } from 'ollama';

const ollama = new Ollama();

async function checkOllama() {
  try {
    // Listing models is a simple way to verify the connection
    const models = await ollama.list();
    console.log('✓ Connected to Ollama');
    console.log(`  Available models: ${models.models.length}`);
    return true;
  } catch (error) {
    console.log(`✗ Failed to connect to Ollama: ${error.message}`);
    return false;
  }
}

// Usage
await checkOllama();
```

## Model Selection

**IMPORTANT**: Always ask users which model they want to use. Don't assume a default.

### Listing Available Models

```javascript
import { Ollama } from 'ollama';

const ollama = new Ollama();

async function listAvailableModels() {
  const { models } = await ollama.list();
  return models.map(m => m.name);
}

// Usage - show available models to the user
const available = await listAvailableModels();
console.log('Available models:');
available.forEach(model => {
  console.log(`  - ${model}`);
});
```

### Finding Models

If the user doesn't have a model installed or wants to use a different one:
- **Browse models**: Direct them to https://ollama.com/search
- **Popular choices**: llama3.2, llama3.1, mistral, phi3, qwen2.5
- **Specialized models**: codellama (coding), llava (vision), nomic-embed-text (embeddings)

### Model Selection Flow

```javascript
async function selectModel() {
  const available = await listAvailableModels();

  if (available.length === 0) {
    console.log('No models installed!');
    console.log('Visit https://ollama.com/search to find models');
    console.log('Then run: ollama pull <model-name>');
    return null;
  }

  console.log('Available models:');
  available.forEach((model, i) => {
    console.log(`  ${i + 1}. ${model}`);
  });

  // In practice, you'd ask the user to choose
  return available[0]; // Default to first available
}
```

## Generate API (Text Completion)

### Streaming Text Generation

```javascript
import { Ollama } from 'ollama';

const ollama = new Ollama();

async function generateStream(prompt, model = 'llama3.2') {
  const response = await ollama.generate({
    model: model,
    prompt: prompt,
    stream: true
  });

  for await (const chunk of response) {
    process.stdout.write(chunk.response);
  }
}

// Usage
process.stdout.write('Response: ');
await generateStream('Why is the sky blue?', 'llama3.2');
process.stdout.write('\n');
```

### With Options (Temperature, Top-P, etc.)

```javascript
async function generateWithOptions(prompt, model = 'llama3.2') {
  const response = await ollama.generate({
    model: model,
    prompt: prompt,
    stream: true,
    options: {
      temperature: 0.7,
      top_p: 0.9,
      top_k: 40,
      num_predict: 100 // Max tokens
    }
  });

  for await (const chunk of response) {
    process.stdout.write(chunk.response);
  }
}

// Usage
process.stdout.write('Response: ');
await generateWithOptions('Write a haiku about programming');
process.stdout.write('\n');
```

## Chat API (Conversational)

### Streaming Chat

```javascript
import { Ollama } from 'ollama';

const ollama = new Ollama();

async function chatStream(messages, model = 'llama3.2') {
  /*
   * Chat with a model using conversation history, with streaming.
   *
   * Args:
   *   messages: Array of message objects with 'role' and 'content';
   *             role can be 'system', 'user', or 'assistant'.
   */
  const response = await ollama.chat({
    model: model,
    messages: messages,
    stream: true
  });

  for await (const chunk of response) {
    process.stdout.write(chunk.message.content);
  }
}

// Usage
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is the capital of France?' }
];

process.stdout.write('Response: ');
await chatStream(messages);
process.stdout.write('\n');
```

### Multi-turn Conversation

```javascript
import * as readline from 'readline';
import { Ollama } from 'ollama';

const ollama = new Ollama();

async function conversationLoop(model = 'llama3.2') {
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout
  });

  const messages = [
    { role: 'system', content: 'You are a helpful assistant.' }
  ];

  const askQuestion = () => {
    rl.question('\nYou: ', async (input) => {
      if (input.toLowerCase() === 'exit' || input.toLowerCase() === 'quit') {
        rl.close();
        return;
      }

      // Add the user message to the history
      messages.push({ role: 'user', content: input });

      // Stream the response
      process.stdout.write('Assistant: ');
      let fullResponse = '';

      const response = await ollama.chat({
        model: model,
        messages: messages,
        stream: true
      });

      for await (const chunk of response) {
        const content = chunk.message.content;
        process.stdout.write(content);
        fullResponse += content;
      }
      process.stdout.write('\n');

      // Add the assistant response to the history
      messages.push({ role: 'assistant', content: fullResponse });

      askQuestion();
    });
  };

  askQuestion();
}

// Usage
await conversationLoop();
```

## Embeddings

### Generate Embeddings

```javascript
import { Ollama } from 'ollama';

const ollama = new Ollama();

async function getEmbeddings(text, model = 'nomic-embed-text') {
  /*
   * Generate embeddings for text.
   *
   * Note: Use an embedding-specific model like 'nomic-embed-text'.
   * Regular models can generate embeddings, but dedicated models work better.
   */
  const response = await ollama.embeddings({
    model: model,
    prompt: text
  });

  return response.embedding;
}

// Usage
const embedding = await getEmbeddings('Hello, world!');
console.log(`Embedding dimension: ${embedding.length}`);
console.log(`First 5 values: ${embedding.slice(0, 5)}`);
```

### Semantic Similarity

```javascript
function cosineSimilarity(vec1, vec2) {
  const dotProduct = vec1.reduce((sum, val, i) => sum + val * vec2[i], 0);
  const magnitude1 = Math.sqrt(vec1.reduce((sum, val) => sum + val * val, 0));
  const magnitude2 = Math.sqrt(vec2.reduce((sum, val) => sum + val * val, 0));
  return dotProduct / (magnitude1 * magnitude2);
}

// Usage
const text1 = 'The cat sat on the mat';
const text2 = 'A feline rested on a rug';
const text3 = 'JavaScript is a programming language';

const emb1 = await getEmbeddings(text1);
const emb2 = await getEmbeddings(text2);
const emb3 = await getEmbeddings(text3);

console.log(`Similarity 1-2: ${cosineSimilarity(emb1, emb2).toFixed(3)}`); // High
console.log(`Similarity 1-3: ${cosineSimilarity(emb1, emb3).toFixed(3)}`); // Low
```

## Error Handling

### Comprehensive Error Handling

```javascript
import { Ollama } from 'ollama';

const ollama = new Ollama();

async function* safeGenerateStream(prompt, model = 'llama3.2') {
  try {
    const response = await ollama.generate({
      model: model,
      prompt: prompt,
      stream: true
    });

    for await (const chunk of response) {
      yield chunk.response;
    }

  } catch (error) {
    // Model not found or other API errors
    if (error.message.toLowerCase().includes('not found')) {
      console.log(`\n✗ Model '${model}' not found`);
      console.log(`  Run: ollama pull ${model}`);
      console.log(`  Or browse models at: https://ollama.com/search`);
    } else if (error.code === 'ECONNREFUSED') {
      console.log('\n✗ Connection failed. Is Ollama running?');
      console.log('  Start Ollama with: ollama serve');
    } else {
      console.log(`\n✗ Unexpected error: ${error.message}`);
    }
  }
}

// Usage
process.stdout.write('Response: ');
for await (const token of safeGenerateStream('Hello, world!', 'llama3.2')) {
  process.stdout.write(token);
}
process.stdout.write('\n');
```

### Checking Model Availability

```javascript
async function ensureModelAvailable(model) {
  try {
    const { models } = await ollama.list();
    const modelNames = models.map(m => m.name);

    if (!modelNames.includes(model)) {
      console.log(`Model '${model}' not available locally`);
      console.log(`Available models: ${modelNames.join(', ')}`);
      console.log(`\nTo download: ollama pull ${model}`);
      console.log(`Browse models: https://ollama.com/search`);
      return false;
    }

    return true;

  } catch (error) {
    console.log(`Failed to check models: ${error.message}`);
    return false;
  }
}

// Usage
if (await ensureModelAvailable('llama3.2')) {
  // Proceed with using the model
}
```

## Best Practices

1. **Always Use Streaming**: Stream responses for better user experience
2. **Ask About Models**: Don't assume models - ask users which model they want to use
3. **Verify Connection**: Check the Ollama connection during development with curl
4. **Error Handling**: Handle model-not-found and connection errors gracefully
5. **Context Management**: Manage conversation history to avoid token limits
6. **Model Selection**: Direct users to https://ollama.com/search to find models
7. **Custom Hosts**: Always ask users for their Ollama URL, don't assume localhost
8. **ES Modules**: Use `"type": "module"` in package.json for modern import syntax

## Complete Example Script

```javascript
// script.js
import { Ollama } from 'ollama';

const ollama = new Ollama();

async function main() {
  const model = 'llama3.2';

  // Check the connection
  try {
    await ollama.list();
  } catch (error) {
    console.log(`Error: Cannot connect to Ollama - ${error.message}`);
    console.log('Make sure Ollama is running: ollama serve');
    return;
  }

  // Stream a response
  console.log('Asking about JavaScript...\n');

  const response = await ollama.generate({
    model: model,
    prompt: 'Explain JavaScript in one sentence',
    stream: true
  });

  process.stdout.write('Response: ');
  for await (const chunk of response) {
    process.stdout.write(chunk.response);
  }
  process.stdout.write('\n');
}

main();
```

### package.json

```json
{
  "type": "module",
  "dependencies": {
    "ollama": "^0.5.0"
  }
}
```

### Running

```bash
npm install
node script.js
```

454
skills/ollama/references/python_api.md
Normal file
@@ -0,0 +1,454 @@

# Ollama Python API Reference

This reference provides comprehensive examples for integrating Ollama into Python projects using the official `ollama` Python library.

**IMPORTANT**: Always use streaming responses for better user experience.

## Table of Contents

1. [Installation & Setup](#installation--setup)
2. [Verifying Ollama Connection](#verifying-ollama-connection)
3. [Model Selection](#model-selection)
4. [Generate API (Text Completion)](#generate-api-text-completion)
5. [Chat API (Conversational)](#chat-api-conversational)
6. [Embeddings](#embeddings)
7. [Error Handling](#error-handling)
8. [Best Practices](#best-practices)
9. [PEP 723 Inline Script Metadata](#pep-723-inline-script-metadata)

## Installation & Setup

### Installation

```bash
pip install ollama
```

### Import

```python
import ollama
```

### Configuration

**IMPORTANT**: Always ask users for their Ollama URL. Do not assume localhost.

```python
# Create a client with a custom URL
client = ollama.Client(host='http://localhost:11434')

# Or for a remote Ollama instance
# client = ollama.Client(host='http://192.168.1.100:11434')
```

## Verifying Ollama Connection

### Check Connection (Development)

During development, verify Ollama is running and check available models using curl:

```bash
# Check that Ollama is running and get its version
curl http://localhost:11434/api/version

# List available models
curl http://localhost:11434/api/tags
```

### Check Ollama Connection (Python)

```python
import ollama

def check_ollama():
    """Check if Ollama is running."""
    try:
        # Listing models is a simple way to verify the connection
        models = ollama.list()
        print("✓ Connected to Ollama")
        print(f"  Available models: {len(models.get('models', []))}")
        return True
    except Exception as e:
        print(f"✗ Failed to connect to Ollama: {e}")
        return False

# Usage
check_ollama()
```

## Model Selection

**IMPORTANT**: Always ask users which model they want to use. Don't assume a default.

### Listing Available Models

```python
import ollama

def list_available_models():
    """List all locally installed models."""
    models = ollama.list()
    return [model['name'] for model in models.get('models', [])]

# Usage - show available models to the user
available = list_available_models()
print("Available models:")
for model in available:
    print(f"  - {model}")
```

### Finding Models

If the user doesn't have a model installed or wants to use a different one:
- **Browse models**: Direct them to https://ollama.com/search
- **Popular choices**: llama3.2, llama3.1, mistral, phi3, qwen2.5
- **Specialized models**: codellama (coding), llava (vision), nomic-embed-text (embeddings)

### Model Selection Flow

```python
def select_model():
    """Interactive model selection."""
    available = list_available_models()

    if not available:
        print("No models installed!")
        print("Visit https://ollama.com/search to find models")
        print("Then run: ollama pull <model-name>")
        return None

    print("Available models:")
    for i, model in enumerate(available, 1):
        print(f"  {i}. {model}")

    # In practice, you'd ask the user to choose
    return available[0]  # Default to first available
```

## Generate API (Text Completion)

### Streaming Text Generation

```python
import ollama

def generate_stream(prompt, model="llama3.2"):
    """Generate text with streaming (yields tokens as they arrive)."""
    stream = ollama.generate(
        model=model,
        prompt=prompt,
        stream=True
    )

    for chunk in stream:
        yield chunk['response']

# Usage
print("Response: ", end="", flush=True)
for token in generate_stream("Why is the sky blue?", model="llama3.2"):
    print(token, end="", flush=True)
print()
```

### With Options (Temperature, Top-P, etc.)

```python
def generate_with_options(prompt, model="llama3.2"):
    """Generate with custom sampling parameters."""
    stream = ollama.generate(
        model=model,
        prompt=prompt,
        stream=True,
        options={
            'temperature': 0.7,
            'top_p': 0.9,
            'top_k': 40,
            'num_predict': 100  # Max tokens
        }
    )

    for chunk in stream:
        yield chunk['response']

# Usage
print("Response: ", end="", flush=True)
for token in generate_with_options("Write a haiku about programming"):
    print(token, end="", flush=True)
print()
```

## Chat API (Conversational)

### Streaming Chat

```python
import ollama

def chat_stream(messages, model="llama3.2"):
    """
    Chat with a model using conversation history, with streaming.

    Args:
        messages: List of message dicts with 'role' and 'content';
                  role can be 'system', 'user', or 'assistant'.
    """
    stream = ollama.chat(
        model=model,
        messages=messages,
        stream=True
    )

    for chunk in stream:
        yield chunk['message']['content']

# Usage
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"}
]

print("Response: ", end="", flush=True)
for token in chat_stream(messages):
    print(token, end="", flush=True)
print()
```

### Multi-turn Conversation

```python
def conversation_loop(model="llama3.2"):
    """Interactive chat loop with streaming responses."""
    messages = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]

    while True:
        user_input = input("\nYou: ")
        if user_input.lower() in ['exit', 'quit']:
            break

        # Add the user message to the history
        messages.append({"role": "user", "content": user_input})

        # Stream the response
        print("Assistant: ", end="", flush=True)
        full_response = ""
        for token in chat_stream(messages, model):
            print(token, end="", flush=True)
            full_response += token
        print()

        # Add the assistant response to the history
        messages.append({"role": "assistant", "content": full_response})

# Usage
conversation_loop()
```

## Embeddings

### Generate Embeddings

```python
import ollama

def get_embeddings(text, model="nomic-embed-text"):
    """
    Generate embeddings for text.

    Note: Use an embedding-specific model like 'nomic-embed-text'.
    Regular models can generate embeddings, but dedicated models work better.
    """
    response = ollama.embeddings(
        model=model,
        prompt=text
    )
    return response['embedding']

# Usage
embedding = get_embeddings("Hello, world!")
print(f"Embedding dimension: {len(embedding)}")
print(f"First 5 values: {embedding[:5]}")
```

### Semantic Similarity

```python
import math

def cosine_similarity(vec1, vec2):
    """Calculate cosine similarity between two vectors."""
    dot_product = sum(a * b for a, b in zip(vec1, vec2))
    magnitude1 = math.sqrt(sum(a * a for a in vec1))
    magnitude2 = math.sqrt(sum(b * b for b in vec2))
    return dot_product / (magnitude1 * magnitude2)

# Usage
text1 = "The cat sat on the mat"
text2 = "A feline rested on a rug"
text3 = "Python is a programming language"

emb1 = get_embeddings(text1)
emb2 = get_embeddings(text2)
emb3 = get_embeddings(text3)

print(f"Similarity 1-2: {cosine_similarity(emb1, emb2):.3f}")  # High
print(f"Similarity 1-3: {cosine_similarity(emb1, emb3):.3f}")  # Low
```

## Error Handling

### Comprehensive Error Handling

```python
import ollama

def safe_generate_stream(prompt, model="llama3.2"):
    """Generate with comprehensive error handling."""
    try:
        stream = ollama.generate(
            model=model,
            prompt=prompt,
            stream=True
        )

        for chunk in stream:
            yield chunk['response']

    except ollama.ResponseError as e:
        # Model not found or other API errors
        if "not found" in str(e).lower():
            print(f"\n✗ Model '{model}' not found")
            print(f"  Run: ollama pull {model}")
            print("  Or browse models at: https://ollama.com/search")
        else:
            print(f"\n✗ API Error: {e}")

    except ConnectionError:
        print("\n✗ Connection failed. Is Ollama running?")
        print("  Start Ollama with: ollama serve")

    except Exception as e:
        print(f"\n✗ Unexpected error: {e}")

# Usage
print("Response: ", end="", flush=True)
for token in safe_generate_stream("Hello, world!", model="llama3.2"):
    print(token, end="", flush=True)
print()
```

### Checking Model Availability

```python
def ensure_model_available(model):
    """Check if a model is available; provide guidance if not."""
    try:
        available = ollama.list()
        model_names = [m['name'] for m in available.get('models', [])]

        if model not in model_names:
            print(f"Model '{model}' not available locally")
            print(f"Available models: {', '.join(model_names)}")
            print(f"\nTo download: ollama pull {model}")
            print("Browse models: https://ollama.com/search")
            return False

        return True

    except Exception as e:
        print(f"Failed to check models: {e}")
        return False

# Usage
if ensure_model_available("llama3.2"):
    # Proceed with using the model
    pass
```

## Best Practices

1. **Always Use Streaming**: Stream responses for better user experience
2. **Ask About Models**: Don't assume models - ask users which model they want to use
3. **Verify Connection**: Check the Ollama connection during development with curl
4. **Error Handling**: Handle model-not-found and connection errors gracefully
5. **Context Management**: Manage conversation history to avoid token limits
6. **Model Selection**: Direct users to https://ollama.com/search to find models
7. **Custom Hosts**: Always ask users for their Ollama URL, don't assume localhost

## PEP 723 Inline Script Metadata

When creating standalone Python scripts for users, always include inline script metadata at the top of the file using the PEP 723 format. This allows tools like `uv` and `pipx` to automatically manage dependencies.

### Format

```python
# /// script
# requires-python = ">=3.8"
# dependencies = [
#     "ollama>=0.1.0",
# ]
# ///

import ollama

# Your code here
```

### Running Scripts

Users can run scripts with PEP 723 metadata using:

```bash
# Using uv (recommended)
uv run script.py

# Using pipx
pipx run script.py

# Traditional approach
pip install ollama
python script.py
```

### Complete Example Script

```python
# /// script
# requires-python = ">=3.8"
# dependencies = [
#     "ollama>=0.1.0",
# ]
# ///

import ollama

def main():
    """Simple streaming chat example."""
    model = "llama3.2"

    # Check the connection
    try:
        ollama.list()
    except Exception as e:
        print(f"Error: Cannot connect to Ollama - {e}")
        print("Make sure Ollama is running: ollama serve")
        return

    # Stream a response
    print("Asking about Python...\n")
    stream = ollama.generate(
        model=model,
        prompt="Explain Python in one sentence",
        stream=True
    )

    print("Response: ", end="", flush=True)
    for chunk in stream:
        print(chunk['response'], end="", flush=True)
    print()

if __name__ == "__main__":
    main()
```