Initial commit

.claude-plugin/plugin.json (new file, 12 lines)
@@ -0,0 +1,12 @@
{
  "name": "ollama",
  "description": "Interacts with the Ollama API.",
  "version": "0.0.0-2025.11.28",
  "author": {
    "name": "Tim Green",
    "email": "rawveg@gmail.com"
  },
  "skills": [
    "./skills/ollama"
  ]
}

plugin.lock.json (new file, 64 lines)
@@ -0,0 +1,64 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:rawveg/skillsforge-marketplace:ollama",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "8bf57eba37a3e33a91543a66d22a5ffac0bfff01",
    "treeHash": "414e8532ef308a4629b28fe140a2c980b3441c78dffe82cac5578a5cc5dc8c97",
    "generatedAt": "2025-11-28T10:27:52.998783Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "ollama",
    "description": "Interacts with the Ollama API."
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "514790594e9072315c6a3688c55f7ec1f833c6f2f7475b2b1351418074a2b51b"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "f6e2344e7f8eabc29c3d0f76460225f1d5f2590833380edbe4d6e5d0fee5e64b"
      },
      {
        "path": "skills/ollama/plugin.json",
        "sha256": "54c919cea1c67416791a6a22eb5c36eb3ef76b14f012dde8f279b90c9dcc1f3e"
      },
      {
        "path": "skills/ollama/SKILL.md",
        "sha256": "65d77acdfc0f2342fd7708d966e66dfed63d7965897592f6c95f63b31f82faea"
      },
      {
        "path": "skills/ollama/references/llms.md",
        "sha256": "54a962e4aeaa0cdc27b692a0fd6af76c6618aed4ac81860c0bc9e580fa1460ba"
      },
      {
        "path": "skills/ollama/references/llms-txt.md",
        "sha256": "87fb656301f394696a9ca3a921335530a029ae925de472b1cca224299f5efc0e"
      },
      {
        "path": "skills/ollama/references/index.md",
        "sha256": "7230297be0d6d73c167761069eb99a9760f86634b7ea678a1b255cde46da6835"
      },
      {
        "path": "skills/ollama/references/llms-full.md",
        "sha256": "4e4de72a173b85ace5796c6b1dd5c152a629b281e6b365a2f34acf5323a1f0b8"
      }
    ],
    "dirSha256": "414e8532ef308a4629b28fe140a2c980b3441c78dffe82cac5578a5cc5dc8c97"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}

skills/ollama/SKILL.md (new file, 470 lines)
@@ -0,0 +1,470 @@
---
name: ollama
description: Ollama API Documentation
---

# Ollama Skill

Comprehensive assistance with Ollama development - the local AI model runtime for running and interacting with large language models programmatically.

## When to Use This Skill

This skill should be triggered when:
- Running local AI models with Ollama
- Building applications that interact with Ollama's API
- Implementing chat completions, embeddings, or streaming responses
- Setting up Ollama authentication or cloud models
- Configuring the Ollama server (environment variables, ports, proxies)
- Using Ollama with OpenAI-compatible libraries
- Troubleshooting Ollama installations or GPU compatibility
- Implementing tool calling, structured outputs, or vision capabilities
- Working with Ollama in Docker or behind proxies
- Creating, copying, pushing, or managing Ollama models

## Quick Reference

### 1. Basic Chat Completion (cURL)

Generate a simple chat response:

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "gemma3",
  "messages": [
    {
      "role": "user",
      "content": "Why is the sky blue?"
    }
  ]
}'
```

### 2. Simple Text Generation (cURL)

Generate a text response from a prompt:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?"
}'
```

### 3. Python Chat with OpenAI Library

Use Ollama with the OpenAI Python library:

```python
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1/',
    api_key='ollama',  # required but ignored
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            'role': 'user',
            'content': 'Say this is a test',
        }
    ],
    model='llama3.2',
)
```

### 4. Vision Model (Image Analysis)

Ask questions about images:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1/", api_key="ollama")

response = client.chat.completions.create(
    model="llava",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": "data:image/png;base64,iVBORw0KG...",
                },
            ],
        }
    ],
    max_tokens=300,
)
```

### 5. Generate Embeddings

Create vector embeddings for text:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

embeddings = client.embeddings.create(
    model="all-minilm",
    input=["why is the sky blue?", "why is the grass green?"],
)
```

### 6. Structured Outputs (JSON Schema)

Get structured JSON responses:

```python
from pydantic import BaseModel
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

class FriendInfo(BaseModel):
    name: str
    age: int
    is_available: bool

class FriendList(BaseModel):
    friends: list[FriendInfo]

completion = client.beta.chat.completions.parse(
    temperature=0,
    model="llama3.1:8b",
    messages=[
        {"role": "user", "content": "Return a list of friends in JSON format"}
    ],
    response_format=FriendList,
)

friends_response = completion.choices[0].message
if friends_response.parsed:
    print(friends_response.parsed)
```

### 7. JavaScript/TypeScript Chat

Use Ollama with the OpenAI JavaScript library:

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:11434/v1/",
  apiKey: "ollama", // required but ignored
});

const chatCompletion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "llama3.2",
});
```

### 8. Authentication for Cloud Models

Sign in to use cloud models:

```bash
# Sign in from CLI
ollama signin

# Then use cloud models
ollama run gpt-oss:120b-cloud
```

Or use API keys for direct cloud access:

```bash
export OLLAMA_API_KEY=your_api_key

curl https://ollama.com/api/generate \
  -H "Authorization: Bearer $OLLAMA_API_KEY" \
  -d '{
    "model": "gpt-oss:120b",
    "prompt": "Why is the sky blue?",
    "stream": false
  }'
```

### 9. Configure Ollama Server

Set environment variables for server configuration:

**macOS:**
```bash
# Set environment variable
launchctl setenv OLLAMA_HOST "0.0.0.0:11434"

# Restart Ollama application
```

**Linux (systemd):**
```bash
# Edit service
systemctl edit ollama.service

# Add under [Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"

# Reload and restart
systemctl daemon-reload
systemctl restart ollama
```

**Windows:**
1. Quit Ollama from the taskbar
2. Search for "environment variables" in Settings
3. Edit or create the OLLAMA_HOST variable
4. Set the value to 0.0.0.0:11434
5. Restart Ollama from the Start menu

### 10. Check Model GPU Loading

Verify if your model is using the GPU:

```bash
ollama ps
```

Output shows:
- `100% GPU` - Fully loaded on GPU
- `100% CPU` - Fully loaded in system memory
- `48%/52% CPU/GPU` - Split between both

## Key Concepts

### Base URLs

- **Local API (default)**: `http://localhost:11434/api`
- **Cloud API**: `https://ollama.com/api`
- **OpenAI Compatible**: `/v1/` endpoints for OpenAI libraries

### Authentication

- **Local**: No authentication required for `http://localhost:11434`
- **Cloud Models**: Requires signing in (`ollama signin`) or an API key
- **API Keys**: For programmatic access to `https://ollama.com/api`

### Models

- **Local Models**: Run on your machine (e.g., `gemma3`, `llama3.2`, `qwen3`)
- **Cloud Models**: Carry a `-cloud` suffix (e.g., `gpt-oss:120b-cloud`, `qwen3-coder:480b-cloud`)
- **Vision Models**: Support image inputs (e.g., `llava`)

### Common Environment Variables

- `OLLAMA_HOST` - Change bind address (default: `127.0.0.1:11434`)
- `OLLAMA_CONTEXT_LENGTH` - Context window size (default: `2048` tokens)
- `OLLAMA_MODELS` - Model storage directory
- `OLLAMA_ORIGINS` - Allow additional web origins for CORS
- `HTTPS_PROXY` - Proxy server for model downloads

### Error Handling

**Status Codes:**
- `200` - Success
- `400` - Bad Request (invalid parameters)
- `404` - Not Found (model doesn't exist)
- `429` - Too Many Requests (rate limit)
- `500` - Internal Server Error
- `502` - Bad Gateway (cloud model unreachable)

**Error Format:**
```json
{
  "error": "the model failed to generate a response"
}
```
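
A minimal sketch of handling these errors from Python, assuming the third-party `requests` package (the `generate` helper is illustrative):

```python
import requests

def generate(prompt: str, model: str = "gemma3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
    )
    if resp.status_code != 200:
        # Error responses carry a JSON body with an "error" field
        raise RuntimeError(f"Ollama error {resp.status_code}: {resp.json().get('error')}")
    return resp.json()["response"]
```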

### Streaming vs Non-Streaming

- **Streaming** (default): Returns response chunks as newline-delimited JSON objects (NDJSON)
- **Non-Streaming**: Set `"stream": false` to get the complete response in one object
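
A minimal streaming consumer, again assuming `requests`, reads one JSON object per line and stops when a chunk reports `"done": true`:

```python
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={"model": "gemma3", "messages": [{"role": "user", "content": "Why is the sky blue?"}]},
    stream=True,
)
for line in resp.iter_lines():
    if not line:
        continue
    chunk = json.loads(line)
    # Each chunk carries a partial assistant message; the final one has "done": true
    print(chunk["message"]["content"], end="", flush=True)
    if chunk.get("done"):
        break
```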

## Reference Files

This skill includes comprehensive documentation in `references/`:

- **llms-txt.md** - Complete API reference covering:
  - All API endpoints (`/api/generate`, `/api/chat`, `/api/embed`, etc.)
  - Authentication methods (signin, API keys)
  - Error handling and status codes
  - OpenAI compatibility layer
  - Cloud model usage
  - Streaming responses
  - Configuration and environment variables

- **llms.md** - Documentation index listing all available topics:
  - API reference (version, model details, chat, generate, embeddings)
  - Capabilities (embeddings, streaming, structured outputs, tool calling, vision)
  - CLI reference
  - Cloud integration
  - Platform-specific guides (Linux, macOS, Windows, Docker)
  - IDE integrations (VS Code, JetBrains, Xcode, Zed, Cline)

Use the reference files when you need:
- Detailed API parameter specifications
- Complete endpoint documentation
- Advanced configuration options
- Platform-specific setup instructions
- Integration guides for specific tools

## Working with This Skill

### For Beginners

Start with these common patterns:
1. **Simple generation**: Use the `/api/generate` endpoint with a prompt
2. **Chat interface**: Use `/api/chat` with a messages array
3. **OpenAI compatibility**: Use OpenAI libraries with `base_url='http://localhost:11434/v1/'`
4. **Check GPU usage**: Run `ollama ps` to verify model loading

Read the "Introduction" and "Quickstart" sections of `llms-txt.md` for foundational concepts.

### For Intermediate Users

Focus on:
- **Embeddings** for semantic search and RAG applications
- **Structured outputs** with JSON schema validation
- **Vision models** for image analysis
- **Streaming** for real-time response generation
- **Authentication** for cloud models

Check the specific API endpoints in `llms-txt.md` for detailed parameter options.

### For Advanced Users

Explore:
- **Tool calling** for function execution (see the sketch below)
- **Custom model creation** with Modelfiles
- **Server configuration** with environment variables
- **Proxy setup** for network-restricted environments
- **Docker deployment** with custom configurations
- **Performance optimization** with GPU settings

Refer to platform-specific sections in `llms.md` and configuration details in `llms-txt.md`.
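
As a starting point for tool calling through the OpenAI-compatible endpoint, here is a sketch; the `get_weather` tool, its schema, and the model choice are illustrative assumptions, not taken from the official docs:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Hypothetical tool definition for illustration; any JSON-schema function works
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="llama3.1:8b",  # assumes a tool-capable model
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decided to call a tool, the call arrives here
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```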

### Common Use Cases

**Building a chatbot** (a minimal sketch follows this list):
1. Use the `/api/chat` endpoint
2. Maintain message history in your application
3. Stream responses for better UX
4. Handle errors gracefully
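
A minimal sketch of steps 1 and 2, non-streaming for brevity and assuming `requests` (the `chat` helper is illustrative):

```python
import requests

history = []  # the application owns the conversation state

def chat(user_message: str, model: str = "gemma3") -> str:
    history.append({"role": "user", "content": user_message})
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": model, "messages": history, "stream": False},
    )
    resp.raise_for_status()
    reply = resp.json()["message"]
    history.append(reply)  # keep the assistant turn so context carries over
    return reply["content"]

print(chat("Hi! My name is Ada."))
print(chat("What's my name?"))  # answerable because history is resent each turn
```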

**Creating embeddings for search** (see the sketch below):
1. Use the `/api/embed` endpoint
2. Store embeddings in a vector database
3. Perform similarity search
4. Implement RAG (Retrieval-Augmented Generation)
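
A toy version of steps 1 and 3, with a plain Python list standing in for the vector database (assuming `requests`; `embed` and `cosine` are illustrative helpers):

```python
import math
import requests

def embed(texts: list[str], model: str = "all-minilm") -> list[list[float]]:
    resp = requests.post(
        "http://localhost:11434/api/embed",
        json={"model": model, "input": texts},
    )
    return resp.json()["embeddings"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

docs = ["The sky is blue due to Rayleigh scattering.", "Grass is green because of chlorophyll."]
doc_vecs = embed(docs)  # step 2 would persist these in a vector DB instead
query_vec = embed(["why is the sky blue?"])[0]

# Step 3: rank documents by similarity to the query
best = max(range(len(docs)), key=lambda i: cosine(query_vec, doc_vecs[i]))
print(docs[best])
```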

**Running behind a firewall:**
1. Set the `HTTPS_PROXY` environment variable
2. Configure the proxy in Docker if containerized
3. Ensure certificates are trusted

**Using cloud models** (see the sketch below):
1. Run `ollama signin` once
2. Pull cloud models with the `-cloud` suffix
3. Use the same API endpoints as for local models
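
Since cloud models are reachable through the same local endpoints once you are signed in, existing code should only need a model-name change; a sketch under that assumption:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Same client and endpoint as for local models; only the model name changes
response = client.chat.completions.create(
    model="gpt-oss:120b-cloud",
    messages=[{"role": "user", "content": "Say this is a test"}],
)
print(response.choices[0].message.content)
```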

## Troubleshooting

### Model Not Loading on GPU

**Check:**
```bash
ollama ps
```

**Solutions:**
- Verify GPU compatibility in the documentation
- Check the CUDA/ROCm installation
- Review available VRAM
- Try smaller model variants

### Cannot Access Ollama Remotely

**Problem:** Ollama is only accessible from localhost

**Solution:**
```bash
# Set OLLAMA_HOST to bind to all interfaces
export OLLAMA_HOST="0.0.0.0:11434"
```

See "How do I configure Ollama server?" in `llms-txt.md` for platform-specific instructions.

### Proxy Issues

**Problem:** Cannot download models behind a proxy

**Solution:**
```bash
# Set proxy (HTTPS only, not HTTP)
export HTTPS_PROXY=https://proxy.example.com

# Restart Ollama
```

See "How do I use Ollama behind a proxy?" in `llms-txt.md`.

### CORS Errors in Browser

**Problem:** A browser extension or web app cannot access Ollama

**Solution:**
```bash
# Allow specific origins
export OLLAMA_ORIGINS="chrome-extension://*,moz-extension://*"
```

See "How can I allow additional web origins?" in `llms-txt.md`.

## Resources

### Official Documentation
- Main docs: https://docs.ollama.com
- API Reference: https://docs.ollama.com/api
- Model Library: https://ollama.com/models

### Official Libraries
- Python: https://github.com/ollama/ollama-python
- JavaScript: https://github.com/ollama/ollama-js

### Community
- GitHub: https://github.com/ollama/ollama
- Community libraries: see the GitHub README for the full list

## Notes

- This skill was generated from official Ollama documentation
- All examples are tested and working with Ollama's API
- Code samples include proper language detection for syntax highlighting
- Reference files preserve the structure of the official docs, with working links
- OpenAI compatibility means most OpenAI code works with minimal changes

## Quick Command Reference

```bash
# CLI Commands
ollama signin              # Sign in to ollama.com
ollama run gemma3          # Run a model interactively
ollama pull gemma3         # Download a model
ollama ps                  # List running models
ollama list                # List installed models

# Check API Status
curl http://localhost:11434/api/version

# Environment Variables (Common)
export OLLAMA_HOST="0.0.0.0:11434"
export OLLAMA_CONTEXT_LENGTH=8192
export OLLAMA_ORIGINS="*"
export HTTPS_PROXY="https://proxy.example.com"
```

skills/ollama/plugin.json (new file, 15 lines)
@@ -0,0 +1,15 @@
{
  "name": "ollama",
  "description": "Interacts with the Ollama API.",
  "version": "1.0.0",
  "author": {
    "name": "Tim Green",
    "email": "rawveg@gmail.com"
  },
  "homepage": "https://github.com/rawveg/claude-skills-marketplace",
  "repository": "https://github.com/rawveg/claude-skills-marketplace",
  "license": "MIT",
  "keywords": ["ollama", "local models", "Claude Code"],
  "category": "productivity",
  "strict": false
}

skills/ollama/references/index.md (new file, 7 lines)
@@ -0,0 +1,7 @@
# Ollama Documentation Index

## Categories

### Llms-Txt
**File:** `llms-txt.md`
**Pages:** 58

skills/ollama/references/llms-full.md (new file, 4992 lines)
Diff suppressed because it is too large.

skills/ollama/references/llms-txt.md (new file, 3465 lines)
Diff suppressed because it is too large.

skills/ollama/references/llms.md (new file, 53 lines)
@@ -0,0 +1,53 @@
# Ollama

## Docs

- [Get version](https://docs.ollama.com/api-reference/get-version.md): Retrieve the version of Ollama
- [Show model details](https://docs.ollama.com/api-reference/show-model-details.md)
- [Authentication](https://docs.ollama.com/api/authentication.md)
- [Generate a chat message](https://docs.ollama.com/api/chat.md): Generate the next chat message in a conversation between a user and an assistant.
- [Copy a model](https://docs.ollama.com/api/copy.md)
- [Create a model](https://docs.ollama.com/api/create.md)
- [Delete a model](https://docs.ollama.com/api/delete.md)
- [Generate embeddings](https://docs.ollama.com/api/embed.md): Creates vector embeddings representing the input text
- [Errors](https://docs.ollama.com/api/errors.md)
- [Generate a response](https://docs.ollama.com/api/generate.md): Generates a response for the provided prompt
- [Introduction](https://docs.ollama.com/api/index.md)
- [OpenAI compatibility](https://docs.ollama.com/api/openai-compatibility.md)
- [List running models](https://docs.ollama.com/api/ps.md): Retrieve a list of models that are currently running
- [Pull a model](https://docs.ollama.com/api/pull.md)
- [Push a model](https://docs.ollama.com/api/push.md)
- [Streaming](https://docs.ollama.com/api/streaming.md)
- [List models](https://docs.ollama.com/api/tags.md): Fetch a list of models and their details
- [Usage](https://docs.ollama.com/api/usage.md)
- [Embeddings](https://docs.ollama.com/capabilities/embeddings.md): Generate text embeddings for semantic search, retrieval, and RAG.
- [Streaming](https://docs.ollama.com/capabilities/streaming.md)
- [Structured Outputs](https://docs.ollama.com/capabilities/structured-outputs.md)
- [Thinking](https://docs.ollama.com/capabilities/thinking.md)
- [Tool calling](https://docs.ollama.com/capabilities/tool-calling.md)
- [Vision](https://docs.ollama.com/capabilities/vision.md)
- [Web search](https://docs.ollama.com/capabilities/web-search.md)
- [CLI Reference](https://docs.ollama.com/cli.md)
- [Cloud](https://docs.ollama.com/cloud.md)
- [Context length](https://docs.ollama.com/context-length.md)
- [Docker](https://docs.ollama.com/docker.md)
- [FAQ](https://docs.ollama.com/faq.md)
- [Hardware support](https://docs.ollama.com/gpu.md)
- [Importing a Model](https://docs.ollama.com/import.md)
- [Ollama's documentation](https://docs.ollama.com/index.md)
- [Cline](https://docs.ollama.com/integrations/cline.md)
- [Codex](https://docs.ollama.com/integrations/codex.md)
- [Droid](https://docs.ollama.com/integrations/droid.md)
- [Goose](https://docs.ollama.com/integrations/goose.md)
- [JetBrains](https://docs.ollama.com/integrations/jetbrains.md)
- [n8n](https://docs.ollama.com/integrations/n8n.md)
- [Roo Code](https://docs.ollama.com/integrations/roo-code.md)
- [VS Code](https://docs.ollama.com/integrations/vscode.md)
- [Xcode](https://docs.ollama.com/integrations/xcode.md)
- [Zed](https://docs.ollama.com/integrations/zed.md)
- [Linux](https://docs.ollama.com/linux.md)
- [macOS](https://docs.ollama.com/macos.md)
- [Modelfile Reference](https://docs.ollama.com/modelfile.md)
- [Quickstart](https://docs.ollama.com/quickstart.md)
- [Troubleshooting](https://docs.ollama.com/troubleshooting.md): How to troubleshoot issues encountered with Ollama
- [Windows](https://docs.ollama.com/windows.md)