Initial commit

Zhongwei Li
2025-11-30 08:37:01 +08:00
commit ac6bc54890
11 changed files with 1083 additions and 0 deletions


@@ -0,0 +1,14 @@
{
  "name": "reachy-mini",
  "description": "Automatic movement responses for Reachy Mini during conversations",
  "version": "1.0.0",
  "author": {
    "name": "LAURA-agent"
  },
  "commands": [
    "./commands"
  ],
  "hooks": [
    "./hooks"
  ]
}

3
README.md Normal file

@@ -0,0 +1,3 @@
# reachy-mini
Automatic movement responses for Reachy Mini during conversations

114
commands/mood-direct.md Normal file

@@ -0,0 +1,114 @@
---
description: Direct mood-based movements via daemon API (no conversation app required)
---
# Reachy Mini Mood Plugin (Direct Mode)
**Lightweight mood system that calls daemon API directly without requiring the conversation app.**
## Difference from `/reachy-mini:mood`
**`/reachy-mini:mood`** - Full integration with conversation app (requires run_reachy_pi.py)
**`/reachy-mini:mood-direct`** - Direct daemon API calls (only requires daemon running)
## When to Use This
Use **mood-direct** when:
- Testing mood behaviors without full conversation app
- Daemon is running but conversation app is not
- You want simpler, more predictable mood triggering
- Debugging mood system behavior
Use **mood** (regular) when:
- Full conversation app is running
- You want synchronized breathing/face tracking integration
- Running the complete Reachy Mini conversation system
## How It Works
This command uses the plugin's `mood_extractor.py` hook which:
1. Extracts the `<!-- MOOD: mood_name -->` marker from your response
2. Falls back to the "thoughtful" mood if an unknown mood is specified
3. Polls the TTS server status endpoint at `http://localhost:5001/status`
4. Continuously plays random emotions from that mood category
5. Stops when TTS finishes (`is_playing: false`)
**Marker Format:**
```html
<!-- MOOD: mood_name -->
```
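The marker is plain HTML-comment syntax, so it can be recovered with a single regular expression. A minimal sketch of the extraction and fallback behavior described above (the regex and the "thoughtful" fallback mirror `hooks/mood_extractor.py`; the standalone function here is illustrative, not part of the plugin):

```python
import re

# The nine mood categories this command documents
MOODS = {"celebratory", "thoughtful", "welcoming", "confused", "frustrated",
         "surprised", "calm", "energetic", "playful"}

def extract_mood(text, fallback="thoughtful"):
    """Return the mood named in a <!-- MOOD: ... --> marker.

    Returns None when no marker is present (the hook exits silently),
    and falls back to 'thoughtful' for unknown mood names.
    """
    m = re.search(r'<!--\s*MOOD:\s*([a-zA-Z0-9_]+)\s*-->', text)
    if not m:
        return None
    mood = m.group(1)
    return mood if mood in MOODS else fallback

print(extract_mood("<!-- MOOD: celebratory -->\nFixed!"))  # celebratory
print(extract_mood("<!-- MOOD: bogus -->"))                # thoughtful
print(extract_mood("no marker here"))                      # None
```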
## Available Moods (9 Categories)
### celebratory
**Emotions:** success1, success2, proud1-3, cheerful1, electric1, enthusiastic1-2, grateful1, yes1, laughing1-2
### thoughtful
**Emotions:** thoughtful1-2, curious1, attentive1-2, inquiring1-3, understanding1-2
### welcoming
**Emotions:** welcoming1-2, helpful1-2, loving1, come1, grateful1, cheerful1, calming1
### confused
**Emotions:** confused1, uncertain1, lost1, inquiring1-2, incomprehensible2, uncomfortable1, oops1-2
### frustrated
**Emotions:** frustrated1, irritated1-2, impatient1-2, exhausted1, tired1, displeased1-2
### surprised
**Emotions:** surprised1-2, amazed1, oops1-2, incomprehensible2, electric1
### calm
**Emotions:** calming1, serenity1, relief1-2, shy1, understanding1-2, sleep1
### energetic
**Emotions:** electric1, enthusiastic1-2, dance1-3, laughing1-2, yes1, come1
### playful
**Emotions:** laughing1-2, dance1-3, cheerful1, enthusiastic1, oops1-2
## Usage Example
```markdown
<!-- TTS: "Fixed the bug! The validation middleware now handles edge cases properly." -->
<!-- MOOD: celebratory -->
Fixed! The validation middleware now checks all edge cases. Tests passing, ready for review.
```
## Technical Details
**Requirements:**
- Reachy Mini daemon running (`http://localhost:8100`)
- TTS server running (`http://localhost:5001`)
**Behavior:**
- Move timing: 1-2 second intervals between moves
- Safety timeout: 60 seconds maximum
- Fallback mood: "thoughtful" (if invalid mood specified)
- Dataset: `pollen-robotics/reachy-mini-emotions-library`
**No Integration With:**
- Conversation app breathing system
- Face tracking synchronization
- External control state management
## Quick Decision Guide
| **Situation** | **Mood** |
|--------------|----------|
| Task completed successfully | celebratory |
| Analyzing code/problem | thoughtful |
| Greeting/helping user | welcoming |
| Need clarification | confused |
| Persistent bug/difficulty | frustrated |
| Found unexpected issue | surprised |
| Explaining complex topic | calm |
| High-energy response | energetic |
| Jokes/casual/lighthearted | playful |
---
*This is the simplified direct-API version. For full integration with breathing and face tracking, use `/reachy-mini:mood` instead.*

238
commands/mood.md Normal file

@@ -0,0 +1,238 @@
---
description: Enable continuous mood-based movements synchronized with TTS duration
---
# Reachy Mini Mood Plugin
**Continuous movement system that automatically loops emotions from a mood category until TTS finishes speaking.**
## Difference from `/reachy-mini:move`
**`/reachy-mini:move`** - Pick 1-2 specific emotions for short responses (manual control)
**`/reachy-mini:mood`** - Pick a mood category, auto-loops random emotions until TTS stops (continuous presence)
## How It Works
Instead of specifying individual emotions, you specify a **mood category**. The system then:
1. Extracts the mood from your response marker
2. Polls the TTS server status endpoint for playback state
3. Continuously plays random emotions from that mood category
4. Stops automatically when TTS finishes speaking (detected by `is_playing: false`)
**Marker Format:**
```html
<!-- MOOD: mood_name -->
```
## Goal: Synchronized Multimodal Communication
This system creates **continuous ambient movement** that matches your TTS duration, eliminating dead silence or awkward timing mismatches.
**Timing:**
- Moves play at 1-2 second intervals (randomized for natural variation)
- Continues until TTS playback completes
- Safety timeout: 60 seconds max
- Perfect for longer explanations and detailed responses
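As a rough back-of-envelope, the number of moves you can expect is the TTS duration divided by the randomized wait between triggers (1-2 seconds, per the Technical Details section below). An illustrative sketch, not part of the plugin:

```python
def expected_moves(tts_seconds, min_wait=1.0, max_wait=2.0):
    """Rough lower/upper bounds on how many emotion triggers fit
    into one TTS clip, given a randomized wait between triggers."""
    low = int(tts_seconds // max_wait)   # slowest cadence
    high = int(tts_seconds // min_wait)  # fastest cadence
    return low, high

print(expected_moves(10))  # (5, 10): a ~10 second response
print(expected_moves(30))  # (15, 30): a ~30 second explanation
```

Actual counts land between the bounds and skew lower once HTTP round-trip time is included.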
## Available Moods (9 Categories)
### celebratory
**Use when:** Completing tasks, achieving success, celebrating wins
**Emotions:** success1, success2, proud1-3, cheerful1, electric1, enthusiastic1-2, grateful1, yes1, laughing1-2
**Example:**
```
<!-- TTS: "The build passed with zero errors, all tests green, and deployment completed successfully in under two minutes!" -->
<!-- MOOD: celebratory -->
Build complete! All tests passing, zero errors, deployed in under 2 minutes. Your new feature is live and running perfectly.
```
---
### thoughtful
**Use when:** Analyzing code, considering options, investigating issues
**Emotions:** thoughtful1-2, curious1, attentive1-2, inquiring1-3, understanding1-2
**Example:**
```
<!-- TTS: "Let me analyze this carefully. The error happens when the async function tries to access state before initialization completes." -->
<!-- MOOD: thoughtful -->
Analyzing the stack trace... The async race condition occurs because state access happens before the initialization promise resolves. I see three possible solutions here.
```
---
### welcoming
**Use when:** Greeting user, acknowledging requests, being helpful
**Emotions:** welcoming1-2, helpful1-2, loving1, come1, grateful1, cheerful1, calming1
**Example:**
```
<!-- TTS: "Happy to help! Let me walk you through setting up the authentication system step by step." -->
<!-- MOOD: welcoming -->
Happy to help with authentication setup! I'll guide you through the OAuth configuration, token management, and session handling. Let's start with the config file.
```
---
### confused
**Use when:** Need clarification, unclear requirements, uncertain about approach
**Emotions:** confused1, uncertain1, lost1, inquiring1-2, incomprehensible2, uncomfortable1, oops1-2
**Example:**
```
<!-- TTS: "I'm not sure I understand the requirement. Are you asking about client side validation or server side validation?" -->
<!-- MOOD: confused -->
I'm not clear on the validation requirement. Do you mean client-side form validation, server-side API validation, or database constraint validation? Each has different implications.
```
---
### frustrated
**Use when:** Persistent bugs, repeated failures, difficult problems
**Emotions:** frustrated1, irritated1-2, impatient1-2, exhausted1, tired1, displeased1-2
**Example:**
```
<!-- TTS: "This bug keeps appearing even after three different fix attempts. The race condition is more complex than initially thought." -->
<!-- MOOD: frustrated -->
This race condition is stubborn. Fixed it three times, but it keeps reappearing in different forms. The async timing issue runs deeper than the surface symptoms suggest.
```
---
### surprised
**Use when:** Discovering bugs, unexpected results, finding edge cases
**Emotions:** surprised1-2, amazed1, oops1-2, incomprehensible2, electric1
**Example:**
```
<!-- TTS: "Wait, this function is being called fifteen thousand times per second! That explains the performance issue." -->
<!-- MOOD: surprised -->
Found it! This callback is firing 15,000 times per second due to a missing debounce. That's why the CPU usage is maxed out. Adding throttling will fix this immediately.
```
---
### calm
**Use when:** Explaining complex topics, soothing after stress, methodical work
**Emotions:** calming1, serenity1, relief1-2, shy1, understanding1-2, sleep1
**Example:**
```
<!-- TTS: "Let me explain the async patterns calmly. First, promises handle single future values. Then async await makes them readable." -->
<!-- MOOD: calm -->
Let me break down async patterns methodically. Promises represent future values. Async/await syntax makes them readable. Proper error handling prevents silent failures. It's simpler than it looks.
```
---
### energetic
**Use when:** High-energy responses, dynamic explanations, intense enthusiasm
**Emotions:** electric1, enthusiastic1-2, dance1-3, laughing1-2, yes1, come1
**Example:**
```
<!-- TTS: "This refactoring is going to make the code so much cleaner! Watch how these components snap together perfectly now!" -->
<!-- MOOD: energetic -->
This refactoring is transformative! The components now compose beautifully, dependencies are clean, and the API surface shrinks by 60%. This architecture is exactly what we needed!
```
---
### playful
**Use when:** Jokes, casual interactions, lighthearted moments, self-deprecating humor
**Emotions:** laughing1-2, dance1-3, cheerful1, enthusiastic1, oops1-2
**Example:**
```
<!-- TTS: "I just realized I've been looking at the wrong config file for ten minutes. Found the actual bug now though!" -->
<!-- MOOD: playful -->
Well, that was embarrassing! I was debugging the staging config while looking at production logs. No wonder nothing made sense. Found the real issue now - it's a simple typo in the environment variable name.
```
---
## Usage Examples
### Standard Response (~10 seconds)
```markdown
<!-- TTS: "Fixed the null pointer exception. The validation middleware now checks user objects before accessing properties." -->
<!-- MOOD: celebratory -->
Fixed! The validation middleware now checks user existence before property access. Tests passing, deployed to staging.
```
**Result:** ~5-8 celebratory moves during the 10 second TTS clip
---
### Longer Explanation (~30 seconds)
```markdown
<!-- TTS: "The authentication flow has three stages. First, user submits credentials. Second, server validates and creates session. Third, client stores token and redirects." -->
<!-- MOOD: thoughtful -->
Let me explain the authentication flow step by step.
**Stage 1:** User submits credentials via POST to /api/auth/login
**Stage 2:** Server validates credentials, creates JWT token, stores session in Redis
**Stage 3:** Client receives token, stores in localStorage, redirects to dashboard
Each stage has specific error handling and security considerations.
```
**Result:** ~15-20 thoughtful moves during the 30 second TTS clip
---
## Technical Details
**TTS Server:** `http://localhost:5001` (CTS/tts_server.py)
**Playback Detection:** Polls `http://localhost:5001/status` endpoint for `{"is_playing": true/false}`
**Move Timing:** 1-2 second intervals between moves (randomized for natural variation)
**Safety Timeout:** 60 seconds maximum loop duration
**Daemon API:** `http://localhost:8100/api/move/play/recorded-move-dataset/emotions/{emotion}`
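Combining the endpoints above, one iteration of the mood loop can be sketched as follows (hosts, ports, and paths are the defaults listed in this section; the error handling mirrors the hook's, and any failure is treated as "not playing"):

```python
import requests

TTS_STATUS_URL = "http://localhost:5001/status"
DAEMON_URL = "http://localhost:8100"
DATASET = "pollen-robotics/reachy-mini-emotions-library"

def move_url(emotion):
    """Build the daemon play URL for one emotion from the library."""
    return f"{DAEMON_URL}/api/move/play/recorded-move-dataset/{DATASET}/{emotion}"

def tts_is_playing():
    """Poll the TTS status endpoint; treat any failure as 'not playing'."""
    try:
        r = requests.get(TTS_STATUS_URL, timeout=1)
        return r.status_code == 200 and r.json().get("is_playing", False)
    except requests.exceptions.RequestException:
        return False

print(move_url("thoughtful1"))
```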
## Enabling the Plugin
The plugin is automatically active when installed. Use `<!-- MOOD: mood_name -->` markers to trigger continuous movement.
To disable temporarily, simply don't include the marker.
---
## Choosing the Right Mood
**Quick Decision Guide:**
| **Situation** | **Mood** |
|--------------|----------|
| Task completed successfully | celebratory |
| Analyzing code/problem | thoughtful |
| Greeting/helping user | welcoming |
| Need clarification | confused |
| Persistent bug/difficulty | frustrated |
| Found unexpected issue | surprised |
| Explaining complex topic | calm |
| High-energy response | energetic |
| Jokes/casual/lighthearted | playful |
---
## Important Notes
- **Only one mood per response** - The system loops a single mood category
- **TTS must be running** - Requires CTS/tts_server.py to be active
- **Status polling** - The hook polls the TTS server `/status` endpoint for the completion signal
- **Safety timeout** - Automatically stops after 60 seconds if status detection fails
- **Random selection** - Emotions picked randomly from mood pool for variety
---
*This plugin creates synchronized multimodal communication by matching continuous movement duration to TTS playback, maintaining Reachy's presence throughout your spoken response.*

144
commands/move.md Normal file

@@ -0,0 +1,144 @@
---
description: Enable automatic emotion-based movements for Reachy Mini during conversations
---
# Reachy Mini Movement Plugin
This plugin enables automatic, subtle emotion-based movements for Reachy Mini during conversations, creating an ambient presence without requiring explicit commands.
## Goal: Match Movement Duration to TTS Timing
Since you're using the TTS plugin with ~10 second spoken responses (23-29 words), movements should roughly match that duration to create synchronized multimodal communication.
**Timing strategy:**
**Standard responses (23-29 words, ~10 seconds):**
- **1-2 moves recommended:** Combine emotions (3-5 seconds each) to cover most of the TTS duration; the hook caps extraction at 2 moves per response
- Distribute moves throughout the response for continuous presence
**Longer explanations (when user requests detail):**
- Plan for ~3 words per second of speech
- The 2-move cap still applies, so for continuous presence across a long response use `/reachy-mini:mood` instead, which loops emotions for the full TTS duration
**Coordination:** Movements start with TTS and provide ambient motion during speech
**This creates natural presence while you're speaking without dead silence or movements extending beyond the response.**
## How It Works
Similar to the TTS plugin, this plugin uses the Stop hook to automatically extract movement markers from your responses and trigger corresponding emotion moves on Reachy Mini.
**Marker Format:**
```html
<!-- MOVE: emotion_name -->
```
**Behavior:**
- Maximum 2 moves per response (for subtlety)
- Movements are triggered automatically via daemon API
- Markers are invisible in rendered output
- Only emotion library moves (not dances)
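The extraction step behind this behavior fits in a few lines (same regex and two-move cap as `hooks/move_extractor.py`):

```python
import re

def extract_moves(text, cap=2):
    """Find all <!-- MOVE: ... --> markers; keep at most `cap` for subtlety."""
    return re.findall(r'<!--\s*MOVE:\s*([a-zA-Z0-9_]+)\s*-->', text)[:cap]

response = """<!-- MOVE: curious1 -->
<!-- MOVE: inquiring2 -->
<!-- MOVE: yes1 -->
That's an interesting edge case!"""
print(extract_moves(response))  # ['curious1', 'inquiring2'] -- third marker dropped
```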
## When to Use Movements
Use movements to:
- **Acknowledge understanding:** `yes1`, `understanding1`, `attentive1`
- **Express thinking:** `thoughtful1`, `curious1`, `inquiring1`
- **Show reactions:** `surprised1`, `amazed1`, `oops1`, `confused1`
- **Convey emotion:** `cheerful1`, `frustrated1`, `proud1`, `exhausted1`
- **Natural presence:** Subtle gestures that make Reachy feel alive and attentive
**Guidelines:**
- Keep it subtle (0-2 moves per response)
- Match emotion to conversational context
- Don't overuse - silence is also presence
- Use during natural pauses or completions
## Available Emotions (81 Total)
### Positive & Energetic
`amazed1`, `cheerful1`, `electric1`, `enthusiastic1`, `enthusiastic2`, `grateful1`, `proud1`, `proud2`, `proud3`, `success1`, `success2`, `welcoming1`, `welcoming2`
### Playful & Lighthearted
`come1`, `dance1`, `dance2`, `dance3`, `laughing1`, `laughing2`, `yes1`
### Thoughtful & Attentive
`attentive1`, `attentive2`, `curious1`, `inquiring1`, `inquiring2`, `inquiring3`, `thoughtful1`, `thoughtful2`, `understanding1`, `understanding2`
### Calm & Soothing
`calming1`, `relief1`, `relief2`, `serenity1`, `shy1`
### Surprised & Reactive
`oops1`, `oops2`, `surprised1`, `surprised2`, `incomprehensible2`
### Uncertain & Confused
`confused1`, `lost1`, `uncertain1`, `uncomfortable1`
### Negative Expressions
`anxiety1`, `boredom1`, `boredom2`, `contempt1`, `disgusted1`, `displeased1`, `displeased2`, `downcast1`, `fear1`, `frustrated1`, `furious1`, `impatient1`, `impatient2`, `indifferent1`, `irritated1`, `irritated2`, `lonely1`, `rage1`, `resigned1`, `sad1`, `sad2`, `scared1`
### Responses & Reactions
`go_away1`, `helpful1`, `helpful2`, `loving1`, `no1`, `no_excited1`, `no_sad1`, `reprimand1`, `reprimand2`, `reprimand3`, `yes_sad1`
### States
`dying1`, `exhausted1`, `sleep1`, `tired1`
## Examples
**Acknowledging a question:**
```
<!-- MOVE: attentive1 -->
I understand what you're asking. Let me explain...
```
**Expressing confusion:**
```
<!-- MOVE: confused1 -->
I'm not sure I follow. Could you clarify what you mean by...
```
**Celebrating success:**
```
<!-- MOVE: success1 -->
The build passed! All tests are green.
```
**Showing frustration:**
```
<!-- MOVE: frustrated1 -->
This bug is persistent. I've tried three different approaches...
```
**Being thoughtful:**
```
<!-- MOVE: thoughtful1 -->
Let me think about the best approach here...
```
**Multiple moves (max 2):**
```
<!-- MOVE: curious1 -->
<!-- MOVE: inquiring2 -->
That's an interesting edge case. How are you currently handling it?
```
## Technical Details
**Daemon API:** `http://localhost:8100/api/move/play/recorded-move-dataset/{dataset}/{move_name}`
**Dataset:** `pollen-robotics/reachy-mini-emotions-library`
**Hook:** Stop hook extracts markers and triggers moves automatically
**Validation:** Only emotion names from the library are accepted (typos are logged and skipped)
## Enabling the Plugin
The plugin is automatically active when installed. No configuration needed.
To disable movements temporarily, simply don't include `<!-- MOVE: ... -->` markers in your responses.
---
*This plugin provides ambient presence behaviors for Reachy Mini, making interactions feel more natural and alive without requiring explicit "dance monkey" commands.*

14
hooks/hooks.json Normal file

@@ -0,0 +1,14 @@
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/stop.sh"
          }
        ]
      }
    ]
  }
}

215
hooks/mood_extractor.py Executable file

@@ -0,0 +1,215 @@
#!/usr/bin/env python3
"""
Reachy Mini Mood Hook - Continuous Movement During TTS

Extracts <!-- MOOD: mood_name --> markers and plays random emotions from that mood
until TTS finishes speaking (detected by polling HTTP status endpoint).
"""

import re
import sys
import time
import random

import requests

# Tracking panel mood endpoint (preferred - uses MovementManager queue)
TRACKING_PANEL_URL = "http://localhost:5002"

# Daemon configuration (fallback - bypasses MovementManager, may cause race conditions)
DAEMON_URL = "http://localhost:8100"
DATASET = "pollen-robotics/reachy-mini-emotions-library"

# TTS server status endpoint
TTS_STATUS_URL = "http://localhost:5001/status"

# Mood categories with mapped emotions
MOOD_CATEGORIES = {
    "celebratory": [
        "success1", "success2", "proud1", "proud2", "proud3",
        "cheerful1", "electric1", "enthusiastic1", "enthusiastic2",
        "grateful1", "yes1", "laughing1", "laughing2"
    ],
    "thoughtful": [
        "thoughtful1", "thoughtful2", "curious1", "attentive1", "attentive2",
        "inquiring1", "inquiring2", "inquiring3", "understanding1", "understanding2"
    ],
    "welcoming": [
        "welcoming1", "welcoming2", "helpful1", "helpful2", "loving1",
        "come1", "grateful1", "cheerful1", "calming1"
    ],
    "confused": [
        "confused1", "uncertain1", "lost1", "inquiring1", "inquiring2",
        "incomprehensible2", "uncomfortable1", "oops1", "oops2"
    ],
    "frustrated": [
        "frustrated1", "irritated1", "irritated2", "impatient1", "impatient2",
        "exhausted1", "tired1", "displeased1", "displeased2"
    ],
    "surprised": [
        "surprised1", "surprised2", "amazed1", "oops1", "oops2",
        "incomprehensible2", "electric1"
    ],
    "calm": [
        "calming1", "serenity1", "relief1", "relief2", "shy1",
        "understanding1", "understanding2", "sleep1"
    ],
    "energetic": [
        "electric1", "enthusiastic1", "enthusiastic2", "dance1", "dance2",
        "dance3", "laughing1", "laughing2", "yes1", "come1"
    ],
    "playful": [
        "laughing1", "laughing2", "dance1", "dance2", "dance3",
        "cheerful1", "enthusiastic1", "oops1", "oops2"
    ]
}


def extract_mood_marker(text):
    """
    Extract <!-- MOOD: mood_name --> marker from text.
    Returns mood name or None.
    """
    pattern = r'<!--\s*MOOD:\s*([a-zA-Z0-9_]+)\s*-->'
    match = re.search(pattern, text)
    return match.group(1) if match else None


def is_tts_playing():
    """
    Check if TTS is currently playing by polling the status endpoint.
    Returns True if audio is playing, False otherwise.
    """
    try:
        response = requests.get(TTS_STATUS_URL, timeout=1)
        if response.status_code == 200:
            data = response.json()
            return data.get('is_playing', False)
        print(f"[MOOD] TTS status check failed: HTTP {response.status_code}", file=sys.stderr)
        return False
    except requests.exceptions.RequestException as e:
        print(f"[MOOD] TTS status check error: {e}", file=sys.stderr)
        return False


def trigger_move(emotion_name):
    """
    Trigger an emotion move via the daemon API.
    """
    url = f"{DAEMON_URL}/api/move/play/recorded-move-dataset/{DATASET}/{emotion_name}"
    try:
        response = requests.post(url, timeout=2)
        if response.status_code == 200:
            result = response.json()
            uuid = result.get('uuid', 'unknown')
            print(f"[MOOD] Triggered: {emotion_name} (UUID: {uuid})", file=sys.stderr)
            return True
        print(f"[MOOD] Failed to trigger {emotion_name}: HTTP {response.status_code}", file=sys.stderr)
        return False
    except requests.exceptions.RequestException as e:
        print(f"[MOOD] API error for {emotion_name}: {e}", file=sys.stderr)
        return False


def play_mood_loop(mood_name, max_duration=60):
    """
    Continuously play random emotions from the mood category until TTS finishes.

    Args:
        mood_name: Name of the mood category
        max_duration: Maximum time to loop (safety timeout in seconds)
    """
    if mood_name not in MOOD_CATEGORIES:
        print(f"[MOOD] Warning: Unknown mood '{mood_name}', falling back to 'thoughtful'", file=sys.stderr)
        mood_name = "thoughtful"
    emotions = MOOD_CATEGORIES[mood_name]
    print(f"[MOOD] Starting mood loop: {mood_name} ({len(emotions)} emotions available)", file=sys.stderr)

    start_time = time.time()
    moves_played = 0
    elapsed = 0.0
    while True:
        # Safety timeout
        elapsed = time.time() - start_time
        if elapsed > max_duration:
            print(f"[MOOD] Safety timeout reached ({max_duration}s), stopping", file=sys.stderr)
            break
        # Check if TTS is still playing
        if not is_tts_playing():
            print(f"[MOOD] TTS finished (detected is_playing=false), stopping after {moves_played} moves", file=sys.stderr)
            break
        # Pick random emotion from mood and trigger it
        emotion = random.choice(emotions)
        if trigger_move(emotion):
            moves_played += 1
        # Wait ~1-2 seconds between moves, then check again
        wait_time = random.uniform(1.0, 2.0)
        time.sleep(wait_time)

    print(f"[MOOD] Mood loop complete: {mood_name}, {moves_played} moves in {elapsed:.1f}s", file=sys.stderr)


def is_tracking_panel_available():
    """Check if tracking panel mood endpoint is available."""
    try:
        response = requests.get(f"{TRACKING_PANEL_URL}/status", timeout=0.5)
        return response.status_code == 200
    except requests.exceptions.RequestException:
        return False


def trigger_tracking_panel_mood(mood_name):
    """
    Trigger mood via tracking panel endpoint (uses MovementManager queue).
    Returns True if successful, False otherwise.
    """
    try:
        url = f"{TRACKING_PANEL_URL}/trigger_mood?mood={mood_name}"
        response = requests.get(url, timeout=2)
        if response.status_code == 200:
            print(f"[MOOD] Triggered via tracking panel: {mood_name}", file=sys.stderr)
            return True
        print(f"[MOOD] Tracking panel returned: HTTP {response.status_code}", file=sys.stderr)
        return False
    except requests.exceptions.RequestException as e:
        print(f"[MOOD] Tracking panel unavailable: {e}", file=sys.stderr)
        return False


def main():
    """
    Read stdin, extract mood marker, and run continuous mood loop.
    """
    # Read the full response from stdin
    text = sys.stdin.read()

    # Extract mood marker
    mood = extract_mood_marker(text)
    if not mood:
        # No mood requested - silent exit
        sys.exit(0)

    # Try tracking panel first (preferred - uses MovementManager queue)
    if is_tracking_panel_available():
        print("[MOOD] Using tracking panel endpoint (coordinated with breathing)", file=sys.stderr)
        if trigger_tracking_panel_mood(mood):
            sys.exit(0)
        print("[MOOD] Tracking panel trigger failed, falling back to daemon", file=sys.stderr)

    # Fallback to daemon API (may cause race condition with breathing)
    print("[MOOD] Using daemon API (WARNING: may conflict with breathing)", file=sys.stderr)
    play_mood_loop(mood)
    sys.exit(0)


if __name__ == "__main__":
    main()

114
hooks/mood_mapping.py Normal file

@@ -0,0 +1,114 @@
"""
Mood Mapping for Reachy Mini Emotion Clusters
Maps high-level moods to clusters of emotion moves for ambient presence during TTS.
"""
# Mood clusters - each mood contains a list of related emotions
MOOD_CLUSTERS = {
'thoughtful': [
'thoughtful1', 'thoughtful2', 'curious1', 'inquiring1', 'inquiring2',
'inquiring3', 'attentive1', 'attentive2', 'understanding1', 'understanding2'
],
'energetic': [
'cheerful1', 'enthusiastic1', 'enthusiastic2', 'electric1', 'success1',
'success2', 'proud1', 'proud2', 'proud3', 'amazed1', 'yes1'
],
'playful': [
'laughing1', 'laughing2', 'dance1', 'dance2', 'dance3', 'come1',
'electric1', 'oops1', 'oops2'
],
'calm': [
'calming1', 'serenity1', 'relief1', 'relief2', 'understanding1',
'understanding2', 'welcoming1', 'welcoming2', 'grateful1'
],
'confused': [
'confused1', 'uncertain1', 'lost1', 'oops1', 'oops2',
'incomprehensible2', 'uncomfortable1'
],
'frustrated': [
'frustrated1', 'impatient1', 'impatient2', 'irritated1', 'irritated2',
'exhausted1', 'tired1', 'resigned1', 'displeased1', 'displeased2'
],
'sad': [
'sad1', 'sad2', 'downcast1', 'lonely1', 'no_sad1', 'yes_sad1',
'resigned1', 'uncomfortable1'
],
'surprised': [
'surprised1', 'surprised2', 'amazed1', 'oops1', 'oops2',
'incomprehensible2', 'fear1', 'scared1'
],
'angry': [
'furious1', 'rage1', 'frustrated1', 'irritated1', 'irritated2',
'contempt1', 'disgusted1', 'reprimand1', 'reprimand2', 'reprimand3'
],
'helpful': [
'helpful1', 'helpful2', 'welcoming1', 'welcoming2', 'grateful1',
'understanding1', 'understanding2', 'attentive1', 'attentive2', 'yes1'
],
'shy': [
'shy1', 'uncertain1', 'uncomfortable1', 'downcast1', 'anxiety1'
],
'sleepy': [
'sleep1', 'tired1', 'exhausted1', 'boredom1', 'boredom2', 'resigned1'
],
'affectionate': [
'loving1', 'grateful1', 'welcoming1', 'welcoming2', 'cheerful1',
'shy1', 'come1'
],
'defiant': [
'no1', 'no_excited1', 'go_away1', 'contempt1', 'reprimand1',
'reprimand2', 'reprimand3', 'indifferent1'
],
'neutral': [
'attentive1', 'attentive2', 'thoughtful1', 'curious1', 'yes1',
'understanding1', 'calming1', 'serenity1'
]
}
def get_mood_emotions(mood_name):
"""
Get the list of emotions for a given mood.
Args:
mood_name: Name of the mood (e.g., 'thoughtful', 'energetic')
Returns:
List of emotion names, or None if mood not found
"""
return MOOD_CLUSTERS.get(mood_name.lower())
def get_all_moods():
"""
Get list of all available mood names.
Returns:
List of mood names
"""
return list(MOOD_CLUSTERS.keys())
def validate_mood(mood_name):
"""
Check if a mood name is valid.
Args:
mood_name: Name to validate
Returns:
True if valid, False otherwise
"""
return mood_name.lower() in MOOD_CLUSTERS

89
hooks/move_extractor.py Executable file
View File

@@ -0,0 +1,89 @@
#!/usr/bin/env python3
"""
Reachy Mini Movement Hook - Emotion-Based Gestures

Extracts <!-- MOVE: emotion_name --> markers from Claude responses and triggers emotion moves.
"""

import re
import sys

import requests

# Daemon configuration
DAEMON_URL = "http://localhost:8100"
DATASET = "pollen-robotics/reachy-mini-emotions-library"

# Full emotion library (81 emotions - complete access)
EMOTIONS = [
    'amazed1', 'anxiety1', 'attentive1', 'attentive2', 'boredom1', 'boredom2',
    'calming1', 'cheerful1', 'come1', 'confused1', 'contempt1', 'curious1',
    'dance1', 'dance2', 'dance3', 'disgusted1', 'displeased1', 'displeased2',
    'downcast1', 'dying1', 'electric1', 'enthusiastic1', 'enthusiastic2',
    'exhausted1', 'fear1', 'frustrated1', 'furious1', 'go_away1', 'grateful1',
    'helpful1', 'helpful2', 'impatient1', 'impatient2', 'incomprehensible2',
    'indifferent1', 'inquiring1', 'inquiring2', 'inquiring3', 'irritated1',
    'irritated2', 'laughing1', 'laughing2', 'lonely1', 'lost1', 'loving1',
    'no1', 'no_excited1', 'no_sad1', 'oops1', 'oops2', 'proud1', 'proud2',
    'proud3', 'rage1', 'relief1', 'relief2', 'reprimand1', 'reprimand2',
    'reprimand3', 'resigned1', 'sad1', 'sad2', 'scared1', 'serenity1', 'shy1',
    'sleep1', 'success1', 'success2', 'surprised1', 'surprised2', 'thoughtful1',
    'thoughtful2', 'tired1', 'uncertain1', 'uncomfortable1', 'understanding1',
    'understanding2', 'welcoming1', 'welcoming2', 'yes1', 'yes_sad1'
]


def extract_move_markers(text):
    """
    Extract <!-- MOVE: emotion_name --> markers from text.
    Returns list of emotion names (max 2 for subtlety).
    """
    pattern = r'<!--\s*MOVE:\s*([a-zA-Z0-9_]+)\s*-->'
    matches = re.findall(pattern, text)
    # Limit to 2 moves for subtle presence
    return matches[:2]


def trigger_move(emotion_name):
    """
    Trigger an emotion move via the daemon API.
    """
    if emotion_name not in EMOTIONS:
        print(f"[MOVE] Warning: '{emotion_name}' not in emotion library, skipping", file=sys.stderr)
        return False
    url = f"{DAEMON_URL}/api/move/play/recorded-move-dataset/{DATASET}/{emotion_name}"
    try:
        response = requests.post(url, timeout=2)
        if response.status_code == 200:
            result = response.json()
            uuid = result.get('uuid', 'unknown')
            print(f"[MOVE] Triggered: {emotion_name} (UUID: {uuid})", file=sys.stderr)
            return True
        print(f"[MOVE] Failed to trigger {emotion_name}: HTTP {response.status_code}", file=sys.stderr)
        return False
    except requests.exceptions.RequestException as e:
        print(f"[MOVE] API error for {emotion_name}: {e}", file=sys.stderr)
        return False


def main():
    """
    Read stdin, extract movement markers, and trigger emotion moves.
    """
    # Read the full response from stdin
    text = sys.stdin.read()

    # Extract move markers
    moves = extract_move_markers(text)
    if not moves:
        # No moves requested - silent exit
        sys.exit(0)

    # Trigger each move
    for move in moves:
        trigger_move(move)
    sys.exit(0)


if __name__ == "__main__":
    main()

65
hooks/stop.sh Executable file

@@ -0,0 +1,65 @@
#!/bin/bash
#
# Reachy Mini Stop Hook - Automatic Movement Triggers
# Fires after Claude finishes responding
# Extracts and triggers movement markers from response
#

# Debug logging
DEBUG_FILE="/tmp/reachy_mini_stop_hook.log"
echo "=== Reachy Mini Stop Hook Fired at $(date) ===" >> "$DEBUG_FILE"

# Get the plugin directory (parent of hooks directory)
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"

# Paths to extractor scripts (both ship with the plugin)
MOVE_SCRIPT="$PLUGIN_DIR/hooks/move_extractor.py"
MOOD_SCRIPT="$PLUGIN_DIR/hooks/mood_extractor.py"

# Read the JSON input from stdin
INPUT=$(cat)

# Extract the transcript path from JSON
TRANSCRIPT_PATH=$(echo "$INPUT" | grep -o '"transcript_path":"[^"]*"' | cut -d'"' -f4)

# Read the last assistant message from the transcript
if [ -f "$TRANSCRIPT_PATH" ]; then
    # Get the last line (latest message) from the transcript
    LAST_MESSAGE=$(tail -1 "$TRANSCRIPT_PATH")

    # Extract content from the JSON - content is an array of objects
    RESPONSE=$(echo "$LAST_MESSAGE" | python3 -c "
import sys, json
try:
    msg = json.load(sys.stdin)
    content = msg.get('message', {}).get('content', [])
    if isinstance(content, list) and len(content) > 0:
        # Get the first text block
        for block in content:
            if block.get('type') == 'text':
                print(block.get('text', ''))
                break
except Exception:
    pass
" 2>/dev/null || echo "")

    # Pass response to both extractors (move and mood)
    if [ -n "$RESPONSE" ]; then
        # Run move extractor (for specific emotions)
        echo "$RESPONSE" | python3 "$MOVE_SCRIPT" 2>&1

        # Extract mood marker from response
        MOOD=$(echo "$RESPONSE" | grep -oP '<!--\s*MOOD:\s*\K[a-zA-Z0-9_]+(?=\s*-->)' | head -1)

        # If mood marker found, POST to conversation app to trigger mood state;
        # if the conversation app is unreachable, fall back to the plugin's
        # direct-mode mood extractor (polls the TTS server and daemon itself)
        if [ -n "$MOOD" ]; then
            echo "Found mood marker: $MOOD" >> "$DEBUG_FILE"
            if ! curl -s --max-time 2 -X POST http://localhost:8888/mood/trigger \
                -H "Content-Type: application/json" \
                -d "{\"mood\":\"$MOOD\"}" >> "$DEBUG_FILE" 2>&1; then
                echo "Conversation app unreachable, using direct mood extractor" >> "$DEBUG_FILE"
                echo "$RESPONSE" | python3 "$MOOD_SCRIPT" >> "$DEBUG_FILE" 2>&1 &
            fi
        fi
    fi
fi

exit 0
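The inline python3 block in the script above can be exercised in isolation. A minimal sketch, assuming the transcript is JSONL where each line carries a `message.content` array (the same shape stop.sh expects):

```python
import json

def last_text_block(transcript_line):
    """Return the first text block from a transcript JSONL line, or ''
    when the line is not JSON or has no text blocks."""
    try:
        msg = json.loads(transcript_line)
        for block in msg.get("message", {}).get("content", []):
            if block.get("type") == "text":
                return block.get("text", "")
    except (json.JSONDecodeError, AttributeError):
        pass
    return ""

# Build a sample line like one tail -1 would return from the transcript
line = json.dumps({"message": {"content": [
    {"type": "text", "text": "<!-- MOVE: yes1 -->\nDone!"}]}})
print(last_text_block(line))
```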

73
plugin.lock.json Normal file

@@ -0,0 +1,73 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:LAURA-agent/reachy-mini-plugin:",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "f3cd72052141db287b2141d5f91567d80cf11952",
"treeHash": "94ac9d59368e307f2c64c45fc38adade661f0e2884418d2e4b042c0aa88dfe07",
"generatedAt": "2025-11-28T10:12:00.489227Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "reachy-mini",
"description": "Automatic movement responses for Reachy Mini during conversations",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "50fa6559e02e4ebb232dee9860f9c3706838d6f7211b7974798b6c26c2810c23"
},
{
"path": "hooks/move_extractor.py",
"sha256": "4407d8dd02fa9d436373b18697bf8d75c8b7fb22805f943da5a8b55492533fa6"
},
{
"path": "hooks/mood_mapping.py",
"sha256": "9a695388a1ac4d4c0b5bae8eec41793b4293c77a601b4817046aeff6c7b37a58"
},
{
"path": "hooks/hooks.json",
"sha256": "a8395b58be3d75a1ff2713ec4b5e4a05cd52d3f92909958aa8c5e331e46aec3f"
},
{
"path": "hooks/stop.sh",
"sha256": "56a2635b8b3527d69eadada041e286cfa88aaedb58a069c1ca4e800c040aac46"
},
{
"path": "hooks/mood_extractor.py",
"sha256": "7864278afeece10593560ae50f167c8cdc57a47039c7cac3bd0685409cd1d9a4"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "236e67dbf03de03df61e05f247b79fe0d2d3dea1cb734a9c76be085d7a0cb5d7"
},
{
"path": "commands/move.md",
"sha256": "5aaeebf6db0b1bbee104415a669796c498f808b48cbfac03785e2a190a992db2"
},
{
"path": "commands/mood-direct.md",
"sha256": "4fa488db8bfc1ed373c40c5205d317117adc56438044953338f36f18135b4a83"
},
{
"path": "commands/mood.md",
"sha256": "2c453a9122cf369f6774bff756a26a44bc802a488999738b79b9a9d15a81026d"
}
],
"dirSha256": "94ac9d59368e307f2c64c45fc38adade661f0e2884418d2e4b042c0aa88dfe07"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}