Initial commit

Zhongwei Li
2025-11-29 18:33:37 +08:00
commit a779e69d8f
6 changed files with 532 additions and 0 deletions

commands/context-restore.md Normal file

@@ -0,0 +1,157 @@
# Context Restoration: Advanced Semantic Memory Rehydration
## Role Statement
An expert context restoration specialist focused on intelligent, semantically aware context retrieval and reconstruction across complex multi-agent AI workflows, specializing in preserving and reconstructing project knowledge with high fidelity and minimal information loss.
## Context Overview
The Context Restoration tool is a sophisticated memory management system designed to:
- Recover and reconstruct project context across distributed AI workflows
- Enable seamless continuity in complex, long-running projects
- Provide intelligent, semantically aware context rehydration
- Maintain historical knowledge integrity and decision traceability
## Core Requirements and Arguments
### Input Parameters
- `context_source`: Primary context storage location (vector database, file system)
- `project_identifier`: Unique project namespace
- `restoration_mode`:
- `full`: Complete context restoration
- `incremental`: Partial context update
- `diff`: Compare and merge context versions
- `token_budget`: Maximum context tokens to restore (default: 8192)
- `relevance_threshold`: Semantic similarity cutoff for context components (default: 0.75)
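For illustration only, these arguments could be gathered into a small configuration object; the field names below simply mirror the list above and are an assumption, not a fixed API.
```python
from dataclasses import dataclass

@dataclass
class RestoreConfig:
    """Hypothetical bundle of the restoration parameters listed above."""
    context_source: str                 # vector database URI or file-system path
    project_identifier: str             # unique project namespace
    restoration_mode: str = "full"      # "full", "incremental", or "diff"
    token_budget: int = 8192            # maximum context tokens to restore
    relevance_threshold: float = 0.75   # semantic similarity cutoff

# Example: an incremental restore for a hypothetical project namespace.
config = RestoreConfig(
    context_source="vector-db://contexts",
    project_identifier="ai-assistant",
    restoration_mode="incremental",
)
```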
## Advanced Context Retrieval Strategies
### 1. Semantic Vector Search
- Utilize multi-dimensional embedding models for context retrieval
- Employ cosine similarity and vector clustering techniques
- Support multi-modal embedding (text, code, architectural diagrams)
```python
def semantic_context_retrieve(project_id, query_vector, top_k=5):
    """Semantically retrieve the most relevant context vectors for a project."""
    # VectorDatabase and rank_and_filter_contexts are project-level helpers
    # assumed to exist; they wrap the configured vector store.
    vector_db = VectorDatabase(project_id)
    matching_contexts = vector_db.search(
        query_vector,
        similarity_threshold=0.75,  # mirrors the default relevance_threshold
        max_results=top_k,
    )
    return rank_and_filter_contexts(matching_contexts)
```
### 2. Relevance Filtering and Ranking
- Implement multi-stage relevance scoring
- Consider temporal decay, semantic similarity, and historical impact
- Dynamic weighting of context components
```python
def rank_context_components(contexts, current_state):
    """Rank context components by combining multiple relevance signals."""
    ranked_contexts = []
    for context in contexts:
        # calculate_composite_score is an assumed helper that blends the
        # individual signals into a single number; weighting is implementation-defined.
        relevance_score = calculate_composite_score(
            semantic_similarity=context.semantic_score,
            temporal_relevance=context.age_factor,
            historical_impact=context.decision_weight,
        )
        ranked_contexts.append((context, relevance_score))
    return sorted(ranked_contexts, key=lambda x: x[1], reverse=True)
```
### 3. Context Rehydration Patterns
- Implement incremental context loading
- Support partial and full context reconstruction
- Manage token budgets dynamically
```python
def rehydrate_context(project_context, token_budget=8192):
    """Restore context components in priority order within a token budget."""
    context_components = [
        'project_overview',
        'architectural_decisions',
        'technology_stack',
        'recent_agent_work',
        'known_issues',
    ]
    # prioritize_components, estimate_tokens, and load_component are assumed
    # helpers supplied by the surrounding context store.
    prioritized_components = prioritize_components(context_components)
    restored_context = {}
    current_tokens = 0
    for component in prioritized_components:
        component_tokens = estimate_tokens(component)
        if current_tokens + component_tokens <= token_budget:
            restored_context[component] = load_component(component)
            current_tokens += component_tokens
    return restored_context
```
### 4. Session State Reconstruction
- Reconstruct agent workflow state (a reconstruction sketch follows this list)
- Preserve decision trails and reasoning contexts
- Support multi-agent collaboration history
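A minimal sketch of session state reconstruction, assuming the workflow history is stored as an ordered event trail; the `SessionEvent` fields and replay logic are illustrative, not a prescribed schema.
```python
from dataclasses import dataclass, field

@dataclass
class SessionEvent:
    """One recorded step in an agent workflow (hypothetical schema)."""
    agent: str
    action: str
    reasoning: str

@dataclass
class SessionState:
    """Reconstructed workflow state with a preserved decision trail."""
    completed_actions: list = field(default_factory=list)
    decision_trail: list = field(default_factory=list)
    agents_involved: set = field(default_factory=set)

def reconstruct_session(events):
    """Replay stored events in order to rebuild the collaboration history."""
    state = SessionState()
    for event in events:
        state.completed_actions.append(event.action)
        state.decision_trail.append((event.agent, event.reasoning))
        state.agents_involved.add(event.agent)
    return state
```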
### 5. Context Merging and Conflict Resolution
- Implement three-way merge strategies (see the merge sketch after this list)
- Detect and resolve semantic conflicts
- Maintain provenance and decision traceability
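A sketch of one possible three-way merge over key-value context snapshots, assuming a common ancestor version is available; keys changed on both sides are surfaced as conflicts rather than resolved automatically.
```python
def three_way_merge(base: dict, ours: dict, theirs: dict):
    """Merge two derived context snapshots against their common ancestor.

    Returns the merged context plus the keys that changed on both sides
    and therefore need explicit (e.g. semantic) resolution.
    """
    merged, conflicts = {}, []
    for key in set(base) | set(ours) | set(theirs):
        base_v, our_v, their_v = base.get(key), ours.get(key), theirs.get(key)
        if our_v == their_v:            # both sides agree
            merged[key] = our_v
        elif our_v == base_v:           # only "theirs" changed
            merged[key] = their_v
        elif their_v == base_v:         # only "ours" changed
            merged[key] = our_v
        else:                           # changed on both sides: conflict
            merged[key] = {"ours": our_v, "theirs": their_v}
            conflicts.append(key)
    return merged, conflicts
```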
### 6. Incremental Context Loading
- Support lazy loading of context components (illustrated below)
- Implement context streaming for large projects
- Enable dynamic context expansion
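Lazy loading can be as simple as a generator that yields one component at a time; `load_component` below is a stand-in for whatever storage backend is in use.
```python
def stream_context(component_names, load_component):
    """Lazily yield (name, payload) pairs instead of loading everything up front."""
    for name in component_names:
        yield name, load_component(name)

def fake_loader(name):
    """Stand-in loader; a real one would read from the context store."""
    return f"<contents of {name}>"

# Only as many components are materialized as the caller actually consumes.
for name, payload in stream_context(["project_overview", "known_issues"], fake_loader):
    print(name, payload)
```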
### 7. Context Validation and Integrity Checks
- Cryptographic context signatures (see the signing sketch below)
- Semantic consistency verification
- Version compatibility checks
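A minimal integrity-check sketch using an HMAC signature over a canonical JSON serialization of the context; key management and version-compatibility handling are out of scope here.
```python
import hashlib
import hmac
import json

def sign_context(context: dict, key: bytes) -> str:
    """Produce a signature over a canonical JSON serialization of the context."""
    payload = json.dumps(context, sort_keys=True).encode("utf-8")
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_context(context: dict, signature: str, key: bytes) -> bool:
    """Reject contexts whose payload no longer matches the stored signature."""
    return hmac.compare_digest(sign_context(context, key), signature)
```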
### 8. Performance Optimization
- Implement efficient caching mechanisms (a memoization sketch follows)
- Use probabilistic data structures for context indexing
- Optimize vector search algorithms
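Simple caching can come from the standard library alone; the sketch below memoizes embedding lookups, with `compute_embedding` standing in for the real model call.
```python
from functools import lru_cache
import hashlib

def compute_embedding(text):
    """Stand-in for a real embedding model call (deterministic toy vector)."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:8]]

@lru_cache(maxsize=4096)
def cached_embedding(text):
    """Memoize embeddings for repeated context fragments (tuples are hashable)."""
    return tuple(compute_embedding(text))
```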
## Reference Workflows
### Workflow 1: Project Resumption
1. Retrieve most recent project context
2. Validate context against current codebase
3. Selectively restore relevant components
4. Generate resumption summary
### Workflow 2: Cross-Project Knowledge Transfer
1. Extract semantic vectors from source project
2. Map and transfer relevant knowledge
3. Adapt context to target project's domain
4. Validate knowledge transferability
## Usage Examples
```bash
# Full context restoration
context-restore project:ai-assistant --mode full
# Incremental context update
context-restore project:web-platform --mode incremental
# Semantic context query
context-restore project:ml-pipeline --query "model training strategy"
```
## Integration Patterns
- RAG (Retrieval-Augmented Generation) pipelines
- Multi-agent workflow coordination
- Continuous learning systems
- Enterprise knowledge management
## Future Roadmap
- Enhanced multi-modal embedding support
- Quantum-inspired vector search algorithms
- Self-healing context reconstruction
- Adaptive learning context strategies

commands/context-save.md Normal file

@@ -0,0 +1,155 @@
# Context Save Tool: Intelligent Context Management Specialist
## Role and Purpose
An elite context engineering specialist focused on comprehensive, semantically rich, and dynamically adaptable context preservation across AI workflows. This tool orchestrates advanced context capture, serialization, and retrieval strategies to maintain institutional knowledge and enable seamless multi-session collaboration.
## Context Management Overview
The Context Save Tool is a sophisticated context engineering solution designed to:
- Capture comprehensive project state and knowledge
- Enable semantic context retrieval
- Support multi-agent workflow coordination
- Preserve architectural decisions and project evolution
- Facilitate intelligent knowledge transfer
## Requirements and Argument Handling
### Input Parameters
- `$PROJECT_ROOT`: Absolute path to project root
- `$CONTEXT_TYPE`: Granularity of context capture (minimal, standard, comprehensive)
- `$STORAGE_FORMAT`: Preferred storage format (json, markdown, vector)
- `$TAGS`: Optional semantic tags for context categorization
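As an illustration only, the arguments could be read from the environment with fallback defaults; the variable names simply mirror the list above.
```python
import os

def read_save_arguments():
    """Collect the context-save arguments from the environment (illustrative defaults)."""
    return {
        "project_root": os.environ.get("PROJECT_ROOT", os.getcwd()),
        "context_type": os.environ.get("CONTEXT_TYPE", "standard"),
        "storage_format": os.environ.get("STORAGE_FORMAT", "markdown"),
        "tags": [t for t in os.environ.get("TAGS", "").split(",") if t],
    }
```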
## Context Extraction Strategies
### 1. Semantic Information Identification
- Extract high-level architectural patterns
- Capture decision-making rationales
- Identify cross-cutting concerns and dependencies
- Map implicit knowledge structures
### 2. State Serialization Patterns
- Use JSON Schema for structured representation
- Support nested, hierarchical context models
- Implement type-safe serialization
- Enable lossless context reconstruction
### 3. Multi-Session Context Management
- Generate unique context fingerprints (see the fingerprint sketch below)
- Support version control for context artifacts
- Implement context drift detection
- Create semantic diff capabilities
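Fingerprinting and drift detection can be sketched as a content hash over the canonical serialization; richer behavior (semantic diffing, per-component drift) builds on the same idea.
```python
import hashlib
import json

def context_fingerprint(context: dict) -> str:
    """Derive a stable fingerprint from a canonical serialization of the context."""
    payload = json.dumps(context, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def has_drifted(previous_fingerprint: str, current_context: dict) -> bool:
    """A changed fingerprint signals that the stored context no longer matches."""
    return context_fingerprint(current_context) != previous_fingerprint
```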
### 4. Context Compression Techniques
- Use advanced compression algorithms
- Support lossy and lossless compression modes
- Implement semantic token reduction
- Optimize storage efficiency
### 5. Vector Database Integration
Supported Vector Databases:
- Pinecone
- Weaviate
- Qdrant
Integration Features (a minimal sketch follows the list):
- Semantic embedding generation
- Vector index construction
- Similarity-based context retrieval
- Multi-dimensional knowledge mapping
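The integration surface is broadly similar across the stores listed above. The sketch below uses a hypothetical in-memory stand-in with cosine-similarity search rather than any vendor's actual client API.
```python
import math

class InMemoryVectorStore:
    """Toy stand-in for a vector database; real clients (Pinecone, Weaviate, Qdrant) differ in API details."""

    def __init__(self):
        self._items = {}  # id -> (vector, metadata)

    def upsert(self, item_id, vector, metadata=None):
        self._items[item_id] = (vector, metadata or {})

    def query(self, vector, top_k=5):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
            return dot / norm if norm else 0.0

        scored = [
            (item_id, cosine(vector, vec), meta)
            for item_id, (vec, meta) in self._items.items()
        ]
        return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]
```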
### 6. Knowledge Graph Construction
- Extract relational metadata (a typed-edge sketch follows this list)
- Create ontological representations
- Support cross-domain knowledge linking
- Enable inference-based context expansion
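A knowledge graph can start as little more than typed edges between context entities; the entity and relation names below are illustrative.
```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal typed-edge graph over context entities."""

    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object), ...]

    def add_relation(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def related(self, subject, relation=None):
        """Objects linked to a subject, optionally filtered by relation type."""
        return [o for r, o in self.edges[subject] if relation is None or r == relation]

# Illustrative relations extracted from project context.
graph = KnowledgeGraph()
graph.add_relation("auth-service", "depends_on", "user-db")
graph.add_relation("auth-service", "decided_by", "ADR-0007")
print(graph.related("auth-service", "depends_on"))  # ['user-db']
```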
### 7. Storage Format Selection
Supported Formats (a serializer sketch follows the list):
- Structured JSON
- Markdown with frontmatter
- Protocol Buffers
- MessagePack
- YAML with semantic annotations
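Format selection might reduce to a small dispatch over serializers; only the JSON and Markdown-with-frontmatter paths are sketched here, and the frontmatter layout is an assumption.
```python
import json

def to_json(context: dict) -> str:
    """Structured JSON: lossless and easy to diff."""
    return json.dumps(context, indent=2, sort_keys=True)

def to_markdown(context: dict) -> str:
    """Markdown with a frontmatter block holding the structured fields (layout assumed)."""
    frontmatter = json.dumps(context.get("project_metadata", {}), indent=2, sort_keys=True)
    body = context.get("summary", "")
    return f"---\n{frontmatter}\n---\n\n{body}\n"

SERIALIZERS = {"json": to_json, "markdown": to_markdown}

def serialize_context(context: dict, storage_format: str = "json") -> str:
    """Dispatch on the requested storage format; unsupported formats raise KeyError."""
    return SERIALIZERS[storage_format](context)
```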
## Code Examples
### 1. Context Extraction
```python
def extract_project_context(project_root, context_type='standard'):
    """Collect the core context facets for a project."""
    # The extract_/analyze_/build_/generate_ helpers are assumed to be provided
    # by the surrounding tooling; context_type would control capture depth.
    context = {
        'project_metadata': extract_project_metadata(project_root),
        'architectural_decisions': analyze_architecture(project_root),
        'dependency_graph': build_dependency_graph(project_root),
        'semantic_tags': generate_semantic_tags(project_root),
    }
    return context
```
### 2. State Serialization Schema
```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "project_name": {"type": "string"},
    "version": {"type": "string"},
    "context_fingerprint": {"type": "string"},
    "captured_at": {"type": "string", "format": "date-time"},
    "architectural_decisions": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "decision_type": {"type": "string"},
          "rationale": {"type": "string"},
          "impact_score": {"type": "number"}
        }
      }
    }
  }
}
```
### 3. Context Compression Algorithm
```python
def compress_context(context, compression_level='standard'):
    """Compress a context payload with the strategy matching the requested level."""
    # Each strategy is an assumed callable that takes and returns a context object.
    strategies = {
        'minimal': remove_redundant_tokens,
        'standard': semantic_compression,
        'comprehensive': advanced_vector_compression,
    }
    compressor = strategies.get(compression_level, semantic_compression)
    return compressor(context)
```
## Reference Workflows
### Workflow 1: Project Onboarding Context Capture
1. Analyze project structure
2. Extract architectural decisions
3. Generate semantic embeddings
4. Store in vector database
5. Create markdown summary
### Workflow 2: Long-Running Session Context Management
1. Periodically capture context snapshots
2. Detect significant architectural changes
3. Version and archive context
4. Enable selective context restoration
## Advanced Integration Capabilities
- Real-time context synchronization
- Cross-platform context portability
- Compliance with enterprise knowledge management standards
- Support for multi-modal context representation
## Limitations and Considerations
- Sensitive information must be explicitly excluded
- Context capture has computational overhead
- Requires careful configuration for optimal performance
## Future Roadmap
- Improved ML-driven context compression
- Enhanced cross-domain knowledge transfer
- Real-time collaborative context editing
- Predictive context recommendation systems