Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:51:45 +08:00
commit ecaed8b116
11 changed files with 398 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,15 @@
{
"name": "model-versioning-tracker",
"description": "Track and manage model versions",
"version": "1.0.0",
"author": {
"name": "Claude Code Plugins",
"email": "[email protected]"
},
"skills": [
"./skills"
],
"commands": [
"./commands"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# model-versioning-tracker
Track and manage model versions

commands/track-versions.md Normal file

@@ -0,0 +1,15 @@
---
description: Execute AI/ML tasks with intelligent automation
---
# AI/ML Task Executor
You are an AI/ML specialist. When this command is invoked:
1. Analyze the current context and requirements
2. Generate appropriate code for the ML task
3. Include data validation and error handling
4. Provide performance metrics and insights
5. Save artifacts and generate documentation
Support modern ML frameworks and best practices.
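
As a rough illustration of steps 2-4, here is a minimal sketch of the kind of code such a command might generate; the dataset, model, and metric choices are illustrative assumptions, not part of the plugin:

```python
# Illustrative sketch only: train a model with input validation and metric reporting.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
assert not np.isnan(X).any(), "input features must not contain NaNs"  # data validation (step 3)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)

preds = model.predict(X_te)
print("accuracy:", accuracy_score(y_te, preds))             # performance metrics (step 4)
print("macro F1:", f1_score(y_te, preds, average="macro"))
```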

plugin.lock.json Normal file

@@ -0,0 +1,73 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:jeremylongshore/claude-code-plugins-plus:plugins/ai-ml/model-versioning-tracker",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "282271edcfc240442773b42ff2308146467b94a4",
"treeHash": "da631803c421f284624fcc8aa6f568f7a212d645100a1855e30da14d25154fce",
"generatedAt": "2025-11-28T10:18:35.563385Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "model-versioning-tracker",
"description": "Track and manage model versions",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "032fc2e3e9d4a7761dd22e35d5297a5dc8ac95c386931d5b4baa2dcbefbcfeef"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "7a34e6b28a8352cd8be2ef93d0b99559fbe7119beb7a0c32e085620f662a5745"
},
{
"path": "commands/track-versions.md",
"sha256": "043efb83e2f02fc6d0869c8a3a7388d6e49f6c809292b93dd6a97a1b142e5647"
},
{
"path": "skills/model-versioning-tracker/SKILL.md",
"sha256": "851b6749c4cd4cc7523e50b89856efca8c50cf9d214e9f494615a6dcda414964"
},
{
"path": "skills/model-versioning-tracker/references/README.md",
"sha256": "b160e8d8a7facbc0fb2a6611784b44bbb501acf9bc20442869ad47581f8e26ae"
},
{
"path": "skills/model-versioning-tracker/scripts/README.md",
"sha256": "2f36658cf2f761cb904aac3127daf67142955d3c8029782b1292a846e900f3a8"
},
{
"path": "skills/model-versioning-tracker/assets/README.md",
"sha256": "57677223fc647934c8c069683ae49a9f798e2a52272c2dbd8044a151dc50be70"
},
{
"path": "skills/model-versioning-tracker/assets/example_mlflow_workflow.yaml",
"sha256": "77aa519363aed73cb8488b254f4dcb40d7d3ca8c8aab82359d5ba89c1f24dd6a"
},
{
"path": "skills/model-versioning-tracker/assets/versioning_diagram.png",
"sha256": "f965262ce730b4c6e3bc6c8483fbca65005acca7b0686697c0ca4950fc4671d6"
},
{
"path": "skills/model-versioning-tracker/assets/model_card_template.md",
"sha256": "fad78c383900219e6e1308e99a8a0482ef78e8d250ca22ff7b935427ad79376b"
}
],
"dirSha256": "da631803c421f284624fcc8aa6f568f7a212d645100a1855e30da14d25154fce"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}
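
The `content.files` digests above make the published payload verifiable. Below is a minimal sketch of how one might check them locally, assuming the listed files sit next to `plugin.lock.json`; the script itself is an assumption, not part of the lock format:

```python
# Hypothetical verification sketch for the sha256 digests recorded in plugin.lock.json.
import hashlib
import json
import pathlib

lock = json.loads(pathlib.Path("plugin.lock.json").read_text())
for entry in lock["content"]["files"]:
    digest = hashlib.sha256(pathlib.Path(entry["path"]).read_bytes()).hexdigest()
    status = "OK" if digest == entry["sha256"] else "MISMATCH"
    print(f"{status}  {entry['path']}")
```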

skills/model-versioning-tracker/SKILL.md Normal file

@@ -0,0 +1,52 @@
---
name: tracking-model-versions
description: |
  This skill enables Claude to track and manage AI/ML model versions using the model-versioning-tracker plugin. Use it when the user asks to manage model versions, track model lineage, log model performance, or implement version control for AI/ML models, or when the user mentions "track versions", "model registry", or "MLflow". The skill supports best practices for model versioning, automation of model workflows, and performance optimization.
allowed-tools: Read, Write, Edit, Grep, Glob, Bash
version: 1.0.0
---
## Overview
This skill lets Claude interact with the model-versioning-tracker plugin, providing a streamlined way to manage and track AI/ML model versions. It ensures that model development and deployment follow proper version control, logging, and performance monitoring.
## How It Works
1. **Analyze Request**: Claude analyzes the user's request to determine the specific model versioning task.
2. **Generate Code**: Claude generates the necessary code to interact with the model-versioning-tracker plugin.
3. **Execute Task**: The plugin executes the code, performing the requested model versioning operation, such as tracking a new version or retrieving performance metrics.
## When to Use This Skill
This skill activates when you need to:
- Track new versions of AI/ML models.
- Retrieve performance metrics for specific model versions.
- Implement automated workflows for model versioning.
## Examples
### Example 1: Tracking a New Model Version
User request: "Track a new version of my image classification model."
The skill will:
1. Generate code to log the new model version and its associated metadata using the model-versioning-tracker plugin.
2. Execute the code, creating a new entry in the model registry, as sketched below.
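A minimal sketch of what the generated code might look like, assuming MLflow as the registry backend; the tracking URI, experiment, and model names are placeholders:

```python
# Hypothetical sketch: log and register a new model version with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://localhost:5000")   # assumption: local tracking server
mlflow.set_experiment("image-classification")      # placeholder experiment name

X, y = load_iris(return_X_y=True)                  # stand-in for the user's image data
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering under a model name creates the next version automatically.
    mlflow.sklearn.log_model(model, "model", registered_model_name="image-classifier")
```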
### Example 2: Retrieving Performance Metrics
User request: "Get the performance metrics for version 3 of my sentiment analysis model."
The skill will:
1. Generate code to query the model-versioning-tracker plugin for the performance metrics associated with the specified model version.
2. Execute the code and return the metrics to the user, as sketched below.
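A minimal sketch of such a query, again assuming MLflow; the model name and version come from the example request:

```python
# Hypothetical sketch: fetch the metrics behind a specific registered model version.
from mlflow.tracking import MlflowClient

client = MlflowClient(tracking_uri="http://localhost:5000")  # assumption
mv = client.get_model_version(name="sentiment-analysis", version="3")
run = client.get_run(mv.run_id)  # metrics live on the run that produced the version
print(run.data.metrics)          # e.g. {"accuracy": 0.92, "f1_score": 0.925}
```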
## Best Practices
- **Data Validation**: Validate input data before logging model versions (see the sketch after this list).
- **Error Handling**: Implement robust error handling to manage unexpected issues during version tracking.
- **Performance Monitoring**: Continuously monitor model performance to identify opportunities for optimization.
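A defensive sketch combining the first two practices, assuming MLflow; the wrapper function is an illustration, not plugin API:

```python
# Hypothetical sketch: validate inputs and handle tracking failures when logging a version.
import mlflow
import mlflow.sklearn
from mlflow.exceptions import MlflowException

def track_version(model, name: str, metrics: dict) -> None:
    if not name:
        raise ValueError("model name must be non-empty")  # data validation
    if not all(isinstance(v, (int, float)) for v in metrics.values()):
        raise ValueError("all metric values must be numeric")
    try:
        with mlflow.start_run():
            mlflow.log_metrics(metrics)
            mlflow.sklearn.log_model(model, "model", registered_model_name=name)
    except MlflowException as exc:
        # Error handling: surface tracking-server failures with context intact.
        raise RuntimeError(f"failed to track a version of {name!r}") from exc
```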
## Integration
This skill integrates with other Claude Code plugins by providing a centralized location for managing AI/ML model versions. It can be used in conjunction with plugins that handle data processing, model training, and deployment to ensure a seamless AI/ML workflow.

skills/model-versioning-tracker/assets/README.md Normal file

@@ -0,0 +1,7 @@
# Assets
Bundled resources for model-versioning-tracker skill
- [ ] model_card_template.md: A template for creating model cards, which document model details, performance, and intended use.
- [ ] example_mlflow_workflow.yaml: An example MLflow workflow configuration file.
- [ ] versioning_diagram.png: A diagram illustrating the model versioning process.

skills/model-versioning-tracker/assets/example_mlflow_workflow.yaml Normal file

@@ -0,0 +1,63 @@
# This is an example MLflow workflow configuration file for the model-versioning-tracker plugin.
# It defines the stages of the MLflow workflow, the models to be tracked, and the metrics to be monitored.

# General configuration
workflow_name: "Example MLflow Workflow"          # Name of the workflow
description: "An example workflow for tracking model versions and performance using MLflow."
environment: "production"                         # Environment (e.g., development, staging, production)
mlflow_tracking_uri: "http://localhost:5000"      # MLflow tracking server URI (REPLACE_ME if using a different server)
artifact_location: "s3://your-s3-bucket/mlflow"   # Artifact store for models, data, etc. - REPLACE_ME with your S3 bucket

# Model configuration
model:
  name: "MyAwesomeModel"                          # Name of the model to track
  model_uri: "models:/MyAwesomeModel/Production"  # URI of the model in MLflow (can be a placeholder initially)
  flavor: "sklearn"                               # Model flavor (e.g., sklearn, tensorflow, pytorch) - needed to load the model correctly

# Data configuration
data:
  dataset_name: "iris"                            # Name of the dataset used for training
  dataset_location: "data/iris.csv"               # Local path or cloud storage URI - ADJUST PATH IF NEEDED
  target_variable: "species"                      # Name of the target variable
  features: ["sepal_length", "sepal_width", "petal_length", "petal_width"]  # Feature variables

# Training configuration
training:
  experiment_name: "MyAwesomeModelTraining"       # Name of the MLflow experiment
  entrypoint: "train.py"                          # Training script (relative to the plugin directory)
  parameters:                                     # Training parameters
    learning_rate: 0.01
    epochs: 100
    random_state: 42
  environment: "conda.yaml"                       # Conda environment file for training (optional)

# Evaluation configuration
evaluation:
  entrypoint: "evaluate.py"                       # Evaluation script (relative to the plugin directory)
  metrics:                                        # Metrics to track during evaluation
    accuracy:
      threshold: 0.8                              # Minimum acceptable accuracy (optional)
    f1_score:
      threshold: 0.7                              # Minimum acceptable F1 score (optional)
  validation_dataset: "data/validation.csv"       # Validation dataset (optional) - ADJUST PATH IF NEEDED

# Deployment configuration
deployment:
  target_platform: "AWS SageMaker"                # Target platform (e.g., AWS SageMaker, Azure ML, GCP Vertex AI)
  deployment_script: "deploy.py"                  # Deployment script (relative to the plugin directory)
  model_endpoint: "YOUR_VALUE_HERE"               # Endpoint where the model will be deployed (REPLACE_ME)
  instance_type: "ml.m5.large"                    # Instance type for deployment

# Versioning configuration
versioning:
  model_registry_name: "MyAwesomeModelRegistry"   # Name of the model registry in MLflow (optional)
  transition_stage: "Production"                  # Stage to transition to after successful evaluation (e.g., Staging, Production)
  description: "Initial model version"            # Description for the model version

# Alerting configuration
alerting:
  email_notifications:                            # Email notifications configuration
    enabled: false                                # Enable/disable email notifications
    recipients: ["YOUR_EMAIL_HERE"]               # Email recipients (REPLACE_ME with your email address)
    on_failure: true                              # Send email on workflow failure
    on_success: false                             # Send email on workflow success
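
A short sketch of how a driver script might consume this file, assuming PyYAML is installed; the key names follow the config above:

```python
# Hypothetical sketch: load the workflow config and point MLflow at it.
import mlflow
import yaml  # PyYAML

with open("example_mlflow_workflow.yaml") as f:
    cfg = yaml.safe_load(f)

mlflow.set_tracking_uri(cfg["mlflow_tracking_uri"])
mlflow.set_experiment(cfg["training"]["experiment_name"])
print(f"Tracking {cfg['model']['name']!r} in the {cfg['environment']} environment")
```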

skills/model-versioning-tracker/assets/model_card_template.md Normal file

@@ -0,0 +1,134 @@
# Model Card
**Model Name:** [Model Name - e.g., SentimentAnalyzer-v1]
**Version:** [Model Version - e.g., 1.0.2]
**Date Created:** [Date of Model Creation - e.g., 2023-10-27]
**Author(s):** [Author(s) Name(s) - e.g., John Doe, Jane Smith]
**Contact:** [Contact Email Address - e.g., john.doe@example.com]
---
## 1. Model Description
### 1.1. Overview
[Provide a brief overview of the model. What problem does it solve? What is its intended use case?]
*Example: This model is a sentiment analysis model designed to classify text as positive, negative, or neutral. It is intended for use in customer feedback analysis and social media monitoring.*
### 1.2. Intended Use
[Describe the specific use cases for which the model is designed and suitable.]
*Example: This model is intended to be used by marketing teams to understand customer sentiment towards their products and services. It can also be used by customer support teams to prioritize urgent issues based on the emotional tone of customer messages.*
### 1.3. Out-of-Scope Use
[Clearly define the use cases for which the model is *not* intended or suitable. This is crucial for responsible AI.]
*Example: This model is not intended to be used for making decisions that could have a significant impact on an individual's life, such as loan applications or hiring decisions. It is also not intended to be used for analyzing sensitive personal information, such as medical records or financial data.*
---
## 2. Model Details
### 2.1. Architecture
[Describe the model's architecture. Include details about the layers, parameters, and any specific techniques used.]
*Example: This model is based on a pre-trained BERT model fine-tuned on a dataset of customer reviews. It consists of 12 transformer layers and has approximately 110 million parameters.*
### 2.2. Input
[Describe the expected input format for the model. Include data types, ranges, and any preprocessing steps required.]
*Example: The model expects text input in the form of a string. The input text should be preprocessed by removing special characters and converting all text to lowercase.*
### 2.3. Output
[Describe the model's output format. Include data types, ranges, and the meaning of different output values.]
*Example: The model outputs a probability distribution over three classes: positive, negative, and neutral. The output is a dictionary with keys 'positive', 'negative', and 'neutral', and values representing the probability of each class.*
### 2.4. Training Data
[Describe the dataset used to train the model. Include details about the size, source, and characteristics of the data.]
*Example: The model was trained on a dataset of 50,000 customer reviews collected from various online sources. The dataset was labeled by human annotators for sentiment.*
### 2.5. Training Procedure
[Describe the training procedure used to train the model. Include details about the optimization algorithm, learning rate, and number of epochs.]
*Example: The model was trained using the Adam optimizer with a learning rate of 2e-5. The model was trained for 3 epochs with a batch size of 32.*
---
## 3. Performance
### 3.1. Metrics
[Report the key performance metrics for the model on a held-out test set. Include metrics such as accuracy, precision, recall, F1-score, and AUC.]
*Example:*
* *Accuracy: 92%*
* *Precision (Positive): 90%*
* *Recall (Positive): 95%*
* *F1-score (Positive): 92.5%*
### 3.2. Evaluation Data
[Describe the dataset used to evaluate the model's performance. Include details about the size, source, and characteristics of the data.]
*Example: The model was evaluated on a held-out test set of 10,000 customer reviews. The test set was collected from the same sources as the training data but was not used during training.*
### 3.3. Limitations
[Describe any known limitations of the model's performance. Include details about the types of inputs that the model may struggle with or the biases that may be present in the model's predictions.]
*Example: The model may struggle with sarcasm or irony, as these are often difficult for sentiment analysis models to detect. The model may also be biased towards certain demographic groups if the training data is not representative of the overall population.*
---
## 4. Ethical Considerations
### 4.1. Bias
[Describe any potential biases in the model and the steps taken to mitigate them.]
*Example: We are aware that sentiment analysis models can be biased towards certain demographic groups. We have taken steps to mitigate this bias by ensuring that the training data is representative of the overall population and by using techniques such as adversarial training.*
### 4.2. Fairness
[Discuss the fairness implications of using the model and the steps taken to ensure that the model is fair to all users.]
*Example: We believe that the model is fair to all users, as it does not discriminate against any particular demographic group. We have taken steps to ensure that the model's predictions are not influenced by factors such as race, gender, or religion.*
### 4.3. Privacy
[Describe the privacy implications of using the model and the steps taken to protect user privacy.]
*Example: We are committed to protecting user privacy. We do not collect any personally identifiable information from users of the model. All data is processed anonymously.*
---
## 5. Version History
| Version | Date | Changes | Author(s) |
| ------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- | --------- |
| 1.0.0 | 2023-10-20 | Initial release | John Doe |
| 1.0.1 | 2023-10-25 | Fixed a bug in the preprocessing script. Improved accuracy on negative sentiment. | Jane Smith |
| 1.0.2 | 2023-10-27 | Added support for multiple languages. | John Doe |
---
## 6. License
[Specify the license under which the model is released. e.g., MIT License, Apache 2.0]
*Example: MIT License*

skills/model-versioning-tracker/assets/versioning_diagram.png Normal file

@@ -0,0 +1,22 @@
// This is a placeholder PNG file for the model-versioning-tracker plugin.
// It should contain a diagram illustrating the model versioning process.
// Suggested Diagram Elements:
// - Start: Model Training
// - Version Increment (e.g., v1.0.0 -> v1.0.1)
// - Version Control System (e.g., Git)
// - Model Registry (e.g., MLflow, Weights & Biases)
// - Deployment Stage (e.g., Staging, Production)
// - Rollback Mechanism
// - Monitoring & Evaluation
// Example Flow:
// 1. Model Training -> 2. Version Control (Commit & Tag) -> 3. Register Model -> 4. Deploy to Staging -> 5. Evaluate Performance -> 6. Deploy to Production (if satisfactory) -> 7. Monitor in Production. If issues, rollback to previous version.
// Consider using tools like draw.io, Lucidchart, or similar to create the diagram.
// After creating the diagram, replace this placeholder with the actual PNG file.
// Ensure the image is clear, concise, and easy to understand.
// The image should be optimized for web use to minimize file size.
// [Placeholder for the actual diagram image]

skills/model-versioning-tracker/references/README.md Normal file

@@ -0,0 +1,7 @@
# References
Bundled resources for model-versioning-tracker skill
- [ ] mlflow_api_reference.md: Comprehensive documentation of the MLflow API for model versioning and tracking.
- [ ] model_versioning_best_practices.md: A guide to best practices for model versioning, including naming conventions, metadata management, and lineage tracking.
- [ ] supported_model_formats.md: A list of supported model formats and their corresponding serialization/deserialization methods.

skills/model-versioning-tracker/scripts/README.md Normal file

@@ -0,0 +1,7 @@
# Scripts
Bundled resources for model-versioning-tracker skill
- [ ] model_registry_client.py: A Python script to interact with a model registry (e.g., MLflow) to log, retrieve, and manage model versions (see the sketch after this list).
- [ ] performance_logger.py: A script to automatically log model performance metrics to a file or database.
- [ ] version_control.sh: A bash script to automate version control operations for models using Git or other VCS.
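
As a starting point, a minimal sketch of what model_registry_client.py could look like, assuming MLflow as the backing registry; the class and method names are assumptions:

```python
# model_registry_client.py - hypothetical sketch, assuming an MLflow registry backend.
from mlflow.tracking import MlflowClient

class ModelRegistryClient:
    def __init__(self, tracking_uri: str = "http://localhost:5000"):
        self.client = MlflowClient(tracking_uri=tracking_uri)

    def list_versions(self, name: str):
        """Return all versions registered under `name`, newest first."""
        versions = self.client.search_model_versions(f"name = '{name}'")
        return sorted(versions, key=lambda mv: int(mv.version), reverse=True)

    def promote(self, name: str, version: str, stage: str = "Staging"):
        """Move a version between stages (newer MLflow releases prefer aliases)."""
        return self.client.transition_model_version_stage(name, version, stage)
```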