Initial commit
15
.claude-plugin/plugin.json
Normal file
@@ -0,0 +1,15 @@
{
  "name": "transfer-learning-adapter",
  "description": "Transfer learning adaptation",
  "version": "1.0.0",
  "author": {
    "name": "Claude Code Plugins",
    "email": "[email protected]"
  },
  "skills": [
    "./skills"
  ],
  "commands": [
    "./commands"
  ]
}
3
README.md
Normal file
@@ -0,0 +1,3 @@
# transfer-learning-adapter

Transfer learning adaptation
15
commands/adapt-transfer.md
Normal file
@@ -0,0 +1,15 @@
---
description: Execute AI/ML task with intelligent automation
---

# AI/ML Task Executor

You are an AI/ML specialist. When this command is invoked:

1. Analyze the current context and requirements
2. Generate appropriate code for the ML task
3. Include data validation and error handling
4. Provide performance metrics and insights
5. Save artifacts and generate documentation

Support modern ML frameworks and best practices.
73
plugin.lock.json
Normal file
@@ -0,0 +1,73 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:jeremylongshore/claude-code-plugins-plus:plugins/ai-ml/transfer-learning-adapter",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "37242547ce1b65f27bb9c7e5303956f990cd4221",
    "treeHash": "af5987b597e8df0f49f40b2c1fa74e1ef14ff9cd2b65f76022ac76ba8dd24f67",
    "generatedAt": "2025-11-28T10:18:50.854981Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "transfer-learning-adapter",
    "description": "Transfer learning adaptation",
    "version": "1.0.0"
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "06ba2853c52883fe75b5e4357d41a79f97a0811d167a8118236955b1afe54b1e"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "33241deb85b169b38f6478223a8d2e5367143737930d3df5243cf608235f2ce9"
      },
      {
        "path": "commands/adapt-transfer.md",
        "sha256": "043efb83e2f02fc6d0869c8a3a7388d6e49f6c809292b93dd6a97a1b142e5647"
      },
      {
        "path": "skills/transfer-learning-adapter/SKILL.md",
        "sha256": "cbb165aab2f671f85e365b81d14ed69b16234e8f652f45a890556622f2c0ca01"
      },
      {
        "path": "skills/transfer-learning-adapter/references/README.md",
        "sha256": "51bef2fa291e395ae4148b59584f0ffb730c2622a026c12f041eb1de48031c78"
      },
      {
        "path": "skills/transfer-learning-adapter/scripts/README.md",
        "sha256": "a343c82be1e73ea17f34df61304b7556ae2b98ebe05c44f0cf8e9a2ecce907ef"
      },
      {
        "path": "skills/transfer-learning-adapter/assets/data_preprocessing_example.py",
        "sha256": "fba0b32f7806ea4a7fc39505b84b3f00642079b9d7bfdb7d101771a9f3f8cc3d"
      },
      {
        "path": "skills/transfer-learning-adapter/assets/README.md",
        "sha256": "f431c7d77f1e86135c9e0b65e3d68b6a9059740e766ca53ede5f49c78a0ce39c"
      },
      {
        "path": "skills/transfer-learning-adapter/assets/model_architecture.png",
        "sha256": "910ded018acc3a4ecf8f2605f2c4c7b7ae87226c93e5240bff5cc1d534780efd"
      },
      {
        "path": "skills/transfer-learning-adapter/assets/example_config.json",
        "sha256": "ccb7682e28083c247b238124f7a298268cfe102b496870067d293fae89d2f0f0"
      }
    ],
    "dirSha256": "af5987b597e8df0f49f40b2c1fa74e1ef14ff9cd2b65f76022ac76ba8dd24f67"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}
55
skills/transfer-learning-adapter/SKILL.md
Normal file
@@ -0,0 +1,55 @@
---
name: adapting-transfer-learning-models
description: |
  This skill automates the adaptation of pre-trained machine learning models using transfer learning techniques. It is triggered when the user requests assistance with fine-tuning a model, adapting a pre-trained model to a new dataset, or performing transfer learning. It analyzes the user's requirements, generates code for adapting the model, includes data validation and error handling, provides performance metrics, and saves artifacts with documentation. Use this skill when you need to leverage existing models for new tasks or datasets, optimizing for performance and efficiency.
allowed-tools: Read, Write, Edit, Grep, Glob, Bash
version: 1.0.0
---

## Overview

This skill streamlines the process of adapting pre-trained machine learning models via transfer learning. It enables you to quickly fine-tune models for specific tasks, saving time and resources compared to training from scratch. It handles the complexities of model adaptation, data validation, and performance optimization.

## How It Works

1. **Analyze Requirements**: Examines the user's request to understand the target task, dataset characteristics, and desired performance metrics.
2. **Generate Adaptation Code**: Creates Python code using appropriate ML frameworks (e.g., TensorFlow, PyTorch) to fine-tune the pre-trained model on the new dataset. This includes data preprocessing steps and model architecture modifications if needed.
3. **Implement Validation and Error Handling**: Adds code to validate the data, monitor the training process, and handle potential errors gracefully.
4. **Provide Performance Metrics**: Calculates and reports key performance indicators (KPIs) such as accuracy, precision, recall, and F1-score to assess the model's effectiveness.
5. **Save Artifacts and Documentation**: Saves the adapted model, training logs, performance metrics, and automatically generates documentation outlining the adaptation process and results.
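The freeze-the-backbone, train-a-small-head pattern behind these steps can be sketched in plain NumPy. This is a toy stand-in for illustration only, not the code the skill generates: the "backbone" is a fixed random projection standing in for frozen pre-trained layers, and only a logistic-regression head is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained backbone": a frozen feature extractor. Here it is just a
# fixed random projection standing in for frozen convolutional layers.
W_backbone = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen: W_backbone is never updated during adaptation.
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

# Toy binary task: the label is 1 when the first input feature is positive.
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(float)

# Trainable head: a single logistic-regression layer on frozen features.
w = np.zeros(8)
b = 0.0
lr = 0.1

def loss_fn(w, b):
    z = extract_features(X) @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

initial_loss = loss_fn(w, b)
F = extract_features(X)  # features computed once; the backbone is frozen
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    grad = p - y
    w -= lr * F.T @ grad / len(y)
    b -= lr * grad.mean()
final_loss = loss_fn(w, b)
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Because the backbone is fixed, features can be computed once and only the small head is optimized, which is what makes transfer learning cheap relative to training from scratch.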

## When to Use This Skill

This skill activates when you need to:
- Fine-tune a pre-trained model for a specific task.
- Adapt a pre-trained model to a new dataset.
- Perform transfer learning to improve model performance.
- Optimize an existing model for a particular application.

## Examples

### Example 1: Adapting a Vision Model for Image Classification

User request: "Fine-tune a ResNet50 model to classify images of different types of flowers."

The skill will:
1. Download the ResNet50 model and load a flower image dataset.
2. Generate code to fine-tune the model on the flower dataset, including data augmentation and optimization techniques.

### Example 2: Adapting a Language Model for Sentiment Analysis

User request: "Adapt a BERT model to perform sentiment analysis on customer reviews."

The skill will:
1. Download the BERT model and load a dataset of customer reviews with sentiment labels.
2. Generate code to fine-tune the model on the review dataset, including tokenization, padding, and attention masks.

## Best Practices

- **Data Preprocessing**: Ensure data is properly preprocessed and formatted to match the input requirements of the pre-trained model.
- **Hyperparameter Tuning**: Experiment with different hyperparameters (e.g., learning rate, batch size) to optimize model performance.
- **Regularization**: Apply regularization techniques (e.g., dropout, weight decay) to prevent overfitting.
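The weight-decay bullet can be made concrete with ridge regression, where the L2 penalty appears in closed form. A toy sketch, not part of the skill's bundled scripts: the same least-squares fit with and without decay, showing that decay shrinks the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy linear data with few samples relative to features, so an
# unregularized fit is prone to fitting the noise.
n, d = 30, 20
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = 1.0
y = X @ true_w + rng.normal(scale=0.5, size=n)

def fit(X, y, weight_decay):
    # Ridge regression: weight decay enters as an L2 penalty,
    # w = (X^T X + lambda I)^{-1} X^T y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + weight_decay * np.eye(d), X.T @ y)

w_plain = fit(X, y, weight_decay=0.0)
w_decay = fit(X, y, weight_decay=5.0)

print(f"||w|| without decay: {np.linalg.norm(w_plain):.2f}")
print(f"||w|| with decay:    {np.linalg.norm(w_decay):.2f}")
```

The penalized solution always has a smaller norm; in fine-tuning, the same mechanism keeps the adapted weights from drifting far on a small dataset.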

## Integration

This skill can be integrated with other plugins for data loading, model evaluation, and deployment. For example, it can work with a data loading plugin to fetch datasets and a model deployment plugin to deploy the adapted model to a serving infrastructure.
7
skills/transfer-learning-adapter/assets/README.md
Normal file
@@ -0,0 +1,7 @@
# Assets

Bundled resources for transfer-learning-adapter skill

- [ ] example_config.json: Provides an example configuration file for adapting a pre-trained model to a new dataset.
- [ ] model_architecture.png: A diagram illustrating the architecture of a commonly used pre-trained model.
- [ ] data_preprocessing_example.py: A code snippet demonstrating how to preprocess the new dataset to be compatible with the pre-trained model.
158
skills/transfer-learning-adapter/assets/data_preprocessing_example.py
Normal file
@@ -0,0 +1,158 @@
#!/usr/bin/env python3

"""
This script demonstrates how to preprocess a new dataset for transfer learning.
It focuses on ensuring compatibility with a pre-trained model, including
handling image resizing, normalization, and label encoding.
"""

import os
import sys

import numpy as np
import pandas as pd
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder


def load_and_preprocess_images(image_dir, target_size=(224, 224), grayscale=False):
    """
    Loads images from a directory, resizes them, and optionally converts them to grayscale.

    Args:
        image_dir (str): Path to the directory containing the images.
        target_size (tuple): The desired size (width, height) of the images.
        grayscale (bool): Whether to convert images to grayscale.

    Returns:
        tuple: A tuple containing a list of preprocessed image arrays and a list of
            corresponding filenames. Returns (None, None) if the directory cannot be read.
    """
    images = []
    filenames = []
    try:
        for filename in os.listdir(image_dir):
            if filename.endswith(('.jpg', '.jpeg', '.png')):
                image_path = os.path.join(image_dir, filename)
                try:
                    img = Image.open(image_path)
                    if grayscale:
                        img = img.convert('L')  # Convert to grayscale
                    img = img.resize(target_size)
                    img_array = np.array(img)

                    # Ensure images are 3-channel: single-channel (grayscale)
                    # arrays are stacked into three identical channels.
                    if len(img_array.shape) == 2:
                        img_array = np.stack([img_array] * 3, axis=-1)

                    images.append(img_array)
                    filenames.append(filename)
                except (IOError, OSError) as e:
                    print(f"Error processing image {image_path}: {e}")
        return images, filenames
    except OSError as e:
        print(f"Error accessing image directory {image_dir}: {e}")
        return None, None


def normalize_images(images):
    """
    Normalizes pixel values of images to the range [0, 1].

    Args:
        images (list): A list of image arrays.

    Returns:
        list: A list of normalized image arrays.
    """
    return [img / 255.0 for img in images]


def encode_labels(labels):
    """
    Encodes categorical labels into numerical values using LabelEncoder.

    Args:
        labels (list): A list of categorical labels.

    Returns:
        numpy.ndarray: An array of encoded labels.
    """
    label_encoder = LabelEncoder()
    return label_encoder.fit_transform(labels)


def create_dataframe(images, labels, filenames):
    """
    Creates a pandas DataFrame from the preprocessed images, labels, and filenames.

    Args:
        images (list): A list of preprocessed image arrays.
        labels (numpy.ndarray): An array of encoded labels.
        filenames (list): A list of filenames.

    Returns:
        pandas.DataFrame: A DataFrame containing the image data, labels, and filenames.
    """
    return pd.DataFrame({'image': images, 'label': labels, 'filename': filenames})


def split_data(df, test_size=0.2, random_state=42):
    """
    Splits the data into training and testing sets.

    Args:
        df (pandas.DataFrame): The DataFrame containing the data.
        test_size (float): The proportion of the data to use for testing.
        random_state (int): The random state for reproducibility.

    Returns:
        tuple: A tuple containing the training and testing DataFrames.
    """
    train_df, test_df = train_test_split(df, test_size=test_size, random_state=random_state)
    return train_df, test_df


def main(image_dir):
    """
    Main function to demonstrate the data preprocessing steps.

    Args:
        image_dir (str): Path to the directory containing the images.
    """
    images, filenames = load_and_preprocess_images(image_dir)

    if images is None or filenames is None:
        print("Error loading images. Exiting.")
        return

    # Example labels (replace with your actual labels).
    # Assumes the filename format: label_image_id.jpg
    labels = [filename.split('_')[0] for filename in filenames]
    encoded_labels = encode_labels(labels)

    normalized_images = normalize_images(images)

    df = create_dataframe(normalized_images, encoded_labels, filenames)

    train_df, test_df = split_data(df)

    print("Training DataFrame shape:", train_df.shape)
    print("Testing DataFrame shape:", test_df.shape)
    print("First 5 rows of training DataFrame:")
    print(train_df.head())


if __name__ == "__main__":
    if len(sys.argv) > 1:
        main(sys.argv[1])
    else:
        print("Please provide the image directory as a command-line argument.")
        print("Example: python data_preprocessing_example.py path/to/images")
55
skills/transfer-learning-adapter/assets/example_config.json
Normal file
@@ -0,0 +1,55 @@
{
  "_comment": "Example configuration for transfer learning adaptation. Keys beginning with _comment are documentation only.",
  "model_name": "bert-base-uncased",
  "_comment_model_name": "Pre-trained model to adapt. Choose from Hugging Face model hub.",
  "dataset_name": "glue",
  "_comment_dataset_name": "Dataset to fine-tune on. Choose from Hugging Face datasets or specify a local path.",
  "dataset_subset": "mrpc",
  "_comment_dataset_subset": "Specific subset of the dataset to use (if applicable).",
  "train_file": null,
  "_comment_train_file": "Optional path to a custom training data file. Overrides dataset_name and dataset_subset if provided.",
  "validation_file": null,
  "_comment_validation_file": "Optional path to a custom validation data file. Overrides dataset_name and dataset_subset if provided.",
  "output_dir": "./adapted_model",
  "_comment_output_dir": "Directory to save the adapted model and training logs.",
  "num_epochs": 3,
  "_comment_num_epochs": "Number of training epochs.",
  "learning_rate": 2e-5,
  "_comment_learning_rate": "Learning rate for the AdamW optimizer.",
  "batch_size": 32,
  "_comment_batch_size": "Batch size for training and evaluation.",
  "weight_decay": 0.01,
  "_comment_weight_decay": "Weight decay for regularization.",
  "seed": 42,
  "_comment_seed": "Random seed for reproducibility.",
  "max_length": 128,
  "_comment_max_length": "Maximum sequence length for input tokens. Truncate or pad sequences as needed.",
  "task_name": "text_classification",
  "_comment_task_name": "Type of task for which the model is being adapted. Options: text_classification, token_classification, question_answering, sequence_to_sequence.",
  "metric": "accuracy",
  "_comment_metric": "Primary metric to evaluate performance. Options depend on the task. Common examples: accuracy, f1, rouge, bleu.",
  "gradient_accumulation_steps": 1,
  "_comment_gradient_accumulation_steps": "Number of steps to accumulate gradients before performing a backward/update pass.",
  "fp16": true,
  "_comment_fp16": "Whether to use 16-bit floating point precision (mixed precision training).",
  "evaluation_strategy": "epoch",
  "_comment_evaluation_strategy": "Evaluation strategy to adopt during training. Options: steps, epoch.",
  "save_strategy": "epoch",
  "_comment_save_strategy": "Save strategy to adopt during training. Options: steps, epoch.",
  "logging_steps": 100,
  "_comment_logging_steps": "Log every X update steps.",
  "push_to_hub": false,
  "_comment_push_to_hub": "Whether to push the adapted model to the Hugging Face Hub.",
  "hub_model_id": null,
  "_comment_hub_model_id": "The name of the repository to keep in sync with the local adapted model. It can be a path to an existing repository on the Hub or a new one. Overrides the repository id in the Trainer's config.",
  "hub_token": null,
  "_comment_hub_token": "The token to use when pushing the adapted model to the Hub.",
  "device": "cuda",
  "_comment_device": "Device (cpu, cuda) on which the code should be run.",
  "tokenizer_name": null,
  "_comment_tokenizer_name": "Optional tokenizer name to use. If not provided, the tokenizer associated with the model_name will be used.",
  "do_train": true,
  "_comment_do_train": "Whether to perform training.",
  "do_eval": true,
  "_comment_do_eval": "Whether to perform evaluation."
}
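A config shaped like example_config.json would typically be read, stripped of its documentation-only comment keys, and sanity-checked before training. The loader below is a hypothetical sketch (`load_adapter_config` is not part of the plugin), shown only to illustrate consuming such a file:

```python
import json

def load_adapter_config(path):
    """Load an adapter config, dropping _comment* keys and validating basics.

    Hypothetical helper: illustrates one way to consume a config shaped
    like example_config.json; it is not shipped with the plugin.
    """
    with open(path) as f:
        raw = json.load(f)
    # Comment keys are documentation only; strip them before use.
    config = {k: v for k, v in raw.items() if not k.startswith("_comment")}
    # A couple of sanity checks before any training starts.
    if config.get("learning_rate", 0) <= 0:
        raise ValueError("learning_rate must be positive")
    if config.get("num_epochs", 0) < 1:
        raise ValueError("num_epochs must be at least 1")
    return config
```

Validating early keeps a typo in the config from surfacing as a confusing failure deep inside a training run.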
28
skills/transfer-learning-adapter/assets/model_architecture.png
Normal file
@@ -0,0 +1,28 @@
(Binary file content for model_architecture.png - A placeholder image representing a typical deep learning model architecture suitable for transfer learning. This could be a simplified ResNet, VGG, or similar. The image should visually depict layers, connections, and the concept of freezing layers for transfer learning.)

(Image data would go here. A real PNG file would contain binary data defining the image.)

<!--
Instructions for replacing this placeholder:

1. This file is a placeholder for a visual representation of a common deep learning model architecture.
2. Use a tool like draw.io, Lucidchart, or similar to create a diagram.
3. The diagram should clearly show:
   * Input layer
   * Multiple convolutional layers (or other relevant layer types)
   * Pooling layers (if applicable)
   * Fully connected layers (if applicable)
   * Output layer
4. Highlight the layers that are typically frozen during transfer learning (e.g., the earlier convolutional layers). Use color or shading to differentiate these layers.
5. Label the layers clearly.
6. Save the diagram as a PNG file.
7. Replace the placeholder binary data in this file with the actual PNG data. You can do this by opening the PNG file in a binary editor and copying the data, or by using a scripting language to read and write the binary data.

Example Architecture Considerations:

* ResNet: Shows residual connections and the concept of blocks.
* VGG: Shows a deep stack of convolutional layers.
* MobileNet: Focuses on efficient architectures.

The goal is to provide a visual aid that helps users understand how transfer learning can be applied.
-->
7
skills/transfer-learning-adapter/references/README.md
Normal file
@@ -0,0 +1,7 @@
# References

Bundled resources for transfer-learning-adapter skill

- [ ] transfer_learning_best_practices.md: Provides best practices for transfer learning, including model selection, hyperparameter tuning, and data augmentation techniques.
- [ ] supported_models.md: Lists the pre-trained models supported by the skill and their corresponding architectures and performance characteristics.
- [ ] data_format_requirements.md: Specifies the required data format for the new dataset, including data types, dimensions, and preprocessing steps.
7
skills/transfer-learning-adapter/scripts/README.md
Normal file
@@ -0,0 +1,7 @@
# Scripts

Bundled resources for transfer-learning-adapter skill

- [ ] adapt_model.py: Automates the process of adapting a pre-trained model to a new dataset, including data loading, preprocessing, model fine-tuning, and evaluation.
- [ ] validate_data.py: Performs data validation checks to ensure the new dataset is compatible with the pre-trained model.
- [ ] evaluate_performance.py: Evaluates the performance of the adapted model on the new dataset and generates performance metrics.