Initial commit

Zhongwei Li
2025-11-29 18:51:15 +08:00
commit facb223eaa
9 changed files with 225 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,15 @@
{
"name": "deep-learning-optimizer",
"description": "Deep learning optimization techniques",
"version": "1.0.0",
"author": {
"name": "Claude Code Plugins",
"email": "[email protected]"
},
"skills": [
"./skills"
],
"commands": [
"./commands"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# deep-learning-optimizer
Deep learning optimization techniques

commands/optimize-dl.md Normal file

@@ -0,0 +1,15 @@
---
description: Optimize deep learning models with intelligent automation
---
# Deep Learning Optimization Executor
You are an AI/ML specialist. When this command is invoked:
1. Analyze the current context and requirements
2. Generate appropriate code for the ML task
3. Include data validation and error handling
4. Provide performance metrics and insights
5. Save artifacts and generate documentation
Support modern ML frameworks and best practices.
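The steps above (generate code, validate data, report metrics) can be sketched as a minimal training loop. This is an illustrative pure-Python example with synthetic data; the helper names `validate_data` and `train_linear` are assumptions for exposition, not part of the plugin:

```python
import random

def validate_data(xs, ys):
    """Basic data validation: non-empty, equal length, numeric."""
    if not xs or len(xs) != len(ys):
        raise ValueError("xs and ys must be non-empty and the same length")
    if any(not isinstance(v, (int, float)) for v in xs + ys):
        raise TypeError("all values must be numeric")

def train_linear(xs, ys, lr=0.2, epochs=500):
    """Fit y = w*x + b by plain gradient descent; return (w, b, mse)."""
    validate_data(xs, ys)
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    mse = sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / n  # final metric
    return w, b, mse

# Synthetic data: y = 2x + 1 with small uniform noise
random.seed(0)
xs = [i / 10 for i in range(20)]
ys = [2 * x + 1 + random.uniform(-0.05, 0.05) for x in xs]
w, b, mse = train_linear(xs, ys)
print(f"w={w:.2f} b={b:.2f} mse={mse:.4f}")
```

A real command invocation would swap the synthetic data for the user's dataset and report framework-native metrics instead of a hand-computed MSE.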

plugin.lock.json Normal file

@@ -0,0 +1,65 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:jeremylongshore/claude-code-plugins-plus:plugins/ai-ml/deep-learning-optimizer",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "00a8318dea588c9645d61c0c9f7a111715f9069f",
"treeHash": "ba75ffc47f50b3ca8ab8e9c1045e45ecef5a9ed90650651c82986807130e2fd6",
"generatedAt": "2025-11-28T10:18:22.662679Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "deep-learning-optimizer",
"description": "Deep learning optimization techniques",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "69270234eae2c8fda65e32d18f95ec34f6f1d36ce5995b2bb4ee08ba1d9eb8de"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "15aeeee111067b4447300043e546006f67e40326ed77eae16d407314e297a927"
},
{
"path": "commands/optimize-dl.md",
"sha256": "043efb83e2f02fc6d0869c8a3a7388d6e49f6c809292b93dd6a97a1b142e5647"
},
{
"path": "skills/deep-learning-optimizer/SKILL.md",
"sha256": "b2b307c3b5cff37b32405ad12059dffebc4072fa60a73044280602c252aa3c33"
},
{
"path": "skills/deep-learning-optimizer/references/README.md",
"sha256": "ffb585ce584483f8aa461e7f7589932e21f2797e49cbf669b2caf30907e82e4e"
},
{
"path": "skills/deep-learning-optimizer/scripts/README.md",
"sha256": "42f5aa82d6051bad5d86b77ba3660558062161f28470fe3f1a77d3b2d15d6af8"
},
{
"path": "skills/deep-learning-optimizer/assets/README.md",
"sha256": "11022c3eea7c9f428a2394d75f10ebb4edadd12cbd3c34da7f2ec7c81c2119f8"
},
{
"path": "skills/deep-learning-optimizer/assets/optimization_config.json",
"sha256": "9d24705f1c052ea85854e2878eeb62cfd8807fa84290a4c43646b229bf38a51c"
}
],
"dirSha256": "ba75ffc47f50b3ca8ab8e9c1045e45ecef5a9ed90650651c82986807130e2fd6"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

skills/deep-learning-optimizer/SKILL.md Normal file

@@ -0,0 +1,57 @@
---
name: optimizing-deep-learning-models
description: |
This skill optimizes deep learning models using various techniques. It is triggered when the user requests improvements to model performance, such as increasing accuracy, reducing training time, or minimizing resource consumption. The skill leverages advanced optimization algorithms like Adam, SGD, and learning rate scheduling. It analyzes the existing model architecture, training data, and performance metrics to identify areas for enhancement. The skill then automatically applies appropriate optimization strategies and generates optimized code. Use this skill when the user mentions "optimize deep learning model", "improve model accuracy", "reduce training time", or "optimize learning rate".
allowed-tools: Read, Write, Edit, Grep, Glob, Bash
version: 1.0.0
---
## Overview
This skill empowers Claude to automatically optimize deep learning models, enhancing their performance and efficiency. It intelligently applies various optimization techniques based on the model's characteristics and the user's objectives.
## How It Works
1. **Analyze Model**: Examines the deep learning model's architecture, training data, and performance metrics.
2. **Identify Optimizations**: Determines the most effective optimization strategies based on the analysis, such as adjusting the learning rate, applying regularization techniques, or modifying the optimizer.
3. **Apply Optimizations**: Generates optimized code that implements the chosen strategies.
4. **Evaluate Performance**: Assesses the impact of the optimizations on model performance, providing metrics like accuracy, training time, and resource consumption.
## When to Use This Skill
This skill activates when you need to:
- Optimize the performance of a deep learning model.
- Reduce the training time of a deep learning model.
- Improve the accuracy of a deep learning model.
- Optimize the learning rate for a deep learning model.
- Reduce resource consumption during deep learning model training.
## Examples
### Example 1: Improving Model Accuracy
User request: "Optimize this deep learning model for improved image classification accuracy."
The skill will:
1. Analyze the model and identify potential areas for improvement, such as adjusting the learning rate or adding regularization.
2. Apply the selected optimization techniques and generate optimized code.
3. Evaluate the model's performance and report the improved accuracy.
### Example 2: Reducing Training Time
User request: "Reduce the training time of this deep learning model."
The skill will:
1. Analyze the model and identify bottlenecks in the training process.
2. Apply techniques like batch size adjustment or optimizer selection to reduce training time.
3. Evaluate the model's performance and report the reduced training time.
## Best Practices
- **Optimizer Selection**: Experiment with different optimizers (e.g., Adam, SGD) to find the best fit for the model and dataset.
- **Learning Rate Scheduling**: Implement learning rate scheduling to dynamically adjust the learning rate during training.
- **Regularization**: Apply regularization techniques (e.g., L1, L2 regularization) to prevent overfitting.
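The optimizer and weight-decay practices above can be made concrete. The following is a simplified pure-Python sketch of a single Adam update with decoupled weight decay (AdamW-style), written for exposition; in practice you would use a framework optimizer such as `torch.optim.AdamW` rather than this re-implementation:

```python
import math

def adam_step(params, grads, state, lr=0.001, beta1=0.9, beta2=0.999,
              eps=1e-8, weight_decay=0.0001):
    """One Adam update with decoupled weight decay, on plain lists of floats."""
    state["t"] += 1
    t = state["t"]
    for i, g in enumerate(grads):
        # Exponential moving averages of the gradient and its square
        state["m"][i] = beta1 * state["m"][i] + (1 - beta1) * g
        state["v"][i] = beta2 * state["v"][i] + (1 - beta2) * g * g
        m_hat = state["m"][i] / (1 - beta1 ** t)  # bias correction
        v_hat = state["v"][i] / (1 - beta2 ** t)
        params[i] -= lr * (m_hat / (math.sqrt(v_hat) + eps)
                           + weight_decay * params[i])
    return params

# Minimize f(w) = w^2 (gradient 2w) starting from w = 1.0
params = [1.0]
state = {"t": 0, "m": [0.0], "v": [0.0]}
for _ in range(1000):
    grads = [2 * params[0]]
    adam_step(params, grads, state, lr=0.01)
```

After 1000 steps the parameter settles near the minimum at 0, illustrating why Adam's per-coordinate step normalization makes it a robust default choice before experimenting with SGD.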
## Integration
This skill can be integrated with other plugins that provide model building and data preprocessing capabilities. It can also be used in conjunction with monitoring tools to track the performance of optimized models.

skills/deep-learning-optimizer/assets/README.md Normal file

@@ -0,0 +1,7 @@
# Assets
Bundled resources for deep-learning-optimizer skill
- [ ] optimization_config.json: Template for configuring optimization parameters.
- [ ] example_models/: Directory containing example deep learning models for testing and demonstration.
- [ ] visualization_templates/: Directory containing templates for visualizing model performance and optimization results.

skills/deep-learning-optimizer/assets/optimization_config.json Normal file

@@ -0,0 +1,47 @@
{
"_comment": "Optimization configuration template for deep learning models. Keys ending in _comment document the field they name and should be ignored by consumers.",
"optimizer_name": "Adam",
"optimizer_name_comment": "Name of the optimization algorithm to use. Options: Adam, SGD, RMSprop, AdamW, etc.",
"learning_rate": 0.001,
"learning_rate_comment": "Learning rate for the optimizer. A smaller value might be needed for complex models.",
"weight_decay": 0.0001,
"weight_decay_comment": "L2 regularization strength. Helps prevent overfitting.",
"beta1": 0.9,
"beta1_comment": "Beta1 parameter for the Adam optimizer (exponential decay rate for the 1st moment estimates).",
"beta2": 0.999,
"beta2_comment": "Beta2 parameter for the Adam optimizer (exponential decay rate for the 2nd moment estimates).",
"epsilon": 1e-08,
"epsilon_comment": "Epsilon parameter for the Adam optimizer (term added to the denominator to improve numerical stability).",
"momentum": 0.0,
"momentum_comment": "Momentum factor for the SGD optimizer. Typically a value between 0 and 1.",
"nesterov": false,
"nesterov_comment": "Whether to use Nesterov momentum with the SGD optimizer.",
"learning_rate_scheduler": {
"enabled": true,
"enabled_comment": "Enable or disable learning rate scheduling.",
"scheduler_type": "ReduceLROnPlateau",
"scheduler_type_comment": "Type of learning rate scheduler. Options: StepLR, MultiStepLR, ExponentialLR, ReduceLROnPlateau, CosineAnnealingLR, CyclicLR, etc.",
"factor": 0.1,
"factor_comment": "Factor by which the learning rate will be reduced.",
"patience": 10,
"patience_comment": "Number of epochs with no improvement after which the learning rate will be reduced.",
"threshold": 0.0001,
"threshold_comment": "Threshold for measuring the new optimum, to focus only on significant changes.",
"threshold_mode": "rel",
"threshold_mode_comment": "One of 'rel' or 'abs'. In 'rel' mode, dynamic_threshold = best * (1 + threshold) in 'max' mode or best * (1 - threshold) in 'min' mode. In 'abs' mode, dynamic_threshold = best + threshold in 'max' mode or best - threshold in 'min' mode.",
"cooldown": 0,
"cooldown_comment": "Number of epochs to wait before resuming normal operation after the learning rate has been reduced.",
"min_lr": 0,
"min_lr_comment": "A scalar or a list of scalars: a lower bound on the learning rate of all parameter groups, or of each group respectively.",
"verbose": true,
"verbose_comment": "If true, prints a message to stdout for each update."
},
"gradient_clipping": {
"enabled": true,
"enabled_comment": "Enable or disable gradient clipping.",
"clip_value": 1.0,
"clip_value_comment": "The clipping threshold. Gradients will be clipped to this value.",
"clip_norm_type": 2.0,
"clip_norm_type_comment": "The type of norm used for clipping: 2.0 (L2 norm), inf (infinity norm), etc."
}
}
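To show how the scheduler fields above are consumed, here is a simplified, illustrative re-implementation of the ReduceLROnPlateau behavior the config describes ('min' mode with a relative threshold). PyTorch's `torch.optim.lr_scheduler.ReduceLROnPlateau` is the canonical implementation; this sketch exists only to make the `factor`/`patience`/`threshold` semantics concrete:

```python
class ReduceLROnPlateau:
    """Cut the learning rate by `factor` once the monitored metric stops
    improving for more than `patience` epochs (simplified: 'min' mode,
    'rel' threshold)."""
    def __init__(self, lr, factor=0.1, patience=10, threshold=1e-4,
                 cooldown=0, min_lr=0.0):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.threshold = threshold
        self.cooldown = cooldown
        self.min_lr = min_lr
        self.best = float("inf")
        self.wait = 0
        self.cooldown_left = 0

    def step(self, metric):
        if metric < self.best * (1 - self.threshold):  # significant improvement
            self.best = metric
            self.wait = 0
        elif self.cooldown_left > 0:   # just reduced: don't count these epochs
            self.cooldown_left -= 1
        else:
            self.wait += 1
            if self.wait > self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.cooldown_left = self.cooldown
                self.wait = 0
        return self.lr

# Loss improves twice, then plateaus; with patience=2 the lr drops 0.001 -> 0.0001
sched = ReduceLROnPlateau(lr=0.001, factor=0.1, patience=2)
for loss in [1.0, 0.9, 0.9, 0.9, 0.9]:
    lr = sched.step(loss)
```

The constructor arguments map one-to-one onto the `learning_rate_scheduler` keys in the config, so a loader can pass that sub-dictionary through directly.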

skills/deep-learning-optimizer/references/README.md Normal file

@@ -0,0 +1,8 @@
# References
Bundled resources for deep-learning-optimizer skill
- [ ] optimization_techniques.md: Detailed documentation on various deep learning optimization techniques (Adam, SGD, etc.).
- [ ] model_architecture_analysis.md: Guidelines on analyzing model architecture for potential improvements.
- [ ] performance_metrics.md: Explanation of relevant performance metrics and how to interpret them.
- [ ] best_practices.md: Best practices for deep learning model optimization.

skills/deep-learning-optimizer/scripts/README.md Normal file

@@ -0,0 +1,8 @@
# Scripts
Bundled resources for deep-learning-optimizer skill
- [ ] optimize_model.py: Script to execute model optimization techniques based on user input and configuration.
- [ ] analyze_model.py: Script to analyze the model architecture, training data, and performance metrics.
- [ ] learning_rate_scheduler.py: Script to implement various learning rate scheduling algorithms.
- [ ] model_validation.py: Script to validate the optimized model and ensure it meets the required performance criteria.