Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:51:00 +08:00
commit e9b082ff8d
9 changed files with 231 additions and 0 deletions


@@ -0,0 +1,52 @@
---
name: building-classification-models
description: |
This skill enables Claude to construct and evaluate classification models using provided datasets or specifications. It leverages the classification-model-builder plugin to automate model creation, optimization, and reporting. Use this skill when the user requests to "build a classifier", "create a classification model", "train a classification model", or needs help with supervised learning tasks involving labeled data. The skill ensures best practices are followed, including data validation, error handling, and performance metric reporting.
allowed-tools: Read, Write, Edit, Grep, Glob, Bash
version: 1.0.0
---
## Overview
This skill enables Claude to efficiently build and evaluate classification models. It automates model selection, training, and evaluation, providing users with a robust and reliable classification solution. The skill also reports on model performance and suggests potential improvements.
## How It Works
1. **Context Analysis**: Claude analyzes the user's request, identifying the dataset, target variable, and any specific requirements for the classification model.
2. **Model Generation**: The skill utilizes the classification-model-builder plugin to generate code for training a classification model based on the identified dataset and requirements. This includes data preprocessing, feature selection, model selection, and hyperparameter tuning.
3. **Evaluation and Reporting**: The generated model is trained and evaluated using appropriate metrics (e.g., accuracy, precision, recall, F1-score). Performance metrics and insights are then provided to the user.
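The three steps above can be sketched end to end with scikit-learn. This is a minimal illustration, not the plugin's actual generated code; a synthetic dataset stands in for user-provided data:

```python
# Minimal sketch of the analyze -> train -> evaluate flow
# (assumes scikit-learn; synthetic data stands in for a real dataset).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# 1. Context analysis stand-in: a labeled dataset with a binary target.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

# 2. Model generation: stratified split, then fit a classifier.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
model = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=42)
model.fit(X_train, y_train)

# 3. Evaluation and reporting: metrics on the held-out test set.
y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
```

In practice the generated code would read the user's dataset and report the metrics the user asked for, but the split/fit/score structure stays the same.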
## When to Use This Skill
This skill activates when you need to:
- Build a classification model from a given dataset.
- Train a classifier to predict categorical outcomes.
- Evaluate the performance of a classification model.
## Examples
### Example 1: Building a Spam Classifier
User request: "Build a classifier to detect spam emails using this dataset."
The skill will:
1. Analyze the provided email dataset to identify features and the target variable (spam/not spam).
2. Generate Python code using the classification-model-builder plugin to train a spam classification model, including data cleaning, feature extraction, and model selection.
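A hypothetical sketch of the kind of code step 2 would generate, with a tiny inline corpus standing in for the user's dataset (assumes scikit-learn; TF-IDF and naive Bayes are one reasonable choice for text, not the only one):

```python
# Illustrative spam-classifier sketch: TF-IDF feature extraction plus
# a multinomial naive Bayes model in a single pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now", "limited offer click here",
    "meeting at noon tomorrow", "project status update attached",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(emails, labels)

prediction = clf.predict(["free prize offer"])[0]
```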
### Example 2: Predicting Customer Churn
User request: "Create a classification model to predict customer churn using customer data."
The skill will:
1. Analyze the customer data to identify relevant features and the churn status.
2. Generate code to build a classification model for churn prediction, including data validation, model training, and performance reporting.
## Best Practices
- **Data Quality**: Ensure the input data is clean and preprocessed before training the model.
- **Model Selection**: Choose the appropriate classification algorithm based on the characteristics of the data and the specific requirements of the task.
- **Hyperparameter Tuning**: Optimize the model's hyperparameters to achieve the best possible performance.
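One common way to apply the hyperparameter-tuning practice is cross-validated grid search; the sketch below assumes scikit-learn, and the parameter grid is illustrative rather than prescriptive:

```python
# Hyperparameter tuning sketch with GridSearchCV
# (assumes scikit-learn; grid values are examples only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

param_grid = {"n_estimators": [50, 100], "max_depth": [5, 10]}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="f1",  # pick the metric that matters for the task
    cv=3,
)
search.fit(X, y)
best_params = search.best_params_  # best combination found by cross-validation
```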
## Integration
This skill integrates with the classification-model-builder plugin to automate the model building process. It can also be used in conjunction with other plugins for data analysis and visualization.


@@ -0,0 +1,7 @@
# Assets
Bundled resources for classification-model-builder skill
- [ ] model_config_template.json: A template JSON file for specifying model configurations, including hyperparameters and training parameters.
- [ ] example_dataset.csv: A sample CSV dataset that can be used for testing the classification model builder.
- [ ] report_template.html: An HTML template for generating the model performance report.


@@ -0,0 +1,59 @@
{
  "_comment": "Model configuration template for the classification model builder plugin. Keys beginning with _comment_ document the field they are named after.",
  "model_name": "ExampleClassifier",
  "_comment_model_name": "A descriptive name for your model.",
  "model_type": "RandomForestClassifier",
  "_comment_model_type": "The type of classification model to use (e.g., RandomForestClassifier, LogisticRegression, SVM).",
  "data_path": "data/training_data.csv",
  "_comment_data_path": "Path to the CSV file containing the training data.",
  "target_column": "target",
  "_comment_target_column": "The name of the column containing the target variable.",
  "features": [
    "feature1",
    "feature2",
    "feature3",
    "feature4"
  ],
  "_comment_features": "List of column names to use as features. If empty, all columns except the target_column will be used.",
  "hyperparameters": {
    "_comment": "Hyperparameters specific to the chosen model type.",
    "n_estimators": 100,
    "_comment_n_estimators": "Number of trees in the random forest (example for RandomForestClassifier).",
    "max_depth": 10,
    "_comment_max_depth": "Maximum depth of the trees (example for RandomForestClassifier).",
    "learning_rate": 0.1,
    "_comment_learning_rate": "Learning rate for gradient boosting models (example for GradientBoostingClassifier)."
  },
  "training_parameters": {
    "_comment": "Parameters related to the training process.",
    "test_size": 0.2,
    "_comment_test_size": "The proportion of the data to use for testing.",
    "random_state": 42,
    "_comment_random_state": "A random seed for reproducibility.",
    "stratify": true,
    "_comment_stratify": "Whether to stratify the target variable during the train/test split."
  },
  "evaluation_metrics": [
    "accuracy",
    "precision",
    "recall",
    "f1-score",
    "roc_auc"
  ],
  "_comment_evaluation_metrics": "List of evaluation metrics to compute on the test set.",
  "model_save_path": "models/example_classifier.pkl",
  "_comment_model_save_path": "Path to save the trained model.",
  "feature_importance": true,
  "_comment_feature_importance": "Whether to compute feature importances for the trained model.",
  "preprocessing": {
    "_comment": "Configuration for data preprocessing steps.",
    "handle_missing_values": "impute",
    "_comment_handle_missing_values": "How to handle missing values ('impute', 'remove', or 'none').",
    "missing_value_strategy": "mean",
    "_comment_missing_value_strategy": "Strategy for imputation ('mean', 'median', or 'most_frequent').",
    "scale_features": true,
    "_comment_scale_features": "Whether to scale numerical features using StandardScaler or similar.",
    "feature_scaling_method": "standard",
    "_comment_feature_scaling_method": "Method to use for feature scaling ('standard' or 'minmax')."
  }
}
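A consumer of this template would parse it and check the fields it needs before training. A minimal stdlib-only sketch (the required-key set below is an assumption based on the template's top-level fields, not a documented contract):

```python
# Sketch of loading and sanity-checking a model config
# (stdlib json only; REQUIRED_KEYS is an illustrative assumption).
import json

REQUIRED_KEYS = {"model_type", "data_path", "target_column", "hyperparameters"}

def load_config(text: str) -> dict:
    config = json.loads(text)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config missing keys: {sorted(missing)}")
    return config

config = load_config("""{
    "model_type": "RandomForestClassifier",
    "data_path": "data/training_data.csv",
    "target_column": "target",
    "hyperparameters": {"n_estimators": 100, "max_depth": 10}
}""")
```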


@@ -0,0 +1,8 @@
# References
Bundled resources for classification-model-builder skill
- [ ] model_evaluation_metrics.md: Detailed explanations of various classification model evaluation metrics (accuracy, precision, recall, F1-score, AUC-ROC) and their interpretation.
- [ ] data_preprocessing_guide.md: Best practices for data preprocessing, including handling missing values, feature scaling, and encoding categorical variables.
- [ ] model_selection_guide.md: Guidelines for selecting the appropriate classification model based on the characteristics of the dataset and the problem being solved.
- [ ] hyperparameter_tuning.md: Techniques for hyperparameter tuning to optimize model performance.
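The headline metrics that model_evaluation_metrics.md covers all derive from confusion-matrix counts; a pure-Python sketch of those definitions:

```python
# Accuracy, precision, recall, and F1 computed from confusion-matrix
# counts (tp/fp/fn/tn), guarding against division by zero.
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

metrics = classification_metrics(tp=40, fp=10, fn=20, tn=30)
# accuracy = 70/100 = 0.7, precision = 40/50 = 0.8, recall = 40/60
```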


@@ -0,0 +1,7 @@
# Scripts
Bundled resources for classification-model-builder skill
- [ ] model_builder.py: Automates the process of building, training, and evaluating classification models. Takes dataset path and model configuration as input.
- [ ] data_validator.py: Validates the input dataset for common issues like missing values, incorrect data types, and imbalanced classes.
- [ ] report_generator.py: Generates a comprehensive report of the model's performance, including metrics like accuracy, precision, recall, and F1-score.
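A hedged sketch of the kinds of checks data_validator.py might perform (the function name, the inline CSV, and the 90% imbalance threshold are illustrative assumptions, not the script's actual interface):

```python
# Illustrative dataset validation: count missing values, tally classes,
# and flag heavy class imbalance (stdlib only).
import csv
import io
from collections import Counter

def validate_dataset(csv_text: str, target_column: str) -> dict:
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    missing = sum(1 for row in rows for v in row.values() if v == "")
    class_counts = Counter(row[target_column] for row in rows)
    # Flag imbalance when one class holds more than 90% of the rows.
    majority_share = max(class_counts.values()) / len(rows)
    return {"rows": len(rows), "missing_values": missing,
            "class_counts": dict(class_counts),
            "imbalanced": majority_share > 0.9}

report = validate_dataset("f1,target\n1.0,yes\n,no\n2.5,yes\n", "target")
```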