Initial commit
15
.claude-plugin/plugin.json
Normal file
@@ -0,0 +1,15 @@
{
  "name": "data-preprocessing-pipeline",
  "description": "Automated data preprocessing and cleaning pipelines",
  "version": "1.0.0",
  "author": {
    "name": "Claude Code Plugins",
    "email": "[email protected]"
  },
  "skills": [
    "./skills"
  ],
  "commands": [
    "./commands"
  ]
}
3
README.md
Normal file
@@ -0,0 +1,3 @@
# data-preprocessing-pipeline

Automated data preprocessing and cleaning pipelines
15
commands/preprocess.md
Normal file
@@ -0,0 +1,15 @@
---
description: Execute an AI/ML task with intelligent automation
---

# AI/ML Task Executor

You are an AI/ML specialist. When this command is invoked:

1. Analyze the current context and requirements
2. Generate appropriate code for the ML task
3. Include data validation and error handling
4. Provide performance metrics and insights
5. Save artifacts and generate documentation

Support modern ML frameworks and best practices.
65
plugin.lock.json
Normal file
@@ -0,0 +1,65 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:jeremylongshore/claude-code-plugins-plus:plugins/ai-ml/data-preprocessing-pipeline",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "43251488312afc9ced38a42c7eefc09457dcc1d0",
    "treeHash": "c1f20cfcd2f487feaa0862c894bf2dfab74bb347e8d8ead18e817389885ef27f",
    "generatedAt": "2025-11-28T10:18:17.162347Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "data-preprocessing-pipeline",
    "description": "Automated data preprocessing and cleaning pipelines",
    "version": "1.0.0"
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "8186847438eaabdbbc256a022e5c6e1498c1526a1a560612e8005f037a4588e0"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "c88bcf5d482b2a43365a498aa7fe6f9334a5f2b2cc251f0e11a522be146f4859"
      },
      {
        "path": "commands/preprocess.md",
        "sha256": "043efb83e2f02fc6d0869c8a3a7388d6e49f6c809292b93dd6a97a1b142e5647"
      },
      {
        "path": "skills/data-preprocessing-pipeline/SKILL.md",
        "sha256": "a29a61c634d81576ad2ea8d4eab94b0a1b3121b0eeb297b1f04d744fa046e72f"
      },
      {
        "path": "skills/data-preprocessing-pipeline/references/README.md",
        "sha256": "60f2244fc565bdae1704071233879ae6c3296eedadf31824e15cffb150f91e22"
      },
      {
        "path": "skills/data-preprocessing-pipeline/scripts/README.md",
        "sha256": "fd5c16db502db27b057683b7c73bd910c449ba04e973b4d1738fa09552489f37"
      },
      {
        "path": "skills/data-preprocessing-pipeline/assets/README.md",
        "sha256": "7fc1a8ef3ae645c91bce2c7a8721b531c9f38da565b32bdde1255fb5416dc67d"
      },
      {
        "path": "skills/data-preprocessing-pipeline/assets/example_data.csv",
        "sha256": "defec3a198c11071858ccfa10b17df42e55ac9538750499611becb225d7cb98b"
      }
    ],
    "dirSha256": "c1f20cfcd2f487feaa0862c894bf2dfab74bb347e8d8ead18e817389885ef27f"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}
53
skills/data-preprocessing-pipeline/SKILL.md
Normal file
@@ -0,0 +1,53 @@
---
name: preprocessing-data-with-automated-pipelines
description: |
  This skill empowers Claude to preprocess and clean data using automated pipelines. It is designed to streamline data preparation for machine learning tasks, implementing best practices for data validation, transformation, and error handling. Claude should use this skill when the user requests data preprocessing, data cleaning, ETL tasks, or mentions the need for automated pipelines for data preparation. Trigger terms include "preprocess data", "clean data", "ETL pipeline", "data transformation", and "data validation". The skill ensures data quality and prepares it for effective analysis and model training.
allowed-tools: Read, Write, Edit, Grep, Glob, Bash
version: 1.0.0
---

## Overview

This skill enables Claude to construct and execute automated data preprocessing pipelines, ensuring data quality and readiness for machine learning. It streamlines the data preparation process by automating common tasks such as data cleaning, transformation, and validation.

## How It Works

1. **Analyze Requirements**: Claude analyzes the user's request to understand the specific data preprocessing needs, including data sources, target format, and desired transformations.
2. **Generate Pipeline Code**: Based on the requirements, Claude generates Python code for an automated data preprocessing pipeline using relevant libraries and best practices, including data validation and error handling (see the sketch after this list).
3. **Execute Pipeline**: The generated code is executed, performing the data preprocessing steps.
4. **Provide Metrics and Insights**: Claude provides performance metrics and insights about the pipeline's execution, including data quality reports and potential issues encountered.
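
The following is a minimal sketch of the kind of pipeline code step 2 might generate, assuming pandas and the bundled `example_data.csv` schema; the function name and cleaning choices are illustrative, not part of the plugin:

```python
import pandas as pd

def preprocess(path: str) -> pd.DataFrame:
    """Sketch of a generated pipeline: load, validate, clean, report."""
    df = pd.read_csv(path, comment="#")  # example_data.csv carries '#' header comments

    # Validate: fail fast if expected columns are absent.
    required = {"ID", "Feature1", "Feature2", "Feature3", "Target"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")

    # Clean: drop duplicate records, impute numeric gaps with the mean.
    df = df.drop_duplicates(subset="ID")
    df["Feature1"] = pd.to_numeric(df["Feature1"], errors="coerce")
    df["Feature1"] = df["Feature1"].fillna(df["Feature1"].mean())

    # Report: a simple quality summary for step 4.
    print(f"rows={len(df)}, remaining nulls={int(df.isna().sum().sum())}")
    return df
```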

## When to Use This Skill

This skill activates when you need to:

- Prepare raw data for machine learning models.
- Automate data cleaning and transformation processes.
- Implement a robust ETL (Extract, Transform, Load) pipeline.

## Examples

### Example 1: Cleaning Customer Data

User request: "Preprocess the customer data from the CSV file to remove duplicates and handle missing values."

The skill will:

1. Generate a Python script to read the CSV file, remove duplicate entries, and impute missing values using appropriate techniques (e.g., mean imputation); a sketch of such a script follows.
2. Execute the script and provide a summary of the changes made, including the number of duplicates removed and the number of missing values imputed.
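
A possible shape for that generated script, assuming pandas and a hypothetical `customers.csv` input (the filename and the imputation strategy are both assumptions):

```python
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical input file

before = len(df)
df = df.drop_duplicates()
print(f"duplicates removed: {before - len(df)}")

# Mean-impute numeric columns; mode-impute categorical ones.
for col in df.columns:
    n_missing = int(df[col].isna().sum())
    if n_missing == 0:
        continue
    if pd.api.types.is_numeric_dtype(df[col]):
        df[col] = df[col].fillna(df[col].mean())
    else:
        df[col] = df[col].fillna(df[col].mode().iloc[0])
    print(f"{col}: imputed {n_missing} missing values")

df.to_csv("customers_clean.csv", index=False)
```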

### Example 2: Transforming Sensor Data

User request: "Create an ETL pipeline to transform the sensor data from the database into a format suitable for time series analysis."

The skill will:

1. Generate a Python script to extract sensor data from the database, transform it into a time series format (e.g., resampling to a fixed frequency), and load it into a suitable storage location; a sketch follows.
2. Execute the script and provide performance metrics, such as the time taken for each step of the pipeline and the size of the transformed data.
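
One way that ETL script might look, assuming SQLAlchemy plus pandas and a hypothetical `readings` table with `ts`, `sensor_id`, and `value` columns (the connection string, table, and resampling frequency are all assumptions):

```python
import time
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@localhost/sensors")  # hypothetical DSN

t0 = time.perf_counter()
# Extract: raw readings with a parsed timestamp column.
raw = pd.read_sql("SELECT ts, sensor_id, value FROM readings", engine,
                  parse_dates=["ts"])

# Transform: one column per sensor on a fixed 1-minute grid.
wide = (raw.pivot_table(index="ts", columns="sensor_id", values="value")
           .resample("1min").mean()
           .interpolate(limit=5))  # bridge short gaps only

# Load: write the analysis-ready frame to Parquet.
wide.to_parquet("sensor_timeseries.parquet")
print(f"rows={len(wide)}, sensors={wide.shape[1]}, "
      f"elapsed={time.perf_counter() - t0:.1f}s")
```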

## Best Practices

- **Data Validation**: Always include data validation steps to ensure data quality and catch potential errors early in the pipeline.
- **Error Handling**: Implement robust error handling to gracefully handle unexpected issues during pipeline execution; a minimal pattern follows this list.
- **Performance Optimization**: Optimize the pipeline for performance by using efficient algorithms and data structures.
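
A minimal sketch of the validation-plus-error-handling pattern in generated pipeline code; the helper name and the fall-back behavior are illustrative, not part of the plugin:

```python
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_step(name, func, df: pd.DataFrame) -> pd.DataFrame:
    """Run one pipeline step; validate its output and log failures."""
    try:
        out = func(df)
        if out.empty:
            raise ValueError(f"{name} produced an empty frame")
        log.info("%s ok: %d rows", name, len(out))
        return out
    except Exception:
        log.exception("%s failed; keeping the previous data", name)
        return df
```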

## Integration

This skill can be integrated with other Claude Code skills for data analysis, model training, and deployment. It provides a standardized way to prepare data for these tasks, ensuring consistency and reliability.
7
skills/data-preprocessing-pipeline/assets/README.md
Normal file
@@ -0,0 +1,7 @@
# Assets

Bundled resources for the data-preprocessing-pipeline skill:

- [ ] example_data.csv: Example dataset to demonstrate the pipeline's functionality.
- [ ] config.yaml: Configuration file for the data preprocessing pipeline.
- [ ] data_dictionary.md: A data dictionary describing the fields in the dataset.
35
skills/data-preprocessing-pipeline/assets/example_data.csv
Normal file
@@ -0,0 +1,35 @@
# example_data.csv
# This CSV file provides sample data to demonstrate the functionality of the data_preprocessing_pipeline plugin.
#
# Column Descriptions:
# - ID: Unique identifier for each record.
# - Feature1: Numerical feature with some missing values.
# - Feature2: Categorical feature with multiple categories and potential typos.
# - Feature3: Date feature in string format.
# - Target: Binary target variable (0 or 1).
#
# Placeholders:
# - [MISSING_VALUE]: Represents a missing value to be handled by the pipeline.
# - [TYPO_CATEGORY]: Represents a typo in a categorical value.
#
# Instructions:
# - Feel free to modify this data to test different preprocessing scenarios.
# - Ensure the data adheres to the expected format for each column.
# - Use the `/preprocess` command to trigger the preprocessing pipeline on this data.

ID,Feature1,Feature2,Feature3,Target
1,10.5,CategoryA,2023-01-15,1
2,12.0,CategoryB,2023-02-20,0
3,[MISSING_VALUE],CategoryC,2023-03-25,1
4,15.2,CategoryA,2023-04-01,0
5,9.8,CateogryB,[MISSING_VALUE],1
6,11.3,CategoryC,2023-05-10,0
7,13.7,CategoryA,2023-06-15,1
8,[MISSING_VALUE],CategoryB,2023-07-20,0
9,16.1,CategoryC,2023-08-25,1
10,10.0,CategoryA,2023-09-01,0
11,12.5,[TYPO_CATEGORY],2023-10-10,1
12,14.9,CategoryB,2023-11-15,0
13,11.8,CategoryC,2023-12-20,1
14,13.2,CategoryA,2024-01-25,0
15,9.5,CategoryB,2024-02-01,1
8
skills/data-preprocessing-pipeline/references/README.md
Normal file
@@ -0,0 +1,8 @@
# References

Bundled resources for the data-preprocessing-pipeline skill:

- [ ] data_validation_schemas.md: Documentation of the data validation schemas used in the pipeline.
- [ ] transformation_methods.md: Detailed explanation of the transformation methods applied.
- [ ] error_handling_strategies.md: Best practices for error handling in data preprocessing.
- [ ] pipeline_configuration.md: Guide to configuring the data preprocessing pipeline.
8
skills/data-preprocessing-pipeline/scripts/README.md
Normal file
@@ -0,0 +1,8 @@
# Scripts

Bundled resources for the data-preprocessing-pipeline skill:

- [ ] validate_data.py: Script to validate data against predefined schemas or rules.
- [ ] transform_data.py: Script to apply transformations to the data (e.g., normalization, scaling).
- [ ] handle_errors.py: Script to manage and log errors during the preprocessing pipeline.
- [ ] pipeline.py: Script to orchestrate the entire data preprocessing pipeline (a possible shape is sketched below).
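
Since these scripts are unchecked placeholders, the following is only a possible shape for pipeline.py; every imported module and function name here is hypothetical:

```python
# pipeline.py (sketch): chain the planned scripts in order.
import pandas as pd

from validate_data import validate      # hypothetical module
from transform_data import transform    # hypothetical module
from handle_errors import log_failure   # hypothetical module

def run(path: str = "input.csv") -> pd.DataFrame:
    df = pd.read_csv(path)
    try:
        validate(df)        # raise on schema violations
        df = transform(df)  # normalization, scaling, etc.
    except Exception as exc:
        log_failure(exc)    # record the error, then re-raise
        raise
    return df

if __name__ == "__main__":
    run()
```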