Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:50:51 +08:00
commit d03c452fed
11 changed files with 404 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,15 @@
{
"name": "ai-ethics-validator",
"description": "AI ethics and fairness validation",
"version": "1.0.0",
"author": {
"name": "Claude Code Plugins",
"email": "[email protected]"
},
"skills": [
"./skills"
],
"commands": [
"./commands"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# ai-ethics-validator
AI ethics and fairness validation

commands/validate-ethics.md Normal file

@@ -0,0 +1,15 @@
---
description: Execute AI/ML task with intelligent automation
---
# AI/ML Task Executor
You are an AI/ML specialist. When this command is invoked:
1. Analyze the current context and requirements
2. Generate appropriate code for the ML task
3. Include data validation and error handling
4. Provide performance metrics and insights
5. Save artifacts and generate documentation
Use modern ML frameworks and follow current best practices.

plugin.lock.json Normal file

@@ -0,0 +1,73 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:jeremylongshore/claude-code-plugins-plus:plugins/ai-ml/ai-ethics-validator",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "fa297bf2c3d46e73d4edf13f14de44c8e8b44962",
"treeHash": "ddac51748fb7665bd8c77a2353cd23728f5ea9e9ab66305807dc6eabaa4283f9",
"generatedAt": "2025-11-28T10:18:03.342711Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "ai-ethics-validator",
"description": "AI ethics and fairness validation",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "0b3aa1b4b05002a2d76d4ff8c796edca017762b49c5eda6534ff86953021dd39"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "c210a853f5e7e0cd5f0b14148f6815132930edb3d15ad380d016804247a3a8e6"
},
{
"path": "commands/validate-ethics.md",
"sha256": "043efb83e2f02fc6d0869c8a3a7388d6e49f6c809292b93dd6a97a1b142e5647"
},
{
"path": "skills/ai-ethics-validator/SKILL.md",
"sha256": "c3e3ff1afb57ad433abfafb0d52ccde039aa120d13fd00c1ed479a0026309d12"
},
{
"path": "skills/ai-ethics-validator/references/README.md",
"sha256": "bf99fd69224cec7ffeb85496c20469ce25fa86ef4d27c16b6fdba914a6aa7975"
},
{
"path": "skills/ai-ethics-validator/scripts/README.md",
"sha256": "9e1879008c75c6837b89e09565239908d402e36b3bc6d111d1c376cd142772b5"
},
{
"path": "skills/ai-ethics-validator/assets/example_dataset.csv",
"sha256": "1c658a4e8417b98ffa955e3de52b433ffa6d213ae68e644007aba6e59f339e5b"
},
{
"path": "skills/ai-ethics-validator/assets/report_template.md",
"sha256": "7da8f8b93577b58563b11083b98fc900431e871219078cae75b06683cd92d998"
},
{
"path": "skills/ai-ethics-validator/assets/README.md",
"sha256": "9247ff22ef11b8d4958c4446baaa39bcc8d0f2dd05ed411c0e566667d5b6ed5a"
},
{
"path": "skills/ai-ethics-validator/assets/example_model.pkl",
"sha256": "223b4086355a837315d44472306ca5c4b9f2b66d88ab849b40959cc696f0224b"
}
],
"dirSha256": "ddac51748fb7665bd8c77a2353cd23728f5ea9e9ab66305807dc6eabaa4283f9"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

skills/ai-ethics-validator/SKILL.md Normal file

@@ -0,0 +1,52 @@
---
name: validating-ai-ethics-and-fairness
description: |
This skill enables Claude to validate the ethical implications and fairness of AI/ML models and datasets. It is triggered when the user requests an ethics review, fairness assessment, or bias detection for an AI system. The skill uses the ai-ethics-validator plugin to analyze models, datasets, and code for potential biases and ethical concerns. It provides reports and recommendations for mitigating identified issues, ensuring responsible AI development and deployment. Use this skill when the user mentions "ethics validation", "fairness assessment", "bias detection", "responsible AI", or related terms in the context of AI/ML.
allowed-tools: Read, Write, Edit, Grep, Glob, Bash
version: 1.0.0
---
## Overview
This skill empowers Claude to automatically assess and improve the ethical considerations and fairness of AI and machine learning projects. It leverages the ai-ethics-validator plugin to identify potential biases, evaluate fairness metrics, and suggest mitigation strategies, promoting responsible AI development.
## How It Works
1. **Analysis Initiation**: The skill is triggered by user requests related to AI ethics, fairness, or bias detection.
2. **Ethical Validation**: The ai-ethics-validator plugin analyzes the provided AI model, dataset, or code for potential ethical concerns and biases.
3. **Report Generation**: The plugin generates a detailed report outlining identified issues, fairness metrics, and recommended mitigation strategies.
## When to Use This Skill
This skill activates when you need to:
- Evaluate the fairness of an AI model across different demographic groups.
- Detect and mitigate bias in a training dataset.
- Assess the ethical implications of an AI-powered application.
## Examples
### Example 1: Fairness Evaluation
User request: "Evaluate the fairness of this loan application model."
The skill will:
1. Invoke the ai-ethics-validator plugin to analyze the model's predictions across different demographic groups.
2. Generate a report highlighting any disparities in approval rates or loan terms.
### Example 2: Bias Detection
User request: "Detect bias in this image recognition dataset."
The skill will:
1. Utilize the ai-ethics-validator plugin to analyze the dataset for representation imbalances across different categories.
2. Generate a report identifying potential biases and suggesting data augmentation or re-sampling strategies.
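The representation check in Example 2 can be sketched as a simple category count. This is a minimal sketch, not the plugin's actual implementation; the label values and the 0.5 threshold are illustrative:

```python
from collections import Counter

def representation_imbalance(labels, threshold=0.5):
    """Return the share of each category whose count falls below
    `threshold` times its expected count under a uniform distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    expected = total / len(counts)  # uniform expectation per category
    return {cat: n / total for cat, n in counts.items()
            if n < threshold * expected}

# Toy image-category labels: "cat" is heavily over-represented
labels = ["cat"] * 80 + ["dog"] * 15 + ["bird"] * 5
print(representation_imbalance(labels))  # → {'dog': 0.15, 'bird': 0.05}
```

Flagged categories are candidates for the data augmentation or re-sampling strategies mentioned above.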
## Best Practices
- **Data Integrity**: Ensure the input data is accurate, representative, and properly preprocessed.
- **Metric Selection**: Choose appropriate fairness metrics based on the specific application and potential impact.
- **Transparency**: Document the ethical considerations and mitigation strategies implemented throughout the AI development process.
## Integration
This skill can be integrated with other plugins for data analysis, model training, and deployment to ensure ethical considerations are incorporated throughout the entire AI lifecycle. For example, it can be combined with a data visualization plugin to explore the distribution of data across different demographic groups.

skills/ai-ethics-validator/assets/README.md Normal file

@@ -0,0 +1,7 @@
# Assets
Bundled resources for ai-ethics-validator skill
- [ ] report_template.md: Markdown template for generating ethics validation reports.
- [ ] example_model.pkl: Example AI/ML model file for testing purposes.
- [ ] example_dataset.csv: Example dataset file for testing purposes.

skills/ai-ethics-validator/assets/example_dataset.csv Normal file

@@ -0,0 +1,46 @@
# example_dataset.csv
# This is an example dataset for testing the ai-ethics-validator plugin.
# It contains sample data with features that might be relevant to fairness and ethical considerations.
#
# Columns:
# - ID: Unique identifier for each record.
# - Age: Age of the individual.
# - Gender: Gender of the individual (Male, Female, Other).
# - Income: Annual income of the individual.
# - Education: Highest level of education attained (e.g., High School, Bachelor's, Master's, PhD).
# - Location: Geographic location (e.g., Urban, Rural, Suburban).
# - Decision: The decision made by the AI system (e.g., Approved, Denied). This is the target variable.
# - Race: Race of the individual (e.g., White, Black, Asian, Hispanic, Other). Important for fairness analysis.
#
# Note: This is synthetic data and does not represent any real individuals.
# Replace this with your actual dataset for real-world validation.
#
# To use this dataset with the plugin, ensure it is accessible to the plugin's environment.
ID,Age,Gender,Income,Education,Location,Decision,Race
1,25,Male,50000,Bachelor's,Urban,Approved,White
2,32,Female,60000,Master's,Suburban,Approved,Black
3,48,Male,75000,PhD,Urban,Approved,Asian
4,28,Female,45000,High School,Rural,Denied,Hispanic
5,35,Male,55000,Bachelor's,Suburban,Approved,White
6,41,Female,70000,Master's,Urban,Approved,Black
7,22,Male,30000,High School,Rural,Denied,White
8,50,Female,80000,PhD,Suburban,Approved,Asian
9,29,Male,52000,Bachelor's,Urban,Approved,Hispanic
10,38,Female,65000,Master's,Rural,Denied,Black
11,26,Male,48000,High School,Suburban,Denied,White
12,45,Female,72000,PhD,Urban,Approved,Asian
13,31,Male,58000,Bachelor's,Rural,Approved,Hispanic
14,23,Female,35000,High School,Urban,Denied,Black
15,55,Male,90000,Master's,Suburban,Approved,White
16,40,Female,68000,PhD,Rural,Approved,Asian
17,27,Male,51000,Bachelor's,Urban,Approved,Hispanic
18,34,Female,62000,Master's,Suburban,Approved,Black
19,49,Male,78000,PhD,Rural,Approved,White
20,30,Female,46000,High School,Urban,Denied,Asian
21,24,Male,32000,High School,Rural,Denied,Hispanic
22,52,Female,85000,PhD,Suburban,Approved,Black
23,37,Male,60000,Bachelor's,Urban,Approved,White
24,43,Female,74000,Master's,Rural,Approved,Asian
25,21,Male,28000,High School,Suburban,Denied,Hispanic
# Add more rows as needed. Ensure sufficient diversity in the data.
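A group-level approval-rate check over this dataset can be sketched with the standard library alone. The snippet below embeds the first ten rows so it is self-contained; the four-fifths (0.8) threshold is the conventional disparate-impact rule of thumb, and the helper name is illustrative:

```python
import csv, io

# First ten rows of example_dataset.csv, embedded for illustration.
DATA = """ID,Age,Gender,Income,Education,Location,Decision,Race
1,25,Male,50000,Bachelor's,Urban,Approved,White
2,32,Female,60000,Master's,Suburban,Approved,Black
3,48,Male,75000,PhD,Urban,Approved,Asian
4,28,Female,45000,High School,Rural,Denied,Hispanic
5,35,Male,55000,Bachelor's,Suburban,Approved,White
6,41,Female,70000,Master's,Urban,Approved,Black
7,22,Male,30000,High School,Rural,Denied,White
8,50,Female,80000,PhD,Suburban,Approved,Asian
9,29,Male,52000,Bachelor's,Urban,Approved,Hispanic
10,38,Female,65000,Master's,Rural,Denied,Black
"""

def approval_rates(rows, group_col, outcome_col, positive="Approved"):
    """Fraction of positive outcomes per group."""
    totals, positives = {}, {}
    for row in rows:
        g = row[group_col]
        totals[g] = totals.get(g, 0) + 1
        if row[outcome_col] == positive:
            positives[g] = positives.get(g, 0) + 1
    return {g: positives.get(g, 0) / n for g, n in totals.items()}

rows = list(csv.DictReader(io.StringIO(DATA)))
rates = approval_rates(rows, "Gender", "Decision")
# On this 10-row sample: Male 4/5 = 0.8, Female 3/5 = 0.6
impact_ratio = rates["Female"] / rates["Male"]  # 0.75, below the 0.8 rule
```

The same helper applies to the `Race` or `Location` columns by changing `group_col`.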

skills/ai-ethics-validator/assets/example_model.pkl Normal file

@@ -0,0 +1,50 @@
# example_model.pkl
# This is a placeholder file for a pickled machine learning model.
# In a real-world scenario, this file would contain the serialized representation
# of a trained machine learning model using the `pickle` library.
# This model is used by the ai-ethics-validator plugin to demonstrate
# how to load and use a model for fairness validation.
# INSTRUCTIONS:
# 1. Replace this placeholder with your actual trained model.
# 2. Ensure the model is compatible with the `validate-ethics` command
# in the plugin. The command expects the model to have a `predict` method
# that takes input data and returns predictions.
# 3. Update the `validate_ethics` function in the plugin's main script
# to correctly load and use your model.
# 4. Consider using a model that can be easily validated for bias, such as
# a logistic regression or decision tree.
# Example of how to create a dummy model (FOR TESTING ONLY):
# import pickle
# from sklearn.linear_model import LogisticRegression
# from sklearn.datasets import make_classification
#
# # Generate a synthetic dataset
# X, y = make_classification(n_samples=100, n_features=2, random_state=42)
#
# # Train a logistic regression model
# model = LogisticRegression(random_state=42)
# model.fit(X, y)
#
# # Save the model to a file
# with open("example_model.pkl", "wb") as f:
# pickle.dump(model, f)
# Placeholder content to prevent errors if the file is not replaced.
# In a real application, this would be replaced with the pickled model.
# Replace this with the actual pickled model data.
import pickle

class PlaceholderModel:
    def predict(self, data):
        # Placeholder prediction logic: always predict class 0
        return [0] * len(data)

model = PlaceholderModel()
with open("example_model.pkl", "wb") as f:
    pickle.dump(model, f)
# END OF FILE

skills/ai-ethics-validator/assets/report_template.md Normal file

@@ -0,0 +1,128 @@
# AI Ethics Validation Report
**Plugin:** ai-ethics-validator
**Date:** `[Insert Date of Report Generation]`
**Version:** 1.0
**Report Generated By:** `[Insert User/System Name]`
## 1. Executive Summary
`[Insert a brief summary of the ethics validation process and key findings. Highlight any potential ethical concerns identified and recommendations for mitigation.]`
**Example:** This report summarizes the AI ethics validation process for the `[Model Name]` model. The analysis focused on fairness, bias, and transparency. Potential biases were identified in the `[Feature]` feature, which could lead to disparate impact on `[Demographic Group]`. Recommendations include further investigation and recalibration of the model.
## 2. Model Overview
### 2.1. Model Description
`[Provide a detailed description of the AI model being validated. Include its purpose, intended use cases, and key functionalities.]`
**Example:** The `Credit Risk Assessment Model` is designed to predict the probability of a loan applicant defaulting on their loan. It uses features such as credit history, income, employment status, and debt-to-income ratio.
### 2.2. Data Used for Training and Validation
`[Describe the datasets used for training and validating the model. Include information about data sources, data size, and any preprocessing steps applied.]`
**Example:** The model was trained on a dataset of 100,000 loan applications sourced from `[Data Source]`. Data preprocessing included cleaning missing values, encoding categorical variables, and scaling numerical features. The validation set consisted of 20,000 randomly selected loan applications.
### 2.3. Key Features
`[List and describe the most important features used by the model. Explain how these features might be related to ethical considerations.]`
**Example:**
* **Credit Score:** A numerical representation of an individual's creditworthiness. _Potential ethical concern: May reflect historical biases in credit scoring systems._
* **Income:** Annual income of the applicant. _Potential ethical concern: Disparities in income distribution may lead to unfair outcomes._
* **Location:** Geographic location of the applicant. _Potential ethical concern: May reflect historical redlining practices._
## 3. Ethics Validation Process
### 3.1. Ethical Principles Considered
`[Specify the ethical principles that guided the validation process. Examples include fairness, transparency, accountability, and non-discrimination.]`
**Example:** This validation process was guided by the following ethical principles:
* **Fairness:** Ensuring that the model does not discriminate against any protected group.
* **Transparency:** Understanding how the model makes its decisions.
* **Accountability:** Establishing responsibility for the model's outcomes.
### 3.2. Validation Metrics Used
`[Describe the metrics used to assess the model's ethical performance. Examples include disparate impact, equal opportunity, and predictive parity.]`
**Example:**
* **Disparate Impact:** Measured as the ratio of the selection rate for the unprivileged group to the selection rate for the privileged group. A ratio below 0.8 is considered indicative of potential disparate impact.
* **Equal Opportunity:** Measures whether the model has similar true positive rates across different groups.
* **Statistical Parity:** Measures whether the model predicts positive outcomes at similar rates across different groups.
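The three metrics above can be computed as plain functions over per-group binary predictions. This is a minimal sketch with hypothetical data; group names and values are illustrative, and predictions/labels use 1 for the positive (approved) outcome:

```python
def selection_rate(preds):
    """Fraction of positive predictions."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of actual positives that were predicted positive."""
    on_positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(on_positives) / len(on_positives)

# Hypothetical predictions and true labels, split by group
preds  = {"A": [1, 1, 0, 1], "B": [1, 0, 0, 0]}
labels = {"A": [1, 1, 0, 0], "B": [1, 1, 0, 0]}

# Disparate impact: selection-rate ratio, unprivileged over privileged
di = selection_rate(preds["B"]) / selection_rate(preds["A"])  # 0.25 / 0.75

# Equal opportunity: gap in true positive rates between groups
eo_gap = true_positive_rate(preds["A"], labels["A"]) - \
         true_positive_rate(preds["B"], labels["B"])

# Statistical parity: gap in selection rates between groups
sp_diff = selection_rate(preds["A"]) - selection_rate(preds["B"])
```

A `di` below 0.8, or gaps far from zero, would be reported as findings in section 4.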
### 3.3. Tools and Techniques Applied
`[List the tools and techniques used for ethical validation. Examples include bias detection algorithms, fairness-aware machine learning techniques, and explainable AI methods.]`
**Example:**
* **AI Fairness 360 (AIF360):** Used to detect and mitigate bias in the model.
* **SHAP (SHapley Additive exPlanations):** Used to explain the model's predictions and identify potential sources of bias.
* **Adversarial Debiasing:** A technique used to train a model that is less susceptible to bias.
## 4. Results and Findings
### 4.1. Bias Detection Results
`[Present the results of bias detection tests. Include quantitative metrics and qualitative observations.]`
**Example:**
* **Disparate Impact:** The model exhibits a disparate impact ratio of 0.75 for `[Demographic Group]` compared to `[Demographic Group]` in the `[Feature]` feature.
* **SHAP Analysis:** SHAP values indicate that the `[Feature]` feature has a disproportionately large impact on predictions for `[Demographic Group]`.
### 4.2. Fairness Assessment
`[Summarize the overall fairness assessment of the model. State whether the model meets the defined fairness criteria.]`
**Example:** Based on the validation metrics, the model does not fully meet the defined fairness criteria. The observed disparate impact and potential biases in the `[Feature]` feature raise concerns about the model's fairness.
### 4.3. Transparency and Explainability
`[Assess the transparency and explainability of the model. Can the model's decisions be easily understood and justified?]`
**Example:** While SHAP values provide some insight into the model's decision-making process, the complexity of the model makes it difficult to fully understand the reasons behind each prediction.
## 5. Recommendations
### 5.1. Mitigation Strategies
`[Provide specific recommendations for mitigating the identified ethical concerns. Examples include data rebalancing, feature engineering, and model recalibration.]`
**Example:**
* **Data Rebalancing:** Rebalance the training data to ensure equal representation of all demographic groups.
* **Feature Engineering:** Explore alternative features that are less correlated with protected attributes.
* **Model Recalibration:** Recalibrate the model to minimize disparate impact.
### 5.2. Monitoring and Evaluation
`[Outline a plan for ongoing monitoring and evaluation of the model's ethical performance. Include key metrics to track and thresholds for triggering further investigation.]`
**Example:**
* Continuously monitor the disparate impact ratio for all protected groups.
* Establish a threshold of 0.8 for the disparate impact ratio. If the ratio falls below this threshold, trigger a review of the model and its data.
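The monitoring rule above can be sketched as a periodic check against the 0.8 threshold. The function name and the snapshot values are illustrative, not part of the plugin:

```python
DISPARATE_IMPACT_THRESHOLD = 0.8  # four-fifths rule

def check_disparate_impact(selection_rates, privileged_group):
    """Return each group whose impact ratio versus the privileged
    group falls below the threshold, i.e. groups that trigger a review."""
    base = selection_rates[privileged_group]
    return {g: rate / base
            for g, rate in selection_rates.items()
            if g != privileged_group
            and rate / base < DISPARATE_IMPACT_THRESHOLD}

# Example monitoring snapshot of per-group selection rates
flags = check_disparate_impact(
    {"GroupA": 0.60, "GroupB": 0.42, "GroupC": 0.55},
    privileged_group="GroupA",
)
# GroupB: 0.42 / 0.60 = 0.70 → below threshold, review triggered
```

Run on each monitoring interval; any non-empty result should trigger the review of the model and its data described above.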
### 5.3. Future Work
`[Suggest areas for future research and development to improve the model's ethical performance.]`
**Example:**
* Investigate the root causes of bias in the `[Feature]` feature.
* Explore the use of fairness-aware machine learning algorithms.
## 6. Conclusion
`[Summarize the key findings and recommendations. Reiterate the importance of ethical considerations in AI development.]`
**Example:** This report highlights the importance of ethical considerations in AI development. While the `[Model Name]` model shows promise, it is essential to address the identified biases and ensure that the model is used responsibly and ethically. Continued monitoring and evaluation are crucial for maintaining the model's fairness over time.
## 7. Appendices
`[Include any supporting materials, such as data dictionaries, code snippets, or detailed metric calculations.]`
`[Insert Appendices Content Here]`

skills/ai-ethics-validator/references/README.md Normal file

@@ -0,0 +1,8 @@
# References
Bundled resources for ai-ethics-validator skill
- [ ] ai_ethics_standards.md: Document outlining key AI ethics standards and guidelines (e.g., fairness, accountability, transparency).
- [ ] bias_detection_methods.md: Document describing various bias detection methods and their applications.
- [ ] mitigation_strategies.md: Document outlining strategies for mitigating biases and ethical concerns in AI/ML models and datasets.
- [ ] fairness_metrics.md: Document describing different fairness metrics and how to interpret them.

skills/ai-ethics-validator/scripts/README.md Normal file

@@ -0,0 +1,7 @@
# Scripts
Bundled resources for ai-ethics-validator skill
- [ ] validate_model.py: Script to validate the ethics of an AI/ML model given a model file or API endpoint.
- [ ] validate_dataset.py: Script to validate the ethics of a dataset given a dataset file or API endpoint.
- [ ] generate_report.py: Script to generate a detailed ethics validation report in markdown or JSON format.