Initial commit

Zhongwei Li
2025-11-29 18:51:49 +08:00
commit aef90b9bfb
11 changed files with 552 additions and 0 deletions

15
.claude-plugin/plugin.json Normal file

@@ -0,0 +1,15 @@
{
  "name": "nlp-text-analyzer",
  "description": "Natural language processing and text analysis",
  "version": "1.0.0",
  "author": {
    "name": "Claude Code Plugins",
    "email": "[email protected]"
  },
  "skills": [
    "./skills"
  ],
  "commands": [
    "./commands"
  ]
}

3
README.md Normal file

@@ -0,0 +1,3 @@
# nlp-text-analyzer
Natural language processing and text analysis

15
commands/analyze-text.md Normal file

@@ -0,0 +1,15 @@
---
description: Execute AI/ML tasks with intelligent automation
---
# AI/ML Task Executor
You are an AI/ML specialist. When this command is invoked:
1. Analyze the current context and requirements
2. Generate appropriate code for the ML task
3. Include data validation and error handling
4. Provide performance metrics and insights
5. Save artifacts and generate documentation
Support modern ML frameworks and best practices.

73
plugin.lock.json Normal file

@@ -0,0 +1,73 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:jeremylongshore/claude-code-plugins-plus:plugins/ai-ml/nlp-text-analyzer",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "fdc3b6e8c3abefc0e918d5d208d30bb9fe7e7c44",
"treeHash": "1553cff7ce50df4d6de27e8f3d6bbd897e8ebf60abba35355c6ba66beaadb46c",
"generatedAt": "2025-11-28T10:18:37.311754Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "nlp-text-analyzer",
"description": "Natural language processing and text analysis",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "00e106b2d0bbbe1c0c07de530d2b6378478b085facec085b40ddff91fb3b7cec"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "00328e033e12a04fbde2c1a276d43c0c1cb11aa9fa32d2bcaf6edf9d99746cd1"
},
{
"path": "commands/analyze-text.md",
"sha256": "043efb83e2f02fc6d0869c8a3a7388d6e49f6c809292b93dd6a97a1b142e5647"
},
{
"path": "skills/nlp-text-analyzer/SKILL.md",
"sha256": "2dff66760247a8497db4bd087c681e4e17737266c5a54b50b67b1998c4216f32"
},
{
"path": "skills/nlp-text-analyzer/references/README.md",
"sha256": "b303b964ca49203589885981123b50cb2ce9d62d2fcd9b34f84505d08fef097a"
},
{
"path": "skills/nlp-text-analyzer/scripts/README.md",
"sha256": "b610f7867b1cf88578b4911a12541e546242fa94be0dd421b4728dc1f513fa96"
},
{
"path": "skills/nlp-text-analyzer/assets/analysis_report_template.md",
"sha256": "f72277e35eff530be6b4b3ae00eea55f2b35e947b06a3f5a19282ccfa7677afa"
},
{
"path": "skills/nlp-text-analyzer/assets/example_text_inputs.json",
"sha256": "5f9a4030c9909a75ec6a189b993a9f297782f8031944954c1584693e6a37a6d7"
},
{
"path": "skills/nlp-text-analyzer/assets/README.md",
"sha256": "2ee78c5a1cc588daa9fa8d369c22599e78b2dcf01088bd2aa4e8b83e96e64df2"
},
{
"path": "skills/nlp-text-analyzer/assets/error_handling_examples.md",
"sha256": "79f2697f41b008ec8ed0628eaf7f7182af506762005421d65a31176e07fee385"
}
],
"dirSha256": "1553cff7ce50df4d6de27e8f3d6bbd897e8ebf60abba35355c6ba66beaadb46c"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

52
skills/nlp-text-analyzer/SKILL.md Normal file

@@ -0,0 +1,52 @@
---
name: analyzing-text-with-nlp
description: |
This skill enables Claude to perform natural language processing and text analysis using the nlp-text-analyzer plugin. It should be used when the user requests analysis of text, including sentiment analysis, keyword extraction, topic modeling, or other NLP tasks. The skill is triggered by requests involving "analyze text", "sentiment analysis", "keyword extraction", "topic modeling", or similar phrases related to text processing. It leverages AI/ML techniques to understand and extract insights from textual data.
allowed-tools: Read, Bash, Grep, Glob
version: 1.0.0
---
## Overview
This skill empowers Claude to analyze text using the nlp-text-analyzer plugin, extracting meaningful information and insights. It facilitates tasks such as sentiment analysis, keyword extraction, and topic modeling, enabling a deeper understanding of textual data.
## How It Works
1. **Request Analysis**: Claude receives a user request to analyze text.
2. **Text Processing**: The nlp-text-analyzer plugin processes the text using NLP techniques.
3. **Insight Extraction**: The plugin extracts insights such as sentiment, keywords, and topics.
## When to Use This Skill
This skill activates when you need to:
- Perform sentiment analysis on a piece of text.
- Extract keywords from a document.
- Identify the main topics discussed in a text.
## Examples
### Example 1: Sentiment Analysis
User request: "Analyze the sentiment of this product review: 'I loved the product! It exceeded my expectations.'"
The skill will:
1. Process the review text using the nlp-text-analyzer plugin.
2. Determine the sentiment as positive and provide a confidence score.
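As a rough illustration only, the sentiment step could be sketched with NLTK's VADER analyzer; the plugin does not prescribe a particular library, so the import and thresholds below are assumptions.
```
# Illustrative sketch only: VADER-based sentiment scoring (library choice is an assumption).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

def score_sentiment(text):
    # polarity_scores returns neg/neu/pos plus a compound score in [-1, 1]
    compound = SentimentIntensityAnalyzer().polarity_scores(text)["compound"]
    label = "positive" if compound >= 0.05 else "negative" if compound <= -0.05 else "neutral"
    return {"sentiment": label, "confidence": abs(compound)}

print(score_sentiment("I loved the product! It exceeded my expectations."))
```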
### Example 2: Keyword Extraction
User request: "Extract the keywords from this news article about the latest AI advancements."
The skill will:
1. Process the article text using the nlp-text-analyzer plugin.
2. Identify and return a list of relevant keywords, such as "AI", "advancements", "machine learning", and "neural networks".
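As with the sentiment example, a minimal keyword-extraction sketch is shown below; it assumes scikit-learn and a simple TF-IDF ranking rather than any specific plugin internals.
```
# Illustrative sketch only: TF-IDF keyword ranking (scikit-learn is an assumed dependency).
from sklearn.feature_extraction.text import TfidfVectorizer

def extract_keywords(documents, top_k=5):
    vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
    weights = vectorizer.fit_transform(documents).toarray()[0]  # rank terms of the first document
    terms = vectorizer.get_feature_names_out()
    return [terms[i] for i in weights.argsort()[::-1][:top_k]]

article = "AI advancements in machine learning and neural networks are accelerating."
print(extract_keywords([article]))  # with a larger corpus, IDF also down-weights common terms
```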
## Best Practices
- **Clarity**: Be specific in your requests to ensure accurate and relevant analysis.
- **Context**: Provide sufficient context to improve the quality of the analysis.
- **Iteration**: Refine your requests based on the initial results to achieve the desired outcome.
## Integration
This skill can be integrated with other tools to provide a comprehensive workflow, such as using the extracted keywords to perform further research or using sentiment analysis to categorize customer feedback.

7
skills/nlp-text-analyzer/assets/README.md Normal file

@@ -0,0 +1,7 @@
# Assets
Bundled resources for nlp-text-analyzer skill
- [ ] example_text_inputs.json: A collection of example text inputs for various NLP tasks, along with expected outputs.
- [ ] analysis_report_template.md: Template for generating analysis reports with sections for sentiment, keywords, and topics.
- [ ] error_handling_examples.md: Examples of how to handle common errors in NLP tasks, such as API rate limits or invalid input.

72
skills/nlp-text-analyzer/assets/analysis_report_template.md Normal file

@@ -0,0 +1,72 @@
# Text Analysis Report
This report provides an analysis of the provided text, covering sentiment, keywords, and identified topics. Use this template to present your findings in a clear and organized manner.
## 1. Input Text
**Instruction:** Paste the text you analyzed below.
```
[PASTE YOUR TEXT HERE]
```
**Example:**
```
This is an amazing product! I love the features and the ease of use. The customer service was also incredibly helpful and responsive. I would highly recommend this to anyone.
```
## 2. Sentiment Analysis
**Instruction:** Summarize the overall sentiment of the text. Include the sentiment score (e.g., -1 to 1, where -1 is very negative and 1 is very positive) and a brief explanation.
**Sentiment Score:** [INSERT SENTIMENT SCORE HERE]
**Analysis:** [INSERT SENTIMENT ANALYSIS HERE. Explain why the text received that score. Provide specific examples from the text.]
**Example:**
**Sentiment Score:** 0.9
**Analysis:** The text expresses a highly positive sentiment. The words "amazing," "love," "helpful," and "recommend" all contribute to the strong positive score. The enthusiastic tone throughout the text further reinforces this sentiment.
## 3. Keyword Extraction
**Instruction:** List the most relevant keywords extracted from the text. Explain why these keywords are important and how they relate to the overall meaning of the text.
**Keywords:** [INSERT KEYWORDS HERE, SEPARATED BY COMMAS]
**Analysis:** [INSERT KEYWORD ANALYSIS HERE. Explain the significance of each keyword and how they relate to the text.]
**Example:**
**Keywords:** product, features, ease of use, customer service, helpful, recommend
**Analysis:** These keywords highlight the key aspects of the text. "Product" and "features" indicate that the text is discussing a specific product and its functionalities. "Ease of use" suggests a positive user experience. "Customer service" and "helpful" point to a positive interaction with the company. "Recommend" signifies a strong endorsement of the product.
## 4. Topic Modeling
**Instruction:** Identify the main topics discussed in the text. Briefly describe each topic and provide examples from the text that support your analysis.
**Topics:** [INSERT TOPICS HERE]
**Analysis:** [INSERT TOPIC ANALYSIS HERE. Explain each topic and provide supporting examples.]
**Example:**
**Topics:** Product Satisfaction, Customer Support
**Analysis:**
* **Product Satisfaction:** The text clearly expresses satisfaction with the product's features and user-friendliness. Phrases like "amazing product" and "ease of use" directly relate to this topic.
* **Customer Support:** The text also highlights positive experiences with customer support, mentioning their helpfulness and responsiveness. The phrase "customer service was also incredibly helpful and responsive" directly supports this topic.
## 5. Conclusion
**Instruction:** Provide a brief summary of the overall analysis. Highlight the key takeaways from the sentiment, keyword, and topic analyses.
[INSERT CONCLUSION HERE]
**Example:**
In conclusion, the text expresses a highly positive sentiment towards the product, with keywords and topics highlighting satisfaction with its features, ease of use, and the helpfulness of customer service. The overall analysis suggests a strong positive user experience and a high likelihood of recommendation.

216
skills/nlp-text-analyzer/assets/error_handling_examples.md Normal file

@@ -0,0 +1,216 @@
# Error Handling Examples for NLP Text Analyzer Plugin
This document provides examples of how to handle common errors that may arise when using the NLP Text Analyzer plugin. Implementing robust error handling is crucial for building reliable and user-friendly applications.
## 1. API Rate Limits
API rate limits are a common occurrence, especially when using external NLP services. When you exceed the rate limit, the API will typically return an error code (e.g., 429 Too Many Requests).
**Handling Strategy:**
* **Retry Mechanism:** Implement an exponential backoff retry mechanism. This involves waiting for an increasing amount of time before retrying the request.
**Example:**
```
import time

# Placeholder: Replace with your specific NLP API call
def analyze_text_with_retry(text, max_retries=5, initial_delay=1):
    retries = 0
    delay = initial_delay
    while retries < max_retries:
        try:
            # Placeholder: Replace with your API call function
            result = analyze_text(text)
            return result
        except Exception as e:
            # Placeholder: Check for specific rate limit error code (e.g., 429)
            if "rate limit exceeded" in str(e).lower():
                retries += 1
                print(f"Rate limit exceeded. Retrying in {delay} seconds...")
                time.sleep(delay)
                delay *= 2  # Exponential backoff
            else:
                raise  # Re-raise other exceptions
    raise Exception("Max retries reached. Unable to analyze text due to rate limits.")

# Example Usage:
# text = "Placeholder: Your text to analyze"
# try:
#     analysis_result = analyze_text_with_retry(text)
#     print(analysis_result)
# except Exception as e:
#     print(f"Error during analysis: {e}")
```
**Instructions:**
1. Replace the placeholder comments with your actual code.
2. Adjust `max_retries` and `initial_delay` based on the API's recommendations.
3. Ensure the error check specifically targets rate-limiting errors. Different APIs have different error messages and codes.
## 2. Invalid Input
Invalid input can range from malformed text to unsupported languages.
**Handling Strategy:**
* **Input Validation:** Validate the input text before sending it to the NLP API.
* **Error Messages:** Provide informative error messages to the user.
**Example:**
```
# Placeholder: Replace with your specific NLP API call and language detection method
def analyze_text(text, language="en"):
    # Placeholder: Add language detection here, if needed. Example:
    # detected_language = detect_language(text)
    # if detected_language != language:
    #     raise ValueError(f"Input language is not {language}. Detected: {detected_language}")
    if not isinstance(text, str):
        raise TypeError("Input must be a string.")
    if not text:
        raise ValueError("Input text cannot be empty.")
    try:
        # Placeholder: Replace with your API call function
        result = perform_nlp_analysis(text, language)
        return result
    except Exception as e:
        # Placeholder: Catch specific API errors related to invalid input
        if "invalid input" in str(e).lower() or "bad request" in str(e).lower():
            raise ValueError(f"Invalid input: {e}")
        else:
            raise  # Re-raise other exceptions

# Example Usage:
# text = "Placeholder: Your text to analyze"
# try:
#     analysis_result = analyze_text(text)
#     print(analysis_result)
# except ValueError as e:
#     print(f"Input error: {e}")
# except Exception as e:
#     print(f"Other error: {e}")
```
**Instructions:**
1. Replace the placeholder comments with your actual code.
2. Implement input validation checks relevant to your specific NLP task (e.g., maximum text length, allowed characters).
3. Catch specific API errors related to invalid input and provide helpful error messages.
4. Consider adding language detection to automatically determine the input language and handle unsupported languages gracefully.
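A minimal sketch of the optional language check from step 4 is shown below; it assumes the third-party langdetect package, which is not bundled with this plugin.
```
# Illustrative sketch only: optional language check (assumes `pip install langdetect`).
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def check_language(text, expected="en"):
    try:
        detected = detect(text)
    except LangDetectException:
        raise ValueError("Could not detect the input language.")
    if detected != expected:
        raise ValueError(f"Input language is not {expected}. Detected: {detected}")
```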
## 3. Network Errors
Network errors can occur due to connectivity issues or server problems.
**Handling Strategy:**
* **Retry Mechanism:** Similar to API rate limits, implement a retry mechanism with exponential backoff.
* **Timeout:** Set a timeout for API requests to prevent the application from hanging indefinitely.
**Example:**
```
import requests
import time

# Placeholder: Replace with your specific NLP API endpoint
API_ENDPOINT = "Placeholder: Your API endpoint URL"

def analyze_text_with_retry(text, max_retries=3, initial_delay=1, timeout=10):
    retries = 0
    delay = initial_delay
    while retries < max_retries:
        try:
            # Placeholder: Replace with your API key or authentication method
            headers = {"Authorization": "Bearer Placeholder: Your API Key"}
            data = {"text": text}
            response = requests.post(API_ENDPOINT, headers=headers, json=data, timeout=timeout)
            response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
            return response.json()
        except requests.exceptions.RequestException as e:
            retries += 1
            print(f"Network error: {e}. Retrying in {delay} seconds...")
            time.sleep(delay)
            delay *= 2
    raise Exception("Max retries reached. Unable to analyze text due to network errors.")

# Example Usage:
# text = "Placeholder: Your text to analyze"
# try:
#     analysis_result = analyze_text_with_retry(text)
#     print(analysis_result)
# except Exception as e:
#     print(f"Error during analysis: {e}")
```
**Instructions:**
1. Replace the placeholder comments with your actual code, including the API endpoint, authentication details, and request parameters.
2. Adjust `max_retries`, `initial_delay`, and `timeout` based on your network conditions and API requirements.
3. Use the `requests` library (or your preferred HTTP client) to make API requests.
4. Handle `requests.exceptions.RequestException` to catch various network-related errors.
## 4. Server-Side Errors
Server-side errors (e.g., 500 Internal Server Error) indicate problems on the NLP API's server.
**Handling Strategy:**
* **Retry Mechanism:** Implement a retry mechanism with exponential backoff, as these errors can be temporary.
* **Logging:** Log the error details for debugging purposes.
* **User Notification:** Inform the user that there is a problem with the service and that they should try again later.
**Example:**
```
import requests
import time
import logging

# Configure logging (optional)
logging.basicConfig(level=logging.ERROR, format='%(asctime)s - %(levelname)s - %(message)s')

# Placeholder: Replace with your specific NLP API endpoint
API_ENDPOINT = "Placeholder: Your API endpoint URL"

def analyze_text_with_retry(text, max_retries=3, initial_delay=1, timeout=10):
    retries = 0
    delay = initial_delay
    while retries < max_retries:
        try:
            # Placeholder: Replace with your API key or authentication method
            headers = {"Authorization": "Bearer Placeholder: Your API Key"}
            data = {"text": text}
            response = requests.post(API_ENDPOINT, headers=headers, json=data, timeout=timeout)
            response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
            return response.json()
        except requests.exceptions.RequestException as e:
            logging.error(f"API request failed: {e}")  # Log the error
            retries += 1
            print(f"Server error. Retrying in {delay} seconds...")
            time.sleep(delay)
            delay *= 2
    raise Exception("Max retries reached. Unable to analyze text due to server errors.")

# Example Usage:
# text = "Placeholder: Your text to analyze"
# try:
#     analysis_result = analyze_text_with_retry(text)
#     print(analysis_result)
# except Exception as e:
#     print(f"Error during analysis: {e}")
#     print("There was a problem with the service. Please try again later.")  # User notification
```
**Instructions:**
1. Replace the placeholder comments with your actual code, including the API endpoint, authentication details, and request parameters.
2. Adjust `max_retries`, `initial_delay`, and `timeout` based on your network conditions and API requirements.
3. Implement logging to record error details for debugging.
4. Provide a user-friendly message to inform the user about the server-side error.
By implementing these error handling strategies, you can create more robust and reliable NLP applications. Remember to tailor these examples to your specific use case and the requirements of the NLP APIs you are using.

84
skills/nlp-text-analyzer/assets/example_text_inputs.json Normal file

@@ -0,0 +1,84 @@
{
"_comment": "Example text inputs and expected outputs for the NLP Text Analyzer plugin.",
"examples": [
{
"_comment": "Sentiment Analysis Example",
"task": "sentiment_analysis",
"input_text": "This is the best day ever! I'm so happy and excited.",
"expected_output": {
"sentiment": "positive",
"confidence": 0.95
}
},
{
"_comment": "Sentiment Analysis Example - Negative",
"task": "sentiment_analysis",
"input_text": "I'm feeling really down today. Everything seems to be going wrong.",
"expected_output": {
"sentiment": "negative",
"confidence": 0.88
}
},
{
"_comment": "Sentiment Analysis Example - Neutral",
"task": "sentiment_analysis",
"input_text": "The weather is cloudy today.",
"expected_output": {
"sentiment": "neutral",
"confidence": 0.75
}
},
{
"_comment": "Named Entity Recognition Example",
"task": "named_entity_recognition",
"input_text": "Barack Obama was the 44th President of the United States.",
"expected_output": [
{
"entity": "Barack Obama",
"type": "PERSON"
},
{
"entity": "President",
"type": "TITLE"
},
{
"entity": "United States",
"type": "GPE"
}
]
},
{
"_comment": "Text Summarization Example",
"task": "text_summarization",
"input_text": "The quick brown fox jumps over the lazy dog. This is a classic pangram, a sentence that contains all the letters of the alphabet. Pangrams are often used to test fonts or keyboard layouts. They can also be used to demonstrate handwriting skills.",
"expected_output": "This text is a classic pangram, a sentence containing all letters of the alphabet, used for testing fonts, keyboard layouts, and handwriting skills."
},
{
"_comment": "Keyword Extraction Example",
"task": "keyword_extraction",
"input_text": "Artificial intelligence is rapidly transforming various industries. Machine learning algorithms are becoming increasingly sophisticated, enabling automation and improved decision-making.",
"expected_output": [
"artificial intelligence",
"machine learning",
"algorithms",
"automation",
"decision-making"
]
},
{
"_comment": "Text Translation Example",
"task": "text_translation",
"source_language": "en",
"target_language": "es",
"input_text": "Hello, how are you?",
"expected_output": "Hola, ¿cómo estás?"
},
{
"_comment": "Question Answering Example",
"task": "question_answering",
"context_text": "The capital of France is Paris.",
"question_text": "What is the capital of France?",
"expected_output": "Paris"
}
]
}

8
skills/nlp-text-analyzer/references/README.md Normal file

@@ -0,0 +1,8 @@
# References
Bundled resources for nlp-text-analyzer skill
- [ ] nlp_api_documentation.md: Comprehensive documentation of the NLP API endpoints, parameters, and response formats.
- [ ] sentiment_analysis_guide.md: Detailed guide on sentiment analysis techniques, including different algorithms and their applications.
- [ ] keyword_extraction_best_practices.md: Best practices for keyword extraction, including preprocessing steps and algorithm selection.
- [ ] topic_modeling_guide.md: Guide on topic modeling, including different algorithms (LDA, NMF) and their applications.
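The topic modeling guide above is not bundled yet; as a rough placeholder, the sketch below shows the kind of LDA workflow such a guide would describe, assuming scikit-learn.
```
# Illustrative sketch only: LDA topic modeling with scikit-learn (an assumed dependency).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "The product features are easy to use and well designed.",
    "Customer service was responsive and helpful with my refund.",
    "Machine learning models improve recommendations over time.",
]
vectorizer = CountVectorizer(stop_words="english")
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(vectorizer.fit_transform(docs))
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    print(f"Topic {i}:", ", ".join(terms[j] for j in topic.argsort()[::-1][:4]))
```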

7
skills/nlp-text-analyzer/scripts/README.md Normal file

@@ -0,0 +1,7 @@
# Scripts
Bundled resources for nlp-text-analyzer skill
- [ ] analyze_text.py: Script to perform text analysis tasks (sentiment, keywords, topics) based on user input and specified parameters (a rough entry-point sketch follows after this list).
- [ ] summarize_text.py: Script to generate concise summaries of input text, useful for extracting key information.
- [ ] translate_text.py: Script to translate text between languages using a translation API.
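None of these scripts ship with this initial commit; the sketch below is an illustrative guess at what the analyze_text.py entry point described in the first item could look like, with argument names chosen here rather than taken from the plugin.
```
# Illustrative sketch only: a possible analyze_text.py entry point (flags are assumptions).
import argparse
import json

def main():
    parser = argparse.ArgumentParser(description="Run text analysis tasks")
    parser.add_argument("--task", choices=["sentiment", "keywords", "topics"], required=True)
    parser.add_argument("--text", required=True, help="Text to analyze")
    args = parser.parse_args()
    # Placeholder: dispatch to the real analysis implementation here
    result = {"task": args.task, "input_chars": len(args.text), "status": "not implemented yet"}
    print(json.dumps(result, indent=2))

if __name__ == "__main__":
    main()
```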