Initial commit

Zhongwei Li committed 2025-11-29 18:31:13 +08:00 · commit 78efa5c8cc
12 changed files with 424 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,24 @@
{
"name": "personalization-engine",
"description": "Personalization orchestration covering profiles, decision rules, and governance",
"version": "1.0.0",
"author": {
"name": "GTM Agents",
"email": "opensource@intentgpt.ai"
},
"skills": [
"./skills/decision-trees/SKILL.md",
"./skills/content-variants/SKILL.md",
"./skills/governance/SKILL.md"
],
"agents": [
"./agents/personalization-architect.md",
"./agents/customer-data-engineer.md",
"./agents/personalization-testing-lead.md"
],
"commands": [
"./commands/define-profiles.md",
"./commands/configure-rules.md",
"./commands/monitor-personalization.md"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# personalization-engine
Personalization orchestration covering profiles, decision rules, and governance

agents/customer-data-engineer.md Normal file

@@ -0,0 +1,27 @@
---
name: customer-data-engineer
description: Builds and maintains the data pipelines powering personalization profiles and decisions.
model: haiku
---
# Customer Data Engineer Agent
## Responsibilities
- Design ingestion + transformation flows for profile attributes, behavioral signals, and consent metadata.
- Ensure identity resolution across product, marketing automation, CRM, and CDP systems.
- Implement monitoring, data contracts, and backfill routines for personalization feeds.
- Partner with architects and ops teams to deploy rule or model updates safely.
## Workflow
1. **Source Assessment**: audit upstream systems, schemas, refresh SLAs, and access controls.
2. **Pipeline Design**: define transformations, matching logic, suppression rules, and storage targets.
3. **Implementation**: configure ETL/dbt/CDP workflows, set version control, and run QA checks such as the data-contract check sketched below.
4. **Deployment**: coordinate releases with MAP/product teams and document rollback paths.
5. **Operations**: monitor latency, quality, and cost; schedule backfills or incident responses.
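
A minimal sketch of the kind of data-contract check step 3 refers to. The field names, consent values, and 24-hour freshness SLA are illustrative assumptions, not this plugin's actual contract.

```python
# Hypothetical data-contract check for a profile feed; field names, allowed
# values, and the staleness SLA are assumptions for illustration only.
from datetime import datetime, timedelta, timezone

CONTRACT = {
    "required_fields": ["account_id", "lifecycle_stage", "consent_status"],
    "allowed_consent": {"granted", "denied", "pending"},
    "max_staleness": timedelta(hours=24),  # assumed freshness SLA for the feed
}

def validate_profile_record(record: dict) -> list[str]:
    """Return a list of contract violations for a single profile record."""
    violations = []
    for field in CONTRACT["required_fields"]:
        if not record.get(field):
            violations.append(f"missing field: {field}")
    if record.get("consent_status") not in CONTRACT["allowed_consent"]:
        violations.append(f"unexpected consent_status: {record.get('consent_status')}")
    updated_at = record.get("updated_at")
    if updated_at and datetime.now(timezone.utc) - updated_at > CONTRACT["max_staleness"]:
        violations.append("record is staler than the freshness SLA")
    return violations

# Example: a stale record with missing consent metadata fails three checks.
sample = {
    "account_id": "acct-123",
    "lifecycle_stage": "onboarding",
    "consent_status": None,
    "updated_at": datetime.now(timezone.utc) - timedelta(hours=36),
}
print(validate_profile_record(sample))
```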
## Outputs
- Technical design doc (ERDs, DAGs, runbooks) for personalization data flows.
- Data contract + validation suite results.
- Monitoring dashboard snapshots with health status.
---

agents/personalization-architect.md Normal file

@@ -0,0 +1,30 @@
---
name: personalization-architect
description: Designs profile schemas, decision flows, and experience frameworks for personalization programs.
model: sonnet
---
# Personalization Architect Agent
## Responsibilities
- Translate GTM goals into audience profiles, eligibility rules, and decision logic.
- Map data sources, consent requirements, and activation endpoints across channels.
- Align marketing, product, and RevOps teams on personalization roadmap + governance.
- Maintain experimentation backlog with hypotheses tied to each decision branch.
## Workflow
1. **Brief Intake**: capture business objectives, KPIs, channels, and regulatory constraints.
2. **Profile Blueprint**: define attributes, data sources, scoring logic, and freshness SLAs (see the blueprint sketch below).
3. **Decision Flow Design**: outline branching logic, fallback experiences, and guardrails.
4. **Asset Planning**: document required content variants, creative briefs, and approval flows.
5. **Measurement + QA**: specify success metrics, monitoring hooks, and regression tests.
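
To make the blueprint in step 2 concrete, here is an illustrative attribute specification; the attribute names, source systems, SLAs, and consent scopes are assumptions for the sketch, not a prescribed schema.

```python
# Illustrative profile-blueprint entries: attribute, source system, freshness SLA,
# and consent requirement. All names and values are placeholders.
PROFILE_BLUEPRINT = [
    {
        "attribute": "lifecycle_stage",
        "source": "product_analytics",      # assumed system of record
        "type": "enum",
        "refresh_sla_hours": 24,            # freshness expectation
        "consent_scope": "product_usage",   # consent purpose required to use it
        "used_in": ["eligibility", "decision_branching"],
    },
    {
        "attribute": "account_tier",
        "source": "crm",
        "type": "enum",
        "refresh_sla_hours": 168,
        "consent_scope": "contractual",
        "used_in": ["eligibility"],
    },
]

def attributes_missing_consent(blueprint, granted_scopes):
    """Flag attributes whose consent scope is not covered by the granted scopes."""
    return [a["attribute"] for a in blueprint if a["consent_scope"] not in granted_scopes]

print(attributes_missing_consent(PROFILE_BLUEPRINT, {"contractual"}))
# -> ['lifecycle_stage']
```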
## Outputs
- Personalization architecture deck (profiles, journeys, flows, dependencies).
- Data + consent checklist with owners and refresh cadences.
- Experiment roadmap mapped to decision points.
---

agents/personalization-testing-lead.md Normal file

@@ -0,0 +1,27 @@
---
name: personalization-testing-lead
description: Plans and analyzes experiments for personalization experiences across channels.
model: haiku
---
# Personalization Testing Lead Agent
## Responsibilities
- Design test matrices for profiles, rules, creative variants, and delivery channels.
- Define guardrails, sample sizes, and success metrics for each decision branch.
- Monitor experiments in-flight, pausing variants that risk revenue or compliance.
- Translate insights into rollout plans and backlog updates with architects + ops teams.
## Workflow
1. **Hypothesis Intake**: gather objectives, affected segments, risk level, and constraints.
2. **Experiment Design**: select a methodology (A/B, multi-armed bandit, reinforcement learning), then outline variables, sample sizes, and KPIs (see the sample-size sketch below).
3. **Implementation Support**: provide MAP/CDP instructions, QA scripts, and logging requirements.
4. **Monitoring & Governance**: watch performance, ensure ethics/compliance rules stay intact, and escalate anomalies.
5. **Insights Delivery**: create summaries, recommend rollouts or re-tests, and feed learnings into decision tree updates.
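
A rough sample-size helper for step 2, assuming a standard two-sided two-proportion z-test; the baseline rate, lift, alpha, and power in the example are placeholders that the experiment design brief should supply.

```python
# Approximate per-arm sample size for an A/B variant test (two-proportion z-test).
# Inputs in the example are illustrative, not recommended defaults.
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_baseline, p_variant, alpha=0.05, power=0.8):
    """Per-arm sample size to detect p_baseline -> p_variant at the given alpha/power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    p_bar = (p_baseline + p_variant) / 2
    numerator = (
        z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
        + z_power * (p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)) ** 0.5
    ) ** 2
    return ceil(numerator / (p_variant - p_baseline) ** 2)

# Detecting a lift from 4% to 5% activation at alpha=0.05, power=0.8:
print(sample_size_per_arm(0.04, 0.05))
```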
## Outputs
- Experiment design brief (hypothesis, metrics, sample needs, guardrails).
- Monitoring dashboard snapshot with interim reads.
- Recommendations memo with rollout decision and next test backlog.
---

commands/configure-rules.md Normal file

@@ -0,0 +1,49 @@
---
name: configure-rules
description: Deploys decision logic, content variants, and delivery rules across personalization channels.
usage: /personalization-engine:configure-rules --initiative "PLG Onboarding" --environment staging --channels "web,in-app"
---
# Command: configure-rules
## Inputs
- **initiative**: reference to the personalization effort from `define-profiles`.
- **environment**: staging | production, governing deployment steps.
- **channels**: comma-separated list of activation surfaces.
- **change_type**: net-new, update, or rollback.
- **approvers**: optional stakeholders for governance sign-off.
### GTM Agents Pattern & Plan Checklist
> Mirrors GTM Agents orchestrator blueprint @puerto/plugins/orchestrator/README.md#112-325.
- **Pattern selection**: Rule configuration generally runs **pipeline** (pre-flight → decision build → variant mapping → QA → deployment). If decision build + variant prep happen in parallel, note a **diamond** block with merge gate in the plan header.
- **Plan schema**: Save `.claude/plans/plan-<timestamp>.json` capturing initiative, environments, dependency graph (data eng, creative, QA, governance), error handling, and success metrics (latency, personalization lift, incident count). A minimal plan sketch follows this checklist.
- **Tool hooks**: Reference `docs/gtm-essentials.md` stack—Serena for rule diffing, Context7 for platform SOPs, Sequential Thinking for go/no-go reviews, Playwright for simulation/QA evidence capture.
- **Guardrails**: Default retry limit = 2 for deployment/QA failures; escalation ladder = Personalization Architect → Data Privacy Lead → Exec sponsor.
- **Review**: Run `docs/usage-guide.md#orchestration-best-practices-puerto-parity` before deployment to confirm dependencies + approvals.
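
A minimal, hypothetical plan file matching the schema bullet above; any field not named there is an assumption, and the authoritative layout is whatever the GTM Agents orchestrator blueprint defines.

```python
# Sketch of writing a plan-<timestamp>.json entry; keys and values are illustrative.
import json
import os
from datetime import datetime, timezone

os.makedirs(".claude/plans", exist_ok=True)

plan = {
    "initiative": "PLG Onboarding",
    "command": "configure-rules",
    "pattern": "pipeline",  # switch to "diamond" if decision build + variant prep run in parallel
    "environment": "staging",
    "dependency_graph": ["customer-data-engineer", "creative", "qa", "governance"],
    "error_handling": {
        "retry_limit": 2,
        "escalation": ["personalization-architect", "data-privacy-lead", "exec-sponsor"],
    },
    "success_metrics": ["decision latency", "personalization lift", "incident count"],
    "updated_at": datetime.now(timezone.utc).isoformat(),
}

# Write the plan so later runs and audits can trace this change.
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
with open(f".claude/plans/plan-{stamp}.json", "w") as handle:
    json.dump(plan, handle, indent=2)
```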
## Workflow
1. **Pre-flight Review**: validate profiles, data freshness, consent status, and experiment dependencies.
2. **Decision Flow Build**: configure rules, weights, or model endpoints in MAP/CDP/product tooling.
3. **Variant Mapping**: link each rule outcome to content assets, CTAs, and fallback experiences.
4. **QA & Simulation**: run synthetic traffic through decision trees and capture screenshots/logs (see the simulation sketch after this list).
5. **Deployment & Logging**: push changes via API/CLI, note version metadata, and set up monitoring hooks.
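
One way to capture QA evidence for step 4 is to replay synthetic profiles through the configured rules and log expected versus actual variants; the rules, signals, and variant names below are illustrative only, since real simulation runs against the MAP/CDP tooling itself.

```python
# Hypothetical QA replay: synthetic profiles run through an ordered rule map.
RULES = [
    # (rule_id, predicate, variant) -- all illustrative
    ("r1", lambda p: p["consent"] and p["lifecycle_stage"] == "onboarding", "variant_onboarding_tour"),
    ("r2", lambda p: p["consent"] and p["account_tier"] == "enterprise", "variant_enterprise_cta"),
]
FALLBACK = "variant_default"

def decide(profile):
    """Return the first matching rule's variant, or the fallback experience."""
    for rule_id, predicate, variant in RULES:
        if predicate(profile):
            return rule_id, variant
    return None, FALLBACK

SCENARIOS = [
    ({"consent": True, "lifecycle_stage": "onboarding", "account_tier": "free"}, "variant_onboarding_tour"),
    ({"consent": False, "lifecycle_stage": "onboarding", "account_tier": "free"}, "variant_default"),
]

for profile, expected in SCENARIOS:
    rule_id, actual = decide(profile)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status} rule={rule_id} expected={expected} actual={actual}")
```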
## Outputs
- Deployment runbook with rule IDs, version numbers, and rollback plan.
- QA evidence (simulation results, screenshots, payload logs).
- Governance log including approvers, timestamps, and linked experiments.
- Plan JSON entry stored/updated in `.claude/plans` for audit trail.
## Agent/Skill Invocations
- `customer-data-engineer` ensures data pipelines and environments are ready.
- `personalization-architect` verifies experience logic + content mapping.
- `content-variants` skill tracks asset requirements + approvals.
- `governance` skill enforces change controls and compliance steps.
## GTM Agents Safeguards
- **Fallback agents**: document substitutes (e.g., governance lead covering architect) when owners unavailable.
- **Escalation triggers**: if QA fails twice, latency spikes, or privacy gate blocks deployment, trigger GTM Agents rip-cord and log remediation in plan JSON.
- **Plan maintenance**: update plan JSON/change log when rule sets, environments, or deployment windows change so reviewers can trace history.
---

commands/define-profiles.md Normal file

@@ -0,0 +1,48 @@
---
name: define-profiles
description: Produces audience profile schemas, data sources, and activation requirements for personalization programs.
usage: /personalization-engine:define-profiles --initiative "PLG Onboarding" --channels "web,in-app,email"
---
# Command: define-profiles
## Inputs
- **initiative**: program or campaign name anchoring the personalization effort.
- **channels**: comma-separated channels to activate (web, in-app, email, ads, sales).
- **metrics**: optional KPIs (activation rate, pipeline $, retention).
- **constraints**: optional compliance, consent, or tooling notes.
- **timeline**: optional delivery window.
### GTM Agents Pattern & Plan Checklist
> Mirrors GTM Agents orchestrator blueprint @puerto/plugins/orchestrator/README.md#112-325.
- **Pattern selection**: Profile definition typically runs **diamond** (objective intake ↔ attribute inventory in parallel, reconverging into activation/governance) or **pipeline** when sequential; document pattern choice in plan header.
- **Plan schema**: Save `.claude/plans/plan-<timestamp>.json` capturing initiative, data sources, dependency graph (data eng, legal, privacy), error handling, and success metrics (attribute coverage %, consent integrity, activation lift).
- **Tool hooks**: Reference `docs/gtm-essentials.md` stack—Serena for schema diffs, Context7 for privacy/compliance docs, Sequential Thinking for governance reviews, Playwright for consent/opt-in flow QA if needed.
- **Guardrails**: Default retry limit = 2 for data pulls or consent checks; escalation ladder = Personalization Architect → Data Privacy Lead → Exec sponsor.
- **Review**: Run `docs/usage-guide.md#orchestration-best-practices-puerto-parity` before finalizing to confirm dependencies + approvals.
## Workflow
1. **Objective Intake**: clarify business goals, target personas, and lifecycle stages.
2. **Attribute Inventory**: list required fields, source systems, refresh cadence, and consent rules.
3. **Profile Definition**: outline segments, eligibility logic, scoring, and decay windows (see the eligibility sketch below).
4. **Activation Mapping**: document downstream systems, API/webhook needs, and fallback states.
5. **Governance Plan**: assign owners, QA cadences, and change management checkpoints.
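
An illustrative eligibility-plus-decay check for step 3; the attribute names, consent values, and 14-day decay window are assumptions for the sketch.

```python
# Hypothetical eligibility + suppression logic for an "onboarding-active" profile.
from datetime import datetime, timedelta, timezone

DECAY_WINDOW = timedelta(days=14)   # assumed decay window for the activity signal

def is_eligible(profile):
    """Eligible if consented, in onboarding, and recently active; otherwise suppressed."""
    if profile.get("consent_status") != "granted":
        return False                                    # suppression rule: no consent
    if profile.get("lifecycle_stage") != "onboarding":
        return False
    last_active = profile.get("last_active_at")
    if last_active is None:
        return False                                    # missing signal -> safe default
    return datetime.now(timezone.utc) - last_active <= DECAY_WINDOW

profile = {
    "consent_status": "granted",
    "lifecycle_stage": "onboarding",
    "last_active_at": datetime.now(timezone.utc) - timedelta(days=3),
}
print(is_eligible(profile))  # True while activity is inside the decay window
```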
## Outputs
- Profile schema deck/table (attributes, types, source, SLA, privacy notes).
- Eligibility + suppression logic doc for each profile.
- Activation checklist linking profiles to channels and tooling.
- Plan JSON entry stored/updated in `.claude/plans` for audit trail.
## Agent/Skill Invocations
- `personalization-architect` leads objectives + profile design.
- `customer-data-engineer` validates data feasibility.
- `decision-trees` skill ensures logic structures align with downstream rules.
## GTM Agents Safeguards
- **Fallback agents**: document substitutes (e.g., Customer Data Engineer covering Architect) when leads unavailable.
- **Escalation triggers**: escalate if consent/compliance blockers occur twice or attribute coverage misses SLA; log remediation in plan JSON.
- **Plan maintenance**: update plan JSON/change log when attributes, data sources, or governance cadences change.
---

commands/monitor-personalization.md Normal file

@@ -0,0 +1,48 @@
---
name: monitor-personalization
description: Audits personalization performance, governance compliance, and experiment results.
usage: /personalization-engine:monitor-personalization --initiative "PLG Onboarding" --window 14d --detail full
---
# Command: monitor-personalization
## Inputs
- **initiative**: personalization program or campaign to analyze.
- **window**: time frame (7d, 14d, 30d) for pulling metrics.
- **detail**: summary | full, controlling report depth.
- **dimension**: optional breakdown (profile, channel, cohort).
- **alert_threshold**: optional KPI threshold that triggers incident items.
### GTM Agents Pattern & Plan Checklist
> Mirrors GTM Agents orchestrator blueprint @puerto/plugins/orchestrator/README.md#112-325.
- **Pattern selection**: Monitoring usually runs **pipeline** (data aggregation → governance scan → experiment readout → issue detection → action plan). If governance + experiments review can run concurrently, capture a **diamond** block with merge gate in the plan header.
- **Plan schema**: Save `.claude/plans/plan-<timestamp>.json` capturing initiative, data feeds, dependency graph (data eng, privacy, experimentation), error handling, and success metrics (lift %, incident response time, consent adherence).
- **Tool hooks**: Reference `docs/gtm-essentials.md` stack—Serena for schema diffs, Context7 for governance/experiment SOPs, Sequential Thinking for retro cadence, Playwright for experience QA evidence.
- **Guardrails**: Default retry limit = 2 for failed data pulls or anomaly jobs; escalation ladder = Testing Lead → Personalization Architect → Data Privacy Lead.
- **Review**: Run `docs/usage-guide.md#orchestration-best-practices-puerto-parity` before distribution to ensure dependencies + approvals are logged.
## Workflow
1. **Data Aggregation**: pull engagement, conversion, and revenue impact by profile/channel, along with decision tree health signals.
2. **Governance Scan**: verify consent flags, fallback rates, and rule change logs for compliance.
3. **Experiment Readout**: summarize live/completed tests with statistical confidence and recommended actions.
4. **Issue Detection**: flag anomalies (data freshness, variant suppression, performance dips) and suggest playbooks (see the threshold-check sketch below).
5. **Report Distribution**: publish the recap with dashboards, backlog items, and owners.
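
A small sketch of the threshold check behind step 4, assuming daily metric rollups keyed by date; the metric names, values, and alert floor are illustrative.

```python
# Hypothetical issue-detection pass over a monitoring window.
def detect_issues(daily_metrics, alert_threshold):
    """Return metric/day pairs whose value fell below the configured alert floor."""
    issues = []
    for day, metrics in daily_metrics.items():
        for metric, value in metrics.items():
            if value < alert_threshold.get(metric, float("-inf")):
                issues.append({"day": day, "metric": metric, "value": value})
    return issues

window = {
    "2025-11-27": {"personalization_lift": 0.06, "fallback_rate": 0.12},
    "2025-11-28": {"personalization_lift": 0.01, "fallback_rate": 0.31},
}
thresholds = {"personalization_lift": 0.03}  # only lift has an alert floor in this sketch
print(detect_issues(window, thresholds))
```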
## Outputs
- Performance dashboard snapshot segmented by profile/channel/variant.
- Governance checklist status with any violations or pending approvals.
- Experiment memo with next steps + rollout guidance.
- Plan JSON entry stored/updated in `.claude/plans` for audit trail.
## Agent/Skill Invocations
- `personalization-testing-lead` interprets experiments and recommends rollouts.
- `personalization-architect` validates experience integrity.
- `governance` skill enforces policy checks and approvals.
## GTM Agents Safeguards
- **Fallback agents**: document substitutes (e.g., Governance covering Testing Lead) when leads unavailable.
- **Escalation triggers**: escalate if alert_threshold breached twice, consent violations appear, or anomaly alerts repeat; log remediation steps in plan JSON.
- **Plan maintenance**: update plan JSON/change log when metrics, thresholds, or monitoring cadences change to keep audits accurate.
---

plugin.lock.json Normal file

@@ -0,0 +1,77 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:gtmagents/gtm-agents:plugins/personalization-engine",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "cf155475158ba805929ae32744e998e4c6bc34e0",
"treeHash": "6ee2a5616e94f6e408c6238200975b61a3b5d58faafa81b9d3d84363dcb3db29",
"generatedAt": "2025-11-28T10:17:18.815383Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "personalization-engine",
"description": "Personalization orchestration covering profiles, decision rules, and governance",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "2e6cd800fd5b5ad376a2f0e2fd29726659bb9056efc4750cc16ca01b585f57f4"
},
{
"path": "agents/customer-data-engineer.md",
"sha256": "67bc8f87096a9cfa2a4143aeb233678ebe75ec0b8be87a39d83802b9924324ca"
},
{
"path": "agents/personalization-testing-lead.md",
"sha256": "77716147a53e366ebafddadbaa75d70afa6ad4ab9778bc156a1e6e7d311bec9f"
},
{
"path": "agents/personalization-architect.md",
"sha256": "8b1ccd31b4fa4b0387ed63fb7852f969874e8bb42c91432b3983ab6cbad13ce4"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "96c548cd50bbf0a564ab1fbbcffc6b8a6adc9f4340ad25aaa32037c43f3b3ee4"
},
{
"path": "commands/monitor-personalization.md",
"sha256": "d4bc02b40ae25cae75632b266146152780c046a215bd03b25f39eef910cba6f8"
},
{
"path": "commands/configure-rules.md",
"sha256": "4a813ddf63c60095e9d1f83534392687a3b7d04df762077837f73bbfd9cd818d"
},
{
"path": "commands/define-profiles.md",
"sha256": "9646ca7e35686a6fde2c7326b65edf5dec99579ed2f263151afa776e0550730e"
},
{
"path": "skills/decision-trees/SKILL.md",
"sha256": "6b454adcf6fe8497440c32b7f83ddba0f1d6ecbd5556327e5793381022d8713d"
},
{
"path": "skills/content-variants/SKILL.md",
"sha256": "fb573a0ca07e3c6234b86c90118c927c76a8b0c38ee946e1bc3cb96f34a33f74"
},
{
"path": "skills/governance/SKILL.md",
"sha256": "d854a9406c1ff3c4da5f9124c246674ab9d3d7ac6f01f946008db2b0c3d7c279"
}
],
"dirSha256": "6ee2a5616e94f6e408c6238200975b61a3b5d58faafa81b9d3d84363dcb3db29"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

skills/content-variants/SKILL.md Normal file

@@ -0,0 +1,30 @@
---
name: content-variants
description: Use when planning, approving, and versioning personalized creative assets.
---
# Content Variant System Skill
## When to Use
- Mapping creative requirements for each decision tree branch.
- Coordinating design, copy, and legal reviews across multiple channels.
- Auditing variant performance or sunsetting outdated experiences.
## Framework
1. **Variant Inventory**: catalog the base asset, variant name, audience, channel, owner, and expiration (one record is sketched below).
2. **Approval Flow**: document reviewers, compliance steps, localization, and accessibility requirements.
3. **Asset Delivery**: link storage locations (DAM, CMS, MAP) plus version IDs and CDN paths.
4. **Testing Hooks**: note experiment IDs, KPIs, and guardrails for each variant.
5. **Lifecycle Management**: set refresh cadences, archival rules, and dependency tracking.
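
One illustrative inventory record plus an expiration sweep, tying the unique-ID and lifecycle guidance together; every field value here is a placeholder.

```python
# Hypothetical variant-inventory record and an archival sweep over it.
from datetime import date

VARIANT_INVENTORY = [
    {
        "variant_id": "onboarding-hero-v2",   # unique ID used for analytics + rollback
        "base_asset": "onboarding-hero",
        "audience": "PLG onboarding, free tier",
        "channel": "in-app",
        "owner": "lifecycle-marketing",
        "approvals": ["copy", "design", "legal"],
        "experiment_id": "exp-0042",
        "expires_on": date(2026, 3, 31),
    },
]

def expired_variants(inventory, today=None):
    """List variant IDs past their expiration date (archival candidates)."""
    today = today or date.today()
    return [v["variant_id"] for v in inventory if v["expires_on"] < today]

print(expired_variants(VARIANT_INVENTORY))
```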
## Templates
- Variant matrix (channel × persona × lifecycle stage).
- Approval checklist (copy, design, legal, localization, accessibility).
- Performance tracker (variant → impressions → engagement → conversion → decision).
## Tips
- Assign unique IDs to every variant for analytics + rollback references.
- Bundle variants into kits aligned with key journeys for easier governance.
- Pair with `decision-trees` outputs to ensure every branch has an approved asset.
---

skills/decision-trees/SKILL.md Normal file

@@ -0,0 +1,30 @@
---
name: decision-trees
description: Use when designing branching logic, eligibility rules, and fallback paths.
---
# Personalization Decision Trees Skill
## When to Use
- Planning logic for dynamic experiences across web, in-app, email, or sales plays.
- Auditing existing decision flows for complexity, coverage, or compliance gaps.
- Simulating new branches before deploying rule or model updates.
## Framework
1. **Objective Mapping**: tie each node to business KPIs and user intents.
2. **Signal Hierarchy**: prioritize deterministic signals (consent, account tier, lifecycle) before behavioral or predictive ones.
3. **Fallback Design**: ensure every branch has a safe default when data is missing or risk flags appear (see the node sketch below).
4. **Experiment Hooks**: embed test slots at key decision points with guardrail metrics.
5. **Monitoring**: log path selections, success rates, and anomaly alerts for continuous tuning.
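
A compact sketch of a single node evaluated in priority order with a safe default, as the framework describes; the node name, signals, and actions are assumptions for illustration.

```python
# Hypothetical decision node: deterministic signals (consent, tier) are checked
# before behavioral ones, and every path ends in a safe default.
TREE = {
    "node": "homepage-hero",
    "branches": [
        {"condition": lambda s: s.get("consent") is not True, "action": "show_generic_hero"},
        {"condition": lambda s: s.get("account_tier") == "enterprise", "action": "show_enterprise_hero"},
        {"condition": lambda s: s.get("recent_feature_usage", 0) > 3, "action": "show_power_user_hero"},
    ],
    "fallback": "show_generic_hero",   # safe default when no branch matches
}

def evaluate(tree, signals):
    """Walk branches in priority order and return the first matching action."""
    for branch in tree["branches"]:
        if branch["condition"](signals):
            return branch["action"]
    return tree["fallback"]

print(evaluate(TREE, {"consent": True, "account_tier": "growth"}))  # falls back to the default
```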
## Templates
- Decision tree canvas (node, condition, action, fallback, owner).
- Signal priority matrix (signal → freshness → reliability → privacy risk).
- Simulation checklist (scenarios, expected path, validation steps).
## Tips
- Keep trees shallow where possible; offload complexity to scoring models or external services.
- Version control decision logic alongside content assets for traceability.
- Pair with `governance` skill to log approvals for high-impact branches.
---

skills/governance/SKILL.md Normal file

@@ -0,0 +1,31 @@
---
name: governance
description: Use to enforce approvals, compliance, and auditability for personalization programs.
---
# Personalization Governance Skill
## When to Use
- Deploying or updating personalization rules, models, or high-impact content variants.
- Running quarterly audits on consent, data usage, or fairness metrics.
- Investigating incidents related to personalization errors or policy breaches.
## Framework
1. **Policy Alignment**: document legal, privacy, accessibility, and ethical constraints per channel.
2. **Approval Workflow**: define the RACI (architect, legal, security, marketing) and required evidence per change.
3. **Change Logging**: capture version metadata (who, what, when, why), including rollback steps (see the log-entry sketch below).
4. **Risk Monitoring**: set KPIs + alerts for fairness, bias, consent violations, or performance regressions.
5. **Audit Trail**: maintain dashboards + storage for decision logs, approvals, and incident reports.
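
An append-only change-log entry sketch covering who/what/when/why plus a rollback pointer; the file name, change-ID format, and field values are assumptions for the sketch.

```python
# Hypothetical append-only change log written as JSON lines for auditability.
import json
from datetime import datetime, timezone

def log_change(path, change):
    """Append one change record so audits can trace approvals and rollbacks."""
    with open(path, "a") as handle:
        handle.write(json.dumps(change) + "\n")

log_change("governance-change-log.jsonl", {
    "change_id": "chg-2025-0131",                 # illustrative unique change ID
    "who": "personalization-architect",
    "what": "raised enterprise branch weight from 0.2 to 0.35",
    "when": datetime.now(timezone.utc).isoformat(),
    "why": "Q1 enterprise activation experiment (exp-0042)",
    "approvers": ["legal", "security"],
    "rollback": "restore rule set version v14",
})
```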
## Templates
- Change request form (summary, impact, risk score, approvers, attachments).
- Governance checklist (consent, accessibility, localization, security, QA evidence).
- Incident review template (root cause, remediation, follow-up actions, owner).
## Tips
- Pair governance checkpoints with CI/CD or deployment scripts to prevent bypass.
- Use unique change IDs to connect decision tree updates with content variants and experiments.
- Schedule quarterly tabletop exercises to keep stakeholders fluent in escalation paths.
---