Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:31:13 +08:00
commit 78efa5c8cc
12 changed files with 424 additions and 0 deletions


@@ -0,0 +1,49 @@
---
name: configure-rules
description: Deploys decision logic, content variants, and delivery rules across personalization channels.
usage: /personalization-engine:configure-rules --initiative "PLG Onboarding" --environment staging --channels "web,in-app"
---
# Command: configure-rules
## Inputs
- **initiative** reference to the personalization effort from `define-profiles`.
- **environment** staging | production to govern deployment steps.
- **channels** comma-separated list of activation surfaces.
- **change_type** net-new | update | rollback.
- **approvers** optional stakeholders for governance sign-off.
### GTM Agents Pattern & Plan Checklist
> Mirrors GTM Agents orchestrator blueprint @puerto/plugins/orchestrator/README.md#112-325.
- **Pattern selection**: Rule configuration generally runs **pipeline** (pre-flight → decision build → variant mapping → QA → deployment). If decision build + variant prep happen in parallel, note a **diamond** block with merge gate in the plan header.
- **Plan schema**: Save `.claude/plans/plan-<timestamp>.json` capturing initiative, environments, dependency graph (data eng, creative, QA, governance), error handling, and success metrics (latency, personalization lift, incident count).
- **Tool hooks**: Reference the `docs/gtm-essentials.md` stack (Serena for rule diffing, Context7 for platform SOPs, Sequential Thinking for go/no-go reviews, Playwright for simulation/QA evidence capture).
- **Guardrails**: Default retry limit = 2 for deployment/QA failures; escalation ladder = Personalization Architect → Data Privacy Lead → Exec sponsor.
- **Review**: Run `docs/usage-guide.md#orchestration-best-practices-puerto-parity` before deployment to confirm dependencies + approvals.
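The plan schema named in the checklist above is not pinned down in this file; the following is a minimal sketch of what a `.claude/plans/plan-<timestamp>.json` entry for this command might contain. All field names here are assumptions for illustration, not a fixed schema.

```python
import json
import time
from pathlib import Path

def write_plan(plans_dir: str, initiative: str, pattern: str) -> Path:
    """Persist a minimal plan entry; field names are illustrative, not canonical."""
    plan = {
        "initiative": initiative,
        "pattern": pattern,  # "pipeline" or "diamond" per the checklist above
        "environments": ["staging"],
        "dependency_graph": {
            "decision_build": ["pre_flight"],
            "variant_mapping": ["decision_build"],
            "qa": ["variant_mapping"],
            "deployment": ["qa"],
        },
        "error_handling": {
            "retry_limit": 2,
            "escalation": ["Personalization Architect",
                           "Data Privacy Lead",
                           "Exec sponsor"],
        },
        "success_metrics": ["latency", "personalization_lift", "incident_count"],
    }
    path = Path(plans_dir) / f"plan-{int(time.time())}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(plan, indent=2))
    return path
```

A reviewer auditing the trail can then diff successive `plan-*.json` files to trace how the dependency graph and guardrails evolved.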
## Workflow
1. **Pre-flight Review** validate profiles, data freshness, consent status, and experiment dependencies.
2. **Decision Flow Build** configure rules, weights, or model endpoints in MAP/CDP/product tooling.
3. **Variant Mapping** link each rule outcome to content assets, CTAs, and fallback experiences.
4. **QA & Simulation** run synthetic traffic through decision trees, capture screenshots/logs.
5. **Deployment & Logging** push changes via API/CLI, note version metadata, set up monitoring hooks.
## Outputs
- Deployment runbook with rule IDs, version numbers, and rollback plan.
- QA evidence (simulation results, screenshots, payload logs).
- Governance log including approvers, timestamps, and linked experiments.
- Plan JSON entry stored/updated in `.claude/plans` for audit trail.
## Agent/Skill Invocations
- `customer-data-engineer` ensures data pipelines and environments are ready.
- `personalization-architect` verifies experience logic + content mapping.
- `content-variants` skill tracks asset requirements + approvals.
- `governance` skill enforces change controls and compliance steps.
## GTM Agents Safeguards
- **Fallback agents**: document substitutes (e.g., the governance lead covering the architect) when owners are unavailable.
- **Escalation triggers**: if QA fails twice, latency spikes, or the privacy gate blocks deployment, trigger the GTM Agents rip-cord and log remediation in the plan JSON.
- **Plan maintenance**: update plan JSON/change log when rule sets, environments, or deployment windows change so reviewers can trace history.
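The retry limit and rip-cord behavior described above can be sketched as a small wrapper around a deployment or QA step. The function name, log shape, and exception handling are assumptions; the document only specifies "retry limit = 2" and the escalation ladder.

```python
def run_with_ripcord(step, name, retry_limit=2, escalation_ladder=None, log=None):
    """Run a deployment/QA step, retrying up to retry_limit times before
    pulling the rip-cord and recording remediation in the plan log."""
    escalation_ladder = escalation_ladder or [
        "Personalization Architect", "Data Privacy Lead", "Exec sponsor"]
    log = log if log is not None else []
    for attempt in range(1, retry_limit + 1):
        try:
            result = step()
            log.append({"step": name, "attempt": attempt, "status": "ok"})
            return result
        except Exception as exc:
            log.append({"step": name, "attempt": attempt,
                        "status": "failed", "error": str(exc)})
    # Retries exhausted: escalate to the first owner on the ladder.
    log.append({"step": name, "status": "escalated",
                "owner": escalation_ladder[0]})
    raise RuntimeError(f"{name} escalated to {escalation_ladder[0]}")
```

The `log` list stands in for the plan JSON change log, so every attempt and the eventual escalation remain traceable.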
---


@@ -0,0 +1,48 @@
---
name: define-profiles
description: Produces audience profile schemas, data sources, and activation requirements for personalization programs.
usage: /personalization-engine:define-profiles --initiative "PLG Onboarding" --channels "web,in-app,email"
---
# Command: define-profiles
## Inputs
- **initiative** program or campaign name anchoring the personalization effort.
- **channels** comma-separated channels to activate (web, in-app, email, ads, sales).
- **metrics** optional KPIs (activation rate, pipeline $, retention).
- **constraints** optional compliance, consent, or tooling notes.
- **timeline** optional delivery window.
### GTM Agents Pattern & Plan Checklist
> Mirrors GTM Agents orchestrator blueprint @puerto/plugins/orchestrator/README.md#112-325.
- **Pattern selection**: Profile definition typically runs **diamond** (objective intake ↔ attribute inventory in parallel, reconverging into activation/governance) or **pipeline** when sequential; document pattern choice in plan header.
- **Plan schema**: Save `.claude/plans/plan-<timestamp>.json` capturing initiative, data sources, dependency graph (data eng, legal, privacy), error handling, and success metrics (attribute coverage %, consent integrity, activation lift).
- **Tool hooks**: Reference the `docs/gtm-essentials.md` stack (Serena for schema diffs, Context7 for privacy/compliance docs, Sequential Thinking for governance reviews, Playwright for consent/opt-in flow QA if needed).
- **Guardrails**: Default retry limit = 2 for data pulls or consent checks; escalation ladder = Personalization Architect → Data Privacy Lead → Exec sponsor.
- **Review**: Run `docs/usage-guide.md#orchestration-best-practices-puerto-parity` before finalizing to confirm dependencies + approvals.
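The diamond pattern named above (objective intake and attribute inventory running in parallel, reconverging at a merge gate) could be sketched with a thread pool. The stage functions and their return values are placeholders, not part of this spec.

```python
from concurrent.futures import ThreadPoolExecutor

def objective_intake():
    # Placeholder: clarify business goals, personas, lifecycle stages.
    return {"goals": ["activation"], "personas": ["PLG admin"]}

def attribute_inventory():
    # Placeholder: required fields, source systems, consent rules.
    return {"attributes": ["plan_tier", "last_login"], "consent": "opt-in"}

def merge_gate(objectives, inventory):
    # Reconverge: both branches must complete before activation mapping starts.
    return {**objectives, **inventory, "ready_for_activation": True}

def run_diamond():
    with ThreadPoolExecutor(max_workers=2) as pool:
        obj_future = pool.submit(objective_intake)
        inv_future = pool.submit(attribute_inventory)
        # .result() blocks, so the merge gate only fires once both legs finish.
        return merge_gate(obj_future.result(), inv_future.result())
```

When the two legs have no shared dependencies this halves wall-clock time versus the sequential pipeline variant; otherwise the pipeline pattern is the safer default.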
## Workflow
1. **Objective Intake** clarify business goals, target personas, lifecycle stages.
2. **Attribute Inventory** list required fields, source systems, refresh cadence, and consent rules.
3. **Profile Definition** outline segments, eligibility logic, scoring, decay windows.
4. **Activation Mapping** document downstream systems, API/webhook needs, fallback states.
5. **Governance Plan** assign owners, QA cadences, and change management checkpoints.
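Steps 2 and 3 above produce the attribute inventory and eligibility logic; here is a minimal sketch of how one profile's schema and eligibility rule might be encoded. The attribute names, source systems, and thresholds are invented for illustration.

```python
PROFILE = {
    "name": "plg_onboarding_power_user",
    # field -> (type, source system, refresh SLA) per the attribute inventory
    "attributes": {
        "plan_tier":  ("string", "billing_db", "24h"),
        "seats_used": ("int",    "product_db", "1h"),
        "consented":  ("bool",   "consent_api", "realtime"),
    },
}

def is_eligible(record: dict) -> bool:
    """Eligibility: consented, on a paid tier, with meaningful seat usage.
    Suppression: any missing attribute or withdrawn consent excludes the
    record rather than guessing a value."""
    required = PROFILE["attributes"].keys()
    if any(field not in record for field in required):
        return False
    return (record["consented"]
            and record["plan_tier"] != "free"
            and record["seats_used"] >= 3)
```

Treating missing data as suppression (rather than default values) is what keeps consent integrity auditable downstream.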
## Outputs
- Profile schema deck/table (attributes, types, source, SLA, privacy notes).
- Eligibility + suppression logic doc for each profile.
- Activation checklist linking profiles to channels and tooling.
- Plan JSON entry stored/updated in `.claude/plans` for audit trail.
## Agent/Skill Invocations
- `personalization-architect` leads objectives + profile design.
- `customer-data-engineer` validates data feasibility.
- `decision-trees` skill ensures logic structures align with downstream rules.
## GTM Agents Safeguards
- **Fallback agents**: document substitutes (e.g., the Customer Data Engineer covering the Architect) when leads are unavailable.
- **Escalation triggers**: escalate if consent/compliance blockers occur twice or attribute coverage misses its SLA; log remediation in the plan JSON.
- **Plan maintenance**: update plan JSON/change log when attributes, data sources, or governance cadences change.
---


@@ -0,0 +1,48 @@
---
name: monitor-personalization
description: Audits personalization performance, governance compliance, and experiment results.
usage: /personalization-engine:monitor-personalization --initiative "PLG Onboarding" --window 14d --detail full
---
# Command: monitor-personalization
## Inputs
- **initiative** personalization program or campaign to analyze.
- **window** time frame (7d, 14d, 30d) for pulling metrics.
- **detail** summary | full to control report depth.
- **dimension** optional breakdown (profile, channel, cohort).
- **alert_threshold** optional KPI threshold to trigger incident items.
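The `window` input above uses shorthand like `7d`/`14d`/`30d`; a small parser for that format might look like the following. This is a sketch under the assumption that only day-suffixed windows are supported, since those are the only examples the spec gives.

```python
import re
from datetime import timedelta

def parse_window(window: str) -> timedelta:
    """Parse a lookback window like '14d' into a timedelta."""
    match = re.fullmatch(r"(\d+)d", window.strip())
    if not match:
        raise ValueError(f"unsupported window: {window!r}")
    return timedelta(days=int(match.group(1)))
```

Rejecting unknown suffixes outright keeps a typo like `14w` from silently pulling the wrong reporting range.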
### GTM Agents Pattern & Plan Checklist
> Mirrors GTM Agents orchestrator blueprint @puerto/plugins/orchestrator/README.md#112-325.
- **Pattern selection**: Monitoring usually runs **pipeline** (data aggregation → governance scan → experiment readout → issue detection → action plan). If governance + experiments review can run concurrently, capture a **diamond** block with merge gate in the plan header.
- **Plan schema**: Save `.claude/plans/plan-<timestamp>.json` capturing initiative, data feeds, dependency graph (data eng, privacy, experimentation), error handling, and success metrics (lift %, incident response time, consent adherence).
- **Tool hooks**: Reference the `docs/gtm-essentials.md` stack (Serena for schema diffs, Context7 for governance/experiment SOPs, Sequential Thinking for retro cadence, Playwright for experience QA evidence).
- **Guardrails**: Default retry limit = 2 for failed data pulls or anomaly jobs; escalation ladder = Testing Lead → Personalization Architect → Data Privacy Lead.
- **Review**: Run `docs/usage-guide.md#orchestration-best-practices-puerto-parity` before distribution to ensure dependencies + approvals are logged.
## Workflow
1. **Data Aggregation** pull engagement, conversion, and revenue impact by profile/channel plus decision tree health signals.
2. **Governance Scan** verify consent flags, fallback rates, and rule change logs for compliance.
3. **Experiment Readout** summarize live/completed tests with statistical confidence and recommended actions.
4. **Issue Detection** flag anomalies (data freshness, variant suppression, performance dips) and suggest playbooks.
5. **Report Distribution** publish recap with dashboards, backlog items, and owners.
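Steps 1 and 4 above aggregate KPIs and flag anomalies against the optional `alert_threshold` input; a minimal sketch of that check feeding the issue list follows. The metric names and the "below threshold" convention are assumptions for illustration.

```python
def detect_issues(metrics: dict, alert_threshold: float) -> list:
    """Flag any KPI that falls below the alert threshold as an incident item."""
    return [
        {"metric": name, "value": value, "action": "open incident item"}
        for name, value in sorted(metrics.items())
        if value < alert_threshold
    ]
```

Each flagged entry would then get an owner and a playbook in the report distribution step, and a repeat breach would trip the escalation path in the safeguards below.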
## Outputs
- Performance dashboard snapshot segmented by profile/channel/variant.
- Governance checklist status with any violations or pending approvals.
- Experiment memo with next steps + rollout guidance.
- Plan JSON entry stored/updated in `.claude/plans` for audit trail.
## Agent/Skill Invocations
- `testing-lead` interprets experiments and recommends rollouts.
- `personalization-architect` validates experience integrity.
- `governance` skill enforces policy checks and approvals.
## GTM Agents Safeguards
- **Fallback agents**: document substitutes (e.g., Governance covering the Testing Lead) when leads are unavailable.
- **Escalation triggers**: escalate if `alert_threshold` is breached twice, consent violations appear, or anomaly alerts repeat; log remediation steps in the plan JSON.
- **Plan maintenance**: update plan JSON/change log when metrics, thresholds, or monitoring cadences change to keep audits accurate.
---