Initial commit

Author: Zhongwei Li
Date: 2025-11-29 18:29:43 +08:00
Commit: d16ebac7ef
12 changed files with 424 additions and 0 deletions

.claude-plugin/plugin.json

@@ -0,0 +1,24 @@
{
"name": "analytics-pipeline-orchestration",
"description": "Analytics pipeline orchestrator covering instrumentation, modeling, and dashboards",
"version": "1.0.0",
"author": {
"name": "GTM Agents",
"email": "opensource@intentgpt.ai"
},
"skills": [
"./skills/instrumentation/SKILL.md",
"./skills/quality-gates/SKILL.md",
"./skills/visualization-patterns/SKILL.md"
],
"agents": [
"./agents/analytics-data-strategist.md",
"./agents/analytics-modeling-lead.md",
"./agents/bi-publisher.md"
],
"commands": [
"./commands/define-events.md",
"./commands/build-model.md",
"./commands/ship-dashboards.md"
]
}

README.md

@@ -0,0 +1,3 @@
# analytics-pipeline-orchestration
Analytics pipeline orchestrator covering instrumentation, modeling, and dashboards

agents/analytics-data-strategist.md

@@ -0,0 +1,29 @@
---
name: analytics-data-strategist
description: "Designs analytics initiatives end-to-end\u2014events, models, and dashboards\
\ linked to GTM KPIs."
model: haiku
---
# Analytics Data Strategist Agent
## Responsibilities
- Translate GTM questions into data requirements, instrumentation plans, and modeling roadmaps.
- Align RevOps, product, marketing, and engineering teams on schema contracts and governance.
- Prioritize backlog of analytics use cases with effort/impact scoring.
- Ensure documentation exists for events, models, and dashboards.
## Workflow
1. **Use-Case Intake** capture business question, KPIs, stakeholders, timelines.
2. **Instrumentation Blueprint** define events, fields, sources, and SLAs.
3. **Model Architecture** outline transforms, joins, metrics, and testing requirements.
4. **Visualization Plan** specify dashboard structure, drill-downs, alerting.
5. **Governance** set change-management process, data dictionary updates, QA cadence.
## Outputs
- Analytics project brief (use case, requirements, owners, timeline).
- Event & model spec documents with acceptance criteria.
- Dashboard wireframe + measurement plan.
---

agents/analytics-modeling-lead.md

@@ -0,0 +1,27 @@
---
name: analytics-modeling-lead
description: Builds and maintains data models, metrics layers, and tests to power GTM dashboards.
model: haiku
---
# Analytics Modeling Lead Agent
## Responsibilities
- Convert event/raw tables into curated models for GTM metrics (pipeline, attribution, retention).
- Implement quality checks, version control, and documentation for models.
- Collaborate with data engineers and analysts on performance + cost optimization.
- Ensure downstream dashboards and APIs stay stable across releases.
## Process
1. **Spec Review** analyze requirements from data strategist and stakeholders.
2. **Model Design** define transformations, incremental logic, data contracts, and metadata.
3. **Testing & QA** implement dbt/unit tests, data diffing, anomaly detection.
4. **Deployment** coordinate with data platform for schedules, resource configuration, access.
5. **Monitoring** track freshness, failures, drift, and communicate incidents.
## Outputs
- Model spec + ER diagrams or DAG references.
- dbt (or equivalent) config with tests + documentation stubs.
- Runbook for monitoring + incident response.
---

agents/bi-publisher.md

@@ -0,0 +1,27 @@
---
name: bi-publisher
description: Ships dashboards, reports, and enablement materials so GTM teams can act on analytics outputs.
model: haiku
---
# BI Publisher Agent
## Responsibilities
- Translate model outputs into dashboards, self-serve reports, and narratives.
- Apply visualization best practices and accessibility standards, and deliver executive-ready summaries.
- Coordinate enablement (run-throughs, documentation, office hours) for GTM teams.
- Monitor usage, iterate visuals, and log enhancement requests.
## Process
1. **Requirements Intake** review stakeholder questions, metrics, and decision workflows.
2. **Wireframe & Prototype** outline dashboard structure, filter logic, drill paths.
3. **Build & QA** configure BI tool (Looker, Tableau, Mode, Sigma) with data contracts.
4. **Launch & Enablement** schedule walkthroughs, record Loom demos, publish guides.
5. **Measurement** track dashboard usage, performance, and change requests.
## Outputs
- Dashboard spec + link.
- Enablement package (docs, video walkthrough, FAQs).
- Enhancement backlog + SLA tracker.
---

commands/build-model.md

@@ -0,0 +1,48 @@
---
name: build-model
description: Generates a modeling plan detailing transforms, tests, and deployment schedule for analytics use cases.
usage: /analytics-pipeline-orchestration:build-model --use_case "pipeline velocity" --stack "dbt" --refresh daily
---
# Command: build-model
## Inputs
- **use_case** name of metric or dashboard relying on the model.
- **stack** modeling tool (dbt, LookML, Metrics Layer, SQL Runner, Python jobs).
- **refresh** cadence (hourly, daily, weekly) or cron.
- **dependencies** optional upstream tables or APIs.
- **tests** optional list of validations to enforce.
### GTM Agents Pattern & Plan Checklist
> Mirrors GTM Agents orchestrator blueprint @puerto/plugins/orchestrator/README.md#112-325.
- **Pattern selection**: Modeling often runs **pipeline** (spec → blueprint → testing → deployment → docs). If testing + deployment prep can parallelize, log a **diamond** segment with merge gate.
- **Plan schema**: Save `.claude/plans/plan-<timestamp>.json` with objective, data lineage, task IDs, parallel groups, dependency matrix, error handling, and success metrics (freshness %, defect ceiling, SLA adherence); see the sketch after this checklist.
- **Tool hooks**: Reference `docs/gtm-essentials.md` stack—Serena for repo diffs/dbt patches, Context7 for platform docs, Sequential Thinking for review cadences, Playwright for UI validations tied to modeled data.
- **Guardrails**: Default retry limit = 2 for failed tests/deployments; escalation path = Analytics Modeling Lead → Data Engineering Lead → RevOps.
- **Review**: Run `docs/usage-guide.md#orchestration-best-practices-puerto-parity` before execution to confirm agents, dependencies, deliverables.
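To make the plan schema concrete, here is a minimal sketch of the fields a plan entry might carry for this command. The key names and values are illustrative assumptions, not a fixed schema, and the actual artifact is serialized as JSON under `.claude/plans/`.

```yaml
# Illustrative plan sketch for build-model; field names are assumptions, and the
# actual artifact would be serialized as JSON under .claude/plans/.
objective: "Ship a pipeline-velocity mart with daily refresh"
pattern: pipeline                # switch to diamond if testing + deployment prep parallelize
lineage: [raw.crm_opportunities, stg_opportunities, mart_pipeline_velocity]
tasks:
  - {id: spec_alignment, agent: analytics-modeling-lead, depends_on: []}
  - {id: model_blueprint, agent: analytics-modeling-lead, depends_on: [spec_alignment]}
  - {id: testing, agent: analytics-modeling-lead, depends_on: [model_blueprint]}
  - {id: deployment_prep, agent: analytics-modeling-lead, depends_on: [model_blueprint]}
  - {id: docs_handoff, agent: analytics-modeling-lead, depends_on: [testing, deployment_prep]}
parallel_groups:
  - [testing, deployment_prep]   # the diamond segment, merged before docs_handoff
error_handling:
  retry_limit: 2
  escalation: [analytics-modeling-lead, data-engineering-lead, revops]
success_metrics:
  freshness_pct: 99
  defect_ceiling: 2
  sla_adherence_pct: 95
```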
## Workflow
1. **Spec Alignment** review event/tracking plan, KPI definitions, stakeholders.
2. **Model Blueprint** outline staging, intermediate, mart layers, join keys, surrogate IDs (a layered sketch follows this list).
3. **Testing Strategy** define schema, freshness, unique, accepted value, and custom tests.
4. **Deployment Plan** schedule jobs, resource configs, backfill strategy, rollback steps.
5. **Documentation & Handoff** update dbt docs / catalog, change log, owner assignments.
## Outputs
- Modeling spec (diagram, SQL pseudocode, dependencies).
- Test plan + configuration snippets.
- Deployment checklist with monitoring hooks and rollback instructions.
- Plan JSON entry stored/updated in `.claude/plans` for audit trail.
## Agent/Skill Invocations
- `analytics-modeling-lead` architects model + tests.
- `quality-gates` skill ensures validation coverage.
- `instrumentation` skill confirms data contracts stay intact.
## GTM Agents Safeguards
- **Fallback agents**: document substitutes (e.g., BI Publisher covering modeling reviews) when specialists are unavailable.
- **Escalation triggers**: if freshness, defect, or SLA guardrails breach twice within 24h, escalate to Data + RevOps leadership per GTM Agents runbook and consider rollback.
- **Plan maintenance**: update plan JSON whenever dependencies, owners, or deployment cadence changes, keeping audit alignment with GTM Agents standards.
---

commands/define-events.md

@@ -0,0 +1,48 @@
---
name: define-events
description: Produces instrumentation specification with events, properties, owners, and QA steps.
usage: /analytics-pipeline-orchestration:define-events --use_case "pipeline velocity" --sources "product,crm" --tools "Segment,dbt"
---
# Command: define-events
## Inputs
- **use_case** business question or KPI the events support.
- **sources** data sources involved (product, web, CRM, MAP, billing).
- **tools** instrumentation stack (CDP, product analytics, warehouse, dbt).
- **constraints** optional legal/compliance requirements.
- **timeline** optional delivery date.
### GTM Agents Pattern & Plan Checklist
> Mirrors GTM Agents orchestrator blueprint @puerto/plugins/orchestrator/README.md#112-325.
- **Pattern selection**: Instrumentation typically runs **pipeline** (requirements → catalog → governance → QA → change mgmt). If governance + QA can run parallel, log a **diamond** segment with merge gate in the plan header.
- **Plan schema**: Save `.claude/plans/plan-<timestamp>.json` capturing objective, data sources, task IDs, dependencies, context passing (schemas, privacy notes), error handling, and success metrics (coverage %, defect ceiling).
- **Tool hooks**: Reference `docs/gtm-essentials.md` stack (Serena for repo diffs, Context7 for platform/legal docs, Sequential Thinking for review flows, Playwright for front-end event QA when applicable).
- **Guardrails**: Default retry limit = 2 for QA failures; escalation ladder = Analytics Data Strategist → Data Engineering Lead → RevOps.
- **Review**: Run `docs/usage-guide.md#orchestration-best-practices-puerto-parity` before execution to confirm agents, dependencies, deliverables.
## Workflow
1. **Requirement Gathering** clarify KPIs, dimensions, and downstream consumers.
2. **Event Cataloging** list events, payloads, properties, IDs, sampling rate, ownership.
3. **Governance Mapping** document naming conventions, consent handling, retention policies.
4. **QA Plan** outline testing methods (unit tests, replay, observability dashboards).
5. **Change Management** define review/approval steps, rollout plan, and version control.
## Outputs
- Instrumentation spec (event name, description, properties, source, owner, status); a sample entry follows this list.
- Tracking plan (CSV/JSON/YAML) aligned with CDP or analytics tool requirements.
- QA checklist + evidence plan (logs, dashboards, sample payloads).
- Plan JSON entry stored/updated in `.claude/plans` for audit trail.
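For reference, one possible shape for a single tracking-plan entry covering the spec fields above; the event, properties, and owner are hypothetical, and any CDP-specific format would take precedence.

```yaml
# Hypothetical tracking-plan entry; field names and the event itself are illustrative.
event: opportunity_stage_changed
description: "Fired when a CRM opportunity moves to a new pipeline stage"
source: crm
owner: revops
status: proposed                 # e.g. proposed | approved | live | deprecated
properties:
  opportunity_id: {type: string, required: true}
  previous_stage: {type: string, required: true}
  new_stage: {type: string, required: true}
  amount_usd: {type: number, required: false}
consent: not_required            # assumption: payload carries no end-user PII
qa:
  - replay sample payloads in staging
  - compare event volume against CRM stage-change counts
```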
## Agent/Skill Invocations
- `analytics-data-strategist` leads requirements + governance.
- `instrumentation` skill enforces schema + consent rules.
- `quality-gates` skill defines QA expectations.
## GTM Agents Safeguards
- **Fallback agents**: document substitutes (e.g., BI Publisher covering QA) if specialists are unavailable.
- **Escalation triggers**: if instrumentation defects or compliance blockers breach guardrails twice in 48h, escalate to Data + Legal leadership, triggering GTM Agents-style rip-cord.
- **Plan maintenance**: update plan JSON/change log whenever events, schemas, or governance rules change to keep analytics audit-ready.
---

commands/ship-dashboards.md

@@ -0,0 +1,48 @@
---
name: ship-dashboards
description: Creates a dashboard launch plan with visualization specs, enablement steps, and monitoring.
usage: /analytics-pipeline-orchestration:ship-dashboards --use_case "pipeline velocity" --tool looker --audience "exec,sales"
---
# Command: ship-dashboards
## Inputs
- **use_case** dashboard or report purpose.
- **tool** BI platform (Looker, Tableau, Mode, Sigma, PowerBI).
- **audience** comma-separated audiences.
- **access** optional role-based access requirements.
- **alerts** optional alert thresholds or anomaly rules.
### GTM Agents Pattern & Plan Checklist
> Based on GTM Agents orchestrator blueprint @puerto/plugins/orchestrator/README.md#112-325.
- **Pattern selection**: Dashboard launches typically run **pipeline** (requirements → visualization → publishing → enablement → monitoring). If publishing + enablement can proceed in parallel, log a **diamond** segment and merge gate.
- **Plan schema**: Save `.claude/plans/plan-<timestamp>.json` with objective, visualization stages, task IDs, parallel groups, dependency graph (models, permissions), error handling, and success metrics (adoption %, freshness, alert SLA).
- **Tool hooks**: Reference `docs/gtm-essentials.md` stack—Serena for repo/LookML diffs, Context7 for BI platform docs, Sequential Thinking for rollout retros, Playwright for UI QA of embedded dashboards.
- **Guardrails**: Default retry limit = 2 for failed publishes/alerts; escalation path = BI Publisher → Analytics Modeling Lead → RevOps/Exec sponsor.
- **Review**: Use `docs/usage-guide.md#orchestration-best-practices-puerto-parity` before execution to confirm agents, dependencies, deliverables.
## Workflow
1. **Requirement Review** confirm metrics, filters, drill paths, narrative structure.
2. **Visualization Spec** define layout, chart types, color systems, accessibility notes (a sketch follows this list).
3. **Publishing Steps** build dashboards, set permissions, schedule refreshes, configure alerts.
4. **Enablement & Rollout** plan walkthroughs, documentation, office hours, feedback channels.
5. **Monitoring** track usage analytics, data freshness, dashboard performance, enhancement backlog.
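A tool-agnostic sketch of how steps 2-3 might be captured before building in the BI platform; tile names, thresholds, and channels are illustrative assumptions, not prescribed values.

```yaml
# Hypothetical, tool-agnostic dashboard spec; adapt to Looker/Tableau/Mode/Sigma specifics.
dashboard: pipeline_velocity_overview
audience: [exec, sales]
refresh: daily
tiles:
  - name: velocity_trend           # hero KPI, top-left, trend vs target
    chart: line
    metric: avg_days_per_stage
  - name: stage_conversion
    chart: funnel
    metric: stage_conversion_rate
  - name: stuck_opportunities      # drill-down to an opportunity-level table
    chart: table
    metric: opportunities_over_sla
permissions:
  exec: view
  sales: view_and_drill
alerts:
  - metric: data_freshness_hours
    threshold: 24
    channel: "#analytics-alerts"   # hypothetical notification channel
```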
## Outputs
- Dashboard spec (wireframe, chart list, metrics dictionary).
- Enablement kit (Loom/video, doc, FAQ, adoption plan).
- Monitoring + enhancement tracker.
- Plan JSON entry stored/updated in `.claude/plans` for audit trail.
## Agent/Skill Invocations
- `bi-publisher` leads visualization + enablement.
- `visualization-patterns` skill enforces design best practices.
- `quality-gates` skill ensures data + refresh checks.
## GTM Agents Safeguards
- **Fallback agents**: document substitutes (e.g., Analytics Data Strategist covering BI Publisher) when specialists are unavailable.
- **Escalation triggers**: if adoption, freshness, or alert guardrails breach twice in 48h, escalate to Analytics + GTM leadership and trigger rollback plan.
- **Plan maintenance**: update plan JSON/change log whenever visual specs, access rules, or monitoring hooks change to maintain GTM Agents-grade auditability.
---

plugin.lock.json

@@ -0,0 +1,77 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:gtmagents/gtm-agents:plugins/analytics-pipeline-orchestration",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "a799f9b2f04427bef8b015b9e0b234ce432a9b88",
"treeHash": "d211fb38dac070dd4127622d76185796a29aa5b82c365126048cfa568c99d7c8",
"generatedAt": "2025-11-28T10:17:11.660663Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "analytics-pipeline-orchestration",
"description": "Analytics pipeline orchestrator covering instrumentation, modeling, and dashboards",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "8ccc34e61828da5a7cbc0289eae9d5b4788d884c2de1933cd4a6fe5001792c9e"
},
{
"path": "agents/analytics-data-strategist.md",
"sha256": "98b43f317280e4a7dc1243eb358000ad07d66d9901a2713fee16ee271720bb45"
},
{
"path": "agents/analytics-modeling-lead.md",
"sha256": "e191fa56caffece1ceb30e285c7e7f4c5f6169892f49b4972bf1114f2812e4af"
},
{
"path": "agents/bi-publisher.md",
"sha256": "1a7bbcaa80c0f630ce35f3d7ef8be2c0c2cd50405700505c59a4c893f912d6b9"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "e4b54ff95a79116f566af70137c4b7a2125e1d4c3a32bfbaef380f35903f1659"
},
{
"path": "commands/define-events.md",
"sha256": "fdbe2bc9fc841a5e38014c602e449f2cfe5acea96f27748dcfa6a1703d7a5ff3"
},
{
"path": "commands/build-model.md",
"sha256": "8b15b7590c0b339903edd7cecc7a665f2951325ab0167a94601125bc85370d9a"
},
{
"path": "commands/ship-dashboards.md",
"sha256": "95c7ff2fce008ab23cdb4dd7ef44603f63cba3695c1c5f5b1a5425dd93ae9470"
},
{
"path": "skills/visualization-patterns/SKILL.md",
"sha256": "e9f9ca2191dc3dea97d62ad13b165e8c9f740ebc894bc73e34309f473aa115ce"
},
{
"path": "skills/quality-gates/SKILL.md",
"sha256": "a7bc67ddd583059bd5244826b90ce13256cab83d8a23c4be704583a8e4294e6f"
},
{
"path": "skills/instrumentation/SKILL.md",
"sha256": "850fffb956c341ff5034950165b37235da55712a4e8d9ab0de147e338d67549f"
}
],
"dirSha256": "d211fb38dac070dd4127622d76185796a29aa5b82c365126048cfa568c99d7c8"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

skills/instrumentation/SKILL.md

@@ -0,0 +1,30 @@
---
name: instrumentation
description: Use when defining events, fields, and governance for GTM analytics pipelines.
---
# Analytics Instrumentation Standards Skill
## When to Use
- Planning event tracking for product, marketing, or revenue analytics.
- Auditing existing tracking plans before model refreshes.
- Coordinating engineering, product, and RevOps on data contracts.
## Framework
1. **Event Naming & Structure** action-oriented names, consistent casing, required properties (see the sample payload after this list).
2. **Identity Management** user/account IDs, anonymous IDs, device IDs, cross-system mapping.
3. **Consent & Privacy** capture consent status, honor suppression, regional storage rules.
4. **Versioning** change logs, backward compatibility, deprecation timelines.
5. **Observability** sampling dashboards, schema change alerts, volume anomaly detection.
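For illustration, a sample event shaped by the naming, identity, and consent points above; the event name, IDs, and consent fields are assumptions rather than a mandated schema.

```yaml
# Hypothetical event shaped by the naming, identity, and consent conventions above.
event: trial_started               # action-oriented name, snake_case (assumed convention)
user_id: u_123                     # authenticated user ID
anonymous_id: anon_456             # pre-auth browser/device ID for identity stitching
account_id: acct_789               # account-level key for GTM reporting joins
consent_status: granted            # captured at collection time; suppress if revoked
schema_version: 2                  # bumped via change log with a deprecation timeline
properties:
  plan: pro
  source: pricing_page
```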
## Templates
- Tracking plan sheet (event, description, properties, source, owner, status).
- Data contract checklist (fields, types, validation rules, SLA).
- Observability runbook (metrics, thresholds, notification channels).
## Tips
- Pair each event with QA instructions and sample payloads.
- Store tracking plans in version control to align with code releases.
- Review instrumentation quarterly with stakeholders.
---

skills/quality-gates/SKILL.md

@@ -0,0 +1,33 @@
---
name: quality-gates
description: Use when establishing tests, monitoring, and incident response for analytics models.
---
# Analytics Quality Gates Skill
## When to Use
- Designing validation steps for new models or dashboards.
- Setting up automated data quality monitoring.
- Running incident reviews after data breaks.
## Framework
1. **Test Coverage** schema, freshness, unique, referential, accepted values, volume thresholds.
2. **Alerting** severity tiers, alert channels, on-call rotation, escalation policies.
3. **Incident Response** triage checklist, communication templates, resolution targets.
4. **Change Management** approval workflow, rollback plan, audit logging.
5. **Postmortems** root cause analysis, remediation tasks, knowledge base updates.
## Templates
- **QA Checklist**: See `assets/qa_checklist.md` for validation steps.
- **Quality matrix** (model/table → tests → owner → SLA); a sample row follows this list.
- **Incident playbook** (trigger, response steps, communication log).
- **Change request form** with risk assessment.
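A hedged sketch of a single quality-matrix row; the table, tests, and SLA values are placeholders to show the shape, not recommended targets.

```yaml
# Hypothetical quality-matrix row; the table, tests, and SLA values are placeholders.
table: mart_pipeline_velocity
tests:
  - schema_matches_contract
  - freshness_under_24h
  - unique_opportunity_id
  - accepted_stage_values
owner: analytics-modeling-lead
sla:
  freshness_hours: 24
  incident_response_minutes: 60
alerting:
  severity: high
  channel: "#data-quality"         # hypothetical on-call channel
```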
## Tips
- Tie quality gates to CI/CD or dbt Cloud jobs to block bad deploys.
- Keep alert fatigue low by tuning thresholds.
- Document every incident for future prevention.
---

skills/visualization-patterns/SKILL.md

@@ -0,0 +1,30 @@
---
name: visualization-patterns
description: Use when designing dashboards, reports, and narratives for GTM stakeholders.
---
# Analytics Visualization Patterns Skill
## When to Use
- Planning a new dashboard or updating existing visualizations.
- Coaching teams on self-serve analytics adoption.
- Auditing visual design, accessibility, and narrative clarity.
## Framework
1. **Audience & Story** clarify persona, decision cadence, and key questions.
2. **Layout & Hierarchy** organize tiles by funnel (overview → drill-down → diagnostic), highlight KPIs.
3. **Chart Selection** match metric type to chart (trend, composition, comparison, distribution).
4. **Accessibility** color contrast, labels, tooltips, mobile/responsive considerations.
5. **Narrative & Actions** annotate insights, embed CTA buttons or playbook links.
## Templates
- Dashboard wireframe grid with KPI slots.
- Metric dictionary (definition, source, owner, refresh schedule); a sample entry follows this list.
- Adoption checklist (stakeholder review, enablement session, feedback form).
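As an example of the metric dictionary template, one possible entry; the definition, source model, and cadence are placeholders.

```yaml
# Hypothetical metric-dictionary entry; definition and cadence are placeholders.
metric: pipeline_velocity_days
definition: "Average days an opportunity spends per pipeline stage, trailing 90 days"
source: mart_pipeline_velocity
owner: revops
refresh: daily
good_direction: down               # lower is better; drives good/bad variance coloring
```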
## Tips
- Keep hero KPIs at top-left with trend vs target.
- Use consistent color palettes for good/bad variance.
- Embed documentation or Loom walkthroughs directly in BI tool when possible.
---