---
name: confer
description: Full panel discussion with multiple synthesized experts exploring a topic from different perspectives
---
# /confer
Convene a multi-persona panel discussion that explores complex topics through genuine dialogue between expert perspectives. The system synthesizes 3-4 relevant experts, orchestrates their conversation autonomously, and delivers comprehensive synthesis capturing agreements, tensions, and emergent insights.
## Usage
```
/confer [topic or question]
```
## What This Command Does
This command creates space for different expert perspectives to genuinely engage with each other rather than simply offering serial opinions. You describe a topic or problem, and the system identifies what kinds of expertise the conversation needs, synthesizes those experts as distinct personas, and orchestrates a multi-turn dialogue where they build on, challenge, and refine each other's thinking.
The dialogue runs autonomously through multiple turns with the system managing speaker sequencing, interaction types, and conversation flow. You trust the process rather than steering each turn. The system escalates to you only at genuine epistemic forks - fundamental disagreements where human judgment determines which direction matters more.
What emerges is not forced consensus but synthesized understanding. Agreements reveal shared ground across different value systems. Tensions surface genuine trade-offs that can't be resolved through more discussion. Emergent insights arise from the interaction itself, producing understanding that no single expert could reach alone.
## How It Works
When you use `/confer [your topic]`, Claude will:
1. **Analyze the topic** - Examine what the question or problem needs from expertise. Different topics benefit from different combinations of technical knowledge, domain experience, and value perspectives. The analysis identifies which 3-4 expert viewpoints would create productive tension.
2. **Synthesize expert personas** - Use `persona-synthesize` to generate distinct experts tailored to this specific topic. Each persona gets enough detail to maintain consistent voice, priorities, and blind spots throughout the conversation. The system may reuse previously saved personas if they're relevant.
3. **Convene the dialogue** - Use `dialogue-convene` to initialize a tracked dialogue session with the synthesized panel. The topic and interaction mode get established based on whether the question calls for exploration, deliberation, or examination.
4. **Orchestrate multi-turn conversation** - Use `dialogue-turn` repeatedly to let personas engage with each other. The system selects which persona speaks next and what interaction type moves the conversation forward - queries for understanding, challenges to test reasoning, provocations to surface uncomfortable truths, inspiration to show possibility, or openings to expand scope.
5. **Monitor dialogue health** - Use `dialogue-assess` periodically to check whether the conversation is productive, stagnant, or ready for synthesis. The system handles this internally without interrupting you unless human judgment is needed at an epistemic fork.
6. **Synthesize insights** - Use `dialogue-synthesize` to extract what the dialogue produced. The synthesis organizes understanding across four dimensions: agreements where perspectives aligned, tensions that remained unresolved, emergent insights that arose from interaction, and open questions revealed through exploration.
7. **Offer to save personas** - If the dialogue revealed particularly valuable expert perspectives, suggest saving them using `persona-define` so they become available for future topics without resynthesizing.
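The steps above amount to an orchestration loop. The sketch below is purely illustrative: the Python function names are hypothetical stand-ins for the `persona-synthesize`, `dialogue-convene`, `dialogue-turn`, `dialogue-assess`, and `dialogue-synthesize` tools, and the stopping rule is a placeholder for the system's actual dialogue-health check.

```python
# Hypothetical sketch of the /confer orchestration loop; not a real API.
from dataclasses import dataclass, field

@dataclass
class Dialogue:
    topic: str
    personas: list
    turns: list = field(default_factory=list)

def synthesize_personas(topic):
    # persona-synthesize: pick 3-4 perspectives that create productive tension
    return ["security-architect", "api-design-expert", "performance-engineer"]

def run_turn(dialogue):
    # dialogue-turn: select the next speaker and an interaction type
    # (query, challenge, provocation, inspiration, or opening)
    speaker = dialogue.personas[len(dialogue.turns) % len(dialogue.personas)]
    dialogue.turns.append(speaker)

def ready_for_synthesis(dialogue):
    # dialogue-assess: placeholder stopping rule; the real check looks at
    # whether the conversation is productive, stagnant, or converging
    return len(dialogue.turns) >= 8

def confer(topic):
    dialogue = Dialogue(topic, synthesize_personas(topic))  # dialogue-convene
    while not ready_for_synthesis(dialogue):
        run_turn(dialogue)
    # dialogue-synthesize would now extract agreements, tensions,
    # emergent insights, and open questions from dialogue.turns
    return dialogue

session = confer("How should we architect authentication?")
```

The key design point the sketch captures: the user invokes `confer` once and the loop manages turn-taking internally, surfacing control only at synthesis (or at an epistemic fork).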
## When to Use This Command
Use `/confer` when you want comprehensive exploration of a complex topic from multiple legitimate perspectives. The question should have enough depth and complexity that different expert viewpoints would naturally emphasize different dimensions, values, or trade-offs.
This is the right tool when you need the friction and cross-pollination that comes from different experts engaging with each other's ideas rather than speaking in isolation. The dialogue format surfaces tensions, builds on partial insights, and produces synthesis that integrates multiple perspectives.
Prefer `/confer` over `/consult` when the topic deserves panel discussion rather than single-expert analysis. Prefer `/confer` over manually spawning personas when you want the system to handle orchestration, turn-taking, and synthesis rather than managing the conversation yourself.
## Key Differences from /consult
- `/consult` delivers focused analysis from one expert; `/confer` produces comprehensive synthesis from panel dialogue
- `/consult` finishes quickly with a single perspective; `/confer` takes longer but explores multiple viewpoints
- `/consult` acknowledges blind spots of one expert; `/confer` uses multiple experts to cover each other's blind spots
- `/consult` for quick expert insight; `/confer` for thorough multi-perspective exploration
## Examples
### Panel discussion on technical architecture
```
User: /confer How should we architect authentication for this new API that handles financial transactions?
Claude: This is a complex technical decision with security, usability, and performance dimensions. Let me convene a panel discussion with experts who will bring different priorities to the conversation.
Analyzing what expertise this needs: someone focused on security requirements for financial data, someone thinking about developer experience for API users, someone concerned with performance and scalability characteristics, and someone who understands regulatory compliance constraints.
Synthesizing four expert personas:
1. **Sarah Chen** [security-architect]: 15 years securing financial systems, values defense-in-depth, blind spot is sometimes overengineering for threats that aren't realistic
2. **Jordan Park** [api-design-expert]: Focused on developer experience and adoption, values simplicity and clear mental models, blind spot is occasionally underweighting security concerns
3. **Alex Rivera** [performance-engineer]: Optimizes systems at scale, values measured performance over theoretical concerns, blind spot is focusing on technical metrics while missing user experience
4. **Morgan Lee** [compliance-officer]: Ensures regulatory adherence for payment systems, values audit trails and documented controls, blind spot is not always considering implementation cost
Convening dialogue in deliberation mode - this is working toward actual architecture decisions, not just exploring possibilities.
**Topic**: What architectural patterns should guide authentication for our financial transaction API?
---
**Turn 1 - Sarah Chen [security-architect] - Open**:
"Before we jump into specific authentication patterns, we need to understand the threat model. Financial transaction APIs are high-value targets. We're dealing with account access, transaction authorization, and data exposure. The authentication mechanism isn't just about verifying identity - it's the foundation for everything downstream. What regulatory requirements are we under? PCI DSS? SOC 2? Those will constrain our choices significantly."
**Turn 2 - Morgan Lee [compliance-officer] - Query**:
"Sarah's right to start with compliance constraints. If this API processes payment card data, we're in PCI DSS territory. That means specific requirements for encryption in transit and at rest, key management, access logging, and session handling. Jordan, what's your vision for how developers will integrate this API? Are we targeting enterprise systems with robust security teams, or smaller developers who might not have security expertise?"
**Turn 3 - Jordan Park [api-design-expert] - Open**:
"That's the critical question Morgan just raised. I've seen financial APIs fail adoption because they made security so complex that only large teams could integrate them. OAuth 2.0 is secure but intimidating for developers who just want to process a payment. API keys are simple but weak. The best auth pattern for us is the one that developers can implement correctly without becoming security experts themselves. The security architecture should make the secure path the easy path."
**Turn 4 - Sarah Chen [security-architect] - Challenge**:
"Jordan, I appreciate the usability concern, but 'simple for developers' can't trump 'secure for financial data.' API keys alone are insufficient - when a key leaks, and it will leak, there's no granular control over what that compromised key can do. I've watched companies get breached because they chose developer convenience over defense-in-depth. We need something with both authentication and authorization built in, even if that's more complex upfront."
**Turn 5 - Alex Rivera [performance-engineer] - Query**:
"Sarah, what's the latency cost of the secure patterns you're thinking about? OAuth flows add round trips. If we're processing payment transactions, every millisecond matters for user experience. I've profiled APIs where authentication overhead was 30% of total request time. Can we quantify the performance trade-off between different patterns so we're making informed choices rather than assuming security must be slow?"
**Turn 6 - Morgan Lee [compliance-officer] - Challenge**:
"Alex, I'm concerned you're optimizing for the wrong metric. PCI DSS doesn't care about your P95 latency - it cares about whether you can demonstrate who accessed what data and when. We need comprehensive audit logging, secure session management, and documented access controls. If the auth pattern doesn't naturally support those requirements, we'll be retrofitting compliance, which is always messier and more expensive than building it in from the start."
**Turn 7 - Jordan Park [api-design-expert] - Inspire**:
"What if we're framing this wrong? We keep treating security and usability as opposing forces. The best APIs I've seen make them aligned - the secure way is the documented, supported, well-explained way. Maybe the question isn't which auth pattern to choose but how we make whatever pattern we choose both comprehensible for developers and compliant with Sarah and Morgan's requirements. Can we design for that outcome first?"
**Turn 8 - Sarah Chen [security-architect] - Open**:
"Jordan's reframe is valuable. Let's consider OAuth 2.0 with client credentials flow for server-to-server and authorization code flow for user-facing contexts. It's well-understood, has strong library support in every language, gives us the authorization granularity we need through scopes, and Morgan can audit the token grants. The complexity Jordan worries about is mostly in the initial setup - once configured, it's actually straightforward to use. We'd need excellent documentation and example code."
**Turn 9 - Alex Rivera [performance-engineer] - Query**:
"Sarah's OAuth proposal makes sense from security, but I want to understand the performance characteristics. Token validation can be fast if we do it right - JWT with signature verification is microseconds. But if we're hitting a database or auth service on every request, that's latency we can't ignore. Can we cache validation results? What's the token refresh flow cost? I need to model this before committing."
**Turn 10 - Morgan Lee [compliance-officer] - Challenge**:
"Alex, your caching concern actually surfaces a compliance issue. If we cache token validation too aggressively, we can't revoke access quickly when an account is compromised. PCI DSS requires timely access termination. There's a trade-off between performance caching and security responsiveness. We need to define acceptable staleness for cached auth decisions - probably measured in seconds, not minutes."
**Turn 11 - Jordan Park [api-design-expert] - Provoke**:
"I'm hearing us converge on OAuth 2.0, which I support, but let me push on something. We haven't discussed what happens when things go wrong. Auth systems fail. Tokens expire. Networks have issues. The developer experience of our auth system includes error handling, retry logic, and debugging. If we choose OAuth but make it impossible to understand why an auth request failed, we'll have excellent security that nobody can actually operate. How do we make failures comprehensible?"
**Turn 12 - Sarah Chen [security-architect] - Inspire**:
"Jordan's point about failure modes is exactly right, and it connects to compliance too. Good error messages that help developers debug without leaking security-sensitive information. Clear documentation about token lifetime and refresh flows. Sandbox environment where developers can test auth without risk. The security architecture should include operational excellence, not just cryptographic correctness. I think we're converging on something robust here - OAuth 2.0 with excellent developer documentation, performance-optimized token validation, comprehensive audit logging, and thoughtful error handling."
---
The dialogue has reached natural completion after 12 turns. Synthesizing insights:
**Agreements**:
- Financial data requires strong authentication and authorization, not just identity verification
- OAuth 2.0 with appropriate flows serves both security and compliance requirements
- Developer experience matters for correct implementation - security that's too complex becomes security that's implemented incorrectly
- The auth system needs comprehensive audit logging for compliance
- Token-based patterns can be performant if implementation is optimized
- Error handling and debugging support are part of security architecture, not separate concerns
**Tensions**:
- Performance optimization vs. security responsiveness (Alex's caching concern vs. Morgan's revocation requirement) - this requires defining acceptable staleness thresholds, which is a policy decision more than a technical one
- Developer simplicity vs. security depth (Jordan's usability focus vs. Sarah's defense-in-depth) - this tension drove toward making the secure path well-documented rather than trying to make it simpler than it can be
- Upfront complexity vs. ongoing security (OAuth setup complexity vs. API key simplicity with long-term security debt) - the group implicitly chose upfront investment over technical debt, but this wasn't explicitly decided
**Emergent Insights**:
- Security and usability aren't opposing forces if you invest in making the secure path the well-supported path (Jordan's reframe, adopted by Sarah)
- Authentication architecture includes operational concerns like error handling, debugging, and documentation, not just cryptographic mechanisms (emerged from turns 11-12)
- Performance requirements and compliance requirements can align through thoughtful caching policies rather than forcing a choice (synthesis of Alex and Morgan's concerns)
- The real design question is "how do we make this secure, performant, AND understandable" rather than choosing one at the expense of others
**Open Questions**:
- What specific token lifetime and refresh policies balance performance caching with security revocation needs? (Morgan and Alex's tension needs policy decision)
- What does excellent OAuth documentation and sample code look like for our developer audience? (Jordan's requirement that wasn't explored in depth)
- Should we support multiple auth patterns for different integration contexts, or standardize on OAuth across all use cases? (briefly raised but not examined)
- What sandbox environment and testing tools do developers need to validate their integration? (Sarah mentioned but not detailed)
Would you like me to save any of these expert personas for future architectural discussions? The Security Architect and API Design Expert perspectives seem particularly valuable.
```
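The caching-versus-revocation trade-off Alex and Morgan debate above can be made concrete: cache token-validation results for performance, but cap cache age in seconds so revocation still takes effect quickly. This is a minimal illustrative sketch, not output of the command; `introspect` is a hypothetical stand-in for whatever signature check or auth-server call actually validates the token.

```python
# Sketch of bounded-staleness token validation (names are hypothetical).
import time

CACHE_TTL_SECONDS = 10  # acceptable staleness window - a policy decision,
                        # as the panel notes: seconds, not minutes

_cache = {}  # token -> (validated_at, claims)

def validate_token(token, now=None, introspect=None):
    """Return claims for a valid token, consulting the auth server at most
    once per TTL window per token."""
    now = time.time() if now is None else now
    cached = _cache.get(token)
    if cached and now - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]  # fresh enough: skip the round trip
    claims = introspect(token)  # verify signature / check revocation upstream
    _cache[token] = (now, claims)
    return claims
```

Lowering `CACHE_TTL_SECONDS` tightens Morgan's revocation window at the cost of more of the round trips Alex wants to avoid; the number itself is the policy decision flagged in the open questions.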
### Panel discussion on product strategy
```
User: /confer We have users requesting more customization options but our engineering team is concerned about complexity. How should we think about this?
Claude: This is a product strategy question with technical, user, and business dimensions. Let me convene a panel to explore this from multiple angles.
Synthesizing three expert personas:
1. **Casey Martinez** [product-strategist]: Background in product decisions under resource constraints, values user outcomes over feature counts, blind spot is sometimes underweighting technical maintenance costs
2. **Riley Thompson** [ux-researcher]: Focuses on understanding actual user behavior vs. stated preferences, values evidence from observation, blind spot is not always considering business constraints
3. **Drew Kim** [engineering-lead]: Manages technical systems and team capacity, values sustainable code and clear abstractions, blind spot is occasionally focusing on technical elegance while missing user impact
Convening dialogue in examination mode - we're questioning the assumption that user requests for customization should drive the roadmap.
**Topic**: Should we add more customization options, and what would make that the right decision?
---
**Turn 1 - Riley Thompson [ux-researcher] - Open**:
"Before we debate whether to add customization, I want to understand what users are actually requesting. When users ask for 'more options,' that's often a symptom of something else. They might be asking for customization because our default behavior doesn't match their workflow. Let's look at the actual requests - what specific things do they want to customize, and what problem are they trying to solve?"
**Turn 2 - Casey Martinez [product-strategist] - Query**:
"Riley's question is exactly right. I've seen teams add configuration to make everyone happy, then discover the problem was that no one was happy with the core product. Drew, what's the engineering concern about complexity? Are we talking about testing burden, maintenance cost, or something else? Understanding the specific cost helps us evaluate whether customization is worth it."
**Turn 3 - Drew Kim [engineering-lead] - Open**:
"The complexity concern is multiplicative, not additive. Each configuration option creates code paths. Each combination of options creates potential bugs and test cases. Our current codebase has maybe 4-5 significant configuration points, and we're already seeing edge cases where certain combinations behave unexpectedly. More customization means exponentially more testing and debugging. It also makes it harder to evolve the product - every change has to work across all configuration combinations."
**Turn 4 - Riley Thompson [ux-researcher] - Challenge**:
"Drew's technical concern is real, but I want to challenge something. We're assuming user requests for customization mean we should add more options. What if it means our default is wrong? I did a study on a similar product where users kept requesting configuration, and when we observed actual usage, we discovered they all wanted the same thing - just not what we'd built. The customization requests were asking us to fix the product, not make it more flexible."
**Turn 5 - Casey Martinez [product-strategist] - Inspire**:
"Riley's observation points to a different strategic approach. Instead of asking 'how do we add customization,' we could ask 'what should the default behavior be?' The best products I've shipped had opinionated defaults that served the common case brilliantly, not infinite flexibility. Apple's approach to product design - fewer choices, better defaults. Can we identify what the right default behavior is rather than adding configuration to avoid deciding?"
**Turn 6 - Drew Kim [engineering-lead] - Provoke**:
"Let me push on Casey's point. Configuration is often a design failure that we're trying to systematize. When we can't decide what the right behavior is, we add a setting and make the user decide. But users don't want to be product managers - they want our tool to work for their use case. Every configuration option is us exporting our design problem to users. Maybe the real question is whether we understand our users well enough to make good decisions for them."
**Turn 7 - Riley Thompson [ux-researcher] - Query**:
"Drew's provocation raises the key question: do we actually know what users do with our product? User requests tell us what they think they want. Usage data tells us what they actually do. We should analyze behavioral data to find the dominant workflows, then optimize defaults for those patterns. Casey, if we went that direction - understanding usage and improving defaults - what would that mean for roadmap and resources compared to adding customization?"
**Turn 8 - Casey Martinez [product-strategist] - Inspire**:
"Riley's approach could actually be faster and higher impact than configuration work. Understanding core use cases and optimizing for them is focused work with clear success metrics. Adding customization is open-ended - there's always another option users might want. Strategically, I'd rather ship one excellent default behavior this quarter than ship 5 configuration options that each serve edge cases. The ROI is in serving the common case extremely well."
---
Dialogue reached synthesis after 8 turns:
**Agreements**:
- User requests for customization may indicate problems with default behavior rather than need for flexibility
- Configuration creates multiplicative complexity in testing, maintenance, and future development
- Understanding actual user behavior through observation matters more than surveying stated preferences
- Product strategy should prioritize serving common cases excellently over accommodating every edge case
**Tensions**:
- The dialogue didn't surface major unresolved tensions - the perspectives converged relatively quickly toward examining defaults rather than adding options. This might indicate the question was framed in a way that limited disagreement, or that the personas shared similar values about product design.
**Emergent Insights**:
- Configuration is often "exporting our design failure to users" rather than providing valuable flexibility (Drew's provocation, adopted by the group)
- User requests should be understood as problem statements, not feature specifications (Riley's reframe)
- The strategic choice isn't "add customization" vs. "refuse user requests" but rather "understand what users need and make better default decisions" (synthesis across all three perspectives)
- Behavioral data about actual usage reveals different insights than feature requests (Riley's research perspective influencing the group)
**Open Questions**:
- What does our behavioral data show about dominant user workflows? (needs actual analysis, not just discussion)
- Are there genuinely different user segments who need different products, or do they all need the same thing done better? (strategic segmentation question)
- If we improve defaults based on usage data, how do we validate that it serves users better than adding options would? (measurement and validation)
- What about the users who genuinely do need different behavior - are they a separate segment that needs a different product? (market strategy question)
This dialogue converged toward a strong recommendation to investigate user behavior and improve defaults rather than add configuration. The panel effectively challenged the premise of the original question.
Should I save these personas? The UX Researcher and Engineering Lead perspectives seem particularly valuable for future product discussions.
```
### Panel discussion revealing epistemic fork
```
User: /confer Should we migrate to microservices or keep our monolithic architecture?
Claude: This is an architectural decision with significant long-term implications. Let me convene a panel with different technical perspectives.
Synthesizing four experts:
1. **Sam Chen** [systems-architect]: Designs systems for scale, values modularity and clear boundaries, blind spot is sometimes seeing distributed systems as default good
2. **Taylor Brooks** [operations-engineer]: Runs production systems, values reliability and operational simplicity, blind spot is conservative about change due to focus on stability
3. **Jordan Reed** [velocity-advocate]: Focuses on team productivity and development speed, values fast iteration, blind spot is underweighting operational costs
4. **Morgan Ellis** [platform-engineer]: Builds infrastructure and tooling, values robust foundations, blind spot is sometimes building infrastructure before it's needed
---
*After 15 turns of dialogue exploring team size, current pain points, deployment complexity, operational costs, and architectural patterns:*
**Turn 14 - Sam Chen [systems-architect] - Challenge**:
"Taylor, I hear your operational concerns, but you're essentially arguing we should stay monolithic because microservices are hard. That's true, but our monolith is becoming hard too - it's 200K lines now, deploys take 45 minutes, and different teams are blocked on each other's changes. The pain of staying put is growing. At some point, the operational cost of managing microservices is less than the development cost of the monolith bottleneck."
**Turn 15 - Taylor Brooks [operations-engineer] - Challenge**:
"Sam, you're assuming microservices solve the coordination problem, but they just move it. Instead of teams being blocked by shared code, they'll be blocked by API contracts, version compatibility, and deployment sequencing. I've operated both kinds of systems. Microservices trade development coupling for operational complexity. Whether that's a good trade depends on your team's strengths, and I'm not convinced we have the operational maturity to handle distributed debugging and service mesh management."
---
**Epistemic Fork Detected**
Claude: The dialogue has reached a fundamental disagreement that requires your judgment. Sam and Taylor both have valid points based on different priority systems:
**Sam's position**: The current monolithic architecture is creating development bottlenecks as the system and team grow. The pain is real and increasing. Microservices would reduce coupling between teams and allow faster independent progress, even though it introduces operational complexity.
**Taylor's position**: Microservices introduce distributed systems complexity that we may not have the operational maturity to handle well. The current pain is known and manageable. Microservices could make things worse by trading problems we know how to solve for problems we don't.
This is not a factual disagreement to be resolved through more discussion - it's a values question about which risks you're more willing to accept:
- Risk of continued development bottlenecks and slower feature velocity (stay monolithic)
- Risk of operational complexity and distributed systems challenges (move to microservices)
**What matters more for your context**: team development velocity and independent progress, or operational stability and system reliability?
Your answer will guide how the panel synthesizes their recommendation.
User: Development velocity matters more - we're in a growth phase and need teams to move independently.
Claude: Thank you, that priority clarifies the decision. Continuing the dialogue with that context...
**Turn 16 - Sam Chen [systems-architect] - Inspire**:
"Given that development velocity is the priority, I'd recommend starting with a strangler fig pattern rather than a big rewrite. Extract one service at a time, choosing services with clear boundaries and minimal dependencies first. This lets us learn operational patterns incrementally while immediately reducing monolith coupling. Taylor's operational concerns are real - we address them by building capability gradually rather than jumping to full microservices overnight."
**Turn 17 - Morgan Ellis [platform-engineer] - Open**:
"Sam's incremental approach means we need to invest in platform tooling now - service templates, observability, deployment automation. If we're going to run multiple services, we can't have each team figuring out logging and metrics separately. I can build that foundation while the first service is being extracted. It addresses Taylor's maturity concern by deliberately building operational capability before we need it at scale."
**Turn 18 - Taylor Brooks [operations-engineer] - Inspire**:
"I can support Sam's strangler fig approach if we commit to Morgan's platform investment. My concern was diving into microservices without operational readiness. If we extract services slowly and build robust tooling as we go, we de-risk the transition. First service teaches us what we need to learn. Let's define operational readiness criteria - when are we confident to extract the next service?"
---
Dialogue synthesized after addressing the epistemic fork:
**Agreements**:
- Current monolithic architecture creates development bottlenecks that matter for growth phase (Sam and Jordan's perspective, validated by user priority)
- Operational complexity of microservices is real and must be addressed deliberately (Taylor's concern, acknowledged by all)
- Incremental migration is lower risk than big-bang rewrite (consensus after turn 16)
- Platform tooling investment is prerequisite for successful microservices operation (Morgan's platform perspective, adopted by group)
**Tensions** (resolved through user input):
- Development velocity vs. operational stability was the core tension. User's prioritization of velocity resolved this in favor of beginning migration, but with operational safeguards through incremental approach and tooling investment.
**Emergent Insights**:
- The real choice isn't "microservices vs. monolith" but "how to reduce coupling while building operational capability" (reframe that emerged from integrating Sam's architectural goals with Taylor's operational concerns)
- Strangler fig pattern allows learning operational requirements incrementally rather than guessing them upfront (synthesis of Sam and Taylor's positions)
- Platform tooling should be built during migration, not before or after (Morgan's timing insight)
- Define operational readiness criteria per service rather than trying to be "ready for microservices" in abstract (Taylor's practical metric)
**Open Questions**:
- Which service should be extracted first based on boundaries and dependencies? (tactical decision needs codebase analysis)
- What specific platform capabilities do we need before extracting the first service? (Morgan's infrastructure work, needs scoping)
- How do we measure whether microservices are actually improving development velocity? (success metrics)
- What's our rollback plan if a service extraction goes poorly? (risk management)
**Recommendation**: Begin incremental migration using strangler fig pattern. Extract one service with clear boundaries while Morgan builds platform tooling. Define operational readiness criteria before each subsequent extraction. This addresses development velocity goals while managing operational risk through gradual capability building.
I've saved these architectural personas - they represent valuable perspectives for future technical decisions.
```
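The strangler fig pattern the panel recommends reduces to a routing decision: a thin layer in front of the monolith sends extracted paths to their new services and everything else to the monolith, so services move out one at a time. The sketch below is illustrative only; the path prefix and service name are hypothetical.

```python
# Minimal strangler fig routing sketch (prefixes and backends are made up).
EXTRACTED = {
    "/notifications": "notifications-service",  # first service pulled out
}

def route(path):
    """Return which backend should handle a request path."""
    for prefix, backend in EXTRACTED.items():
        if path.startswith(prefix):
            return backend
    return "monolith"  # everything not yet extracted stays put
```

Each subsequent extraction is one new entry in `EXTRACTED`, which is what makes the migration incremental and reversible: removing the entry routes traffic back to the monolith if an extraction goes poorly.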