Research Claim Map Template
Workflow
Copy this checklist and track your progress:
Research Claim Map Progress:
- [ ] Step 1: Define claim precisely
- [ ] Step 2: Gather evidence for and against
- [ ] Step 3: Rate evidence quality
- [ ] Step 4: Assess source credibility
- [ ] Step 5: Identify limitations
- [ ] Step 6: Synthesize conclusion
Step 1: Define claim precisely
Restate as specific, testable assertion with numbers, dates, clear terms. See Claim Reformulation.
Step 2: Gather evidence for and against
Find sources supporting and contradicting claim. See Evidence Categories.
Step 3: Rate evidence quality
Apply evidence hierarchy (primary > secondary > tertiary). See Evidence Quality Rating.
Step 4: Assess source credibility
Evaluate expertise, independence, track record, methodology. See Credibility Assessment.
Step 5: Identify limitations
Document gaps, assumptions, uncertainties. See Limitations Documentation.
Step 6: Synthesize conclusion
Determine confidence level (0-100%) and recommendation. See Confidence Calibration.
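The six steps map naturally onto a small data structure, which can help when filling the template programmatically. A minimal sketch in Python; the class and field names (EvidenceItem, ClaimMap) are illustrative, not part of the template:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    """One row of the Evidence For / Evidence Against tables."""
    source: str          # name or link
    evidence_type: str   # "primary", "secondary", or "tertiary"
    quality: str         # evidence quality rating: "H", "M", or "L" (Step 3)
    credibility: str     # source credibility rating (Step 4)
    summary: str         # what the source says

@dataclass
class ClaimMap:
    """Container mirroring the six steps of the template."""
    original_claim: str
    reformulated_claim: str                                          # Step 1
    evidence_for: list[EvidenceItem] = field(default_factory=list)   # Step 2
    evidence_against: list[EvidenceItem] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)             # Step 5
    assumptions: list[str] = field(default_factory=list)
    confidence: int = 0                                              # Step 6: 0-100%
    recommendation: str = ""
```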
Research Claim Map Template
1. Claim Statement
Original claim: [Quote exact claim as stated]
Reformulated claim (specific, testable): [Restate with precise terms, numbers, dates, scope]
Why this claim matters: [Decision impact, stakes, consequences if true/false]
Key terms defined:
- [Term 1]: [Definition to avoid ambiguity]
2. Evidence For
| Source | Evidence Type | Quality | Credibility | Summary |
|---|---|---|---|---|
| [Source name/link] | [Primary/Secondary/Tertiary] | [H/M/L] | [H/M/L] | [What it says] |
Strongest evidence for:
- [Most compelling evidence with explanation why it's strong]
- [Second strongest]
3. Evidence Against
| Source | Evidence Type | Quality | Credibility | Summary |
|---|---|---|---|---|
| [Source name/link] | [Primary/Secondary/Tertiary] | [H/M/L] | [H/M/L] | [What it says] |
Strongest evidence against:
- [Most compelling counter-evidence with explanation]
- [Second strongest]
4. Source Credibility Analysis
For each major source, evaluate:
Source: [Name/Link]
- Expertise: [H/M/L] - [Why: credentials, domain knowledge]
- Independence: [H/M/L] - [Conflicts of interest, bias, incentives]
- Track Record: [H/M/L] - [Prior accuracy, corrections, reputation]
- Methodology: [H/M/L] - [How they obtained information, transparency]
- Overall credibility: [H/M/L]
Source: [Name/Link]
- Expertise: [H/M/L] - [Why]
- Independence: [H/M/L] - [Why]
- Track Record: [H/M/L] - [Why]
- Methodology: [H/M/L] - [Why]
- Overall credibility: [H/M/L]
5. Limitations and Gaps
What's unknown or uncertain:
- [Gap 1: What evidence is missing]
- [Gap 2: What couldn't be verified]
- [Gap 3: What's ambiguous or unclear]
Assumptions made:
- [Assumption 1: What we're assuming to be true]
- [Assumption 2]
Quality concerns:
- [Concern 1: Weaknesses in evidence or methodology]
- [Concern 2]
Further investigation needed:
- [What additional evidence would increase confidence]
- [What questions remain unanswered]
6. Conclusion
Confidence level: [0-100%]
Confidence reasoning:
- [Why this confidence level based on evidence quality, source credibility, limitations]
Assessment: [Choose one]
- ✓ Claim validated (70-100% confidence) - Evidence strongly supports the claim
- ≈ Claim partially true (40-69% confidence) - Evidence is mixed or weak; the claim requires nuance
- ✗ Claim rejected (0-39% confidence) - Evidence contradicts the claim or support is insufficient
Recommendation: [Action to take based on this assessment - what should be believed, decided, or done]
Key caveats:
- [Important qualification 1]
- [Important qualification 2]
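The three assessment bands above are a simple threshold rule on the confidence level. A sketch (the function name is hypothetical):

```python
def assess(confidence: int) -> str:
    """Map a 0-100% confidence level to the template's three verdicts."""
    if not 0 <= confidence <= 100:
        raise ValueError("confidence must be between 0 and 100")
    if confidence >= 70:
        return "✓ Claim validated"
    if confidence >= 40:
        return "≈ Claim partially true"
    return "✗ Claim rejected"
```

For example, assess(55) returns "≈ Claim partially true".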
Guidance for Each Section
Claim Reformulation Examples
Vague → Specific:
- ❌ "Product X is better" → ✓ "Product X loads pages 50% faster than Product Y on benchmark Z"
- ❌ "Most customers are satisfied" → ✓ "NPS score ≥50 based on survey of ≥1000 customers in Q3 2024"
- ❌ "Studies show it works" → ✓ "≥3 peer-reviewed RCTs show ≥20% improvement vs placebo, p<0.05"
Avoid:
- Subjective terms ("better", "significant", "many")
- Undefined metrics ("performance", "quality", "efficiency")
- Vague time ranges ("recently", "long-term")
- Unclear comparisons ("faster", "cheaper" - than what?)
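These red-flag terms are mechanical enough to lint for before accepting a reformulation. A rough sketch; the word list is illustrative and should be extended for your domain:

```python
import re

# Seed list drawn from the "Avoid" bullets above.
VAGUE_TERMS = [
    "better", "significant", "many", "most",
    "performance", "quality", "efficiency",
    "recently", "long-term",
    "faster", "cheaper",
]

def flag_vague_terms(claim: str) -> list[str]:
    """Return the vague terms found in a claim statement."""
    return [t for t in VAGUE_TERMS
            if re.search(rf"\b{re.escape(t)}\b", claim, re.IGNORECASE)]
```

flag_vague_terms("Product X is better and faster") returns ["better", "faster"], prompting a rewrite like the first example above.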
Evidence Categories
Primary (Strongest):
- Original research data, raw datasets
- Direct measurements, transaction logs
- First-hand testimony from participants
- Legal documents, contracts, financial filings
- Photographs, videos of events (verified authentic)
Secondary (Medium):
- Analysis/synthesis of primary sources
- Peer-reviewed research papers
- News reporting citing primary sources
- Expert analysis with transparent methodology
- Government/institutional reports
Tertiary (Weakest):
- Summaries of secondary sources
- Textbooks, encyclopedias, Wikipedia
- Press releases, marketing content
- Opinion pieces, editorials
- Anecdotal reports
Non-Evidence (Unreliable):
- Social media claims without verification
- Anonymous sources with no corroboration
- Circular citations (A→B→A)
- "Experts say" without named experts
- Cherry-picked quotes out of context
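Because the hierarchy is strictly ordered (primary > secondary > tertiary > non-evidence), it can be encoded as an ordered enum so that items compare and sort directly. A sketch with hypothetical names:

```python
from enum import IntEnum

class EvidenceCategory(IntEnum):
    """Ordered so that stronger categories compare as greater."""
    NON_EVIDENCE = 0
    TERTIARY = 1
    SECONDARY = 2
    PRIMARY = 3

# Sort a mixed batch strongest-first.
batch = [EvidenceCategory.TERTIARY, EvidenceCategory.PRIMARY,
         EvidenceCategory.NON_EVIDENCE]
print(sorted(batch, reverse=True)[0])  # EvidenceCategory.PRIMARY
```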
Evidence Quality Rating
High (H):
- Multiple independent primary sources agree
- Methodology transparent and replicable
- Large sample size, rigorous controls
- Peer-reviewed or independently verified
- Recent and relevant to current context
Medium (M):
- Single primary source or multiple secondary sources
- Some methodology disclosed
- Moderate sample size, some controls
- Some independent verification
- Somewhat dated but still applicable
Low (L):
- Tertiary sources only
- Methodology opaque or questionable
- Small sample, no controls, anecdotal
- No independent verification
- Outdated or context has changed
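The H/M/L rubric reduces to a handful of yes/no questions, so it can be mechanized for consistency across reviewers. One possible encoding; the exact thresholds are an assumption, not something the rubric prescribes:

```python
def rate_quality(independent_primary_sources: int,
                 methodology_transparent: bool,
                 independently_verified: bool,
                 current_context: bool) -> str:
    """Rough H/M/L evidence-quality rating following the rubric above."""
    # High: multiple independent primary sources, transparent methodology,
    # independent verification, still relevant to the current context.
    if (independent_primary_sources >= 2 and methodology_transparent
            and independently_verified and current_context):
        return "H"
    # Medium: a single primary source, or secondary work with some
    # methodology disclosed and some independent verification.
    if independent_primary_sources >= 1 or (methodology_transparent
                                            and independently_verified):
        return "M"
    # Low: everything else (tertiary only, opaque, unverified, outdated).
    return "L"
```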
Source Credibility Scoring
Expertise:
- High: Domain expert, relevant credentials, published research
- Medium: General knowledge, some relevant experience
- Low: No demonstrated expertise, out of domain
Independence:
- High: No financial/personal stake, third-party verification
- Medium: Some potential bias but disclosed
- Low: Direct conflict of interest, undisclosed bias
Track Record:
- High: Consistent accuracy, transparent about corrections
- Medium: Unknown history or mixed record
- Low: History of errors, retractions, misinformation
Methodology:
- High: Transparent process, data/methods shared, replicable
- Medium: Some details provided, partially verifiable
- Low: Black box, unverifiable, cherry-picked data
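One conservative way to fill the "Overall credibility" field is a weakest-link rule: a single failed dimension (say, an undisclosed conflict of interest) undermines the source as a whole. Taking the minimum is an assumption, not something the scoring scheme prescribes:

```python
RANK = {"L": 0, "M": 1, "H": 2}

def overall_credibility(expertise: str, independence: str,
                        track_record: str, methodology: str) -> str:
    """Weakest-link combination of the four credibility dimensions."""
    dims = [expertise, independence, track_record, methodology]
    return min(dims, key=RANK.__getitem__)
```

overall_credibility("H", "L", "H", "M") returns "L": strong expertise does not compensate for a direct conflict of interest.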
Limitations and Gaps
Common gaps:
- Missing primary sources (only secondary summaries available)
- Conflicting evidence without clear resolution
- Outdated information (claim may have changed)
- Incomplete data (partial picture only)
- Methodology unclear (can't assess quality)
- Context missing (claim true but misleading framing)
Document:
- What evidence you expected to find but didn't
- What questions you couldn't answer
- What assumptions you had to make to proceed
- What contradictions remain unresolved
Confidence Level Calibration
90-100% (Near Certain):
- Multiple independent primary sources
- High credibility sources with strong methodology
- No significant contradicting evidence
- Minimal assumptions or gaps
- Example: "Earth orbits the Sun"
70-89% (Confident):
- Strong secondary sources or single primary source
- Credible sources, some methodology disclosed
- Minor contradictions explainable
- Some assumptions but reasonable
- Example: "Vendor has >5,000 customers based on analyst report"
50-69% (Uncertain):
- Mixed evidence quality or conflicting sources
- Moderate credibility, unclear methodology
- Significant gaps or assumptions
- Requires more investigation to be confident
- Example: "Feature will improve retention 10-20%"
30-49% (Skeptical):
- More/stronger evidence against than for
- Low credibility sources or weak evidence
- Major gaps, questionable assumptions
- Claim likely exaggerated or misleading
- Example: "Supplement cures disease based on testimonials"
0-29% (Likely False):
- Strong evidence contradicting claim
- Unreliable sources, no credible support
- Claim contradicts established facts
- Clear misinformation or fabrication
- Example: "Vaccine contains tracking microchips"
Common Patterns
Pattern 1: Vendor Due Diligence
Claim: Vendor claims product capabilities, performance, customer metrics
Approach: Seek independent verification, customer references, trials
Red flags: Only vendor sources, vague metrics, "up to X" ranges, cherry-picked case studies
Pattern 2: News Fact-Check
Claim: Event occurred, statistic cited, quote attributed
Approach: Trace to primary source, check multiple outlets, verify context
Red flags: Single source, anonymous claims, sensational framing, out-of-context quotes
Pattern 3: Research Validity
Claim: Study shows X causes Y, treatment is effective
Approach: Check replication, sample size, methodology, competing explanations
Red flags: Single study, conflicts of interest, p-hacking, correlation claimed as causation
Pattern 4: Competitive Intelligence
Claim: Competitor has capability, market share, strategic direction
Approach: Triangulate public filings, analyst reports, customer feedback
Red flags: Rumor/speculation, outdated info, no primary verification
Quality Checklist
- [ ] Claim restated as specific, testable assertion
- [ ] Evidence gathered for both supporting and contradicting sides
- [ ] Each source rated for evidence quality (Primary/Secondary/Tertiary)
- [ ] Each source assessed for credibility (Expertise, Independence, Track Record, Methodology)
- [ ] Strongest evidence for and against identified
- [ ] Limitations and gaps documented explicitly
- [ ] Assumptions stated clearly
- [ ] Confidence level quantified (0-100%)
- [ ] Recommendation is actionable and evidence-based
- [ ] Caveats and qualifications noted
- [ ] No cherry-picking (actively sought contradicting evidence)
- [ ] Distinction made between "no evidence found" and "evidence against"
- [ ] Sources properly attributed with links/citations
- [ ] Avoided common biases (confirmation, authority, recency, availability)
- [ ] Quality sufficient for decision (if not, flag need for more investigation)