# Scientific Method Core Principles

## Fundamental Principles

### 1. Empiricism

- Knowledge derives from observable, measurable evidence
- Claims must be testable through observation or experiment
- Subjective experience alone is insufficient for scientific conclusions

### 2. Falsifiability (Popper's Criterion)

- A hypothesis must be capable of being proven false
- Unfalsifiable claims are not scientific (e.g., "invisible, undetectable forces")
- Good hypotheses make specific, testable predictions (see the sketch below)
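A falsifiable hypothesis can be written down as a concrete prediction with a pre-specified decision rule. A minimal sketch in Python, assuming hypothetical data and a made-up predicted mean of 5.0; `scipy.stats.ttest_1samp` performs the test:

```python
import numpy as np
from scipy import stats

# Hypothetical prediction, stated before seeing any data:
# "The treated group's mean response is 5.0."
predicted_mean = 5.0
alpha = 0.05  # decision rule fixed in advance

rng = np.random.default_rng(42)
observations = rng.normal(loc=5.4, scale=1.0, size=30)  # stand-in data

# One-sample t-test: can the data falsify the prediction?
result = stats.ttest_1samp(observations, popmean=predicted_mean)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
if result.pvalue < alpha:
    print("Prediction falsified at the pre-specified threshold.")
else:
    print("Data are consistent with the prediction (not proof it is true).")
```

Note the asymmetry: the test can refute the prediction, but a non-significant result only means the data failed to falsify it.
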
### 3. Reproducibility

- Results must be replicable by independent researchers
- Methods must be described with sufficient detail for replication (for computational work, see the sketch below)
- Single studies are rarely definitive; replication strengthens confidence
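For computational work, replication starts with removing hidden randomness and recording the environment. A minimal sketch, assuming a numpy-based analysis; the seed value is arbitrary:

```python
import sys
import numpy as np

SEED = 20240101  # fixed, reported seed: anyone re-running gets identical numbers
rng = np.random.default_rng(SEED)

sample = rng.normal(loc=0.0, scale=1.0, size=100)
print(f"mean = {sample.mean():.6f}")  # identical on every run with this seed

# Report software versions alongside the results
print(f"python {sys.version.split()[0]}, numpy {np.__version__}")
```
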
### 4. Parsimony (Occam's Razor)

- Prefer simpler explanations over complex ones when both fit the data
- Don't multiply entities unnecessarily
- Extraordinary claims require extraordinary evidence

### 5. Systematic Observation

- Use standardized, rigorous methods
- Control for confounding variables
- Minimize observer bias through blinding and protocols (see the sketch below)
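One concrete blinding tactic is to assign participants to coded groups so raters never see condition labels. A minimal sketch with hypothetical participant IDs; the two-arm design and the "A"/"B" code names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
participants = [f"P{i:03d}" for i in range(1, 21)]  # hypothetical IDs

# Randomize assignment, then hide the condition behind opaque codes
shuffled = rng.permutation(participants)
assignment = {pid: "A" for pid in shuffled[:10]} | {pid: "B" for pid in shuffled[10:]}

# Raters receive only (participant, code); the key mapping A/B to
# treatment/control is held by a third party until analysis is locked.
for pid in participants[:3]:
    print(pid, assignment[pid])
```
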
## The Scientific Process

### 1. Question Formation

- Identify a specific, answerable question
- Ensure the question is within the scope of scientific inquiry
- Consider whether current methods can address the question

### 2. Literature Review

- Survey existing knowledge
- Identify gaps and contradictions
- Build on previous work rather than reinventing it

### 3. Hypothesis Development

- State a clear, testable prediction
- Define variables operationally
- Specify the expected relationship between variables

### 4. Experimental Design

- Choose appropriate methodology
- Identify independent and dependent variables
- Control confounding variables
- Select an appropriate sample size and population (see the power-analysis sketch after this list)
- Plan statistical analyses in advance
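Sample size should be justified before data collection, typically via a power analysis. A minimal sketch using statsmodels; the effect size of 0.5 (Cohen's d) is a hypothetical value you would take from pilot data or prior literature:

```python
from statsmodels.stats.power import TTestIndPower

# How many participants per group to detect a medium effect (d = 0.5)
# with 80% power at alpha = 0.05, two-sided, two independent groups?
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"required n per group: {n_per_group:.1f}")  # ~63.8, so round up to 64
```
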
### 5. Data Collection

- Follow protocols consistently
- Record all observations, including unexpected results
- Maintain detailed lab notebooks or data logs (see the sketch below)
- Use validated measurement instruments
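A data log is most useful when every observation is timestamped and appended rather than edited in place. A minimal sketch writing to a hypothetical observations.csv; the field names are assumptions:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("observations.csv")  # hypothetical log file

def record(subject_id: str, measurement: float, note: str = "") -> None:
    """Append one timestamped observation; never overwrite earlier rows."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp_utc", "subject_id", "measurement", "note"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         subject_id, measurement, note])

record("P001", 5.42)
record("P002", 4.97, note="instrument recalibrated beforehand")
```
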
### 6. Analysis

- Apply appropriate statistical methods
- Test the assumptions of statistical tests
- Consider effect size, not just significance (see the sketch below)
- Look for alternative explanations
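A p-value says whether an effect is detectable, not whether it is large. A minimal sketch that reports Cohen's d alongside the t-test, on made-up data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(5.3, 1.0, size=200)  # stand-in data
group_b = rng.normal(5.0, 1.0, size=200)

t_result = stats.ttest_ind(group_a, group_b)

# Cohen's d using the pooled standard deviation
n1, n2 = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                     (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
d = (group_a.mean() - group_b.mean()) / pooled_sd

print(f"p = {t_result.pvalue:.4f}, Cohen's d = {d:.2f}")
# With large samples, p can be tiny even when d is trivially small.
```
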
### 7. Interpretation

- Distinguish between correlation and causation (see the sketch below)
- Acknowledge limitations
- Consider alternative interpretations
- Avoid overgeneralizing beyond the data
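Confounding is the classic way correlation arises without causation. A minimal simulation in which a lurking variable z drives both x and y, so x and y correlate strongly even though neither causes the other:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

z = rng.normal(size=n)                  # confounder (e.g., temperature)
x = z + rng.normal(scale=0.5, size=n)   # x depends on z, not on y
y = z + rng.normal(scale=0.5, size=n)   # y depends on z, not on x

r = np.corrcoef(x, y)[0, 1]
print(f"corr(x, y) = {r:.2f}")  # ~0.8 despite no causal link between x and y
```
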
### 8. Communication

- Report methods transparently
- Include negative results
- Acknowledge conflicts of interest
- Make data and code available when possible

## Critical Evaluation Criteria

### When Reviewing Scientific Work, Ask:

**Validity Questions:**

- Does the study measure what it claims to measure?
- Are the methods appropriate for the research question?
- Were controls adequate?
- Could confounding variables explain the results?

**Reliability Questions:**

- Are measurements consistent?
- Would the study produce similar results if repeated?
- Are inter-rater reliability and measurement precision reported?

**Generalizability Questions:**

- Is the sample representative of the target population?
- Are the conditions realistic or artificial?
- Do the results apply beyond the specific context?

**Statistical Questions:**

- Is the sample size adequate for the analysis?
- Are the statistical tests appropriate?
- Are effect sizes reported alongside p-values?
- Were multiple comparisons corrected? (see the sketch below)
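Running many tests without correction inflates the false-positive rate. A minimal sketch of two standard corrections using statsmodels; the p-values are made up:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from ten separate comparisons
pvals = [0.001, 0.008, 0.020, 0.035, 0.041, 0.060, 0.120, 0.300, 0.450, 0.800]

# Bonferroni controls the family-wise error rate; fdr_bh controls the
# false discovery rate (Benjamini-Hochberg) and is less conservative.
for method in ("bonferroni", "fdr_bh"):
    reject, corrected, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, "->", reject.sum(), "of", len(pvals), "still significant")
```
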
**Logical Questions:**

- Do the conclusions follow from the data?
- Are alternative explanations considered?
- Are causal claims supported by the study design?
- Are limitations acknowledged?

## Red Flags in Scientific Claims

1. **Cherry-picking data** - Highlighting only supporting evidence
2. **Moving goalposts** - Changing predictions after seeing results
3. **Ad hoc hypotheses** - Adding explanations to rescue a failed prediction
4. **Appeal to authority** - "Expert X says" without evidence
5. **Anecdotal evidence** - Relying on personal stories over systematic data
6. **Assuming correlation implies causation** - Confusing association with causality
7. **Post hoc rationalization** - Explaining results after the fact without prediction
8. **Ignoring base rates** - Not considering prior probability (see the Bayes sketch after this list)
9. **Confirmation bias** - Seeking only evidence that supports existing beliefs
10. **Publication bias** - Positive results are published more often, skewing the literature
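Base-rate neglect is easiest to see with Bayes' theorem and concrete numbers. A worked sketch with hypothetical figures: a condition with 1% prevalence and a test with 99% sensitivity and 95% specificity:

```python
# P(condition | positive test) via Bayes' theorem
prevalence = 0.01   # base rate: 1% of the population has the condition
sensitivity = 0.99  # P(positive | condition)
specificity = 0.95  # P(negative | no condition)

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_cond_given_pos = sensitivity * prevalence / p_pos

print(f"P(condition | positive) = {p_cond_given_pos:.3f}")  # ~0.167
# Despite a 99%-sensitive test, most positives are false, because the low
# base rate means false positives from the healthy majority dominate.
```
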
## Standards for Causal Inference

### Bradford Hill Criteria (adapted)

1. **Strength** - Strong associations are more likely causal
2. **Consistency** - Repeated observations by different researchers
3. **Specificity** - Specific outcomes from specific causes
4. **Temporality** - Cause precedes effect (essential)
5. **Biological gradient** - Dose-response relationship
6. **Plausibility** - Coherent with existing knowledge
7. **Coherence** - Consistent with other evidence
8. **Experiment** - Experimental evidence supports causation
9. **Analogy** - Similar cause-effect relationships exist

### Establishing Causation Requires:

- Temporal precedence (cause before effect)
- Covariation (cause and effect correlate)
- Elimination of alternative explanations
- Ideally, experimental manipulation showing that the cause produces the effect

## Peer Review and Scientific Consensus

### Understanding Peer Review

- Filters obvious errors but isn't perfect
- Reviewers can miss problems or have biases
- Published ≠ proven; it means "passed initial scrutiny"
- Retraction mechanisms exist for flawed papers

### Scientific Consensus

- Emerges from convergence of multiple independent lines of evidence
- Consensus can change with new evidence
- Individual studies rarely overturn consensus
- Consider the weight of evidence, not individual papers

## Open Science Principles

### Transparency Practices

- Preregistration of hypotheses and methods
- Open data sharing
- Open-source code
- Preprints for rapid dissemination
- Registered reports (peer review before data collection)

### Why Transparency Matters

- Reduces publication bias
- Enables verification
- Prevents p-hacking and HARKing (Hypothesizing After Results are Known); the simulation below shows why uncorrected exploratory testing misleads
- Accelerates scientific progress
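Why preregistration helps, in one simulation: if an analyst tries 20 independent tests on pure noise and reports whichever comes out "significant", some test crosses p < 0.05 about 64% of the time (1 - 0.95^20). A minimal sketch, assuming independent two-group t-tests on null data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_runs, n_tests = 2_000, 20
hits = 0

for _ in range(n_runs):
    # 20 comparisons where the true effect is exactly zero
    pvals = [stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
             for _ in range(n_tests)]
    if min(pvals) < 0.05:  # "find" at least one significant result
        hits += 1

print(f"runs with a false positive: {hits / n_runs:.2f}")  # ~0.64
# Preregistering the single planned test, or correcting for all 20,
# removes this inflation.
```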