skills/peer-review/references/common_issues.md

# Common Methodological and Statistical Issues in Scientific Manuscripts

This document catalogs frequent issues encountered during peer review, organized by category. Use it as a reference to identify potential problems and to provide constructive feedback.

## Statistical Issues

### 1. P-Value Misuse and Misinterpretation

**Common Problems:**
- P-hacking (selective reporting of significant results)
- Multiple testing without correction (familywise error rate inflation)
- Interpreting non-significance as proof of no effect
- Focusing exclusively on p-values without effect sizes
- Dichotomizing continuous p-values at arbitrary thresholds (p=0.049 vs p=0.051)
- Confusing statistical significance with biological/clinical significance

**How to Identify:**
- Suspiciously high proportion of p-values just below 0.05
- Many tests performed but no correction mentioned
- Statements like "no difference was found" from non-significant results
- No effect sizes or confidence intervals reported
- Language suggesting p-values indicate strength of effect

**What to Recommend:**
- Report effect sizes with confidence intervals
- Apply appropriate multiple testing corrections (Bonferroni, FDR, Holm-Bonferroni)
- Interpret non-significance cautiously (absence of evidence is not evidence of absence)
- Pre-register analyses to avoid p-hacking
- Consider equivalence testing for "no difference" claims

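
The step-down and step-up corrections recommended above fit in a few lines. A stdlib-only sketch for illustration (the helper names are ours; in a real analysis, prefer a vetted routine such as `statsmodels.stats.multitest.multipletests`):

```python
def holm_bonferroni(pvals):
    """Holm's step-down familywise correction: adjusted p = (m - rank) * p,
    made monotone non-decreasing in sorted order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

def benjamini_hochberg(pvals):
    """BH step-up FDR correction: adjusted p = p * m / rank,
    made monotone from the largest p-value downward."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, min(1.0, pvals[i] * m / (rank + 1)))
        adjusted[i] = running_min
    return adjusted

raw = [0.001, 0.008, 0.039, 0.041, 0.6]
holm = holm_bonferroni(raw)   # ≈ [0.005, 0.032, 0.117, 0.117, 0.6]
fdr = benjamini_hochberg(raw)  # ≈ [0.005, 0.020, 0.051, 0.051, 0.6]
```

Note how the two raw p-values near 0.04 survive FDR adjustment but not the familywise correction; which is appropriate depends on the cost of a false positive.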
### 2. Inappropriate Statistical Tests

**Common Problems:**
- Using parametric tests when assumptions are violated (non-normal data, unequal variances)
- Analyzing paired data with unpaired tests
- Using t-tests for multiple groups instead of ANOVA with post-hoc tests
- Treating ordinal data as continuous
- Ignoring repeated measures structure
- Using correlation when regression is more appropriate

**How to Identify:**
- No mention of assumption checking
- Small sample sizes with parametric tests
- Multiple pairwise t-tests instead of ANOVA
- Likert scales analyzed with t-tests
- Time-series data analyzed without accounting for repeated measures

**What to Recommend:**
- Check assumptions explicitly (normality tests, Q-Q plots)
- Use non-parametric alternatives when appropriate
- Apply proper corrections for multiple comparisons after ANOVA
- Use mixed-effects models for repeated measures
- Consider ordinal regression for ordinal outcomes

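
The paired-versus-unpaired pitfall is easy to demonstrate: on the same before/after data, analyzing the within-subject differences yields a large t statistic, while an unpaired (Welch) analysis buries the effect in between-subject variance. A minimal stdlib-only sketch with invented data (t statistics only; p-values would require the t distribution CDF):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    # Test the mean of within-subject differences against zero.
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

def welch_t(x, y):
    # Unpaired comparison: between-subject variance dominates.
    return (mean(x) - mean(y)) / sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))

before = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]
after = [11.0, 13.5, 15.0, 17.5, 19.0, 21.5]  # every subject increases by 1.0-1.5

t_paired = paired_t(after, before)    # large (~11): pairing removes subject variance
t_unpaired = welch_t(after, before)   # small (<1): same data, wrong test
```

The same consistent within-subject gain is unambiguous when paired and invisible when unpaired, which is exactly the error to flag in review.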
### 3. Sample Size and Power Issues

**Common Problems:**
- No sample size justification or power calculation
- Underpowered studies claiming "no effect"
- Post-hoc power calculations (which are uninformative)
- Stopping rules not pre-specified
- Unequal group sizes without justification

**How to Identify:**
- Small sample sizes (n < 30 per group for typical designs)
- No mention of power analysis in methods
- Statements about post-hoc power
- Wide confidence intervals suggesting imprecision
- Claims of "no effect" with large p-values and small n

**What to Recommend:**
- Conduct a priori power analysis based on expected effect size
- Report achieved power or precision (confidence interval width)
- Acknowledge when studies are underpowered
- Consider effect sizes and confidence intervals for interpretation
- Pre-register sample size and stopping rules

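
For a two-group comparison of means, the standard normal-approximation formula gives n per group = 2((z_{1-α/2} + z_{power}) / d)², where d is the standardized effect size. A stdlib-only sketch (it slightly underestimates the exact t-based answer at small n, so treat it as a lower bound):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison of
    means with standardized effect size d (normal approximation)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

n_medium = n_per_group(0.5)  # "medium" effect: 63 per group
n_small = n_per_group(0.2)   # "small" effect: 393 per group
```

The steep growth as d shrinks (63 vs. 393 per group) is why "n=10 per group detected no effect" claims deserve scrutiny.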
### 4. Missing Data Problems

**Common Problems:**
- Complete case analysis without justification (listwise deletion)
- Not reporting extent or pattern of missingness
- Assuming data are missing completely at random (MCAR) without testing
- Inappropriate imputation methods
- Not performing sensitivity analyses

**How to Identify:**
- Different n values across analyses without explanation
- No discussion of missing data
- Participants "excluded from analysis"
- Simple mean imputation used
- No sensitivity analyses comparing complete vs. imputed data

**What to Recommend:**
- Report extent and patterns of missingness
- Test MCAR assumption (Little's test)
- Use appropriate methods (multiple imputation, maximum likelihood)
- Perform sensitivity analyses
- Consider intention-to-treat analysis for trials

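
The first recommendation, reporting the extent and pattern of missingness, is mechanical. A sketch with invented records (the variable names are hypothetical) that tabulates per-variable rates and the distinct missingness patterns:

```python
def missingness_report(records, variables):
    """Per-variable missingness rate, plus counts of each observed
    missingness pattern (which variables are jointly missing)."""
    n = len(records)
    rates = {v: sum(r.get(v) is None for r in records) / n for v in variables}
    patterns = {}
    for r in records:
        key = tuple(v for v in variables if r.get(v) is None)
        patterns[key] = patterns.get(key, 0) + 1
    return rates, patterns

records = [
    {"age": 34, "bmi": 22.1, "outcome": 1},
    {"age": 51, "bmi": None, "outcome": 0},
    {"age": None, "bmi": 27.4, "outcome": None},
    {"age": 45, "bmi": 24.0, "outcome": 1},
]
rates, patterns = missingness_report(records, ["age", "bmi", "outcome"])
# rates: 25% missing for each variable; patterns: age and outcome missing together
```

Joint patterns matter: variables that go missing together hint at a shared cause, which argues against the MCAR assumption.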
### 5. Circular Analysis and Double-Dipping

**Common Problems:**
- Using the same data for selection and inference
- Defining ROIs based on a contrast, then testing that same contrast in the ROI
- Selecting or excluding outliers, then testing for differences
- Post-hoc subgroup analyses presented as planned
- HARKing (Hypothesizing After Results are Known)

**How to Identify:**
- ROIs or features selected based on results
- Unexpected subgroup analyses
- Post-hoc analyses not clearly labeled as exploratory
- No data-independent validation
- Introduction that perfectly predicts findings

**What to Recommend:**
- Use independent datasets for selection and testing
- Pre-register analyses and hypotheses
- Clearly distinguish confirmatory vs. exploratory analyses
- Use cross-validation or hold-out datasets
- Correct for selection bias

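
Double-dipping can be demonstrated on pure noise: pick the "best" of many variables and then estimate its effect on the same observations, and the estimate is inflated; estimate it on a held-out half and it correctly collapses toward zero. A seeded toy simulation (all numbers here are arbitrary choices):

```python
import random

random.seed(0)
n_vars, n_obs = 200, 50
# Pure noise: no variable has any true effect.
noise = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

def mean(xs):
    return sum(xs) / len(xs)

half = n_obs // 2
# Selection step: the variable with the largest mean in the first half.
best = max(range(n_vars), key=lambda v: mean(noise[v][:half]))

biased_estimate = mean(noise[best][:half])   # same data as selection: inflated
honest_estimate = mean(noise[best][half:])   # independent half: near zero
```

The biased estimate is reliably well above zero even though every variable is noise; that gap is the selection bias the recommendations above guard against.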
### 6. Pseudoreplication

**Common Problems:**
- Technical replicates treated as biological replicates
- Multiple measurements from same subject treated as independent
- Clustered data analyzed without accounting for clustering
- Non-independence in spatial or temporal data

**How to Identify:**
- n defined as number of measurements rather than biological units
- Multiple cells from same animal counted as independent
- Repeated measures not acknowledged
- No mention of random effects or clustering

**What to Recommend:**
- Define n as biological replicates (animals, patients, independent samples)
- Use mixed-effects models for nested or clustered data
- Account for repeated measures explicitly
- Average technical replicates before analysis
- Report both technical and biological replication

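
"Average technical replicates before analysis" in practice means collapsing to one value per biological unit before any test. A sketch with hypothetical cells-per-mouse data:

```python
from statistics import mean

# Hypothetical dataset: several cells measured per animal. The cells are
# technical/within-animal replicates, not independent biological units.
cells_per_animal = {
    "mouse_1": [2.1, 2.3, 2.2, 2.4],
    "mouse_2": [3.0, 2.9, 3.1],
    "mouse_3": [2.6, 2.5],
}

# Collapse to one value per biological unit before any statistical test.
per_animal = {animal: mean(vals) for animal, vals in cells_per_animal.items()}

n_measurements = sum(len(v) for v in cells_per_animal.values())  # 9 — NOT the n to report
n_biological = len(per_animal)                                   # 3 — the n for inference
```

A mixed-effects model with animal as a random effect is the fuller alternative when within-animal variation is itself of interest.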
## Experimental Design Issues

### 7. Lack of Appropriate Controls

**Common Problems:**
- Missing negative controls
- Missing positive controls for validation
- No vehicle controls for drug studies
- No time-matched controls for longitudinal studies
- No batch controls

**How to Identify:**
- Methods section lists only experimental groups
- No mention of controls in figures
- Unclear baseline or reference condition
- Cross-batch comparisons without controls

**What to Recommend:**
- Include negative controls to assess specificity
- Include positive controls to validate methods
- Use vehicle controls matched to experimental treatment
- Include sham surgery controls for surgical interventions
- Include batch controls for cross-batch comparisons

### 8. Confounding Variables

**Common Problems:**
- Systematic differences between groups besides the intervention
- Batch effects not controlled or corrected
- Order effects in sequential experiments
- Time-of-day effects not controlled
- Experimenter effects due to lack of blinding

**How to Identify:**
- Groups differ in multiple characteristics
- Samples processed in different batches by group
- No randomization of sample order
- No mention of blinding
- Baseline characteristics differ between groups

**What to Recommend:**
- Randomize experimental units to conditions
- Block on known confounders
- Randomize sample processing order
- Use blinding to minimize bias
- Perform batch correction if needed
- Report and adjust for baseline differences

### 9. Insufficient Replication

**Common Problems:**
- Single experiment without replication
- Technical replicates mistaken for biological replication
- Small n justified by "typical for the field"
- No independent validation of key findings
- Cherry-picking representative examples

**How to Identify:**
- Methods state "experiment performed once"
- n=3 with no justification
- "Representative image shown"
- Key claims based on single experiment
- No validation in independent dataset

**What to Recommend:**
- Perform independent biological replicates (typically ≥3)
- Validate key findings in independent cohorts
- Report all replicates, not just representative examples
- Conduct power analysis to justify sample size
- Show individual data points, not just summary statistics

## Reproducibility Issues

### 10. Insufficient Methodological Detail

**Common Problems:**
- Methods not described in sufficient detail for replication
- Key reagents not specified (vendor, catalog number)
- Software versions and parameters not reported
- Antibodies not validated
- Cell line authentication not verified

**How to Identify:**
- Vague descriptions ("standard protocols were used")
- No information on reagent sources
- Generic software mentioned without versions
- No antibody validation information
- Cell lines not authenticated

**What to Recommend:**
- Provide detailed protocols or cite specific protocols
- Include reagent vendors, catalog numbers, lot numbers
- Report software versions and all parameters
- Include antibody validation (Western blot, specificity tests)
- Report cell line authentication method (STR profiling)
- Make protocols available (protocols.io, supplementary materials)

### 11. Data and Code Availability

**Common Problems:**
- No data availability statement
- "Data available upon request" (often unfulfilled)
- No code provided for computational analyses
- Custom software not made available
- No clear documentation

**How to Identify:**
- Missing data availability statement
- No repository accession numbers
- Computational methods with no code
- Custom pipelines without access
- No README or documentation

**What to Recommend:**
- Deposit raw data in appropriate repositories (GEO, SRA, Dryad, Zenodo)
- Share analysis code on GitHub or similar
- Provide clear documentation and README files
- Include requirements.txt or environment files
- Make custom software available with installation instructions
- Use DOIs for permanent data citation

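
Concretely, a shareable analysis bundle often looks like the sketch below; the layout and file names are a common convention, not a requirement of any journal, and the version pins are illustrative:

```text
analysis-repo/
├── README.md          # what each script produces, and in what order to run them
├── requirements.txt   # exact versions used, e.g. numpy==1.26.4, scipy==1.12.0
├── data/              # or a README pointing to the GEO/SRA/Zenodo accession and DOI
└── scripts/           # one numbered script per figure or table
```

Pinned versions plus a pointer from each figure to the script that made it is usually enough for a reader to reproduce the analysis end to end.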
### 12. Lack of Method Validation

**Common Problems:**
- New methods not compared to gold standard
- Assays not validated for specificity, sensitivity, linearity
- No spike-in controls
- Cross-reactivity not tested
- Detection limits not established

**How to Identify:**
- Novel assays presented without validation
- No comparison to existing methods
- No positive/negative controls shown
- Claims of specificity without evidence
- No standard curves or controls

**What to Recommend:**
- Validate new methods against established approaches
- Show specificity (knockdown/knockout controls)
- Demonstrate linearity and dynamic range
- Include positive and negative controls
- Report limits of detection and quantification
- Show reproducibility across replicates and operators

## Interpretation Issues

### 13. Overstatement of Results

**Common Problems:**
- Causal language for correlational data
- Mechanistic claims without mechanistic evidence
- Extrapolating beyond data (species, conditions, populations)
- Claiming "first to show" without thorough literature review
- Overgeneralizing from limited samples

**How to Identify:**
- "X causes Y" from observational data
- Mechanism proposed without direct testing
- Mouse data presented as relevant to humans without caveats
- Claims of novelty with missing citations
- Broad claims from narrow samples

**What to Recommend:**
- Use appropriate language ("associated with" vs. "caused by")
- Distinguish correlation from causation
- Acknowledge limitations of model systems
- Provide thorough literature context
- Be specific about generalizability
- Propose mechanisms as hypotheses, not conclusions

### 14. Cherry-Picking and Selective Reporting

**Common Problems:**
- Reporting only significant results
- Showing "representative" images that may not be typical
- Excluding outliers without justification
- Not reporting negative or contradictory findings
- Switching between different statistical approaches

**How to Identify:**
- All reported results are significant
- "Representative of 3 experiments" with no quantification
- Data exclusions mentioned in results but not methods
- Supplementary data contradicts main findings
- Multiple analysis approaches with only one reported

**What to Recommend:**
- Report all planned analyses regardless of outcome
- Quantify and show variability across replicates
- Pre-specify outlier exclusion criteria
- Include negative results
- Pre-register analysis plan
- Report effect sizes and confidence intervals for all comparisons

### 15. Ignoring Alternative Explanations

**Common Problems:**
- Preferred explanation presented without considering alternatives
- Contradictory evidence dismissed without discussion
- Off-target effects not considered
- Confounding variables not acknowledged
- Limitations section minimal or absent

**How to Identify:**
- Single interpretation presented as fact
- Prior contradictory findings not cited or discussed
- No consideration of alternative mechanisms
- No discussion of limitations
- Specificity assumed without controls

**What to Recommend:**
- Discuss alternative explanations
- Address contradictory findings from literature
- Include appropriate specificity controls
- Acknowledge and discuss limitations thoroughly
- Consider and test alternative hypotheses

## Figure and Data Presentation Issues

### 16. Inappropriate Data Visualization

**Common Problems:**
- Bar graphs for continuous data (hiding distributions)
- No error bars, or error bars not defined
- Truncated y-axes exaggerating differences
- Dual y-axes creating misleading comparisons
- Too many significant figures
- Colors not colorblind-friendly

**How to Identify:**
- Bar graphs with few data points
- Unclear what error bars represent (SD, SEM, CI?)
- Y-axis doesn't start at zero for ratio/percentage data
- Left and right y-axes with different scales
- Values reported to excessive precision (p=0.04562)
- Red-green color schemes

**What to Recommend:**
- Show individual data points with scatter/box/violin plots
- Always define error bars (SD, SEM, 95% CI)
- Start y-axis at zero or indicate breaks clearly
- Avoid dual y-axes; use separate panels instead
- Report appropriate significant figures
- Use colorblind-friendly palettes (viridis, ColorBrewer)
- Include sample sizes in figure legends

### 17. Image Manipulation Concerns

**Common Problems:**
- Excessive contrast/brightness adjustment
- Spliced gels or images without indication
- Duplicated images or panels
- Uneven background in Western blots
- Selective cropping
- Over-processed microscopy images

**How to Identify:**
- Suspicious patterns or discontinuities
- Very high contrast with no background
- Similar features in different panels
- Straight lines suggesting splicing
- Inconsistent backgrounds
- Loss of detail suggesting over-processing

**What to Recommend:**
- Apply adjustments uniformly across images
- Indicate spliced gels with dividing lines
- Show full, uncropped images in supplementary materials
- Provide original images if requested
- Follow journal image integrity policies
- Use appropriate image analysis tools

## Study Design Issues

### 18. Poorly Defined Hypotheses and Outcomes

**Common Problems:**
- No clear hypothesis stated
- Primary outcome not specified
- Multiple outcomes without correction
- Outcomes changed after data collection
- Fishing expeditions presented as hypothesis-driven

**How to Identify:**
- Introduction doesn't state clear testable hypothesis
- Multiple outcomes with unclear hierarchy
- Outcomes in results don't match those in methods
- Exploratory study presented as confirmatory
- Many tests with no multiple testing correction

**What to Recommend:**
- State clear, testable hypotheses
- Designate primary and secondary outcomes a priori
- Pre-register studies when possible
- Apply appropriate corrections for multiple outcomes
- Clearly distinguish exploratory from confirmatory analyses
- Report all pre-specified outcomes

### 19. Baseline Imbalance and Selection Bias

**Common Problems:**
- Groups differ at baseline
- Selection criteria applied differentially
- Healthy volunteer bias
- Survivorship bias
- Indication bias in observational studies

**How to Identify:**
- Table 1 shows significant baseline differences
- Inclusion criteria different between groups
- Response rate < 50% with no analysis of non-responders
- Analysis only includes completers
- Groups self-selected rather than randomized

**What to Recommend:**
- Report baseline characteristics in Table 1
- Use randomization to ensure balance
- Adjust for baseline differences in analysis
- Report response rates and compare responders vs. non-responders
- Consider propensity score matching for observational data
- Use intention-to-treat analysis

### 20. Temporal and Batch Effects

**Common Problems:**
- Samples processed in batches by condition
- Temporal trends not accounted for
- Instrument drift over time
- Different operators for different groups
- Reagent lot changes between groups

**How to Identify:**
- All treatment samples processed on same day
- Controls from different time period
- No mention of batch or time effects
- Different technicians for groups
- Long study duration with no temporal analysis

**What to Recommend:**
- Randomize samples across batches/time
- Include batch as covariate in analysis
- Perform batch correction (ComBat, limma)
- Include quality control samples across batches
- Report and test for temporal trends
- Balance operators across conditions

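
When the design permits (conditions balanced across batches), the simplest batch adjustment is mean-centering per batch. A stdlib-only sketch for intuition; real analyses should prefer dedicated tools such as ComBat or limma's `removeBatchEffect`, and note that with batches confounded by condition, centering removes biological signal along with the batch effect:

```python
from statistics import mean

def center_by_batch(values, batches):
    # Subtract each batch's mean, then restore the grand mean so the
    # overall level of the data is unchanged.
    grand = mean(values)
    batch_means = {b: mean([v for v, bb in zip(values, batches) if bb == b])
                   for b in set(batches)}
    return [v - batch_means[b] + grand for v, b in zip(values, batches)]

values = [1.0, 2.0, 5.0, 6.0]   # batch B sits 4 units higher than batch A
batches = ["A", "A", "B", "B"]
corrected = center_by_batch(values, batches)  # [3.0, 4.0, 3.0, 4.0]
```

After centering, the within-batch structure (the 1-unit difference between samples) survives while the 4-unit batch offset is gone.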
## Reporting Issues

### 21. Incomplete Statistical Reporting

**Common Problems:**
- Test statistics not reported
- Degrees of freedom missing
- Exact p-values replaced with inequalities (p<0.05)
- No confidence intervals
- No effect sizes
- Sample sizes not reported per group

**How to Identify:**
- Only p-values given with no test statistics
- p-values reported as p<0.05 rather than exact values
- No measures of uncertainty
- Effect magnitude unclear
- n reported for total but not per group

**What to Recommend:**
- Report complete test statistics (t, F, χ², etc. with df)
- Report exact p-values (except p<0.001)
- Include 95% confidence intervals
- Report effect sizes (Cohen's d, odds ratios, correlation coefficients)
- Report n for each group in every analysis
- Consider CONSORT-style flow diagram

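
A complete report line bundles the test statistic, degrees of freedom, effect size, interval, and per-group n. A stdlib-only sketch (the exact p-value needs the t distribution CDF, e.g. `scipy.stats.t.sf`, so it is omitted here; the CI uses the normal approximation):

```python
from math import sqrt
from statistics import mean, stdev

def report_two_sample(x, y):
    """Assemble a complete report line for a two-sample comparison:
    t, degrees of freedom, mean difference with approximate 95% CI,
    Cohen's d, and per-group n."""
    nx, ny = len(x), len(y)
    diff = mean(x) - mean(y)
    sp = sqrt(((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2))
    se = sp * sqrt(1 / nx + 1 / ny)
    t, d, df = diff / se, diff / sp, nx + ny - 2
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    return (f"t({df}) = {t:.2f}, difference = {diff:.2f} "
            f"[95% CI {lo:.2f}, {hi:.2f}], d = {d:.2f}, n = {nx}/{ny}")

line = report_two_sample([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
```

Everything a reader needs to judge magnitude and precision is in one string, which is the standard to hold authors to.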
### 22. Methods-Results Mismatch

**Common Problems:**
- Methods describe analyses not performed
- Results include analyses not described in methods
- Different sample sizes in methods vs. results
- Methods mention controls not shown
- Statistical methods don't match what was done

**How to Identify:**
- Analyses in results without methodological description
- Methods describe experiments not in results
- Numbers don't match between sections
- Controls mentioned but not shown
- Different software mentioned than used

**What to Recommend:**
- Ensure complete concordance between methods and results
- Describe all analyses performed in methods
- Remove methodological descriptions of experiments not performed
- Verify all numbers are consistent
- Update methods to match actual analyses conducted

## How to Use This Reference

When reviewing manuscripts:
1. Read through methods and results systematically
2. Check for common issues in each category
3. Note specific problems with evidence
4. Provide constructive suggestions for improvement
5. Distinguish major issues (affect validity) from minor issues (affect clarity)
6. Prioritize reproducibility and transparency

This is not an exhaustive list but covers the most frequently encountered issues. Always consider the specific context and discipline when evaluating potential problems.

skills/peer-review/references/reporting_standards.md

# Scientific Reporting Standards and Guidelines

This document catalogs major reporting standards and guidelines across scientific disciplines. When reviewing manuscripts, verify that authors have followed the appropriate guidelines for their study type and discipline.

## Clinical Trials and Medical Research

### CONSORT (Consolidated Standards of Reporting Trials)
**Purpose:** Randomized controlled trials (RCTs)
**Key Requirements:**
- Trial design, participants, and interventions clearly described
- Primary and secondary outcomes specified
- Sample size calculation and statistical methods
- Participant flow through trial (enrollment, allocation, follow-up, analysis)
- Baseline characteristics of participants
- Numbers analyzed in each group
- Outcomes and estimation with confidence intervals
- Adverse events
- Trial registration number and protocol access

**Reference:** http://www.consort-statement.org/

### STROBE (Strengthening the Reporting of Observational Studies in Epidemiology)
**Purpose:** Observational studies (cohort, case-control, cross-sectional)
**Key Requirements:**
- Study design clearly stated
- Setting, eligibility criteria, and participant sources
- Variables clearly defined
- Data sources and measurement methods
- Bias assessment
- Sample size justification
- Statistical methods including handling of missing data
- Participant flow and characteristics
- Main results with confidence intervals
- Limitations discussed

**Reference:** https://www.strobe-statement.org/

### PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
**Purpose:** Systematic reviews and meta-analyses
**Key Requirements:**
- Protocol registration
- Systematic search strategy across multiple databases
- Inclusion/exclusion criteria
- Study selection process
- Data extraction methods
- Quality assessment of included studies
- Statistical methods for meta-analysis
- Assessment of publication bias
- Heterogeneity assessment
- PRISMA flow diagram showing study selection
- Summary of findings tables

**Reference:** http://www.prisma-statement.org/

### SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials)
**Purpose:** Clinical trial protocols
**Key Requirements:**
- Administrative information (title, registration, funding)
- Introduction (rationale, objectives)
- Methods (design, participants, interventions, outcomes, sample size)
- Ethics and dissemination
- Trial schedule and assessments

**Reference:** https://www.spirit-statement.org/

### CARE (CAse REport guidelines)
**Purpose:** Case reports
**Key Requirements:**
- Patient information and demographics
- Clinical findings
- Timeline of events
- Diagnostic assessment
- Therapeutic interventions
- Follow-up and outcomes
- Patient perspective
- Informed consent

**Reference:** https://www.care-statement.org/

## Animal Research

### ARRIVE (Animal Research: Reporting of In Vivo Experiments)
**Purpose:** Studies involving animal research
**Key Requirements:**
- Title indicates study involves animals
- Abstract provides accurate summary
- Background and objectives clearly stated
- Ethical statement and approval
- Housing and husbandry details
- Animal details (species, strain, sex, age, weight)
- Experimental procedures in detail
- Experimental animals (number, allocation, welfare assessment)
- Statistical methods appropriate
- Exclusion criteria stated
- Sample size determination
- Randomization and blinding described
- Outcome measures defined
- Adverse events reported

**Reference:** https://arriveguidelines.org/

## Genomics and Molecular Biology

### MIAME (Minimum Information About a Microarray Experiment)
**Purpose:** Microarray experiments
**Key Requirements:**
- Experimental design clearly described
- Array design information
- Samples (origin, preparation, labeling)
- Hybridization procedures and parameters
- Image acquisition and quantification
- Normalization and data transformation
- Raw and processed data availability
- Database accession numbers

**Reference:** http://fged.org/projects/miame/

### MINSEQE (Minimum Information about a high-throughput Nucleotide Sequencing Experiment)
**Purpose:** High-throughput sequencing (RNA-seq, ChIP-seq, etc.)
**Key Requirements:**
- Experimental design and biological context
- Sample information (source, preparation, QC)
- Library preparation (protocol, adapters, size selection)
- Sequencing platform and parameters
- Data processing pipeline (alignment, quantification, normalization)
- Quality control metrics
- Raw data deposition (SRA, GEO, ENA)
- Processed data and analysis code availability

### MIGS/MIMS (Minimum Information about a Genome/Metagenome Sequence)
|
||||
**Purpose:** Genome and metagenome sequencing
|
||||
**Key Requirements:**
|
||||
- Sample origin and environmental context
|
||||
- Sequencing methods and coverage
|
||||
- Assembly methods and quality metrics
|
||||
- Annotation approach
|
||||
- Quality control and contamination screening
|
||||
- Data deposition in INSDC databases
|
||||
|
||||
**Reference:** https://gensc.org/
|
||||
|
||||
## Structural Biology
|
||||
|
||||
### PDB (Protein Data Bank) Deposition Requirements
|
||||
**Purpose:** Macromolecular structure determination
|
||||
**Key Requirements:**
|
||||
- Atomic coordinates deposited
|
||||
- Structure factors for X-ray structures
|
||||
- Restraints and experimental data for NMR
|
||||
- EM maps and metadata for cryo-EM
|
||||
- Model quality validation metrics
|
||||
- Experimental conditions (crystallization, sample preparation)
|
||||
- Data collection parameters
|
||||
- Refinement statistics
|
||||
|
||||
**Reference:** https://www.wwpdb.org/
|
||||
|
||||
## Proteomics and Mass Spectrometry
|
||||
|
||||
### MIAPE (Minimum Information About a Proteomics Experiment)
|
||||
**Purpose:** Proteomics experiments
|
||||
**Key Requirements:**
|
||||
- Sample processing and fractionation
|
||||
- Separation methods (2D gel, LC)
|
||||
- Mass spectrometry parameters (instrument, acquisition)
|
||||
- Database search and validation parameters
|
||||
- Peptide and protein identification criteria
|
||||
- Quantification methods
|
||||
- Statistical analysis
|
||||
- Data deposition (PRIDE, PeptideAtlas)
|
||||
|
||||
**Reference:** http://www.psidev.info/
|
||||
|
||||
## Neuroscience
|
||||
|
||||
### COBIDAS (Committee on Best Practices in Data Analysis and Sharing)
|
||||
**Purpose:** MRI and fMRI studies
|
||||
**Key Requirements:**
|
||||
- Scanner and sequence parameters
|
||||
- Preprocessing pipeline details
|
||||
- Software versions and parameters
|
||||
- Statistical analysis approach
|
||||
- Multiple comparison correction
|
||||
- ROI definitions
|
||||
- Data sharing (raw data, analysis scripts)
|
||||
|
||||
**Reference:** https://www.humanbrainmapping.org/cobidas
|
||||
|
||||
## Flow Cytometry
|
||||
|
||||
### MIFlowCyt (Minimum Information about a Flow Cytometry Experiment)
|
||||
**Purpose:** Flow cytometry experiments
|
||||
**Key Requirements:**
|
||||
- Experimental overview and purpose
|
||||
- Sample characteristics and preparation
|
||||
- Instrument information and settings
|
||||
- Reagents (antibodies, fluorophores, concentrations)
|
||||
- Compensation and controls
|
||||
- Gating strategy
|
||||
- Data analysis approach
|
||||
- Data availability
|
||||
|
||||
**Reference:** http://flowcyt.org/
|
||||
|
||||
## Ecology and Environmental Science
|
||||
|
||||
### MIAPPE (Minimum Information About a Plant Phenotyping Experiment)
|
||||
**Purpose:** Plant phenotyping studies
|
||||
**Key Requirements:**
|
||||
- Investigation and study metadata
|
||||
- Biological material information
|
||||
- Environmental parameters
|
||||
- Experimental design and factors
|
||||
- Phenotypic measurements and methods
|
||||
- Data file descriptions
|
||||
|
||||
**Reference:** https://www.miappe.org/
|
||||
|
||||
## Chemistry and Chemical Biology
|
||||
|
||||
### MIRIBEL (Minimum Information Reporting in Bio-Nano Experimental Literature)
|
||||
**Purpose:** Nanomaterial characterization
|
||||
**Key Requirements:**
|
||||
- Nanomaterial composition and structure
|
||||
- Size, shape, and morphology characterization
|
||||
- Surface chemistry and functionalization
|
||||
- Purity and stability
|
||||
- Experimental conditions
|
||||
- Characterization methods
|
||||
|
||||
## Quality Assessment and Bias
|
||||
|
||||
### CAMARADES (Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies)
|
||||
**Purpose:** Quality assessment for animal studies in systematic reviews
|
||||
**Key Items:**
|
||||
- Publication in peer-reviewed journal
|
||||
- Statement of temperature control
|
||||
- Randomization to treatment
|
||||
- Blinded assessment of outcome
|
||||
- Avoidance of anesthetic with marked intrinsic properties
|
||||
- Use of appropriate animal model
|
||||
- Sample size calculation
|
||||
- Compliance with regulatory requirements
|
||||
- Statement of conflict of interest
|
||||
- Study pre-registration
|
||||
|
### SYRCLE's Risk of Bias Tool
**Purpose:** Assessing risk of bias in animal intervention studies
**Domains:**
- Selection bias (sequence generation, baseline characteristics, allocation concealment)
- Performance bias (random housing, blinding of personnel)
- Detection bias (random outcome assessment, blinding of assessors)
- Attrition bias (incomplete outcome data)
- Reporting bias (selective outcome reporting)
- Other sources of bias

## General Principles Across Guidelines

### Common Requirements
1. **Transparency:** All methods, materials, and analyses fully described
2. **Reproducibility:** Sufficient detail for independent replication
3. **Data Availability:** Raw data and analysis code shared or deposited
4. **Registration:** Studies pre-registered where applicable
5. **Ethics:** Appropriate approvals and consent documented
6. **Conflicts of Interest:** Disclosed for all authors
7. **Statistical Rigor:** Methods appropriate and fully described
8. **Completeness:** All outcomes reported, including negative results

### Red Flags for Non-Compliance
- Methods section lacks critical details
- No mention of following reporting guidelines
- Data availability statement missing or vague
- No database accession numbers for omics data
- No trial registration for clinical studies
- Sample size not justified
- Statistical methods inadequately described
- Missing flow diagrams (CONSORT, PRISMA)
- Selective reporting of outcomes

## How to Use This Reference

When reviewing a manuscript:
1. Identify the study type and discipline
2. Find the relevant reporting guideline(s)
3. Check if authors mention following the guideline
4. Verify that key requirements are addressed
5. Note any missing elements in your review
6. Suggest the appropriate guideline if not mentioned

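
Steps 1-2 amount to a lookup. The mapping below is an illustrative, non-exhaustive summary of the guidelines cataloged above (the EQUATOR Network is a real registry that indexes these and many more):

```python
# Illustrative mapping from study type to reporting guideline; keys are
# lowercase so lookups are case-insensitive.
GUIDELINES = {
    "randomized controlled trial": "CONSORT",
    "observational study": "STROBE",
    "systematic review or meta-analysis": "PRISMA",
    "trial protocol": "SPIRIT",
    "case report": "CARE",
    "animal research": "ARRIVE",
    "microarray": "MIAME",
    "high-throughput sequencing": "MINSEQE",
    "proteomics": "MIAPE",
    "flow cytometry": "MIFlowCyt",
    "mri/fmri": "COBIDAS",
    "plant phenotyping": "MIAPPE",
}

def guideline_for(study_type):
    return GUIDELINES.get(study_type.strip().lower(),
                          "none matched; search the EQUATOR Network registry")

suggestion = guideline_for("Animal research")  # "ARRIVE"
```

When no entry matches, pointing authors at the EQUATOR Network registry is a constructive default.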
Many journals require authors to complete reporting checklists at submission. Reviewers should verify compliance even if a checklist was submitted.