Scientific manuscript peer review checklist
Submission-ready • Desk review + peer review
This page provides a step-by-step checklist of the core criteria reviewers and editors use during peer review of scientific manuscripts.
Goal: reduce desk-reject risk, catch submission-blocking (CRITICAL) issues early, and make the manuscript submission-ready.
- Authors: pre-submission self-assessment against reviewer and editor criteria.
- Reviewers: writing review reports with a systematic structure.
- Editors: core checks for fast, objective desk-review decisions.
| Priority | Check question | Concrete evidence / output | Short note |
|---|---|---|---|
| CRITICAL | Is fit with the journal's aims and scope explicit? | Target journal scope statement + manuscript "fit" rationale | Scope mismatch is the most common cause of desk rejection. |
| CRITICAL | Are ethics approval and required registrations complete? | Committee name + date + reference number + consent process description | Missing ethics documentation is usually non-negotiable for editors. |
| MAJOR | Are methods and analysis reproducible at sufficient detail? | Participant selection + measurements + analysis plan + software/version | "Reproducibility" is a red line for reviewers. |
Research question and contribution
| Priority | Check question | Concrete evidence / output | Short note |
|---|---|---|---|
| CRITICAL | Is the research question/hypothesis clear and measurable? | PICO/PECO framing or one testable hypothesis sentence | Unclear aims create “drift” across all sections. |
| MAJOR | Is the scientific contribution and gap in the literature explicit? | A “gap” paragraph in the introduction + three core references | The “why now?” question must be answered clearly. |
| MAJOR | Is the target audience and scope appropriate for the journal? | Article type per journal guidelines + scope fit note | Choosing the wrong article type also speeds rejection. |
Study design and methods
| Priority | Check question | Concrete evidence / output | Short note |
|---|---|---|---|
| CRITICAL | Is the study design (RCT/observational/review, etc.) appropriate for the question and clearly stated? | Design name + timeline + primary/secondary endpoints | The wrong design breaks trust even if results look positive. |
| CRITICAL | Are inclusion/exclusion criteria and sampling clear? | Flow diagram + numbers screened/included | Selection bias is one of the first issues reviewers probe. |
| MAJOR | Are measurement instruments valid and reliable? | Scale/device definition + validation reference | Measurement error can inflate or deflate effects. |
| MAJOR | Is sample size justified (power analysis or rationale)? | Power analysis output or sample-size assumptions | "Underpowered" weakens both negative and positive findings. |
| MAJOR | Are statistical methods reported at a "re-runnable" level of detail? | Test names + assumption checks + effect size + confidence interval | SAMPL guidelines help standardize basic statistical reporting. |
| MAJOR | In observational studies, are confounders and bias-mitigation strategies explicit? | Variable selection + modeling + sensitivity analyses | STROBE provides a core item set for observational reporting. |
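A sample-size justification (the "power analysis or rationale" item above) can be sketched in a few lines. The function below is an illustrative normal-approximation calculation for a two-sided, two-sample comparison of means; it is a planning sketch, not a replacement for dedicated power software, and slightly underestimates n compared with exact t-based methods.

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size: float, alpha: float = 0.05,
                          power: float = 0.80) -> int:
    """Approximate n per group for a two-sided, two-sample mean comparison.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # e.g. ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # e.g. ~0.84 for power = 0.80
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# A medium effect (d = 0.5) needs roughly 63 participants per group
# under this approximation; a large effect (d = 0.8) needs about 25.
print(sample_size_per_group(0.5), sample_size_per_group(0.8))
```

Reporting the assumed effect size, alpha, and power alongside the resulting n lets reviewers re-run the calculation themselves.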
Literature and references
| Priority | Check question | Concrete evidence / output | Short note |
|---|---|---|---|
| CRITICAL | Are the introduction/results/discussion consistent with the latest evidence and guidelines in the field? | At least three core sources from the last 3–5 years + justified classics | Missing recent evidence often triggers a "novelty" critique. |
| CRITICAL | Have you checked whether key references were retracted? | A "retraction check" note for each critical reference | Authors are responsible for verifying retracted publications. |
| MAJOR | Have citations been verified for "claim → evidence" alignment? | At least one authoritative source for each strong claim | Miscited work undermines scientific trust. |
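The "retraction check" item above is easy to automate once you have a list of known-retracted DOIs (for example, an export of the Retraction Watch database). The sketch below assumes such a list is available locally; the function and DOI values are illustrative, not part of any standard tooling.

```python
def retraction_check(references: list[str],
                     retracted_dois: set[str]) -> dict[str, bool]:
    """Map each cited DOI to True if it appears in a known-retracted set.

    `retracted_dois` is assumed to come from an external source such as
    a Retraction Watch export; DOIs are matched case-insensitively.
    """
    retracted = {d.lower() for d in retracted_dois}
    return {doi: doi.lower() in retracted for doi in references}

# Hypothetical DOIs for illustration only:
refs = ["10.1000/key-study", "10.1000/other-study"]
known = {"10.1000/KEY-STUDY"}
print(retraction_check(refs, known))
```

Running a check like this before submission produces the per-reference "retraction check note" the table asks for.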
Reporting quality
| Priority | Check question | Concrete evidence / output | Short note |
|---|---|---|---|
| CRITICAL | Was an appropriate reporting guideline selected and applied for the design? | (i) Guideline name, (ii) completed checklist, (iii) item locations | Editors increasingly expect guideline use. |
| CRITICAL | Does the Methods section fully cover participant selection, measurements, and statistics? | Subheadings in Methods + reproducible detail | Methods are the core of "auditability." |
| MAJOR | Are results presented with effect sizes and uncertainty (CIs)? | For each main result: effect measure + 95% CI + n | Reporting p-values alone is no longer sufficient. |
Ethics and consent
| Priority | Check question | Concrete evidence / output | Short note |
|---|---|---|---|
| CRITICAL | Does your study require ethics committee approval? | "Required/not required" decision + committee letter | National indexes and many institutions require ethics statements. |
| CRITICAL | Are ethics committee details reported in the correct format? | Committee name + date + reference number (Methods and first/last page) | Complete disclosure of ethics information is mandatory. |
| CRITICAL | Is voluntary participation / informed consent clearly described? | Consent form + who obtained consent and when | Under the Helsinki principles, the consent process is essential. |
Contribution and discussion
| Priority | Check question | Concrete evidence / output | Short note |
|---|---|---|---|
| CRITICAL | Is "why was this study necessary?" answered in one clear sentence? | A one-sentence "need statement" in the introduction + evidence of the gap | Without a clear contribution, the editor may not send the paper to reviewers. |
| MAJOR | Are findings compared fairly with the existing literature? | "Similar studies" + differences + possible explanations | Overclaiming is one of the most criticized discussion errors. |
| MAJOR | Are limitations and generalizability realistic? | Limitations paragraph + explanation of bias direction | Owning limitations increases trust. |
Pre-submission review algorithm
| Core criterion | TR Index | YÖK ethics | Editorial std. | Helsinki | PRISMA |
|---|---|---|---|---|---|
| Ethics committee approval and disclosure | ✔️ | △ | ✔️ | ✔️ | △ |
| Plagiarism / duplicate publication rules | △ | ✔️ | ✔️ | △ | △ |
| Authorship criteria | △ | ✔️ | ✔️ | △ | △ |
| Conflicts of interest / funding | △ | △ | ✔️ | ✔️ | ✔️ |
| Study registration | △ | △ | ✔️ | ✔️ | △ |
✔️: explicit requirement | △: strong recommendation / standard practice
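The review logic behind these checklists reduces to a simple rule: any unmet CRITICAL item blocks submission, while unmet MAJOR items call for revision. A minimal sketch (the item names and the `Check` structure are illustrative, not part of any standard):

```python
from dataclasses import dataclass

@dataclass
class Check:
    priority: str    # "CRITICAL" or "MAJOR"
    question: str
    satisfied: bool

def desk_review(checks: list[Check]) -> str:
    """Block on any unmet CRITICAL item; flag unmet MAJOR items for revision."""
    if any(c.priority == "CRITICAL" and not c.satisfied for c in checks):
        return "DESK-REJECT RISK: resolve CRITICAL items before submission"
    if any(c.priority == "MAJOR" and not c.satisfied for c in checks):
        return "REVISE: address MAJOR items to strengthen the manuscript"
    return "SUBMISSION-READY"

checks = [
    Check("CRITICAL", "Journal scope fit stated?", True),
    Check("CRITICAL", "Ethics approval reported?", True),
    Check("MAJOR", "Power analysis reported?", False),
]
print(desk_review(checks))  # prints the REVISE message
```

Running the full checklist through such a filter makes the self-assessment repeatable across co-authors and revisions.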
The three blocks below help structure a reviewer report: brief summary, major revisions, and minor revisions.
| Block | What to write | Example structure | Note |
|---|---|---|---|
| SUMMARY | Study aim, strengths, and overall assessment | 2–4 sentences: aim → contribution → overall decision (revise/reject/accept) | The "why" should be obvious to the reader. |
| MAJOR | Required changes affecting scientific validity | Issue → rationale → actionable recommendation (bullet list) | Methods/statistics/ethics usually appear here. |
| MINOR | Presentation, language, formatting, small improvements | Wording/tables/abbreviations/reference formatting | Suggestions that do not change the scientific core. |
To go deeper on the topics behind this checklist:
- Desk reject and editorial triage: What is a desk reject?
- Why papers are rejected: Why papers get rejected
- How reviewers evaluate manuscripts: How reviewers evaluate manuscripts
- Peer review workflow: How the peer review process works
- Journal impact factor: What is journal impact factor?
- How to write a scientific paper: How to write a scientific paper
© 2026 Submission Ready Framework. Suitable for institutional attribution.