
Why radiology manuscripts get desk-rejected
Radiology and imaging informatics journals receive many algorithmic submissions. Desk rejections frequently involve dataset issues, weak reference standards, or claims that offer no demonstrated advantage over existing clinical workflows.
Frequent issues at triage
- AI models validated only on single-site retrospective data, with no external test set or reader study.
- Leakage between train and test via overlapping patients or preprocessing fit on the full dataset (a sketch follows this list).
- Weak reference standards or incorporation bias.
- Incremental AUC gains without clinical consequence framing.
- Image quality variability ignored.
- Reproducibility gaps: missing code availability or insufficient architecture detail.
Editors desk-reject when the contribution is unclear relative to recent literature.
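One concrete safeguard against patient-level leakage is to split on patient identifiers rather than on individual images. Below is a minimal sketch, assuming a pandas DataFrame with one row per image and a hypothetical patient_id column; all column names and values are illustrative.

```python
# A minimal sketch of a patient-level split; `patient_id`, `image_path`,
# and `label` are illustrative names, not a prescribed schema.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.DataFrame({
    "image_path": [f"img_{i}.png" for i in range(10)],
    "patient_id": [0, 0, 1, 2, 2, 3, 4, 4, 4, 5],  # several images per patient
    "label":      [0, 0, 1, 0, 0, 1, 1, 1, 1, 0],
})

# Split on patient_id, not on rows: every image from a given patient lands
# entirely in train or entirely in test, so no patient straddles the split.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))
train, test = df.iloc[train_idx], df.iloc[test_idx]

# An explicit leakage check worth reporting in a methods section.
assert set(train["patient_id"]).isdisjoint(set(test["patient_id"]))
```

The same principle applies to preprocessing: fit normalization or other dataset statistics on the training fold only, then apply them unchanged to the test fold.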
Pre-review benefits
Structured feedback can ask whether performance metrics match the intended clinical use, whether confidence intervals are reported, and whether limitations are proportional to claims. It can also prompt clearer documentation of patient flow and datasets.
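One common way to report uncertainty around a headline AUC is a percentile bootstrap. Here is a minimal sketch using synthetic placeholder data; for patient-clustered datasets, resample patient IDs rather than individual images.

```python
# A minimal sketch of a percentile-bootstrap 95% CI for AUC; `y_true` and
# `y_score` are synthetic stand-ins for held-out labels and model scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_score = y_true * 0.3 + rng.normal(0.4, 0.25, size=200)

point_auc = roc_auc_score(y_true, y_score)

boot_aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
    if y_true[idx].min() == y_true[idx].max():
        continue  # skip degenerate resamples containing a single class
    boot_aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"AUC {point_auc:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```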
Checklist
- Provide external validation or a credible plan for it.
- Define the reference standard and the reading process.
- Report dataset splits and leakage controls.
- Translate metrics into clinical workflow terms (see the sketch after this checklist).
- Choose a journal aligned with your emphasis: AI methods versus clinical radiology.
- Pre-review before submission.
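To illustrate what "clinical workflow terms" can mean, the sketch below converts an operating point into per-1,000-patient counts. The sensitivity, specificity, and prevalence values are illustrative assumptions, not benchmarks.

```python
# A minimal sketch translating an operating point into workflow terms;
# the function name and all input values are illustrative.
def workflow_summary(sens: float, spec: float, prevalence: float, n: int = 1000):
    tp = sens * prevalence * n                # true positives per n patients
    fn = (1 - sens) * prevalence * n          # missed cases per n patients
    fp = (1 - spec) * (1 - prevalence) * n    # false alarms per n patients
    ppv = tp / (tp + fp)                      # chance a flag is a true positive
    return {
        "flagged_per_1000": round(tp + fp),   # cases readers must review
        "missed_per_1000": round(fn),
        "ppv": round(ppv, 3),
    }

# e.g. a screening setting: 2% prevalence, sensitivity 0.90, specificity 0.85
print(workflow_summary(sens=0.90, spec=0.85, prevalence=0.02))
# {'flagged_per_1000': 165, 'missed_per_1000': 2, 'ppv': 0.109}
```

Framing results this way shows an editor what the model would change at the reading station, rather than a context-free AUC delta.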
This page is editorial guidance for authors, not medical advice. Desk-reject patterns vary by journal and editor; always read the target journal’s instructions and scope before submitting.