1. Context: Expansion of Scientific Publishing and Reviewer Fatigue
The global scientific publishing ecosystem has expanded rapidly due to rising research output, institutional incentives to publish, and the proliferation of STEM journals. This growth has significantly increased the demand for peer reviewers, who form the backbone of quality control in science.
However, the supply of qualified reviewers has not kept pace. Senior academics increasingly face an overload of review requests, forcing them to decline many invitations and engage only within narrow sub-disciplines. This leads to delays, uneven review quality, and systemic stress within the peer-review mechanism.
The issue has direct governance and development relevance because credible scientific knowledge underpins evidence-based policymaking, public health responses, and technological innovation. If peer review weakens, flawed or low-quality research may inform policy decisions, leading to ineffective or harmful outcomes.
If ignored, reviewer fatigue risks eroding trust in scientific institutions and slowing genuine innovation due to bottlenecks in knowledge validation.
Key data:
- In 2020, peer reviewers globally spent ~130 million hours, equivalent to nearly 15,000 years, on peer review — American Chemical Society.
Peer review is a public good sustained by private academic labour; overstretching it without systemic reform threatens the reliability of science and, by extension, governance outcomes.
2. Entry of Artificial Intelligence into Peer Review Processes
In response to reviewer shortages and rising submission volumes, major academic publishers are increasingly deploying AI tools to support the peer-review workflow. These tools are primarily used at pre-screening stages to reduce the burden on human reviewers.
AI is particularly effective in detecting text similarity, plagiarism, image manipulation, and potential data fabrication patterns. By filtering out low-quality or non-compliant submissions early, AI can reduce the cognitive and time burden on expert reviewers.
For governance, this introduces efficiency gains in the knowledge-production pipeline, ensuring faster dissemination of credible research that informs policy and development planning. However, over-reliance without safeguards can institutionalise algorithmic errors.
If AI tools are adopted uncritically, they may legitimise flawed research at scale, undermining scientific credibility.
Functions assisted by AI:
- Plagiarism and integrity checks
- Scope and formatting compliance
- Preliminary quality assessment
- Reviewer identification based on publication history
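One of the functions above, reviewer identification from publication history, is essentially a text-matching problem. The sketch below is a deliberately minimal illustration of the idea, assuming a toy bag-of-words representation; real publisher systems use far richer models, and the data here is invented.

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Lower-cased bag-of-words vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_reviewers(abstract: str,
                   reviewer_history: dict[str, str]) -> list[tuple[str, float]]:
    """Rank candidate reviewers by similarity between the submission
    abstract and the text of each reviewer's past publications."""
    sub = bow(abstract)
    scores = [(name, cosine(sub, bow(pubs)))
              for name, pubs in reviewer_history.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Hypothetical reviewer profiles for illustration only.
history = {
    "Dr. A": "graphene synthesis and nanomaterial characterisation",
    "Dr. B": "epidemiological models of infectious disease spread",
}
ranked = rank_reviewers("novel graphene nanomaterial synthesis route", history)
print(ranked[0][0])  # best-matched reviewer by topical overlap
```

Even this toy version shows why such tools save editorial time, and also why they remain advisory: similarity of vocabulary is not the same as competence to judge a paper.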
"While AI cannot replace human reviewers or make final editorial decisions, it can play a valuable supporting role." — Deeksha Gupta, American Chemical Society
AI improves administrative efficiency but cannot substitute epistemic judgement, which remains central to scientific governance.
3. Limits of AI: Why Human Judgement Remains Indispensable
Despite efficiency gains, AI systems lack the contextual understanding necessary for evaluating conceptual novelty, methodological soundness, and disciplinary relevance. These dimensions are critical in determining whether research advances knowledge meaningfully.
AI models operate on existing datasets and patterns, making them inherently backward-looking. As a result, they struggle to assess unconventional hypotheses, interdisciplinary insights, or paradigm-shifting ideas.
From a development perspective, this limitation matters because transformative innovations—particularly in health, environment, and technology—often emerge from challenging established frameworks rather than optimising within them.
Ignoring this distinction risks creating a conservative knowledge ecosystem that prioritises conformity over creativity.
Areas requiring human oversight:
- Conceptual originality and significance
- Context-specific methodological evaluation
- Nuanced feedback for scientific advancement
Scientific progress depends on judgement, interpretation, and creativity—capabilities that remain distinctly human and institutionally irreplaceable.
4. Risks of Error Amplification and Knowledge Distortion
A key concern highlighted by researchers is the risk of error amplification when AI-generated summaries or assessments contain inaccuracies. Once integrated into the citation ecosystem, such errors can propagate rapidly.
Future researchers may unknowingly cite flawed AI-mediated interpretations, creating chains of non-replicable or incorrect findings. This is especially problematic in high-impact fields like medicine or climate science.
For governance, distorted scientific consensus can misguide regulation, public spending, and risk assessment. Errors embedded early in the knowledge pipeline are costly to correct later.
If unaddressed, AI-driven amplification could reduce reproducibility and weaken the self-correcting nature of science.
Unchecked automation can convert isolated errors into systemic failures within the scientific knowledge commons.
5. Biases and Structural Inequities in AI Systems
AI models inherit biases from their training datasets, algorithmic assumptions, and the institutional contexts in which they are developed. These biases may privilege highly cited or older research while marginalising newer or corrective studies.
Socioeconomic and linguistic biases can further skew visibility and validation of research from the Global South, affecting equity in global knowledge production.
This has direct implications for inclusive development, as policy-relevant research from less-represented regions may be systematically underweighted.
If ignored, AI may reinforce existing hierarchies in science rather than democratise knowledge.
Sources of bias:
- Dataset inclusion/exclusion choices
- Algorithmic weighting of citations
- Institutional and linguistic dominance
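The second source of bias above can be made concrete with a toy scorer. This is an illustrative sketch only: the blending formula and `citation_weight` parameter are hypothetical, not drawn from any real ranking system, but they show how mixing citation counts into a relevance score can push a newer corrective study below an older, highly cited paper.

```python
import math

def score(relevance: float, citations: int,
          citation_weight: float = 0.5) -> float:
    """Blend topical relevance with a log-scaled citation signal,
    normalised against a nominal ceiling of 10,000 citations."""
    citation_signal = math.log1p(citations) / math.log1p(10_000)
    return (1 - citation_weight) * relevance + citation_weight * citation_signal

# An older, highly cited paper vs. a newer corrective study that is
# more relevant to the query but has had little time to be cited.
older = score(relevance=0.6, citations=5_000)
newer = score(relevance=0.9, citations=12)

print(older > newer)  # the less relevant paper still ranks higher
```

The newer study is judged more relevant (0.9 vs. 0.6), yet the citation term outweighs that advantage, which is exactly the structural effect the section describes.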
Bias in scientific validation mechanisms translates into bias in policy priorities and development outcomes.
6. Safeguards and Responsible Use of AI in Publishing
Experts emphasise that AI must be deployed only for bounded, well-defined tasks and always under human supervision. Validation against primary sources is essential to prevent misinformation.
Using multiple AI models rather than relying on a single system can reduce systematic errors. Equally important is sourcing data from credible and authentic databases.
From an institutional perspective, publishers must invest in rigorous testing and validation before scaling AI tools, aligning with principles of accountability and transparency.
Neglecting safeguards risks delegitimising both AI tools and the journals that use them.
Recommended safeguards:
- Cross-verification of AI-generated citations
- Multi-model approaches
- Human-in-the-loop decision-making
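The three safeguards above can be composed into a single decision rule. The sketch below is a minimal human-in-the-loop pipeline under stated assumptions: the checker functions and record set are invented stand-ins, not any publisher's actual integrity tooling.

```python
def cross_verify(claim: str, checkers, trusted_records: set[str]) -> str:
    """Apply the safeguards in order: multiple independent checkers must
    all agree, and the claim must match a credible primary-source record;
    anything else is routed to a human editor."""
    votes = [check(claim) for check in checkers]
    if not all(votes):                # checkers disagree -> escalate
        return "flag for human review"
    if claim not in trusted_records:  # fails primary-source verification
        return "flag for human review"
    return "pass to reviewer"

# Two stand-in "models": naive heuristics used purely for illustration.
checkers = [
    lambda c: c.strip() != "",                        # non-empty claim
    lambda c: "et al." not in c.lower() or "(" in c,  # rough citation format
]
records = {"Doe, J. (2021). Example study. J. Ex. Sci."}

print(cross_verify("Doe, J. (2021). Example study. J. Ex. Sci.",
                   checkers, records))                 # pass to reviewer
print(cross_verify("Fabricated et al. 2020",
                   checkers, records))                 # flag for human review
```

The design choice worth noting is the default: when checkers disagree or verification fails, the system escalates to a human rather than deciding, which is what keeps the tool supportive rather than substitutive.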
Responsible AI use strengthens institutional trust; irresponsible use undermines it.
7. AI, Creativity, and the Nature of Scientific Discovery
AI excels at solving well-defined problems within established frameworks but struggles with reframing questions or challenging foundational assumptions. As such, it is better suited for incremental rather than fundamental discoveries.
There is concern that excessive reliance on AI shortcuts may reduce the “generative friction” that often leads to deep insights through sustained intellectual struggle.
For long-term development, preserving human creativity is critical, as transformative solutions to complex challenges—pandemics, climate change, inequality—require original thinking.
If creativity is constrained, scientific progress may become efficient but shallow.
True innovation arises not from optimisation alone but from reimagining the problem space—a capacity unique to human intelligence.
Conclusion
AI offers significant opportunities to enhance efficiency, integrity, and scalability in scientific publishing, particularly by alleviating reviewer fatigue. However, its role must remain supportive rather than substitutive. Sustaining credible knowledge production requires strong human oversight, institutional safeguards, and a commitment to preserving creativity and equity. In the long run, balanced integration of AI will determine whether science continues to serve as a reliable foundation for governance and development.
