Overview
A scoping review maps the breadth and nature of evidence on a topic. It clarifies concepts, identifies evidence types, and surfaces gaps. It does so without necessarily appraising study quality or estimating effects.
This method is useful when a question is broad, heterogeneous, or exploratory. Use it when you need a decision-ready overview rather than a pooled estimate. The appropriate reporting checklist is PRISMA-ScR, which sets out preferred items for transparency and completeness for scoping reviews (PRISMA-ScR).
Start by confirming fit. Scoping reviews differ from systematic reviews in aim (mapping vs effect estimation) and synthesis (descriptive vs statistical). They also often omit risk-of-bias appraisals.
For methods detail, see the JBI Manual for Evidence Synthesis—widely used guidance for scoping methodology (JBI Manual).
If you’re building a protocol today, plan to follow PRISMA-ScR for reporting. Prospectively register your protocol to strengthen credibility.
Authoritative resources:
- PRISMA-ScR checklist on the EQUATOR Network (PRISMA-ScR)
- JBI Manual for Evidence Synthesis – Scoping Reviews (JBI Manual)
Is a scoping review the right method? A PCC-based decision framework
Choose a scoping review when your purpose is to map what exists, how it has been studied, and where the gaps are. It is especially suited to diverse study designs, outcomes, or populations.
If you must answer a narrow comparative effectiveness or impact question, a systematic review is usually a better fit.
Use the PCC framework (Population, Concept, Context) to articulate scope. Then test method fit. If PCC yields a broad or exploratory scope—for example, the diversity of mental health apps used by adults in community settings—scoping methods are well suited.
If the purpose is to chart interventions and outcomes against domains for decision-makers, consider whether a mapping review or an evidence gap map (EGM) is the more actionable product. JBI provides clear guidance for scoping review indications, and the Cochrane Handbook offers cross-method distinctions that help locate your review type in the evidence ecosystem (Cochrane Handbook).
Practical tip: draft your PCC and sketch the main outputs you need (narrative map, outcome-by-intervention matrix, or a gap visualization). If the output is a visual matrix of evidence presence/absence by outcome and subgroup, an EGM may serve stakeholders better.
Using PCC to define scope and eligibility
Translate PCC into operational inclusion and exclusion criteria. Then pressure-test those criteria against a small sample of records.
This conversion makes your question reproducible and your selection defensible. JBI recommends explicit eligibility rules and piloting them for consistency before full screening (JBI Manual).
Work through PCC like this:
- Population: who is under study and who is not (e.g., adults with chronic pain; exclude pediatric-only populations).
- Concept: the phenomena or interventions (e.g., digital self-management apps; exclude clinician-only decision support).
- Context: settings, geography, or systems (e.g., outpatient/community; exclude inpatient trials).
For example: “Adults (P) using digital self-management apps (C) in community care (C).” This becomes inclusion criteria that require adult samples, self-management app interventions, and non-inpatient settings.
A common pitfall is burying key limits (like setting) inside search terms rather than making them explicit eligibility criteria. This leads to inconsistent screening.
Before launching full screening, pilot 50–100 records with two reviewers. Refine definitions and minimize avoidable conflicts.
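Writing the PCC-derived rules as explicit, testable logic — even informally — makes the pilot easier to reconcile, because every conflict traces back to a named criterion. A minimal sketch, assuming hypothetical field names; real inclusion decisions remain with human reviewers:

```python
# Sketch: PCC criteria expressed as an explicit eligibility rule.
# Field names ("min_age", "intervention", "setting") are illustrative.

def eligible(rec):
    population = rec["min_age"] >= 18                        # P: adult samples
    concept = rec["intervention"] == "self-management app"   # C: the app concept
    context = rec["setting"] in {"community", "outpatient"}  # C: non-inpatient
    return population and concept and context

study = {"min_age": 18, "intervention": "self-management app",
         "setting": "community"}
print(eligible(study))                            # True
print(eligible({**study, "setting": "inpatient"}))  # False — excluded on Context
```

Codifying the rules this way also keeps limits like setting out of the search string and in the eligibility criteria, where screeners can apply them consistently.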
When to choose mapping reviews or evidence gap maps instead
Choose a mapping review when you must systematically categorize and index the characteristics of studies (e.g., designs, outcomes, measures) across a field and deliver a structured catalog. Opt for an evidence gap map when stakeholders need a visual, decision-oriented display of where evidence exists and where it is lacking across interventions, outcomes, and priority subgroups.
- Mapping review: best when the output is a structured taxonomy or database of study features to support planning and prioritization.
- Evidence gap map: best when decision-makers want a matrix showing density and gaps by intervention-outcome-subgroup to inform funding or guideline priorities.
- Scoping review: best when you need a narrative synthesis plus descriptive statistics that characterize what is known and how it has been studied.
If your end user expects a quick, time-boxed answer to refine a broad topic into a fundable question, consider a rapid scoping approach. Use focused sources and streamlined processes, with explicit tradeoffs documented.
If the need is ultimately a recommendation about effect, start scoping to map terrain. Then transition to a systematic review where appropriate.
Team, roles, and realistic timelines and costs
Assemble a multidisciplinary team anchored by a methodologist, a content expert, and an information specialist/librarian. At minimum, include two independent reviewers for screening and a third to adjudicate conflicts.
Calibration pilots before full screening are widely recommended in scoping methods guidance (Steps for Conducting a Scoping Review). Timelines vary by scope, but most teams allocate several weeks each for protocol development and searching, for screening and selection, and for data charting and synthesis.
A librarian’s involvement improves search quality, deduplication, and PRISMA-S compliance while reducing rework (PRISMA-S extension). For capacity planning, estimate record volumes from similar topics and scale reviewer workload accordingly.
Larger evidence bases warrant more reviewers or staged automation with validation. Build in buffer time for stakeholder feedback and data verification. Plan early for journal requirements to avoid last-minute rewrites.
Protocol planning and registration options
A robust scoping review protocol aligns your question with methods you can execute and defend. Pre-specify screening, charting, and synthesis decisions. Lay out calibration and quality safeguards.
Register the protocol to time-stamp intent, enable peer input, and strengthen transparency. Most journals welcome or require registration for evidence syntheses.
Because scoping reviews are generally not accepted by PROSPERO, use an open repository such as OSF Registries for protocol registration and materials sharing (PROSPERO; OSF Registries). Include a plan for managing grey literature and preprints, and specify how you will handle non-English evidence.
A short internal peer review of the protocol (e.g., by a librarian and a senior methodologist) can catch feasibility and reporting blind spots before you begin.
What to include in a scoping review protocol
Make every crucial decision explicit so reviewers and editors see a coherent plan. Use the checklist below to ensure coverage before registration.
- PCC question and objectives, with a plain-language summary.
- Explicit eligibility criteria translated from PCC (population, concept, context, study types, dates, languages).
- Search strategy plan: databases, controlled vocabulary, text-words, grey literature, preprints, citation chasing.
- Screening workflow: two-reviewer processes, calibration pilots, conflict resolution, and automation guardrails.
- Data charting: variables, codebook, pilot testing, double-charting subset, and change management.
- Synthesis approach: numerical summary, narrative mapping, thematic analysis (as applicable).
- Stakeholder/public involvement and equity plan (including what you will collect under PROGRESS-Plus).
- Reporting standards and materials: PRISMA-ScR, PRISMA-S, planned appendices, and data-sharing repository.
- Update signals and maintenance plan (alerts, triggers, versioning).
After drafting, sanity-check feasibility against the likely volume of records and team availability. Invite a librarian to test-run the search to confirm scope and yield before you register.
OSF registration and transparency
Register your scoping review protocol on OSF Registries to create a citable, time-stamped record that you can update with materials as you go. Upload your protocol or use a structured template, and tag it appropriately (e.g., “scoping review,” “PRISMA-ScR”).
Link the registration to project components for search strategies, codebooks, and data. OSF’s openness supports reviewer confidence and streamlines journal submission when editors ask for protocol access (OSF Registries).
As methods evolve, maintain a dated change log on OSF explaining what changed and why. This becomes the backbone of your deviations section.
Sharing search strategies, deduplication logs, and charting forms on OSF also supports FAIR practices and reproducibility.
Why PROSPERO often doesn’t accept scoping reviews
PROSPERO is designed primarily for systematic reviews of health-related outcomes. It generally does not accept scoping review protocols.
Its eligibility guidance specifies that non-analytic evidence mapping exercises typically fall outside scope. This is why many teams use OSF instead (PROSPERO – About registration and eligibility).
If a funder requests registration, confirm accepted registries early. If necessary, supplement OSF with institutional repositories or a protocol preprint.
Search strategy: databases, grey literature, and preprints
Your search must be comprehensive for the scope yet feasible to execute and document. Combine controlled vocabulary (e.g., MeSH, Emtree) with free-text synonyms. Use validated filters sparingly, and plan citation chasing for included records.
Report your search fully using the PRISMA-S extension. It specifies the details needed for reproducibility, such as interfaces, dates, strings, and limits (PRISMA-S extension).
Grey literature and non-English evidence reduce publication and language biases. Comprehensive searching outside traditional journals is encouraged by major methods handbooks (Cochrane Handbook).
Build and test your strategy with an information specialist. Pilot in a core database to judge yield, then scale to other sources.
Keep a search diary with versions, dates, and rationale for refinements. This helps you complete PRISMA-S items accurately.
Grey literature sources and documentation
Grey literature matters when policies, reports, theses, or conference outputs carry critical insights missing from journals. Prioritize sources based on where your topic is likely to be published outside peer-reviewed journals. Be explicit about scope and capture methods up front.
- Organizational/policy repositories (e.g., WHO, national agencies, professional societies).
- Trial and study registries (e.g., ClinicalTrials.gov, ICTRP) for ongoing and completed but unpublished work.
- Dissertations/theses repositories (e.g., ProQuest, institutional repositories).
- Conference proceedings and society abstracts (publisher or society sites).
- Government and NGO portals relevant to the topic domain.
Document exactly where you searched, the terms, dates, and filters used, and how you saved or exported results. In your manuscript, report these using PRISMA-S items. Include a grey literature appendix listing portals visited and the capture method.
Preprints: when they change conclusions and how to handle them
Preprints can materially alter understanding in an emergent field. They reveal early findings and methods.
Include them when timeliness matters or the evidence base is young. Label them clearly and run sensitivity analyses to test whether their inclusion affects your main patterns or conclusions.
Track preprints for subsequent peer-reviewed versions. If a preprint later publishes, replace the citation and update extracted data as part of your change log.
Guardrails help: create a separate preprint tag in your screening tool. Chart a simple “peer-review status” variable, and document any differences when a preprint becomes a published article.
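The peer-review-status variable and the change log can be operationalized with a few lines. A sketch under assumed field names — the DOIs below are placeholders, not real identifiers:

```python
# Sketch: swapping a preprint for its published version and logging why.
# "10.x/..." DOIs are placeholders, not real identifiers.
from datetime import date

record = {"id": "p1", "title": "App uptake in community care",
          "peer_review_status": "preprint", "doi": "10.x/preprint1"}
change_log = []

def mark_published(rec, new_doi, log, note=""):
    """Update a preprint record to its published version and log the change."""
    log.append({"date": date.today().isoformat(), "record": rec["id"],
                "old_doi": rec["doi"], "new_doi": new_doi,
                "note": note or "no substantive changes to charted data"})
    rec.update(peer_review_status="published", doi=new_doi)

mark_published(record, "10.x/journal1", change_log,
               "sample size revised; outcome counts re-charted")
print(record["peer_review_status"])  # published
```

The log entries double as the dated change log recommended for your deviations section.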
If preprints drive your core conclusions, note this prominently. Justify inclusion based on decision needs.
Language and translation handling
Avoid excluding studies solely by language to minimize language bias. Screen all records by titles/abstracts and then translate eligible full texts pragmatically.
The Cochrane Handbook encourages inclusive strategies and warns that language limits can bias results, particularly in fields dominated by non-English research (Cochrane Handbook). Effective workflows include using bilingual reviewers when available, leveraging professional translation selectively, and using machine translation for screening with human verification of critical data.
Decide early which fields you will extract from non-English papers and how you will validate accuracy. Keep a translation log noting who translated, tools used, and any uncertainties resolved during consensus.
Managing large evidence sets: deduplication, screening calibration, and automation
Large yields are common in scoping reviews. Your pipeline must protect recall while preserving reviewer time.
Standardize deduplication first. Then run a structured calibration with two screeners. Use automation only with validation and a human-in-the-loop.
The goal is consistent, transparent decisions documented well enough to reproduce and audit. Begin with a test batch of records to calibrate, then scale once agreement is stable.
Use automation for prioritization rather than replacement. Record performance metrics (e.g., recall on a gold-standard set) in your methods. Calibration pilots and two independent screeners are common and recommended to ensure reliability (Steps for Conducting a Scoping Review).
Automation tools you can safely use for screening and deduplication
Automation can triage workload if you validate performance on your topic. Common tools include Rayyan for assisted screening and conflict management, and ASReview for active learning that prioritizes likely-relevant records.
For deduplication, reference managers (e.g., EndNote, Zotero) and scripted approaches efficiently remove exact and near-duplicate records.
Use these guardrails: pilot any tool on a labeled subset and target very high recall (e.g., >95%) before trusting prioritization. Never allow automation to make final exclusion decisions without human verification.
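The recall check on a labeled subset takes only a few lines. This sketch assumes you have human include decisions for a pilot set plus the records your tool flagged; all IDs are invented:

```python
# Sketch: recall of an automation tool against a human-labeled gold standard.
# Record IDs below are invented for illustration.

def recall_on_gold_standard(gold_includes, tool_flagged):
    """Fraction of human-labeled includes that the tool also surfaced."""
    gold = set(gold_includes)
    if not gold:
        raise ValueError("gold-standard include set is empty")
    return len(gold & set(tool_flagged)) / len(gold)

# Hypothetical pilot: 20 human-labeled includes; the tool surfaces 19 of them
# plus two irrelevant records (false positives do not hurt recall).
gold = [f"rec{i}" for i in range(20)]
flagged = gold[:19] + ["rec_x", "rec_y"]

r = recall_on_gold_standard(gold, flagged)
print(f"recall = {r:.2f}")  # recall = 0.95 — at the edge of a >95% target
```

Report the gold-standard size and the resulting recall in your automation appendix so reviewers can judge the risk of missed studies.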
Document tool settings, training sets, and thresholds in your methods. Include a short automation appendix so reviewers can assess risk of missed studies.
Calibration targets and prevalence effects
Calibration is a short, structured exercise where two screeners independently apply eligibility to a sample. They then reconcile differences.
Aim for stable, high agreement before launching full screening. Many teams target at least 80% agreement or a substantial kappa (commonly ≥ 0.61), recognizing that kappa is sensitive to the prevalence of “include” decisions.
When relevant records are rare, percent agreement can look deceptively high while kappa is low. Use both metrics plus qualitative discussion to decide readiness.
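The prevalence effect is easy to demonstrate with toy numbers. In this sketch (illustrative labels, not real screening data), two reviewers agree on 94 of 100 records, yet because includes are rare, kappa is far lower:

```python
# Sketch: percent agreement vs Cohen's kappa on the same decisions,
# showing how rare "include" labels deflate kappa. Toy data only.

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    n = len(a)
    po = percent_agreement(a, b)
    # chance agreement from each reviewer's marginal include/exclude rates
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    return (po - pe) / (1 - pe)

# 100 records; each reviewer includes only 5, and just 2 of those overlap
a = ["inc"] * 5 + ["exc"] * 95
b = ["inc"] * 2 + ["exc"] * 3 + ["inc"] * 3 + ["exc"] * 92

print(f"agreement = {percent_agreement(a, b):.2f}")  # agreement = 0.94
print(f"kappa     = {cohens_kappa(a, b):.2f}")       # kappa     = 0.37
```

This is why the readiness decision should weigh both metrics alongside a qualitative review of the disagreements themselves.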
Pilot 50–100 citations at title/abstract and a smaller set at full text. Iteratively refine eligibility wording and notes.
Record calibration results and the final rules you adopted. These become an appendix item and help onboard new team members midstream.
Deduplication pipelines and audit trails
A reproducible deduplication workflow prevents inflation of counts and wasted screening time. Start by normalizing fields (titles, DOIs, page numbers). Run exact-match algorithms, then apply fuzzy-matching rules to catch near-duplicates. Keep a manual verification step for edge cases.
Export a deduplication log that records rules, counts removed, and any manual changes. Report deduplication methods using PRISMA-S items and keep the log in your repository for auditability.
As a guardrail, never delete records outright without a recoverable backup. Instead, tag and hide duplicates in your screening tool so you can revisit decisions if needed.
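A minimal version of this pipeline — normalize fields, exact-match on DOI and title, then fuzzy-flag near-duplicates for human review — can be sketched with the standard library. Records and the threshold below are illustrative, and the DOIs are placeholders:

```python
# Sketch: normalize -> exact-match -> fuzzy-flag deduplication pass.
# Records and DOIs are illustrative placeholders.
import re
from difflib import SequenceMatcher

def norm_title(t):
    """Lowercase, strip punctuation, and collapse whitespace for matching."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", t.lower())).strip()

def dedupe(records, fuzzy_threshold=0.95):
    kept, dropped, flagged = [], [], []
    seen_doi, seen_title = {}, {}
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        title = norm_title(rec["title"])
        if (doi and doi in seen_doi) or title in seen_title:
            dropped.append(rec["id"])  # tag as duplicate; never hard-delete
            continue
        for prior_title, prior_id in seen_title.items():
            # near-duplicates go to a human reviewer, not straight to the bin
            if SequenceMatcher(None, title, prior_title).ratio() >= fuzzy_threshold:
                flagged.append((rec["id"], prior_id))
        if doi:
            seen_doi[doi] = rec["id"]
        seen_title[title] = rec["id"]
        kept.append(rec["id"])
    return kept, dropped, flagged

records = [
    {"id": "r1", "doi": "10.1/abc", "title": "Apps for Chronic Pain."},
    {"id": "r2", "doi": "10.1/ABC", "title": "A different record entirely"},
    {"id": "r3", "doi": None, "title": "Apps for chronic pain"},
]
kept, dropped, flagged = dedupe(records)
print(kept, dropped)  # ['r1'] ['r2', 'r3']
```

Logging `dropped` and `flagged` with counts per rule gives you exactly the audit trail PRISMA-S asks you to report.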
Data charting and synthesis: quantitative, qualitative, and mixed methods
Data charting converts included studies into analyzable variables with a form and codebook your team can apply consistently. Pilot the form on a small sample to test feasibility. Revise definitions and plan a double-charting subset for reliability.
Your synthesis should fit your question. Use numerical summaries (counts by design, country, outcomes), narrative mapping (how concepts and measures vary), and thematic synthesis for qualitative evidence.
Keep your charting form lean at first and expand only if added variables will be used in synthesis or visualizations. When mixing quantitative and qualitative evidence, decide which stream leads the narrative and how the other supports it. Keep a change log for form edits and codebook updates to protect reproducibility.
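Once the charting form is piloted, the numerical summary is straightforward. A sketch over hypothetical charted rows:

```python
# Sketch: descriptive counts from charted data (rows are hypothetical).
from collections import Counter

charted = [
    {"id": "s1", "design": "RCT", "country": "UK"},
    {"id": "s2", "design": "cohort", "country": "UK"},
    {"id": "s3", "design": "qualitative", "country": "Canada"},
    {"id": "s4", "design": "RCT", "country": "Canada"},
]

by_design = Counter(s["design"] for s in charted)
by_country = Counter(s["country"] for s in charted)
print(by_design.most_common())  # [('RCT', 2), ('cohort', 1), ('qualitative', 1)]
print(dict(by_country))         # {'UK': 2, 'Canada': 2}
```

The same counts feed tables, bar charts, or an intervention-by-outcome matrix without re-keying data.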
Codebooks, reflexivity, and reliability
A good codebook defines each variable, allowable values, examples of edge cases, and decision notes for conflicts. Develop it iteratively: draft definitions, pilot with two charting reviewers, reconcile discrepancies, and revise with worked examples.
Build reflexivity into the process by documenting how disciplinary perspectives and prior assumptions may shape coding choices and interpretations. For reliability, double-chart a meaningful subset (e.g., 10–20%) and calculate agreement on key variables.
Prioritize variables that drive your synthesis. Use consensus meetings to resolve disagreements and record rationales. These become part of your methods narrative and appendices.
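The double-charting reliability check can be computed per variable, which pinpoints which codebook definitions need rework. A sketch with hypothetical rows from two charting reviewers:

```python
# Sketch: per-variable agreement on a double-charted subset (toy rows).

def per_variable_agreement(chart_a, chart_b, variables):
    """Percent agreement for each charted variable across paired records."""
    return {var: sum(a[var] == b[var] for a, b in zip(chart_a, chart_b))
                 / len(chart_a)
            for var in variables}

reviewer_a = [{"design": "RCT", "setting": "community"},
              {"design": "cohort", "setting": "community"}]
reviewer_b = [{"design": "RCT", "setting": "outpatient"},
              {"design": "cohort", "setting": "community"}]

result = per_variable_agreement(reviewer_a, reviewer_b, ["design", "setting"])
print(result)  # {'design': 1.0, 'setting': 0.5}
```

Here "setting" disagrees on half the pairs, so its definition — not the reviewers — is the likely problem to fix at the consensus meeting.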
Nuances for qualitative-focused scoping reviews
When qualitative evidence is the focus, decide up front whether your synthesis will be primarily inductive or deductive. Be explicit about your approach and justify the framework if you use one.
Maintain an audit trail of coding decisions and theme development. Report how you managed interpretive saturation and the criteria used to cluster and name themes.
Structured approaches such as thematic synthesis or framework synthesis work well in scoping reviews. They map concepts and relationships without over-claiming causal inference.
Include illustrative quotes or paraphrased findings in your charting to support theme credibility. Explain how you handled context differences across settings.
Stakeholder and public involvement, and equity-focused planning
Engaging stakeholders or the public can sharpen your question, align outputs with decision needs, and improve equity sensitivity. Involve them early to refine PCC, later to validate themes or visualizations, and finally to interpret implications.
Plan data extraction items that allow you to analyze equity-relevant patterns. Report both involvement and equity methods clearly in line with PRISMA-ScR expectations.
Use an explicit equity lens such as PROGRESS-Plus to decide which population descriptors to chart (e.g., place, race/ethnicity, socioeconomic status). This enables comments on who is represented and where gaps may exacerbate disparities (Cochrane Equity Methods Group – PROGRESS-Plus).
Transparency here helps readers trust your interpretations. It also supports funders’ equity mandates.
Planning for PROGRESS-Plus
Build equity into your charting form by adding variables for key PROGRESS-Plus elements: place of residence, race/ethnicity/culture/language, occupation, gender/sex, religion, education, socioeconomic status, social capital, and “plus” factors such as age or disability.
Decide in advance how you will analyze these data. Options include reporting the distribution of studies across subgroups or highlighting where evidence is absent.
As you synthesize, note when outcomes or experiences differ across PROGRESS-Plus categories. Consider whether the evidence allows any cautious inferences.
Even when data are sparse, describing the pattern of representation is valuable. It is actionable for priority setting.
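Even a simple tally of which PROGRESS-Plus elements studies report makes representation gaps visible. A sketch over hypothetical charted rows, where True means the study reports that element (the element list is abbreviated for illustration):

```python
# Sketch: tallying PROGRESS-Plus reporting across included studies.
# Rows are hypothetical; the element list is abbreviated for illustration.
ELEMENTS = ["place", "race_ethnicity", "gender_sex", "ses"]

studies = [
    {"place": True, "race_ethnicity": False, "gender_sex": True, "ses": False},
    {"place": True, "race_ethnicity": True, "gender_sex": True, "ses": False},
    {"place": False, "race_ethnicity": False, "gender_sex": True, "ses": False},
]

reported = {el: sum(s[el] for s in studies) for el in ELEMENTS}
gaps = [el for el, n in reported.items() if n == 0]
print(reported)  # {'place': 2, 'race_ethnicity': 1, 'gender_sex': 3, 'ses': 0}
print(gaps)      # ['ses'] — socioeconomic status unreported across the set
```

Gaps surfaced this way translate directly into priority-setting statements about who is missing from the evidence base.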
Reporting PPI in PRISMA-ScR
PRISMA-ScR anticipates reporting of stakeholder and public involvement where used. This includes who was involved, when, and what changed because of their input.
Map your PPI activities to specific steps—question refinement, eligibility rules, data items, or interpretation. Cite corresponding materials in appendices so reviewers can see the influence.
Use the PRISMA-ScR checklist to guide where these details fit. Consider including a short PPI statement summarizing contributions and compensation alongside your methods (PRISMA-ScR).
Should you include critical appraisal in a scoping review?
Usually no. Critical appraisal is not required for scoping reviews and is often inappropriate when your aim is to map the evidence rather than judge its quality or estimate effects.
Consider limited appraisal only when decision-makers must understand broad signals of study quality to interpret the map responsibly. Report it transparently without grading overall certainty.
If you include appraisal, choose tools appropriate for the diverse designs you expect. Apply them consistently, and present results descriptively (e.g., counts of common limitations) instead of assigning ratings that imply effect strength.
Alternatives include coding study design and key methodological features (randomization, blinding, sampling) as descriptors. These help readers gauge the evidence landscape.
JBI guidance emphasizes that scoping reviews typically focus on breadth and characteristics rather than quality grading. Align your choice with your objectives (JBI Manual).
Reporting and reproducibility: PRISMA-ScR and beyond
Report your study in accordance with PRISMA-ScR and use the PRISMA-S extension to fully document search strategies, sources, and deduplication. Include a flow diagram adapted for scoping reviews and detailed methods for screening and charting, and specify any deviations from the protocol with justifications.
Explicit reporting is best practice and what reviewers expect for acceptance in reputable journals (PRISMA-S extension).
In addition to the manuscript, plan appendices with the operational detail needed to reproduce your work. Share datasets and materials in a repository to meet FAIR principles and funder expectations. Doing so also helps future updates and related projects.
For search and selection in particular, following PRISMA-S ensures another team could replicate your search and understand any differences.
Advanced appendices and reproducible search standards
Well-structured appendices accelerate peer review and establish your review as a reusable resource. Include:
- Full database search strategies for every source, with dates and interfaces.
- Deduplication rules and logs (counts removed by rule and manual checks).
- Screening calibration results and final eligibility rulebook.
- Data charting form, codebook, and any reliability statistics.
- Data dictionary and cleaned dataset with a readme describing variables and coding.
Before submission, cross-walk your appendices against PRISMA-ScR and PRISMA-S items to ensure there are no reporting gaps. A concise “how to use this dataset” note in your repository helps others cite and build on your work.
Data and code sharing (FAIR practices)
Share materials and data in an open repository such as OSF to make them findable, accessible, interoperable, and reusable. Include final search strings, deduplication logs, screening decisions (with unique IDs), charted data, and analysis scripts if used.
Assign clear licenses and provide contact information for queries about updates or reuse. A lightweight data dictionary and a versioned change log increase clarity for downstream users.
If you derived visualizations or thematic maps from code, include that code with minimal documentation so others can reproduce figures.
Journal selection, peer review expectations, and common rejection reasons
Choose journals that regularly publish scoping reviews in your field and explicitly cite PRISMA-ScR in their author guidelines. Review recent scoping reviews they have accepted to calibrate expectations for scope, methods detail, and appendices.
Editors expect a defensible question, rigorous and transparent methods, and conclusions that match the descriptive nature of a scoping review. Before submission, cross-check that you have prospectively registered a protocol, adhered to it or documented deviations, and reported search details to PRISMA-S standards.
Where stakeholders or equity were part of your plan, ensure the manuscript and appendices make that work visible. A strong cover letter can explain your method choice (e.g., why scoping vs systematic) in one sentence to preempt desk rejection.
Common rejection reasons and how to preempt them
Most rejections trace to mismatches between aim, methods, and claims, or to avoidable reporting gaps. Use the list below as a final pre-submission audit.
- Unclear or misaligned question: fix by presenting a crisp PCC question and objectives that match scoping aims.
- Inadequate or poorly reported search: fix by librarian co-authorship, PRISMA-S-compliant reporting, and complete strategies in appendices.
- Conflating scoping with systematic review: fix by avoiding effect language or certainty claims and focusing on mapping and gaps.
- Thin methods transparency: fix by documenting calibration, screening, deduplication, and charting decisions with logs and codebooks.
- Overstated conclusions: fix by aligning interpretations with descriptive evidence and noting where evidence is absent.
- Missing protocol registration: fix by registering on OSF and citing the registration in your manuscript.
Run a final internal peer review against the PRISMA-ScR checklist. Verify that your abstract includes the elements journal reviewers scan first: aim, sources, selection basics, and key findings.
Updating a scoping review: triggers and transparent reporting
Plan for updates when you write your protocol. Treat them as part of a living evidence ecosystem rather than an afterthought.
Triggers include a surge of new studies, major policy or guideline changes, or explicit time-based intervals in fast-moving fields. Set update signals like database alerts, preprint notifications, and stakeholder feedback loops to catch developments early.
When updating, specify whether you are doing a partial update (e.g., new years of evidence added) or a full update (re-run from inception with revised methods). Maintain a change log that notes methods changes, new sources, or modifications to eligibility and show how these affected results.
In your updated manuscript, clearly mark what is new and what remains unchanged from prior versions. This helps readers track evolution.
Living and rapid updates vs full updates
Living and rapid updates keep decision-makers informed between full updates by focusing on timely additions and concise reporting. They require strong version control, narrower scope per cycle, and explicit documentation of what was and wasn’t searched to manage expectations.
A full update revalidates the entire pipeline. It is appropriate when methods have matured or the evidence base has shifted substantially.
Choose the approach that fits your stakeholders’ risk tolerance and the pace of the field. Regardless of format, preserve reproducibility: archive searches, track deduplication, and maintain the same charting variables unless you justify and document changes.
Scoping vs mapping reviews, evidence gap maps, and rapid scoping
Use a scoping review to describe what exists, how it has been studied, and where gaps lie. This is ideal when study designs and outcomes are diverse.
Choose a mapping review when you need a structured catalog of study characteristics to support meta-knowledge building. Opt for an evidence gap map when decision-makers want a visual matrix of evidence presence by intervention and outcome, often with subgroup overlays.
Rapid scoping is a time-boxed variant of scoping that streamlines sources or steps while documenting tradeoffs. It is best for informing near-term priorities.
If your end goal is to judge effects, consider starting with a scoping phase. Then transition to a systematic review where the question can be narrowed and appraised.
Whatever you choose, align method, outputs, and claims. Report them fully using PRISMA-ScR and PRISMA-S so others can trust and reuse your work.
References and further reading:
- PRISMA-ScR checklist on the EQUATOR Network (PRISMA-ScR)
- JBI Manual for Evidence Synthesis – Scoping Reviews (JBI Manual)
- PRISMA-S extension for search reporting (PRISMA-S extension)
- OSF protocol registration and materials sharing (OSF Registries)
- PROSPERO eligibility for registration (PROSPERO)
- Cochrane Handbook (bias reduction and searching) (Cochrane Handbook)
- Equity framework for PROGRESS-Plus (Cochrane Equity Methods Group)
- Practical method steps and calibration (Steps for Conducting a Scoping Review)