Covidence is widely used systematic review software that helps teams move from bulk search results to clean, extractable evidence. If you’re evaluating tools or about to implement one, this guide distills what matters for pricing, compliance, PRISMA 2020 alignment, methods templates, team operations, integrations, and troubleshooting. The goal: help you decide if Covidence fits your review type, team, and budget—and set it up right the first time.

Overview

Covidence is a web-based platform that streamlines the core stages of evidence synthesis: importing references, de-duplication, title/abstract and full-text screening, risk-of-bias assessment, data extraction, and exporting to analysis tools. It is designed for researchers, clinicians, students, and librarians who need a reliable workflow for systematic reviews, scoping reviews, and related syntheses. In practice, Covidence reduces manual coordination overhead, centralizes decisions, and enforces consistent methods so your review remains PRISMA-ready and auditable.

Most teams start by importing RIS/CSV files from databases and reference managers, then run de-duplication. They configure dual screening with blinding to standardize decisions and reduce bias.

As you progress, customize risk of bias forms and data extraction templates to match your PICO and study designs. Export clean datasets for meta-analysis or registries. If you’re new to the platform, plan a short pilot to validate inclusion criteria, extraction forms, and conflict-resolution rules before scaling to your full dataset.

Pricing, plans, and eligibility

Choosing a plan is primarily about matching how you work—solo, lab, or institution—to cost predictability and administrative overhead. Individual Covidence subscriptions suit solo researchers and small ad hoc teams; institutional licenses centralize provisioning, training, and support.

Your total cost of ownership (TCO) should include licenses, training time, integrations, and any compliance reviews required by your organization. Build these considerations into your timeline so setup stays on track.

Procurement often hinges on seats (named vs. project-based), storage and attachment limits, and how many concurrent reviews you’ll run. Ask about trials or sandbox access to test imports, screening speed, and exports with your own data. Clarify renewal and cancellation terms in writing.

For multi-grant labs and departments, adding teaching/cohort use to the plan can reduce friction and cost compared to piecemeal individual purchases.

Individuals, labs, and multi-site teams

If you’re an individual or a two- to three-person team, a single subscription with the ability to invite collaborators will usually suffice. Labs with rotating trainees typically benefit from pooled seats or project-based licensing, so leads can add or remove collaborators without waiting on procurement each time.

Multi-site teams and consortia should prioritize centralized administration, audit visibility, and support SLAs. These capabilities are normally bundled into institutional licensing.

Evaluate whether you need fixed seats or flexible project-based access. Some labs run many small scoping reviews in parallel, while others batch work on one large review. Confirm how the plan handles inactive members, who can invite external collaborators, and whether students can carry work forward after graduation.

A short internal policy for seat assignment and project archiving saves time and prevents access confusion mid-review.

Academic, student, and non-profit discounts

Universities, teaching hospitals, and non-profits commonly receive discounted pricing with domain verification and a point of contact. Many vendors also offer student-friendly terms for thesis projects or class-based reviews, sometimes tied to faculty sponsorship.

If you’re in a resource-limited setting or running a registered evidence synthesis course, ask for educational bundles that include onboarding sessions and training materials. Expect to provide basic affiliation documentation and agree to appropriate use for teaching or research.

When in doubt, request a quote with and without education pricing to understand the savings and any feature differences. If your library already has a license, check for an institutional sign-up path with your email domain before purchasing individually.

Global availability, signup, and country restrictions

Covidence is accessible globally, but payment methods, currency display, and tax handling (e.g., VAT/GST) can vary by region. Some institutions require country-specific compliance documentation or restrict cross-border data transfers. Verify data residency options early if that applies to you.

For sign-up, institutional access often uses your university or hospital email domain. If you don’t see your domain recognized, contact your library or the vendor to confirm coverage.

If you work in a sanctions-restricted jurisdiction, your procurement office may need to confirm eligibility before purchase. To avoid delays, gather your legal entity details, billing contacts, and any internal vendor onboarding forms in advance.

Where possible, pilot with a limited dataset during procurement so your team is ready to scale once the license activates.

Security, compliance, and accessibility

Institutional buyers need confidence that Covidence can meet data protection and accessibility obligations. You should expect a clear data processing agreement (DPA), security overview, and accessibility conformance documentation during due diligence.

Covidence projects typically contain published research data rather than patient-identifiable information, but teams handling sensitive content should explicitly verify controls, data residency, and acceptable use with both the vendor and their IRB/IT. Confirm how sensitive attachments are stored and accessed.

For privacy and healthcare use cases, ask whether the vendor can support GDPR commitments for EU residents and whether they sign a BAA for HIPAA-covered entities in the U.S. For independent assurance, some buyers prefer vendors with SOC 2 Type II reports; request the latest attestation if your organization requires it.

Accessible design matters for equity and productivity. Confirm keyboard navigation, contrast, and screen reader support consistent with WCAG guidelines.

For reference, the key frameworks to ask about are:

- GDPR, for EU residents' data protection
- HIPAA and BAAs, for covered entities in the U.S.
- SOC 2 Type II, for independent security attestation
- WCAG 2.1 AA, for web accessibility conformance

Data residency, retention, and deletion

Where your data lives, how long it persists, and how it’s deleted are core risk questions. Ask which regions host application and attachment data, whether backups are encrypted, and how long backups are retained after account deletion.

Clarify whether you can request data export and hard deletion of project content. Ask how long administrative logs are kept for audit.

If your institution requires geographic restrictions, confirm region options and standard contractual clauses for cross-border transfers under GDPR. For projects requiring long-term archiving, define who holds the authoritative export of records, decisions, and extractions at project close.

A simple runbook—what to export, who stores it, and how deletions are requested—prevents last-minute scrambles.

Accessibility and assistive technologies

An accessible review platform helps all team members perform consistently across long screening sessions. WCAG 2.1 AA remains a common target for web applications and includes support for keyboard-only navigation, sufficient color contrast, and ARIA labels for screen readers (WCAG 2.1).

If your organization has an accessibility office, ask for a VPAT or conformance statement to verify current status. If a reviewer uses JAWS, NVDA, or VoiceOver, test core tasks—title/abstract screening, conflict resolution, and exporting—before committing your full dataset.

Where gaps exist, simple mitigations like zoom presets, color palettes, and keyboard shortcuts can maintain throughput. Document your team’s accessibility preferences in the project SOP so new members get up to speed quickly.

Workflow alignment with PRISMA 2020

Covidence supports a PRISMA-aligned workflow from search to inclusion. It helps you track counts and decisions that roll up into your PRISMA 2020 flow diagram. PRISMA emphasizes transparent reporting of identification, screening, eligibility, and inclusion with reasons for exclusion reported at the full-text stage (PRISMA 2020 statement).

With thoughtful setup, you can generate export-ready counts and diagrams without reconstructing logs at the end. Assign reporting ownership early to keep numbers consistent.

Map your steps explicitly: import results from each source, run de-duplication, conduct independent title/abstract screening with predefined inclusion criteria, then full-text review with standardized exclusion reasons. As you proceed, maintain a minimal audit trail: where you searched, when, and any changes to criteria.

Before you begin, agree on who will own the PRISMA reporting and when you’ll lock counts for manuscript submission.

Configuring reasons for exclusion and PRISMA flow outputs

Well-defined exclusion reasons make PRISMA reporting fast and defensible. Configure mutually exclusive and collectively exhaustive reasons (e.g., wrong population, intervention, comparator, outcome, study design, language, or duplicate) for full-text screening.

Pilot them on a small set of articles to ensure reviewers apply them consistently and they align with your registered protocol. Provide examples in your SOP if certain reasons are commonly confused.

To produce a PRISMA 2020 flow diagram in Covidence, complete your de-duplication, track the number of records screened and excluded at each stage, and capture reasons at full text. When ready, export the PRISMA counts or diagram and verify totals match your database logs.

A short cross-check—sum of included + excluded + duplicates = total imported—catches most reporting discrepancies before submission.
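As an illustration, here is a minimal Python sketch of that cross-check; every count below is a placeholder to replace with your project's totals.

```python
# Minimal sanity check for PRISMA 2020 counts before submission.
# All numbers are placeholders -- substitute your project's totals.

total_imported = 1480          # records from all database exports
duplicates_removed = 310       # removed by pre-import + Covidence dedup
excluded_title_abstract = 980  # excluded at title/abstract screening
excluded_full_text = 152       # excluded at full text, with reasons
included = 38                  # studies included in the review

assert total_imported == (duplicates_removed + excluded_title_abstract
                          + excluded_full_text + included), \
    "PRISMA counts do not reconcile -- audit the screening log"
print(f"Screened: {total_imported - duplicates_removed}, included: {included}")
```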

Methods templates: RoB 2, GRADE, and customization

Risk of bias and certainty of evidence judgements should follow established standards. Covidence supports structured risk of bias assessments; many teams align with Cochrane’s RoB 2 domains for randomized trials and adapt for non-randomized designs where appropriate (Cochrane Handbook).

For certainty of evidence, your data extraction should capture outcomes, effect estimates, imprecision, and other details needed to apply GRADE outside the platform (GRADE Working Group). Keep the framework in your methods and analysis plan.

Customize templates so questions match your PICO, study designs, and outcomes. Keep the instrument focused—extraction forms balloon quickly and slow teams down.

Before full rollout, calibrate on a small stratified sample (e.g., 15–30 studies across designs and years) and refine any ambiguous fields or guidance notes.

Custom data extraction forms and codebook design

Your Covidence data extraction template should be explicit enough that different reviewers make the same choices independently. Define each field’s purpose, allowed values, units, and how to handle “not reported” or “unclear.”

Use controlled vocabularies and picklists for common fields (study design, funding source, outcome type) to improve consistency and simplify analysis downstream. Pair the form with a codebook that includes examples and edge cases, especially for complex outcomes or non-standard comparators.

If you plan to populate SRDR+ or meta-analysis software, align field names and types early to avoid messy mapping later. Keep a change log for any template edits after calibration so you can explain differences in your methods section.
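To make this concrete, here is a hypothetical codebook entry expressed as a Python dictionary; the field names, allowed values, and conversion rules are illustrative, not a Covidence schema.

```python
# Hypothetical codebook entries; adapt names and values to your protocol.
CODEBOOK = {
    "primary_outcome_type": {
        "purpose": "Classify the study's primary outcome for subgrouping",
        "allowed_values": ["mortality", "morbidity", "quality_of_life",
                           "surrogate_marker", "other"],
        "units": None,
        "not_reported": "Code as 'NR'; do not infer from secondary outcomes",
        "example": "30-day all-cause mortality -> 'mortality'",
    },
    "followup_weeks": {
        "purpose": "Longest follow-up for the primary outcome",
        "allowed_values": "non-negative number",
        "units": "weeks (convert months x 4.33, round to one decimal)",
        "not_reported": "Code as 'NR' and flag for full-text recheck",
    },
}
```

Keeping the codebook in a versioned file alongside the project makes the change log after calibration straightforward to maintain.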

Pilot testing and inter-rater calibration

A short pilot prevents expensive rework later. Assign two reviewers to extract the same 10–20 studies and compare results field-by-field, noting disagreements and ambiguities.

Discuss discrepancies, revise definitions, and decide when fields require dual extraction versus single extraction with verification. Track agreement rates and spot where guidance or picklists reduce variation.

If you need a formal statistic, export the pilot decisions and calculate Cohen’s kappa on key binary fields. Aim for stable agreement before scaling. Lock the template and codebook when you reach acceptable reliability, then proceed to full extraction.
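For the kappa calculation, a short Python sketch like the following works on an exported CSV; the file and column names are assumptions to adapt to your actual export.

```python
# Sketch: Cohen's kappa on a pilot export. File and column names are
# hypothetical -- adjust to match your actual CSV headers.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

pilot = pd.read_csv("pilot_decisions.csv")  # one row per study
# Binary field coded independently by both reviewers (include/exclude).
kappa = cohen_kappa_score(pilot["reviewer_a_decision"],
                          pilot["reviewer_b_decision"])
print(f"Cohen's kappa: {kappa:.2f}")
# Rough reading: 0.61-0.80 substantial, above 0.80 near-perfect agreement;
# interpret alongside conflict notes, not as a pass/fail gate.
```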

Team roles, blinding, and quality control

Clear roles and blinding rules reduce bias and speed decision-making. Covidence allows you to invite reviewers and set permissions; most teams use dual independent screening at title/abstract and full text, with a third reviewer as tie-breaker.

Blinding screeners to each other’s votes helps prevent conformity bias. It is especially useful early in a review when criteria are being internalized.

As you scale, introduce periodic QA checks: random spot audits, conflict trend reviews, and agreement monitoring. Build these checkpoints into your project plan, not as an afterthought.

When conflict rates spike, pause to re-clarify criteria and adjust guidance to keep progress steady and consistent.

Dual screening rules and conflict workflows

Decide up front how many votes trigger inclusion or exclusion at each stage and how conflicts are resolved. A common pattern is “2 votes to include, 2 votes to exclude, 1:1 goes to full text” at title/abstract, then “2 votes to include, 2 to exclude with recorded reason” at full text.

Assign a rotating adjudicator to resolve conflicts. Document rationales for borderline cases so similar ones can be handled consistently.

Batch assignments so each reviewer sees a balanced mix of topics and years to avoid domain bias. If your team is large, use blocks of 100–200 references per reviewer to keep momentum while limiting context switching.

When you change criteria or guidance, annotate the timeline so you can discuss any inflection points in your manuscript.

Measuring agreement (kappa) and QA checkpoints

Agreement metrics help you decide when to recalibrate. You can approximate inter-rater reliability by exporting screening decisions and calculating Cohen’s kappa on a random sample.

Rising agreement and falling conflict rates indicate your criteria and training are working. Remember that kappa depends on prevalence and bias, so interpret it alongside qualitative notes and conflict categories rather than as a single pass/fail number.

Schedule QA checkpoints—for example, after the first 1,000 records and then monthly—to review conflicts and update guidance. If certain exclusion reasons dominate, refine definitions or add examples to your codebook.

Use these checkpoints to maintain blinding discipline and to ensure tie-breakers are applied consistently across the team.

Imports, de-duplication, and grey literature

A clean import pipeline saves days later. Covidence accepts RIS/CSV exports from major databases and reference managers like EndNote, Zotero, and RefWorks. You can attach PDFs at the full-text stage.

For best results, normalize fields (titles, DOIs, PMIDs) and de-duplicate in your reference manager before importing. Then run Covidence deduplication and spot-check matches.

Grey literature and trial registries are critical for comprehensive coverage, but they add heterogeneity that complicates matching. Normalize titles and identifiers where possible and document your capture methods for PRISMA.

Keep a separate log for manual additions and website captures so counts remain transparent.

Advanced de-duplication methods and algorithms

Expect a hierarchy of matching that prioritizes strong identifiers (DOI, PMID), then combinations of title, year, journal, and authors. Pre-import dedup in EndNote or Zotero reduces noise.
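The sketch below illustrates one way such a tiered match key can work in Python; it assumes simple dictionary records and is not Covidence's actual algorithm.

```python
# Sketch of a tiered match key for pre-import de-duplication, assuming
# records are dicts with optional 'doi', 'pmid', 'title', 'year', 'journal'.
import re

def normalize_text(value: str) -> str:
    # Lowercase and strip punctuation/whitespace so formatting differences
    # (punctuation, double spaces) do not block a match.
    return re.sub(r"[^a-z0-9]", "", value.lower())

def match_key(rec: dict) -> tuple:
    # Tier 1: strong identifiers win outright.
    if rec.get("doi"):
        return ("doi", rec["doi"].lower().strip())
    if rec.get("pmid"):
        return ("pmid", str(rec["pmid"]).strip())
    # Tier 2: fall back to normalized title + year + journal.
    return ("meta", normalize_text(rec.get("title", "")),
            rec.get("year"), normalize_text(rec.get("journal", "")))

def dedupe(records: list[dict]) -> list[dict]:
    seen, kept = set(), []
    for rec in records:
        key = match_key(rec)
        if key not in seen:
            seen.add(key)
            kept.append(rec)
    return kept
```

Run a check like this before import and keep the removed records so they can be reported in your PRISMA appendix.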

After import, run Covidence deduplication to catch near-duplicates with minor formatting differences. Always spot-check matches with identical titles but different years or supplements to avoid accidental exclusions.

If your dataset mixes registries and bibliographic records, use custom tags before import to preserve provenance. After deduplication, export a list of removed duplicates for your PRISMA appendix.

As a final check, search for a few sentinel DOIs across the project to ensure expected studies remain.

Grey literature and trial registry records

Grey literature often lacks consistent metadata, which affects matching and screening efficiency. When capturing conference abstracts, theses, or policy reports, standardize titles, authors, and dates, and add a source tag (e.g., “conference,” “thesis,” “agency”).

For trial registries, include registration numbers and link related publications once found during full-text review. In PRISMA reporting, note your grey literature sources, date ranges, and any restrictions.

During screening, apply slightly broader inclusion at title/abstract to avoid prematurely excluding sparse records. Then verify at full text. Keep a short SOP for handling non-standard documents so adjudication remains consistent.

Data extraction forms, pilot testing, and calibration

A deliberate build–test–refine cycle will make your extraction faster and more reliable. Start by drafting your Covidence data extraction template from the protocol: outcomes, time points, measurement scales, study design, risk of bias domains, and effect measures.

Then pilot, revise, and lock the instrument before full-scale extraction.

A simple step-by-step helps teams move quickly:

1. Draft the template from your protocol: outcomes, time points, measurement scales, study designs, risk of bias domains, and effect measures.
2. Pilot with two reviewers extracting the same 10–20 studies independently.
3. Compare results field by field and log every disagreement or ambiguity.
4. Revise definitions, picklists, and guidance notes to resolve the ambiguities.
5. Lock the template and codebook, then begin full-scale extraction.

This process keeps methods stable while giving reviewers the guidance they need to make consistent, auditable judgements. For complex outcomes or composite endpoints, add short examples in the codebook so future reviewers can reproduce decisions.

Automation and ML features

Automation in Covidence—such as priority screening or text-mining-assisted relevance—can accelerate throughput when used carefully. The key is to treat ML as triage, not a replacement for dual independent screening recommended by methods authorities.

Validate any automated ordering or suggestions on a holdout set and monitor recall so you don’t miss relevant studies. If you enable priority screening, regularly compare inclusion rates between ML-prioritized and random samples.
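A quick back-of-the-envelope comparison is often enough; this Python sketch uses placeholder counts you would pull from your screening log.

```python
# Sketch: spot-check that ML prioritization is not burying relevant records.
# Counts are placeholders; draw them from your screening log.
ml_batch = {"screened": 500, "included": 60}       # ML-prioritized sample
random_batch = {"screened": 500, "included": 12}   # random sample

ml_rate = ml_batch["included"] / ml_batch["screened"]
rand_rate = random_batch["included"] / random_batch["screened"]
print(f"ML-prioritized inclusion rate: {ml_rate:.1%}")
print(f"Random-sample inclusion rate:  {rand_rate:.1%}")
# If the random sample still yields a non-trivial inclusion rate late in
# screening, recall may be at risk -- keep screening before cutting off.
```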

Keep humans in the loop for tie-breaks and criteria interpretation, especially for nuanced populations or interventions. As a safety net, document your ML settings and validation checks in the protocol or methods to maintain transparency.

Integrations, exports, and enterprise options

Covidence plays well with reference managers and analysis/reporting ecosystems through import/export. Most workflows export PRISMA counts, screening decisions, reasons for exclusion, risk of bias judgements, and extraction tables to CSV for analysis or registry upload.

Many teams then take effect sizes and evidence profiles forward into GRADE assessments outside the platform. For toolchain interoperability, Covidence supports standard RIS/CSV imports and exports that map cleanly to RevMan, EPPI-Reviewer, or SRDR+ with light transformation.

If you need enterprise options—user provisioning, SSO, audit logs, or an API—confirm availability and scope with the vendor. Capabilities vary by plan and may require custom agreements.

For teaching, ask about cohort management tools and sandbox projects to streamline classroom use.

Export formats and schemas

Plan your exports early so you capture what downstream tools expect. Common exports include:

- Screening decisions and votes at title/abstract and full text
- Reasons for exclusion recorded at the full-text stage
- Risk of bias judgements per study and domain
- Data extraction tables as CSV for analysis or registry upload
- PRISMA counts and the flow diagram
- RIS files of included references for reference managers

After exporting, validate counts against your PRISMA log and spot-check field mappings to your analysis templates. If you plan to submit to registries like SRDR+, align field names and types before full extraction to minimize rework.

Living reviews, versioning, and performance at scale

Living reviews require repeatable update workflows and clear version history. In Covidence, you can add new search results to an existing project, re-run deduplication, and screen updates separately from your baseline cohort.

Document update cadence (e.g., quarterly), screening rules for updates, and when you lock each version for publication. Keep this consistent with PRISMA guidance on updates.

Performance matters when your search yields tens of thousands of records. Test a pilot import to gauge responsiveness and ensure your team can sustain screening throughput.

For very large datasets, consider staging imports by source and de-duplicating in batches. Use ML-assisted prioritization with validation to maintain recall while focusing human effort.

PDF handling, OCR, and bulk limits

Full-text management works best when you centralize PDFs and attach them consistently. OCR quality varies widely—scanned PDFs and poor-quality images can slow review and obscure key details.

Pre-process stubborn PDFs with a high-quality OCR tool before upload. To avoid hitting bulk upload limits or timeouts, batch attachments and keep individual files reasonably sized.

When PDFs are missing, use institutional subscriptions or interlibrary loan workflows defined in your SOP to avoid delays. If your team highlights or annotates locally, agree on a convention for naming and storing files so extraction remains consistent.

Periodically audit attachment coverage for included studies to ensure your archive is complete.

Migration playbooks and classroom setups

Moving from Rayyan, Excel, or DistillerSR to Covidence is feasible with planning and validation. For Rayyan, export your library and labels to CSV or RIS. For Excel, normalize columns (title, authors, year, DOI/PMID, abstract, notes) and save as CSV. For DistillerSR, export references and decisions where possible.

Import into Covidence, run de-duplication, and reapply any critical tags as fields or notes so they remain visible during screening.
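As a rough sketch of the normalization step, the pandas snippet below maps hypothetical source columns to a clean CSV; adjust the column names to match your actual export.

```python
# Sketch: normalize an Excel/Rayyan export to a consistent CSV before import.
# Source column names are hypothetical -- map them to your actual export.
import pandas as pd

df = pd.read_excel("old_review.xlsx")  # requires openpyxl; or pd.read_csv
df = df.rename(columns={
    "Article Title": "title",
    "Author(s)": "authors",
    "Pub Year": "year",
    "DOI Number": "doi",
})
# Normalize identifiers so de-duplication can use them reliably.
df["doi"] = df["doi"].str.strip().str.lower()
df["year"] = pd.to_numeric(df["year"], errors="coerce").astype("Int64")
# Preserve old tags/labels in a notes column so they survive the move.
if "Labels" in df.columns:
    df["notes"] = "migrated-labels: " + df["Labels"].fillna("")
df.to_csv("covidence_import.csv", index=False, encoding="utf-8")
```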

For teaching, create a master project with a clean, deduplicated set and cloned projects for each cohort or team. Provide a short, task-focused SOP (login, screening rules, conflict resolution, deadlines) and a rubric for grading participation.

Use a short calibration exercise to normalize decisions before students begin independent screening.

Validation for lossless migration

Validation protects your decisions and audit trail during migration. After import:

- Compare record counts against the totals in your source exports.
- Spot-check a random sample of titles, abstracts, and carried-over decisions.
- Confirm reapplied tags or labels are visible as fields or notes during screening.
- Re-run de-duplication and verify the removed duplicates match expectations.
- Search a few sentinel DOIs to confirm key studies survived the move.

Keep the original export files and a migration log with timestamps, file names, and checksums so you can trace any issues. If you’re migrating mid-review, preserve conflict statuses and document any unavoidable changes to decision states.
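A minimal Python sketch for such a log might look like this, assuming your export files sit in an exports/ folder.

```python
# Sketch: append checksums to a migration log so issues stay traceable.
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

with open("migration_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for export in sorted(Path("exports").glob("*.ris")):
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         export.name, sha256_of(export)])
```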

Troubleshooting, support, and SLAs

Most issues fall into a few categories: import errors, unexpected duplicates, permission or blinding confusion, and export schema mismatches. For imports, validate file encoding (UTF-8), field headers, and reference manager export settings. For duplicates, review matching settings and spot-check near-duplicate cases.
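For a concrete starting point on import checks, this Python sketch verifies encoding and headers before upload; the expected header set is illustrative, not a fixed Covidence requirement.

```python
# Sketch: pre-import checks for UTF-8 encoding and expected CSV headers.
# EXPECTED is illustrative -- match it to your own import template.
import csv

EXPECTED = {"title", "authors", "year", "doi", "abstract"}

def check_import_file(path: str) -> None:
    try:
        with open(path, encoding="utf-8", errors="strict") as f:
            header = next(csv.reader(f))
    except UnicodeDecodeError as e:
        raise SystemExit(f"{path} is not valid UTF-8: {e}")
    missing = EXPECTED - {h.strip().lower() for h in header}
    if missing:
        raise SystemExit(f"{path} is missing headers: {sorted(missing)}")
    print(f"{path}: encoding and headers look OK")

check_import_file("covidence_import.csv")
```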

If screeners can see each other’s votes, verify project blinding settings and role assignments before proceeding. When you need help, start with the knowledge base and training modules, then escalate with project IDs, timestamps, and exact files used to reproduce the issue.

If your institution has an SLA, note response targets and hours of coverage so you can plan around maintenance windows. For onboarding, Covidence Academy modules are widely used; many courses offer completion badges or certificates—confirm availability for your module set if you need formal proof of training.

Comparisons and decision framework

No single tool fits every team. Choosing between Covidence, Rayyan, DistillerSR, EPPI-Reviewer, and RevMan comes down to review type, team size, budget, and required controls.

For small teams on tight budgets, Rayyan’s free/low-cost screening can be attractive for title/abstract stages, while Covidence provides an end-to-end path including full-text, risk of bias, extraction, and PRISMA outputs. DistillerSR and EPPI-Reviewer offer highly customizable enterprise-grade workflows and automation at higher price points. RevMan remains strong for Cochrane-style analyses but is not a screening tool by itself.

Use this quick framework:

- Small team, tight budget, screening-heavy workload: Rayyan for title/abstract screening, accepting more manual work downstream.
- End-to-end systematic review with PRISMA-ready outputs: Covidence.
- Enterprise-grade controls, deep customization, or heavy automation: DistillerSR or EPPI-Reviewer.
- Cochrane-style meta-analysis: RevMan for analysis, paired with a screening tool upstream.

Whichever path you choose, standardize your SOPs around PRISMA 2020 for reporting, align risk of bias with the Cochrane Handbook, and plan GRADE judgements outside the platform following GRADE Working Group guidance. Those standards—and a disciplined calibration process—will contribute more to quality and reproducibility than any single feature.