Best Practices for Critical Appraisal

Introduction to Critical Appraisal #

The extent to which a systematic review/meta-analysis (SR/MA) can draw conclusions about the effects of an intervention depends on whether the data and results from the included studies are valid. In particular, an SR/MA of invalid studies may produce a seemingly precise but misleading result: a narrow confidence interval around the wrong intervention effect estimate. Evaluating the validity of the included studies is therefore an essential component of an SR/MA, and should influence its analysis, interpretation, and conclusions. Standardized critical appraisal or “risk of bias” (RoB) tools are used to assess the extent of various types of bias in individual studies.
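
To make this concrete, here is a minimal, purely illustrative Python sketch (the study estimates and standard errors are invented for the example, not real data): when every included study is biased in the same direction, inverse-variance pooling yields a narrow confidence interval around the wrong effect.

```python
import math

true_effect = 0.0  # suppose the intervention truly has no effect

# (estimate, standard error) from five hypothetical, similarly biased studies
studies = [(0.42, 0.10), (0.38, 0.12), (0.45, 0.09), (0.40, 0.11), (0.36, 0.10)]

weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"Pooled estimate {pooled:.2f}, 95% CI ({low:.2f}, {high:.2f})")
print(f"True effect {true_effect:.2f} lies outside this narrow interval")
```

The pooled result looks precise, yet it is simply a precise estimate of the wrong answer, which is exactly why RoB assessment must accompany the statistics.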

The validity of a study may be considered to have two dimensions. The first dimension is whether the study is asking an appropriate research question. This is often described as ‘external validity’, and its assessment depends on the purpose for which the study is to be used. External validity is closely connected with the generalizability or applicability of a study’s findings. The second dimension of a study’s validity relates to whether it answers its research question ‘correctly’, that is, in a manner free from bias. This is often described as ‘internal validity’. A good meta-analysis should assess both external and internal validity and should incorporate measures to minimize the risk of bias.

Assign QC and Reviewer #

It is the responsibility of the study coordinator to assign the quality control (QC) reviewer and the independent Critical Appraisal reviewers at the start of the project. A typical Critical Appraisal requires one QC reviewer and two independent Critical Appraisal reviewers.

Determine QC System #

1. Different study designs (Case Control, Cohort, etc.) require different critical appraisal forms. Most projects will consist entirely of randomized controlled trials and/or cohort studies; however, other study designs may be included depending on the project. If necessary, discuss with the study coordinator to determine which system should be used. Note that it is the responsibility of the QC reviewer to determine the correct critical appraisal form.

2. Familiarize yourself with the different sections of the critical appraisal system you are using.

The common RoB tools are:

  • Scottish Intercollegiate Guidelines Network (SIGN):
    • Cohort
    • Case Control
    • Diagnostic Accuracy
    • Economic Evaluations

NOTE: The instructions for the SIGN RoB are quite long (100+ pages!). The good news is that most of the information is not particularly relevant for the majority of SR/MAs. For quick reference, the most important sections to review are page 51 and pages 56-66. Other sections may be useful depending on the project.

  • modified Newcastle-Ottawa Scale (mNOS)
    • Cohort
    • Case Control
  • Joanna Briggs Institute (JBI)
    • Case Report
    • Case Series
  • Cochrane
    • Randomized Trial
    • Non-Randomized Study

3. Know which study designs are included in the SR/MA. Be familiar with these designs and choose the correct critical appraisal template accordingly (see the sketch below).
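
As a rough illustration of this step, the sketch below encodes the tool choices listed above as a simple lookup so that every included study design maps to an appraisal form. The design labels and the helper function are hypothetical, not part of any prescribed software or workflow.

```python
# Illustrative lookup only; tool names follow the list above.
APPRAISAL_TOOL_BY_DESIGN = {
    "randomized controlled trial": "Cochrane (Randomized Trial)",
    "non-randomized study": "Cochrane (Non-Randomized Study)",
    "cohort": "SIGN Cohort or mNOS Cohort",
    "case control": "SIGN Case Control or mNOS Case Control",
    "case report": "JBI Case Report",
    "case series": "JBI Case Series",
    "diagnostic accuracy": "SIGN Diagnostic Accuracy",
    "economic evaluation": "SIGN Economic Evaluations",
}

def appraisal_tool(design: str) -> str:
    """Return the appraisal form for a study design, or flag it for discussion."""
    return APPRAISAL_TOOL_BY_DESIGN.get(
        design.strip().lower(),
        "No default form; discuss with the study coordinator",
    )

print(appraisal_tool("Cohort"))  # -> SIGN Cohort or mNOS Cohort
```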

4. Normally, two reviewers complete the critical appraisal independently. Do not collaborate with your fellow reviewer on a single appraisal; doing so would bias the results.

5. An experienced QC reviewer should adjudicate the results of the critical appraisal and resolve any conflicts between the independent reviewers’ assessments.
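
A minimal sketch of the adjudication step, assuming hypothetical domain names and ratings: the two independent reviewers’ judgements are compared domain by domain, and only the disagreements are passed to the QC reviewer.

```python
# Hypothetical domain names and ratings, for illustration only.
reviewer_a = {"selection": "low", "comparability": "low", "outcome": "high"}
reviewer_b = {"selection": "low", "comparability": "some concerns", "outcome": "high"}

# Keep only the domains where the two independent ratings disagree.
conflicts = {
    domain: (reviewer_a[domain], reviewer_b[domain])
    for domain in reviewer_a
    if reviewer_a[domain] != reviewer_b[domain]
}

if conflicts:
    print("Domains needing QC adjudication:", conflicts)
else:
    print("Reviewers agree on all domains.")
```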

Download the Critical Appraisal or Risk of Bias Instructions #

Download a document with all instructions on Risk of Bias

Updated on October 25, 2024