“A comprehensive search forms the foundation of any systematic review.” [1] This page covers tips on how to build a suitable search query for your project. Since Nested Knowledge offers Automatic Searches for several databases, see below for database-specific guidance.

PICO-based Research Question construction

See this short video on Research Question structure following conventions for...
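As an illustration of combining PICO elements into a Boolean query, the sketch below assembles a query string from hypothetical search terms (the terms, and the quoting conventions, are assumptions for illustration; exact syntax and field tags vary by database, e.g. PubMed vs. Embase):

```python
# Sketch: building a PICO-based Boolean query string.
# The terms below are hypothetical examples, not database-specific syntax.
pico = {
    "population":   ["stroke", "cerebrovascular accident"],
    "intervention": ["thrombectomy"],
    "comparator":   ["thrombolysis", "tPA"],
    "outcome":      ["functional outcome", "mRS"],
}

def or_group(terms):
    """Join the synonyms for one PICO element with OR, quoting phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

# AND the element groups together to form the full query.
query = " AND ".join(or_group(terms) for terms in pico.values())
print(query)
```

The pattern, synonyms OR-ed within each concept and concepts AND-ed together, is the usual starting point; terms would then be adapted to each database's controlled vocabulary and field tags.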
While Nested Knowledge started out as a comprehensive tool for conducting systematic literature reviews, the software has grown into a dynamic environment for synthesizing diverse research methodologies. Our goal is to empower researchers to navigate the intricate landscape of evidence synthesis with ease, regardless of review type. This page defines and discusses the review types typically conducted in Nested Knowledge, highlighting examples of specific nests previously built in the software. And if you don't yet have a review type in mind, hopefully this will help you make that decision before diving into your nest!
After running your search through one or more databases, the next step is to screen the results and decide which studies to include.

Select Screening Mode

Screening mode should be determined and set up before screening begins. Choose between Standard (one round of screening) and Two pass (two rounds of screening: Title/Abstract...
Tagging is the process of applying a set of labels to the included studies in a review. AutoLit offers a built-in hierarchical tagging system connected to the data elements set up for extraction. Either hierarchical or non-hierarchical tagging systems can be implemented in other systematic review setups.
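A hierarchical tag set can be thought of as a tree, where each tag may have child tags. The sketch below models such a hierarchy as a nested dictionary and lists every tag with its full path (the example tags are hypothetical, not taken from any particular nest):

```python
# Sketch: a hierarchical tag set as a nested dict (hypothetical example tags).
tag_hierarchy = {
    "Patient Characteristics": {
        "Age": {},
        "Sex": {},
    },
    "Outcomes": {
        "Efficacy": {"Mortality": {}, "Functional Outcome": {}},
        "Safety": {"Adverse Events": {}},
    },
}

def flatten(tree, path=()):
    """Yield each tag with its full path from the root,
    e.g. 'Outcomes > Efficacy > Mortality'."""
    for tag, children in tree.items():
        yield " > ".join(path + (tag,))
        yield from flatten(children, path + (tag,))

for tag_path in flatten(tag_hierarchy):
    print(tag_path)
```

A non-hierarchical system is simply the special case where the tree is one level deep: a flat list of tags with no children.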
Smart Tag Recommendations is most effective when the tag hierarchy is kept as simple and concrete as possible. See below for tips on how to optimise smart tag recommendations:
Several studies have documented that extraction errors in systematic reviews are common, with reported extraction error rates ranging from 8% to 63% (Mathes et al., 2017). Unfortunately, no universal recommendations exist on how best to extract data. For instance, recommendations vary as to whether data extraction should be conducted by at least two different people (Buchter et al., 2020).
The extent to which a systematic review/meta-analysis (SR/MA) can draw conclusions about the effects of an intervention depends on whether the data and results from the included studies are valid. In particular, an SR/MA of invalid studies may produce a misleading result, yielding a narrow confidence interval around the wrong intervention effect estimate. Evaluating the validity of the included studies is therefore an essential component of an SR/MA and should influence its analysis, interpretation, and conclusions. Standardized critical appraisal or “risk of bias” (RoB) tools are used to measure the extent of various types of bias in individual studies.

The validity of a study may be considered to have two dimensions. The first is whether the study asks an appropriate research question. This is often described as ‘external validity’, and its assessment depends on the purpose for which the study is to be used; external validity is closely connected with the generalizability or applicability of a study’s findings. The second dimension is whether the study answers its research question ‘correctly’, that is, in a manner free from bias. This is often described as ‘internal validity’. A good meta-analysis should assess both external and internal validity and incorporate measures to minimize the risk of such biases.