
Lesson 1: Introduction to Systematic Review

  • What is a Systematic Review and how do you start?
  • How can you craft a Research Question for a review?
  • What is P, I, O / PICO?
  • What are the main steps in a review?
  • What is a Basilar Artery Stroke and how can it be treated?

 

Homework:

  • Identify a potential topic of interest to complete a systematic review for
  • Craft a Research Question for your review.
  • Identify the P, I, and O for your review.
  • (Optional): Create a free/demo account on AutoLit using these instructions
    • You can draft your protocol and complete your review in the first ‘nest’ you create!

Speaker 1: Hi, Kevin Kallmes with Nested Knowledge here presenting the first lesson in our team’s course on how to systematically review the medical literature. Before we hop in, I wanted to note briefly that we at Nested Knowledge have also built a software for the systematic review process, so you’ll get to see behind the curtain on how it works as part of this course, but you can also create an account and begin reviewing immediately, if you go to the link at the bottom of your screen. That out of the way, let’s hop in with lesson one.

 

S1: Before we get into the systematic review process, or how to review, I think it’s important to ask ourselves: when is a systematic review the right tool for us to pick up? For that I have a couple of qualifications. The first is that we need to bring a question, and it can’t be just any question; it has to be a medical research question. Experts have some variations on the framing here, but I think a simple and clear way to phrase a research question is: for population P, how does intervention I perform as measured by outcome O?

 

S1: So an example of this would be: for elderly patients, how does the Pfizer COVID vaccine perform as measured by adverse events? You can obviously think of many questions that have that framing, but the important thing to pull out of this is the underlying structure of P, I, and O.
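
As a minimal sketch (not part of the lesson itself), the P, I, O framing can be captured in a small data structure; the class and field names below are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class ResearchQuestion:
    population: str    # P: who is being studied
    intervention: str  # I: what treatment or exposure is applied
    outcome: str       # O: how performance is measured

    def as_sentence(self) -> str:
        return (f"For {self.population}, how does {self.intervention} "
                f"perform as measured by {self.outcome}?")

# The example from this lesson:
question = ResearchQuestion(
    population="elderly patients",
    intervention="the Pfizer COVID vaccine",
    outcome="adverse events",
)
print(question.as_sentence())
```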

 

S1: The last qualification for us to do a systematic review is that we need to be trying to compare and combine underlying evidence. So we can’t be doing a new trial where we’re taking a novel drug and putting it into a new population. We need to be going to the medical literature in order to find existing studies that may have evidence that reads on our question of interest. Now, let’s assume that we actually qualify… We have a question, it’s a medical research question in the PIO paradigm, and we are going to compare and combine evidence. How do we get started?

 

S1: Well, it’s hard to compare and combine evidence if we don’t have studies to draw it from. So the first step we need to take is to go out into the medical literature and bring back potential studies of interest, and our method there is going to be searching medical databases such as PubMed or Embase or Ovid, putting in a query and pulling out all of the studies that result from that query. We’re gonna go through an example of a good query in a second, so for now, I think we can move on with the exact steps in a review, and then we’ll go into each of them in more detail.

 

S1: So once we have found those studies that may be of interest, our next step is going to be to filter them, because it’s unlikely that our search query on these databases separated the wheat from the chaff perfectly. So we need to set up a system where we include relevant articles and exclude those that are unwanted, those that don’t have any information that could help us answer our question. Once we have our set of relevant studies, we then need to extract the content that actually helps us answer our question, and I think it’s a good idea to separate the content of interest into qualitative concepts (say, what was the study design, or what procedural practices were used in different studies) and quantitative data (like what were the patients’ ages and what was the mortality rate).

 

S1: So we need to extract both qualitative and quantitative information from the underlying studies, and then we are going to analyze and present our findings. So in steps one through three, we have found articles, filtered to the relevant ones, and extracted the qualitative and quantitative information that helps us answer our research question. In step four, we need to synthesize that information and present it in tables or visuals, but we also need to create and write up our insights that actually take that information and use it to provide an answer to our research question.

 

S1: So a systematic review is the process of finding, filtering and extracting content from underlying studies so that we can answer a research question and present the evidence that supports that answer using tables and visuals or any other method of presenting our insights. So as promised, let’s actually hop into a little more detail on a review that we’ve already completed. I’m going to use an example review from the Stanford collaboration on basilar artery stroke, and I wanna give you guys a little bit of medical background so you’re not lost as I take you through the structure of this review.

 

S1: Basilar artery strokes are a subset of acute ischemic stroke. Acute ischemic stroke just means a blood clot that has embolized into your brain and gotten stuck and it’s blocking the vessel, so you don’t get oxygen in your brain tissue. And the basilar artery is a really crucial artery that actually feeds your posterior circulation, and that includes feeding your brain stem. So a stroke in your basilar artery is extraordinarily harmful, just to show you roughly how harmful, the mortality rate, even with treatment, is currently about 45% for basilar artery strokes, and the rate of serious neurological deficit is 33%.

 

S1: So basilar artery strokes are a very serious problem, and obviously, stroke physicians are working their hardest to find therapies that can assist with this, which inspired the trials that underlie the review as well as the review itself. And this review is published. You can find it on our website at the link below, and you can also find it published in the journal Stroke: Vascular and Interventional Neurology. So without further ado, let’s jump into this more detailed examination of the four steps of a review in this basilar artery example.

 

S1: So as we said, to start out our review, we need to build a search, and for that we need to build a boolean query, meaning one that has a structure to it. The parentheses, ORs, and ANDs here are that structure. And you can see that into that structure, we’re just going to feed our research question. We put in terms related to our population, so population term number one, population term number two, and then we link them together with terms related to our interventions of interest and our outcomes of interest. The actual query that we started with in this review was looking for basilar artery acute ischemic strokes and interventions like thrombectomy, where you put devices into patients’ blood vessels, snake them up into the basilar artery, and try to pull out the clot, and thrombolysis, which is a clot-busting drug.
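
To make that structure concrete, here is a minimal sketch (assuming Python, with illustrative term lists rather than the exact query from this review) of how P, I, and O synonym lists can be combined into a boolean query:

```python
# Synonyms within a concept are joined with OR; concepts are linked with AND.
# These term lists are illustrative, not the exact query used in the review.
def or_group(terms):
    """Wrap a list of synonyms for one concept in parentheses, joined by OR."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

population   = ["basilar artery occlusion", "basilar artery stroke"]
intervention = ["thrombectomy", "thrombolysis"]
outcome      = ["mortality", "modified Rankin scale"]

query = " AND ".join(or_group(group) for group in (population, intervention, outcome))
print(query)
# ("basilar artery occlusion" OR "basilar artery stroke") AND ("thrombectomy" OR "thrombolysis") AND ...
```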

 

S1: And so in this review, we’re going to be comparing those interventions with respect to outcomes like mortality or neurological outcome, and stroke physicians have a specific scale they use for neurological outcome. The Modified Rankin Scale score is a score of neurofunctional outcome, going from zero, meaning no symptoms, up to five, meaning severely disabled or bedridden, and six, meaning dead. So our starter query is the P, I, and O for basilar artery stroke. We pulled back all the articles that resulted from that query, and we screened them. And the important activity in screening, other than actually filtering the wheat from the chaff, is setting up the rules that you’re going to use in that screening process. And for that, we need to choose exclusion criteria that are highly likely to separate articles that hold information of interest from those that don’t.
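
For reference, the full Modified Rankin Scale is sketched below; the lesson only calls out scores 0, 5, and 6, and the intermediate labels are the conventional definitions of the scale:

```python
# Standard Modified Rankin Scale (mRS) categories. The lesson names 0, 5, and 6
# explicitly; the intermediate labels are the conventional ones.
MODIFIED_RANKIN_SCALE = {
    0: "No symptoms",
    1: "No significant disability despite symptoms",
    2: "Slight disability",
    3: "Moderate disability",
    4: "Moderately severe disability",
    5: "Severe disability (bedridden)",
    6: "Dead",
}

# "Favorable" outcome is commonly defined as mRS 0-2 or 0-3; check which
# cutoff each underlying study uses before pooling.
def is_favorable(score: int, cutoff: int = 3) -> bool:
    return score <= cutoff
```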

 

S1: And so in this review, we used exclusion reasons like, “Doesn’t relate to our disease state of interest,” “Doesn’t report outcomes,” “Is an editorial or a letter or a guideline.” Obviously, many of these published study types aren’t going to contain the P, I, and O of interest, and so we wanna come up with rules that push those out. We can also exclude articles based on rules like, “We only want recent evidence.” Say that there’s been a shift in procedural practice; you may want to exclude all evidence from before that shift. In our case, the procedural practices in stroke were established around 2015, so we restricted ourselves to evidence from that period onward.

 

S1: And then we can also create reasons that we think will help narrow ourselves down to an apples-to-apples comparison. So if we have a biased population in one of these articles, say, the underlying article reported only patients over 90 years old, we may want to exclude that to keep from confounding our review. And so we can come up with exclusion reasons that filter biased from unbiased populations or procedural practices, or any other rule that helps us keep the wheat and kick out the chaff.
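
As a minimal sketch of rule-based screening (assuming Python; the field names and rules are illustrative, loosely following the exclusion reasons described above), each record either survives or is tagged with the first exclusion reason that applies:

```python
def screen(record: dict) -> str | None:
    """Return an exclusion reason, or None if the record should be included."""
    if "basilar" not in record["abstract"].lower():
        return "Doesn't relate to disease state of interest"
    if record["type"] in {"editorial", "letter", "guideline"}:
        return "Non-study publication type"
    if record["year"] < 2015:
        return "Published before current procedural practice (pre-2015)"
    if not record["reports_outcomes"]:
        return "Doesn't report outcomes"
    return None  # keep for extraction

records = [  # hypothetical records for illustration
    {"abstract": "Thrombectomy for basilar artery occlusion ...",
     "type": "trial", "year": 2020, "reports_outcomes": True},
    {"abstract": "Letter on stroke care ...",
     "type": "letter", "year": 2019, "reports_outcomes": False},
]
for r in records:
    print(screen(r) or "Include")
```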

 

S1: For tagging, this is a Nested Knowledge-specific practice, but we find it really helpful to build out a hierarchy of the concepts of interest in our review. And like with screening and searching, we actually start from our research question. Our research question tells us the population, interventions, and outcomes of interest, and those become the root tags, or the highest-level concepts, in our tagging hierarchy. And then we build out child tags below those, and you can see here the actual hierarchy from our study, where underneath population we had characteristics such as procedure time or patient demographics. We had interventions, thrombolysis and thrombectomy, and then we also collected outcomes like mortality and Modified Rankin Scale score, all within that PIO paradigm.
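
A minimal way to picture that hierarchy (a sketch, not the software’s actual data model) is a nested structure whose root keys come straight from the P, I, and O, with the child tags mentioned above nested beneath them:

```python
# Root tags come from the research question; child tags are the example
# concepts described above.
TAG_HIERARCHY = {
    "Population": ["Patient demographics", "Procedure time"],
    "Intervention": ["Thrombectomy", "Thrombolysis"],
    "Outcome": ["Mortality", "Modified Rankin Scale score"],
}

# Tagging a study is then just attaching leaf tags to its record
# (the study name here is hypothetical):
study_tags = {
    "study": "Hypothetical trial",
    "tags": ["Thrombectomy", "Mortality", "Modified Rankin Scale score"],
}
```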

 

S1: And then lastly, we needed to extract quantitative information. For that, we needed to go into each underlying study (this is a table from one of them) and identify the interventions of interest. So here it was endovascular therapy, or thrombectomy, versus medical care, or thrombolysis. We need to identify the data elements that contain our outcomes. Here we can see that favorable Modified Rankin Scale score was one of the data elements of interest, and there are others listed throughout the table. And then we also need to identify the time points at which the data were collected, again, to make sure that we’re comparing interventions, data elements, and the actual procedural flow and follow-up periods in an apples-to-apples manner.
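
As a sketch (assuming Python, with hypothetical values rather than data from the review), each extracted value can be recorded against an intervention arm, a data element, and a time point so that studies can be compared like for like:

```python
from dataclasses import dataclass

@dataclass
class DataPoint:
    study: str
    intervention: str   # e.g. "Endovascular therapy" or "Medical care"
    data_element: str   # e.g. "Favorable mRS"
    time_point: str     # e.g. "90 days"
    numerator: int      # patients with the outcome
    denominator: int    # patients in the arm

# Hypothetical example, not a value from the review:
example = DataPoint(
    study="Hypothetical trial",
    intervention="Endovascular therapy",
    data_element="Favorable mRS",
    time_point="90 days",
    numerator=30,
    denominator=60,
)
```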

 

S1: So in our basilar artery review, we had a boolean query focused on basilar artery strokes. We excluded all articles that weren’t going to read on our research question of interest, which is whether thrombectomy outperforms thrombolysis, and then we built out a tagging hierarchy to collect all the qualitative information and set up extraction of the interventions and data elements of interest to us at the time points they were reported in underlying studies.

 

S1: These steps should create outputs, and in the Nested Knowledge system, they automatically create each of the outputs I’m about to show. But when you search, you shouldn’t keep it to yourself. An important part of the systematic review process is effectively keeping an audit record of how you found the studies of interest, how you selected the ones that you think should be put into your review, and then how you chose what content to extract. We’re gonna go into a lot more detail on that in lesson two, about study design and protocol building, but your search should give you an output of your full search history, and here you can see the queries we ended up running on PubMed.

 

S1: You should also generate a PRISMA chart. PRISMA is a set of guidelines on how to complete a good systematic review. And the PRISMA chart is really the center of it, where it shows the flow of articles in or out, and then all of the exclusion reasons or rules that you applied to separate the wheat from the chaff there. And so you can see in this flow that we ended up pulling back 252 studies from PubMed. We de-duplicated them, so we kicked out all the articles that were duplicative of the same patient population. We screened out articles for the reasons that we set up earlier, and then we narrowed those 252 studies down to the three that actually had evidence that we wanted to extract for our research question. Then we applied those tags from that tagging hierarchy that you saw, and that automatically built us out a qualitative synthesis diagram.
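
As a sketch of how those flow counts come together (assuming Python; only the 252 retrieved records and 3 included studies are taken from the review, and the individual exclusion tallies below are hypothetical placeholders):

```python
records_retrieved = 252                 # from the PubMed search in this review
exclusions = {                          # hypothetical breakdown for illustration
    "Duplicate record": 40,
    "Doesn't relate to disease state": 150,
    "Non-study publication type": 39,
    "Doesn't report outcomes": 20,
}
included = records_retrieved - sum(exclusions.values())

print(f"Records identified: {records_retrieved}")
for reason, n in exclusions.items():
    print(f"  Excluded ({reason}): {n}")
print(f"Studies included in synthesis: {included}")  # 3 in this review
```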

 

S1: So this is a sunburst diagram where you can see our population, intervention, and outcome concepts of interest. You can see that we collected the timing of treatment, the patient demographics, and the endovascular and medical therapies. And then we collected outcomes even beyond Modified Rankin Scale and mortality, including symptomatic hemorrhage, clot clearance, and a stroke scale. We can also see some data and details about these underlying studies from this diagram, but in effect, it’s an overview of the concepts from the underlying studies in our systematic review.

 

S1: We also needed to complete extraction, if you recall, and our extraction creates quantitative outputs. As you can imagine, these are data summaries: at the top of the page, you can see the good neurological outcome, mortality, and hemorrhage rates for endovascular therapy and for standard medical therapy. And then we can also complete meta-analytical statistics on top of those data points that we’ve collected: as long as we followed good collection practices and identified the statistic types that we were gathering, we can complete a meta-analysis and bring back inferential statistics like odds ratios. Here we can see a forest plot of good neurological outcome for endovascular therapy versus standard medical therapy, showing the outcomes for each of the three underlying studies, and the overall estimate we see here is that endovascular therapy has 2.14 times the odds of good neurological outcome compared with standard medical therapy.
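
For anyone curious about the arithmetic behind a pooled odds ratio, here is a minimal sketch of fixed-effect (inverse-variance) pooling of log odds ratios; this is one common approach, not necessarily the exact model used in the published review, and the 2x2 counts are hypothetical placeholders rather than the counts from the three underlying studies:

```python
import math

def log_or_and_se(a, b, c, d):
    """Log odds ratio and its standard error from a 2x2 table:
    a/b = events/non-events in the treatment arm, c/d = same in the control arm."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return log_or, se

# (events, non-events) per arm for each study -- hypothetical counts only
studies = [(20, 40, 12, 48), (15, 35, 9, 41), (25, 55, 14, 66)]

weights, weighted_log_ors = [], []
for a, b, c, d in studies:
    lor, se = log_or_and_se(a, b, c, d)
    w = 1 / se**2                      # inverse-variance weight
    weights.append(w)
    weighted_log_ors.append(w * lor)

pooled_log_or = sum(weighted_log_ors) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
low, high = (math.exp(pooled_log_or + z * pooled_se) for z in (-1.96, 1.96))
print(f"Pooled OR: {math.exp(pooled_log_or):.2f} (95% CI {low:.2f} to {high:.2f})")
```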

 

S1: So in effect, if you have thrombectomy rather than thrombolysis, you have roughly twice the odds of a good neurological outcome. Now that’s an important result from a systematic review that we can turn around into an insight that we communicate to the medical community as a conclusion. And speaking of conclusions, I wanted to go through really quickly, again, that whole process that we just outlined, and then set ourselves up to dive into more detail on the systematic review process in the next lesson.

 

S1: So let’s summarize. When should you do a review? Well, you’d better be combining and comparing underlying studies that answer a PIO-based research question. How do you review? Well, there’s a simple four-step process: finding studies by searching PubMed or similar databases; screening them, that is, filtering down to the studies that are gonna be of interest; extracting content, either qualitative content that you tag or quantitative content that you extract; and then presenting those findings in the outputs that we’ve shown briefly.

 

S1: We went over the topic of basilar artery stroke, including a review that we’ve completed and published previously, with the P, I, and O of basilar artery stroke, thrombectomy compared to medical therapy, and outcomes that we collected like mortality and neurological outcome. And we saw in one of our insights just now that thrombectomy actually has roughly two-to-one odds of a good neurological outcome over thrombolysis.

 

S1: And for our next steps, we’re going to go back into that same review and look at the way that we actually designed it: how we established those search parameters, screening, and exclusion reasons, how we built that hierarchy, and how we presented those findings. We’re going to jump much deeper into the software and also show how to draft a good protocol to set yourselves up for a good methods section in the publication of your systematic review.

 

S1: Lastly, before we sign off, I always like to give good sources and helpful links, so those links you saw on the first page on how to sign up and also learn more about us are here. I also have a cheat code for those of you who wanna work ahead. If you wanna learn how to use our software and review yourself without going through lessons two through whatever, you can go directly to our AutoLit documentation. This has step-by-step instructions on every single thing you need to do in a systematic review to get from research question to publishable output. So for those of you who wanna work ahead, hit the second link in that list.

 

S1: I’ve also linked to some more information on basilar artery strokes, if anyone else has developed a passion for it through this lesson, and you can also find our published review. And then I’ve also given some information on the PIO paradigm and on PRISMA charts, so how the EQUATOR Network presents the requirements and the flow for articles going into a systematic review, and then also some basics of those meta-analytical statistics that we used to figure out that there was actually a two-to-one odds ratio of good neurological outcome with thrombectomy in this example review that we went over. Thank you guys so much for your attention, and I’m looking forward to seeing you again in lesson two. Until then, au revoir.
