Lesson 6: Pitfalls

  • How to find and choose a journal
  • What are the most common errors at every step of a review?
  • What mistakes in study design can render a review impossible to complete?
  • What are the common limitations you should report when publishing a review?
  • How can publishing a systematic review impact your medical career?

 

Homework:

  • Identify at least 3 candidate journals to submit your review to
  • Review your nest and your manuscript for potential failure modes
  • Continue Screening, Tagging, and Extracting in your nest until all included studies are completely extracted!
    • When complete, fill in your findings in the Results and Discussion sections

Speaker 1: Hello, this is Kevin Kallmes with Nested Knowledge, bringing you the last lesson in our course on how to systematically review the medical literature. Today, I thought I'd go informal and film outdoors in my beautiful Lake Como neighborhood, Lake Como, Minnesota, and go through all the ways that you might fail in your systematic review across the steps you've learned in lessons one through five. But before we hop in on that, let's review what we've learned so far.

 

S1: So if you recall, in lesson one, we went through how to review, centering in on a research question with the P, I, and O at its core. Then in lesson two, we discussed study design and protocol drafting, including adding collaborators, identifying the study characteristics and patient characteristics of interest, and then mapping out the comparisons you're actually going to make in your study. In lesson three, we took our preliminary search, which was based on the P, I, and O from lesson one, and built it out into a full boolean query, which includes not just ANDs, ORs, and parentheses, but also truncations, MeSH headers, and other synonyms. Then we went through the full life cycle a study follows as it is added to your review: you screen it, and any included article is then tagged for qualitative content and extracted for quantitative content, and if you're working in Nested Knowledge, automatic visuals are created for you. Lastly, we discussed how to interpret your findings and write them up, so we went through basic methods drafting and results drafting, warned against too complex or overreaching introduction and discussion sections, and then we also…

 

S1: I checked in on our basilar artery review, where the results published about six months ago have an exciting update: the three-study review that we published at the beginning of 2022 has been updated in a living meta-analysis to add two more randomized controlled trials. Both of our major outcomes, that is, mortality and good neurological outcome as measured by a modified Rankin Scale score of 0-3, were on the edge of a significant result in our review; the two new RCTs matched the results of the previous studies perfectly, bringing us to a significant result that I…

 

S1: I think is eminently publishable, so we followed our basilar artery review not only from creation to publishing, but also to updating.

 

S1: Anyway, before we jump into failure modes, there was one piece about the writing that we didn't cover last week, and that is choosing a journal to submit to. Now, this is not to say that the only reason to do a systematic review is to publish it; systematic reviews are the backbone of many different reports in medicine, ranging from regulatory reports to the FDA or to the EAU, to HTAs, that is, health technology assessments, which are generally completed to support insurance decision-making and are not always published. But if you're taking this course, I'm assuming that you may be interested in publishing a systematic review, and I think journal choice is one of the more intimidating pieces of that process. As with your review, you should start by identifying concrete goals for your journal choice, and there are a lot of trade-offs here: you're not choosing an optimal journal in general, you're choosing an optimal journal for your goals. The most important piece is timing, and I say most important because if you fail to vet a journal on this metric, then you will effectively never get your review published.

 

S1: Some journals have extraordinarily long review times; they're using unpaid reviewers and therefore may take many months to turn a review around. In the systematic review world, that's especially bad because, as we saw with the basilar artery review, our outcomes changed in six months. So by the time a journal gets back to us, if it's a several-month wait, we may need to update the review already, and therefore be stuck in a cycle of updating and submission that we can never get out of. So I recommend choosing journals that are very timely; that's probably the most important choice you're going to make while looking at journals.

 

S1: And journals have realized this, and many of them now publish their time to first response on their websites. I recommend looking at those; the best I've seen is roughly a two-week turnaround to first decision. Next is chance of acceptance, where you're balancing two different metrics: how likely am I to get in on the first try, versus how impactful will it be if published there. In general, you'll find that journals with lower impact factors have higher acceptance rates, so I tend to start with high-impact journals with low acceptance rates and then submit down my list; I always keep a list of journals with backups on backups. If I think timing is really important, I might skip the highest journals on that list and say, hey, let's go to journal number three: it has the highest acceptance rate, the lowest impact, and a fast turnaround time. You might pick that one first if timing is your absolute leading metric. If you care a lot about where something is published, though, then you likely want to look at the impact of the journal, and I would say look at impact factor.

 

S1: And the impact factor is certainly a broad measure of how much a journal is read. For those of you who aren't in on this metric, it has sort of become the single number a journal is judged by; unfortunately, it is effectively how many citations the average article in a journal gets over a two-year period. So if a journal has 10 articles with 10 citations over a two-year period, it'll have an impact factor of one. Good journals tend to range up into the high double digits. If you want any reasonable chance of acceptance, I recommend keeping it in the single-digit range: a really good journal to submit to is probably going to be in the range of 5-10, a backup journal is going to be 1-5, and I generally wouldn't go below an impact factor of one. But those are just general guidelines, and I actually think that instead of depending on impact factor as a single number that says whether a journal is worthy or not, look at what people in your field read, what is being cited, what is being quoted. That is the more important question, so instead of asking yourself how impactful am I as judged by a single number, ask yourself, what is the audience that I want to speak to?
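The arithmetic behind that example can be made concrete with a tiny sketch (a minimal illustration; the function name is mine, and the counts are the hypothetical ones from the lecture):

```python
def impact_factor(citations, articles):
    """Impact factor, roughly: citations in a year to articles published
    in the prior two-year window, divided by the number of those articles."""
    return citations / articles

# The lecture's example: 10 articles drawing 10 citations over the
# two-year window gives an impact factor of 1.
print(impact_factor(10, 10))  # 1.0

# A journal whose 10 articles drew 50 citations would sit at 5.0,
# i.e. in the "really good journal to submit to" 5-10 range.
print(impact_factor(50, 10))  # 5.0
```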

 

S1: Okay, and then lastly, really briefly: if you have money available, open access is always a great bailout option for manuscripts that keep getting rejected. Open access journals tend to accept more of the articles sent to them and to turn them around fast. The downside of open access: you're generally going to pay between $1,500 and $3,000 to get an article published. The upside: fast turnaround with a high chance of acceptance.

 

S1: So there is really a trade-off there, and I generally keep open access as a bailout option if I can't get into a high-impact journal in my field of interest. However, you might not just want advice on which journal to choose; you might also want some guidance on how to find those journals. I think the two best methods to find journals are actually very simple: Google and PubMed. On Google, I would just search for your field and disease state, so if I search for neurosurgery and stroke, I'm going to get a bunch of journals in the neurosurgical field that publish on stroke. That's going to be the list I want to consider among, and again, I want to be looking at impact, chance of acceptance, and turnaround time across all of those journals that I find. The PubMed approach is not really a hack; it is, to me, a way to go straight to your audience of interest: if you're publishing the basilar artery review, or if you just search for your disease state of interest in PubMed, what journals are you seeing in the articles returned from your PubMed search?

 

S1: Especially for major papers and other systematic reviews, can you find the journals that previous systematic reviews in your field were published in? That may be the place you want to go; it has certainly been proven out by at least one previous author. So Google can get you great results very quickly; generally, you're going to be able to pick up 10 or 12 journals immediately upon a simple search. But I like the hack of going to PubMed, finding articles similar to what you're publishing, or even finding the journals that published the underlying articles in your review, and then going to their websites and running the same analysis again: look at time to review, impact, and chance of acceptance. If you are going to be publishing multiple times in the same field, I strongly recommend maintaining a spreadsheet of journals that you're interested in. It's easy to forget: journal names are constantly being ripped off, so it's hard to remember which journals you've gone to, and it's hard to know the exact specs on each one, the exact impact factor, acceptance rate, all those things. I just make a spreadsheet with the name of the journal, a link, the time to review, the acceptance rate, any open access costs or options, and then the impact factor.
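As a sketch of that tracking spreadsheet, here is one way to keep it as a small CSV with the columns just listed (the journal names, links, and numbers are all invented for illustration):

```python
import csv

# Hypothetical shortlist using the suggested columns: name, link,
# time to first decision, acceptance rate, open-access fee, impact factor.
journals = [
    {"name": "Journal A", "link": "https://example.org/a",
     "weeks_to_decision": 2, "acceptance_rate": 0.15,
     "oa_fee_usd": 3000, "impact_factor": 7.2},
    {"name": "Journal B", "link": "https://example.org/b",
     "weeks_to_decision": 6, "acceptance_rate": 0.35,
     "oa_fee_usd": 1500, "impact_factor": 2.4},
]

# Order from highest to lowest impact so "submit down the list"
# is mechanical, with backups on backups.
journals.sort(key=lambda j: j["impact_factor"], reverse=True)

with open("journal_shortlist.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=journals[0].keys())
    writer.writeheader()
    writer.writerows(journals)
```

Any spreadsheet tool works just as well; the point is simply that the columns are fixed and the list is sorted by your leading metric.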

 

S1: And generally, if you put together a list based on journals you find from related articles on PubMed or through Google, that's probably going to be your go-to for the rest of your career; if you're publishing in the same field over and over again, it's a resource that keeps on giving. And when you are actually researching a journal, there are a lot of considerations to weigh, but journals are always going to front with what they want you to see, so I would go to a site and immediately check out their published articles and see if they publish reviews.

 

S1: Read the actual articles; that's how you'll be able to tell that the journal has standards and is publishing on topics similar to yours. I would also recommend checking the time to review, acceptance rate, and indexing on the site, and most journals now publish this information. If they don't have it, that is often a bad sign; a journal listing no impact factor often means it doesn't have one because it has not been cited. The other thing to check is indexing, and the way I think about impact versus indexing is this: impact is a metric of journal quality.

 

S1: Indexing is a single dichotomous yes or no. If a journal is not indexed on PubMed, that is, if the articles published in that journal do not appear on PubMed once published, then you as a researcher, as someone who reviews articles, should recognize that as a red flag and a reason to walk away from that journal. I would actually recommend a strong rule: any journal that is not indexed on PubMed, I would not publish in. That's a great way to avoid predatory journals, and it's also a way to make sure you're going into a journal that has some readership. It's a baseline yes or no, which is a lot easier to use than impact factor, which is more like a sliding scale…

 

S1: All right, that is it on journal selection. As we noted, if you take your field and disease state of interest and then use Google, PubMed, or your own review to find candidate journals, you can judge them on their time to acceptance, on whether they're indexed, on their impact as measured by impact factor or your own knowledge of the audience, and on your odds of acceptance. That's the basics of journal selection, and good luck on that aspect of your review. I want to spend most of today going over failure modes, because as an entrepreneur and as a scientist, I tend to remember the painful lessons I've learned from failed reviews a lot more than I remember the successes from eminently publishable, perfect ones. So I thought I'd pay it forward and go over not only my own failure modes, but those I see most commonly in reviews that I read or that I see on the site.

 

S1: So we're going to go through lessons one to five and outline how you can fail at each step of a review, and we're going to start with the research question. Unsurprisingly, you know by now that research questions are the bedrock on which you're building a review; we've covered them in almost every lesson. The first way to make a research question fail you is to make it too abstract: if you come into a review and your research question cannot be reduced to P, I, and O, or if those P, I, and O cannot be determined easily within a study, then you know that your research question is what's failing you…

 

S1: Too abstract a research question can fail in many ways: it can mean that your search strategy doesn't find articles of interest, and it can mean that among articles that might be of interest, it's really hard to draw distinguishing lines between what is actually includable and what should be excluded. Really, unless your question itself identifies the evidence needed to answer it, you're probably being too abstract. All right, the second failure mode, and this is one we haven't discussed at all so far in this course, but you should keep it in mind: if you draft a research question and you haven't seen an article on it, there may be no evidence on your question of interest. It may be a wonderful question, but a systematic review depends on existing evidence in the literature…

 

S1: So if you're asking a question along the lines of, can we compare BCIs based on 18-month adverse event rates, you are depending on the assumptions that, number one, researchers of BCIs have published on the performance of those devices; two, that they have long-term reporting; and three, that they report adverse events in a consistent enough manner that you can gather them. If you draft a research question without having first read underlying candidate articles to see what evidence may exist, you may end up with a research question that comes up empty after hundreds of articles of screening, with none that are includable.

 

S1: And then our last failure mode is sort of the reverse of our second: if you draft a research question that is too inclusive, that covers too many patient populations and too many study types, you may put yourself in the position of creating a review that takes forever, and by this I mean one that includes far too many articles. This isn't just a problem of having a lot of screening to get through thousands of articles or data to collect from hundreds; it's also that you then introduce problems with matching the cohorts between different studies, making sure the patient populations are in fact comparable, and making sure you're actually capturing comparable interventions and outcomes. The wider the pool of studies you're pulling in, the more likely you are to get quality issues and, effectively, disagreements in population, intervention, and outcome reporting. So make sure you're biting off a question that has a finite set of answers in the literature, or you may be reviewing forever.

 

S1: Okay, so make sure your research question is concrete, and make sure there's evidence to support it, but not so much evidence that you're including every article you searched. Okay, study design and protocol drafting: the most common error, and this one is far and above any other protocol error that I see, is that people create far too many endpoints. This comes out of scientific diligence or zeal, so it's an understandable error, but your job is not to capture every piece of data from underlying studies; your job is to answer your research question. So when you're creating endpoints, you should be able to tie each of them back to where it fits into answering that research question.

 

S1: If there's a patient characteristic, it should read on the population in your research question; if there's an intervention of interest, it should be captured there; and each of the actual outcomes you're measuring should fall under the umbrella of the outcomes in your research question, or you're probably going well beyond it. That doesn't mean you have to restrict yourself to one endpoint in an entire study, but depending on how many endpoints you're putting into your research question, I would recommend trying to restrict it to one or two endpoints per clinical question. So if our clinical question was about modified Rankin Scale score, you may notice we had two data elements on it, one for mRS 0-2 and one for mRS 0-3, because both were reported in the literature and cannot be combined.

 

S1: That is the sort of expansion that is acceptable within a review; what would not be acceptable is deciding to add every outcome reported in every basilar artery stroke paper as a new data element in our study. So, alongside the rule of thumb of one or two endpoints, you should not be pursuing more than two or three clinical questions in a review either; effectively, I'm saying you should be making simple hierarchies that contain a relatively small number of data elements to collect. One more thing to be careful of, and this one is much less of a drain on your time than error number one, but it might set you up for a gigantic failure at the end: will your review actually answer your research question? That is really a judgement call, and it's harder to have a rule of thumb for. But as you're reading studies and extracting evidence, you should keep in mind that the evidence you're gathering needs to answer that broader question, and if you're finding that the structure of your interventions and outcomes is not setting up the data in a way that will give you a satisfying answer, then you should understand pretty early on that your protocol has failed you: you have not set up the study plan in a way that reduces your question to a clear set of interventions to be compared and outcomes to compare them with respect to.

 

S1: And the last one is related to the research question failure modes, and that is assuming plentiful data. This is something we had to face with the basilar artery review, which originally included only three studies: as you draft, you have to grapple with the question of what happens if you only find a small number of studies, or if the studies you do find report some but not all of your outcomes of interest, leaving you with data sparsity. As with many of these failure modes, these questions are best addressed by reading studies in advance of starting a review. It takes a little bit of expertise, and by a little bit I mean several hours of independent browsing, to set yourself up so that you are not doing a blind review. Obviously, this is something you can address at the research question level and at the search level, but your protocol also has to address questions like: say you're only including randomized controlled trials and there aren't any; well, then you need a method of going to other prospective evidence, or other evidence generally. So with protocol drafting, far and away the most important failure mode to avoid is creating too many endpoints, but also make sure that your research question will be answered by your research, and make sure that you have a fallback position for data sparsity.

 

S1: All right, search strategy; these should be very obvious to you. Your preliminary search being too broad is a major problem. This is a problem that costs you time, and your time is extremely valuable: in your review, you should be spending it on articles that are as includable, or as relevant, as possible. So, as a rule of thumb, and this one, thank God, does have a rule of thumb: do not start with broad preliminary searches at all. You can always expand a search strategy; it's much harder to contract one. Start with a preliminary search designed to find the highest possible rate of includable articles, and then only expand it once you've actually screened a couple hundred studies to check whether your terms are on topic. If you screen those first 200 studies and you're not finding includable articles, you know you saved yourself from an even broader search with just as few includable results.
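That expand-only-after-checking rule can be sketched as a simple gate (the 200-record batch is the lecture's rule of thumb; the 5% includable-rate cutoff and the function name are assumptions for illustration):

```python
def ready_to_expand(screened, included, min_batch=200, min_rate=0.05):
    """Only broaden the query after a first batch has been screened
    and it is actually yielding includable articles."""
    if screened < min_batch:
        return False  # keep screening the current search first
    return included / screened >= min_rate

# A batch of 200 screened records with 25 inclusions clears the bar;
# the same batch with only 2 inclusions suggests the terms are off
# topic and the search needs narrowing, not expansion.
print(ready_to_expand(200, 25))  # True
print(ready_to_expand(200, 2))   # False
```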

 

S1: Okay, non-specific terms. This is a way to blow up a search and make it too broad. Obviously your search should be based on your P, I, and O, but sometimes your P, I, or O appears across millions of articles. So make sure that words like patients or complications, or any other general word, and very commonly used terms such as ACE inhibitor, which is seen all over the medical literature, are only in your search with heavy restrictions, or remove them entirely and depend on your more specific terms to pull back your articles of interest. Drumming out general terms is the way to avoid failure mode number two, which is also the way to avoid failure mode number one.
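One way to see the discipline of keeping only specific, OR'd synonym groups is a small query builder (the helper name and the basilar-artery terms are hypothetical, in the spirit of the course's example):

```python
def build_query(*term_groups):
    """AND together groups of OR'd synonyms, quoting multiword terms.
    General words like "patients" simply never make it into a group."""
    def fmt(term):
        return f'"{term}"' if " " in term else term
    return " AND ".join(
        "(" + " OR ".join(fmt(t) for t in group) + ")"
        for group in term_groups
    )

# Hypothetical basilar-artery search: every group is specific, and
# truncation (stent*) catches word variants.
query = build_query(
    ["basilar artery", "posterior circulation"],
    ["thrombectomy", "stent*"],
    ["stroke", "occlusion"],
)
print(query)
# ("basilar artery" OR "posterior circulation") AND (thrombectomy OR stent*) AND (stroke OR occlusion)
```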

 

S1: Okay, the study life cycle. The most important issue I see during the screening process is that people include a lot of low-quality evidence. You saw that our basilar artery review included only three articles: two randomized controlled trials and one registry study. That was enough to get an article published in a respectable journal, and I've also researched this and found that only about 10-17% of randomized controlled trials have ever been included in a review. You could start your reviewing career by knocking off some of those other 83-90% of RCTs that haven't been reviewed, without going to retrospective evidence or to small studies. It also keeps your review a lot cleaner: you're going to have a lot lower heterogeneity and fewer risk-of-bias problems. If you can hold yourself to the standard of high-quality evidence, assuming it's available, then you're generally going to have a much quicker review with fewer problems with bias, and it might even be more publishable for being more restrictive. Okay, tagging. This is the same problem that we saw with endpoint creation: tag proliferation.

 

S1: I would recommend that you strike a mutual non-tag-proliferation treaty with the collaborators in your nest. By that, I mean make sure you're only tagging the qualitative concepts that you are eventually going to write about: if a piece of evidence is not worth discussing with your reader, it's certainly not worth your time tagging it in every single article. Hopefully that one is self-explanatory. Okay, extraction: the same exact problem, and it leads to data missingness; when your reviewers are going through dozens of studies and trying to collect dozens of endpoints, you're more likely to end up with missingness, and you're also going to have sparseness problems if you have too many data elements. The fix here, as you've probably gathered from the earlier failure modes, is to read articles before you extract, and I might even recommend reading articles before you configure for extraction.

 

S1: Okay, interpretation and write-up. We went over a little of this last lesson, but I don't think we spent enough time on the introduction. The golden introduction is surgical: an introduction should narrow your audience in on your topic of interest. That means you have to state what is missing in the literature and how you're filling it, and that is effectively it. You need to say what the unmet need in your discipline was before these new interventions came to market, or similar, but you don't need to go all the way back into a history of all those interventions. You simply need to say: here is the unmet need being met by this intervention, and there is no review on this intervention to date; therefore, I'm completing this systematic review. That is sufficient as an introduction. Context can be given in the discussion section if absolutely necessary, but a scientific paper is not a monograph on a topic of interest; it is a report of your findings from your research. So hopefully that one actually saves you a lot of work.

 

S1: Methods: the most important problem in the methods section of a paper is failing to set up a result that you're going to report. Reviewers are very scrupulous about this, and I think low-quality research can often be spotted by looking at the methods section and asking: is there a method for every result they eventually report? If they have inclusion criteria in their PRISMA diagram, were those noted previously in their methods section on screening? If they have an outcome reported, was it noted in their section on data collection? If they have statistical results, did they tell you how those analyses were done? This is a key replicability problem and a key transparency problem that you have to cut off before you submit to the journal: you need that one-to-one relationship, or it will be difficult for anyone to come along and either check how you did your work or replicate your study, which is really the basis of science. Results: this one we really harped on last week, so I'll go a little faster.

 

S1: Opinion bleeds in very easily. Think about it this way: if you have set up your research question correctly and collected your data well, then first of all, you didn't fall for the earlier failure mode where your data collection doesn't answer your research question. But beyond that, if you collected your data well on a well-framed research question, you shouldn't need any color or spin on what you report in the results; you should be able to report that for patient group X, outcome Y was found. We went over words like trending and significance, and how to use them carefully, last week, but in effect, recall that your results section is a record for readers to refer to much more than it is a conclusion for them to believe. Conclusions can come in the discussion section, but in a scientific endeavor, the most important thing is to communicate replicable methods that lead to quantitative results. And then discussion: we are going to go through the perfect discussion outline from last week again, and the most important failure mode it cuts off is trying to discuss the field rather than your findings.

 

S1: So, let's look at the perfect discussion section again. In paragraph one, you want to discuss your primary findings without repeating your results section, so you shouldn't be using quantitative information in the discussion; instead, just use basic comparative words, better than, worse than, and then give a brief comparison between your evidence and what was previously published on your topic in the literature. In paragraph two, you can discuss any secondary findings. You can probably tell from all of this harping on tag proliferation and data element proliferation that I tend to be on the purist side, where you should narrow in on only a few pieces of evidence rather than discussing every data element in the world, but you can include an optional second paragraph, structured the same as the first, that goes through findings you consider secondary or less important than your main research question.

 

S1: In paragraph three, I would do an overview of previous similar research. That can be the studies included in your own review, where you can discuss anything notable from those underlying studies; you can also discuss previous reviews and previous guidelines or standards of care. How are physicians practicing? How was it reported in previous studies or reviews? And how is your work confirming what they're doing, showing a better way, or just showing a different take that should be considered alongside everything else? Then for reviews, the limitations should be pretty straightforward. I will note that this perfect discussion section outline was stolen from physicians who published over a thousand articles over their lifetimes, and I'm simply passing it forward. It was meant for any article type, not just systematic reviews, but I think it works especially well for systematic reviews in that the limitations section is generally going to contain a set of limitations that are very common across reviews, which are…

 

S1: Effectively, you are never going to have access to underlying patient data without contacting the authors, and you are always going to have heterogeneity issues with patient populations, so you can always state those limitations in a systematic review, unless of course you sought individual patient data. Beyond that, make sure you're pulling in the underlying limitations of the studies you're reporting from: if a study has a biased population, make sure you're pulling that through and reporting it transparently. So, paragraph four can be really built out, and that is generally acceptable. For most studies, I find that people call for more research too often; a systematic review, though, is exactly the sort of venue to make that statement. It is totally appropriate to call for further research, especially randomized controlled trials if none exist for your interventions of interest, at the end of your discussion section.

 

S1: Okay, now let's go into some last notes across the whole course. Hopefully those failure mode reveals help you avoid things like data proliferation or screening 2,000 articles, but let's go through a couple of overall course messages, and really, I want this to be a call to action on systematic reviews. As I noted before, roughly 90% of RCTs are not yet reviewed. So whether you are a high school student, med student, resident, or fellow, I don't really care: you are probably the right person to help us drive that number down; you can probably help us bite those off. Most reviews are not comprehensive and are never updated; the way to address this is to work through Nested Knowledge, make a living review, and follow the research question and search strategy creation steps to make sure you're actually capturing all the evidence of interest. A review is only as good as the reviewer completing it, so it is effectively your responsibility to make sure that your review captures everything it should, and to bring readers an update to your review if any new evidence comes to bear.

 

S1: Imagine, for instance, if our basilar review were only ever published once, and our readers were left never knowing that you can, in fact, save lives and brain tissue with thrombectomy in basilar artery strokes. And then, this is the pathway to medical success. If you are in an MD or PhD program and you are looking to become the expert on a topic, this is a pathway that was brought to me by Dr. Waleed Brinjikji of the Mayo Clinic, a close collaborator of mine and a co-founder, and he says that his pathway, which he in turn learned from those who came before him, is a simple three-step paradigm. First, first-author a systematic review on the topic you want to be the expert on; generally, it’s best if the topic is hot in some way, meaning it’s growing or the whole community recognizes that it’s important. Second, once you’ve published that review, turn around and apply for an NIH grant; R21 exploratory grants are a great way to do so. If you don’t have access to a laboratory or a hospital, find collaborators to help you with that grant, but be part of a grants team on an experiment related to your review topic and addressing a question your review inspired. Third, publish your grant findings and get a podium presentation on them.

 

S1: And according to Dr. Brinjikji, if you follow these simple steps, you can become a neurosurgeon at Mayo, an inventor and entrepreneur, and an editor at a leading journal. So, follow the systematic review steps to expertise, and step one is, of course, becoming a systematic review first author. Okay, I’ve mostly kept this course as generic as possible so that it isn’t only useful to those doing their reviews in Nested Knowledge, but if you are interested in signing up for Nested Knowledge and have not done so yet, you can use the link shown here. Just go there, hit sign up, and you can use Google or make an account with us. The first Nest you make is free, and if you’re an academic, there’s a very, very steep academic discount. We also have very extensive documentation, so for any issue you come upon, we have videos by yours truly or by one of our fantastic tech team describing the features of interest. And this course has been made intentionally to work alongside the Wiki: the Wiki shows you how to use the tool, and this course tells you how to do a review generally.

 

S1: So, make an account and use the documentation to inform yourself on how to use Nested Knowledge, then use this course to inform how to design and complete your review. We also love feedback, so give us as much feedback as you think is warranted. The feedback link here goes to, effectively, a Reddit page on Nested Knowledge that I read daily, so if you have any feedback on the documentation, the course materials, or your use of Nested Knowledge, we’d love to hear from you. And if you think there’s something else that should be included in this course, I’m always happy to make add-on content if people are finding it useful. With that, I think we can do a quick course summary and then I will sign off. Remember, this course is on how to systematically review the medical literature. The first step is, of course, making a Nested Knowledge account. Then, craft your research question and draft a protocol where you plan out your inclusion and exclusion criteria, identify endpoints, identify your collaborators, give out tasks, and so on. You then take studies through the simple life cycle: everything that is searched must be screened, and everything that is included must be tagged and extracted.
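For readers who think in code, the study life cycle described in the summary can be sketched as a tiny state machine. This is purely illustrative and not part of any Nested Knowledge API; the stage names and `advance` helper are my own invention, chosen to mirror the searched → screened → tagged → extracted flow from the course:

```python
from enum import Enum, auto

class Stage(Enum):
    """Hypothetical stages a study passes through in a review (illustrative only)."""
    SEARCHED = auto()   # returned by the boolean query
    INCLUDED = auto()   # passed screening against inclusion/exclusion criteria
    EXCLUDED = auto()   # failed screening; no further work needed
    TAGGED = auto()     # qualitative content tagged
    EXTRACTED = auto()  # quantitative content extracted; study is complete

# Allowed transitions: every searched study is screened (included or
# excluded), and every included study must be tagged, then extracted.
NEXT = {
    Stage.SEARCHED: {Stage.INCLUDED, Stage.EXCLUDED},
    Stage.INCLUDED: {Stage.TAGGED},
    Stage.TAGGED: {Stage.EXTRACTED},
    Stage.EXCLUDED: set(),
    Stage.EXTRACTED: set(),
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move a study forward one step, rejecting skipped steps
    (e.g. trying to extract a study before tagging it)."""
    if target not in NEXT[current]:
        raise ValueError(f"cannot go from {current.name} to {target.name}")
    return target
```

The point of the sketch is the invariant it enforces: no study is extracted without first being screened in and tagged, which is exactly the "everything searched must be screened, everything included must be tagged and extracted" rule from the summary.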

 

S1: When you’re writing, make sure you keep everything simple, especially your introduction. You can follow the simple four-step discussion template; be very careful about how you frame your results, and make sure that your methods are transparent and replicable. And I think a lot of today’s failure modes can be reduced to one principle: make practical decisions that get you to an answer to your research question. With that, I just want to thank everyone taking this course so much for their attention. This is the first time I’ve taught a systematic review course, and after seven years and hundreds of articles, I hope it is helpful not only in passing on a tool you can use to complete your reviews in a much easier way than I did, but also in taking some of the working knowledge we’ve built at Nested Knowledge and sharing it with the research community generally. Anyway, thank you so much, and I look forward to your feedback and your publications.