The Best Questions to Ask Yourself When Starting a Review

At Nested Knowledge, we often say that the difference between a successful and a failed evidence synthesis project comes down to how well the Research Question is scoped. Before even launching a review, the most important decision you can make is to clearly define what you are seeking to find and extract, and also what you are not interested in exploring. Without that clarity, even the most sophisticated workflows can’t make up for an imprecise or overly broad starting point.

What is your goal?

So, what exactly should you ask yourself before setting up a Nest? The first question is simple but foundational: What is your Research Question? Our advice: try to distill your research interest into a concise sentence and, if possible, structure it using the PICOs framework (Population, Intervention, Comparisons, and Outcomes). For instance: “What is the impact of Semaglutide on weight change in obese adults compared to other GLP-1 therapies?” That sentence will anchor your search, screening, extraction, and synthesis decisions throughout the review process.

Before diving deeper, it’s also wise to explore the landscape with Smart Search and Search Exploration. Exploring the resulting PICOs and/or reading just the first ten results from a preliminary literature search can offer key insights into how your topic is discussed in the literature, what terminology is used, and whether the volume of available evidence supports your research direction. At Nested Knowledge, our Search Exploration tools are designed to help you rapidly make sense of the evidence and tag early trends, so don’t skip this step!

Can you be more specific?

Next, focus on defining your PICOs! Ask yourself: Are they too broad? Are you trying to capture every study on diabetes, or are you specifically targeting obese adults with a BMI over 30? Are you interested in any GLP-1 therapies, or only Semaglutide? Do you want to assess all outcomes, or just those related to weight loss over a specific time frame? Precision here prevents your review from turning into a fishing expedition and helps avoid unnecessary screening downstream.

Database selection is another often-overlooked decision. While PubMed is an excellent starting point, it might not be comprehensive enough on its own. Depending on your topic and goals, you might also consider adding sources like ClinicalTrials.gov, DOAJ, OpenAlex, or any of the other databases that are ‘plugged in’ to Nested Knowledge (or, of course, you can import files from other databases). The more intentional you are about your sources, the more confident you can be in your findings.

Another critical consideration is what we call “structural exclusions”: the criteria that define the boundaries of your review beyond its content. Will you include only English-language publications? Are you limiting to studies published within the last 10 years? Will you only consider randomized controlled trials, or also observational studies? Should the evidence be geographically limited to the U.S., or global in scope? Nested Knowledge offers Core Smart Tags and Study Inspector tools that let you quickly filter studies and topics of interest. These parameters might sound minor, but they play a major role in shaping your screening logic and workflow setup, and therefore impact the time you’ll spend screening as well.

What is your plan for AI involvement?

Then, think about timing, resources, and available tools: What steps can you realistically execute, and which ones can be delegated, in full or in part, to AI? Nested Knowledge offers AI tools that assist with title/abstract screening, full-text review, tagging, and data extraction. Understanding your internal bandwidth and comfort level with automation will help you build a workflow that’s both rigorous and efficient. This is especially important if you’re working under tight timelines or with limited staff availability. Nested Knowledge also makes it easy to collaborate with others in your department, streamlining the work across the board.

What kind of review are you looking for?

At this point, it’s worth clarifying what kind of review you’re conducting. Is this a qualitative systematic review? A network meta-analysis? A gap analysis or competitive landscape review? Each type has unique methodological requirements and shapes how you’ll use the Nested Knowledge platform. For instance, a landscape analysis may rely heavily on tags and visualizations, while a meta-analysis might focus on extracting effect sizes and running statistical synthesis.

This is also the point to decide what you want the final deliverable to look like and to configure your Nest accordingly. Choosing a screening mode (where you can enlist the Robot Screener to help you out) and deciding whether to include critical appraisal are both settings you’ll want to lock in ahead of time to set yourself up for success down the road.

Finally, ask yourself: what does success look like? If your review goes exactly as planned, what should the final deliverable be? Is it a publication-ready manuscript? A slide deck for internal decision-makers? A qualitative synthesis of emerging studies? Or a full dataset that can feed into future modeling efforts? Knowing your desired output helps shape every upstream decision, from screening criteria to extraction structure.

The Synthesis tab is home to a variety of tools that can be game-changers when it comes to presenting your extracted data. From the manuscript editor to the sunburst diagrams, you have everything you need to export your findings. The dashboard cards, including the PRISMA diagram and the manuscript editor tool, make it easy to see the results of your nest all in one place with simple drag and drop.

In summary, launching a successful review requires more than just choosing a topic; it demands thoughtful planning and clear direction. By asking yourself the right questions early on, you can avoid costly missteps, leverage the full power of AI-enhanced workflows, and create outputs that actually drive decisions.

To summarize the key questions:

  1. What is your Research Question?
  2. What are your PICOs?
  3. What databases will you search?
  4. What are your ‘structural’ exclusion reasons?
  5. What AI tools can you use?
  6. What is your intended output?


If you’re considering a review and want help drafting your protocol or choosing the right structure, don’t hesitate to reach out; we’re here to help!


