Tips and Tricks for Configuring Nested Knowledge’s Tagging Hierarchies for Adaptive Smart Tags (ASTs)


Tips and Tricks:

  • The first tip is not about configuration, but about what you do after: sanity-check a few studies! Configuration issues or AI misinterpretations are often easy to spot after checking just a few studies. If there is an obvious error type, reconfigure and refresh. Work with the LLM to improve how it helps you!
  • Write literal and precise Questions: You’re instructing an LLM with every Question you create! Don’t ask “Was the number of patients reported?” if what you want is “What was the number of patients reported?” And if you want the answer per arm, say so explicitly.
  • Learn Question Types: Single Apply means “find this tag”; Select Questions can only be answered with sub-tags! If a tag is neither a Single Apply Question nor configured as a child of a Select Question tag, it will not be assessed by the AI. Question type is highly impactful on LLM extractions.
  • Write a full Question: Adding instructions, constraints, and even examples to your Questions, in addition to short, clear Tag names, has shown benefits in repeated tests. You can even instruct on what not to extract, or on the form of the answer (see the sketch after this list)!
  • Build only what you need: Extraneous tags and ‘over-gathering’ from underlying studies can slow your review and may be a sign that your Research Question or Protocol needs tighter focus. Tagging for secondary topics is useful, but balance each additional topic against the complexity and noise it adds! For tangential questions, especially if different studies may provide the answers, a second nest may be appropriate.
  • Carefully select AI options: Do you want to run on abstracts or full texts? What about allowing answers that are more generative/summative (rather than requiring annotations)? Have you considered whether you can use “Apply”, or are you under constraints that mean you should only “Recommend”?
  • Watch our videos! Especially the AI Tagging Configuration advice. 5 minutes could save hours on future reviews.
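
As a concrete illustration of the “full Question” advice above, here is a minimal sketch of the elements a well-instructed Question might bundle together. The field names below are our own invention for illustration, not Nested Knowledge’s configuration schema (which lives in its UI):

    # Illustrative only: these field names are hypothetical, not Nested Knowledge's schema.
    full_question = {
        "tag_name": "Number of Patients",  # short, clear Tag name
        "question": "What was the number of patients reported?",  # literal, not yes/no
        "instructions": "Report the number enrolled, per arm if the study is multi-arm.",
        "do_not_extract": "Do not report the number screened or the number of sites.",
        "answer_format": "One integer per arm, e.g. 'Arm A: 120; Arm B: 118'.",
    }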


Traits of ASTs

  • Text vs. Table: The main configuration question is often Text vs. Table contents. Text is qualitative and extracts one text segment per study; extracted tables can represent multiple rows and structured columns per study (e.g., arms or subgroups) and require more configuration effort, especially for specific columns! See our Guide on Tag Tables; they’re highly flexible but should be thought through in advance. A side-by-side sketch follows this list.
  • Independence of Questions: Your Questions will be read largely independently of their location in the hierarchy, so don’t name Question tags in a way that relies on context from parent or grandparent tags.
  • Independence of Extracted Contents: ASTs are extracted independently of one another. So, if you’re extracting arm-level information in two different tags, the arm labels may not match exactly between them.
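
The difference in output shape is easiest to see side by side. In this sketch (illustrative structures in Python, not the product’s actual export format), a Text tag yields one qualitative segment per study, while a Tag Table yields one structured row per arm with the columns you configured in advance:

    # Illustrative shapes only; not Nested Knowledge's export format.

    # Text tag: one qualitative segment per study.
    text_answer = "The trial enrolled 238 patients across two arms."

    # Tag Table: multiple rows per study (e.g., one per arm) with configured columns.
    table_answer = [
        {"arm": "Treatment", "n": 120, "overall_survival_months": 18.2},
        {"arm": "Control",   "n": 118, "overall_survival_months": 14.6},
    ]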


Common Error Types:

  • Question Type issues: If you don’t associate a tag with a Question, the AI will skip that tag! Every tag that you want the AI to extract should either be a Single Apply tag or be placed below a Single or Multiple Select tag as an answer.
    Example: “Prevalence of Diabetes” would be a stand-alone Single Apply tag, but “Randomized Controlled Trial” should probably be placed as an answer to a Single Select tag called something like “Study Type” (sketched after this list).
  • Under-instruction: When in doubt, give extra instructions to the AI. Rather than simply treating a Tag like a column header, give concrete instructions on what the AI should look for, how it should be reported, and what the exact definitions and details of interest are. If you’re building a table, make sure that you instruct within each column, not just at the level of the Tag name. You can even instruct on what NOT to extract!
    Example: “Overall Survival” as a tag name is helpful, but make sure to ask for the actual data (or the AI may extract the methods), give the type of statistic expected, and even consider telling the AI how concise to be!
  • Content Type does not match intended Output: What type of data do you want? The default tag type will give you roughly one sentence of text as an answer. Consider: Do you want only Numeric data? Should you be extracting a Tag Table? Should you break up a single tag with many sub-questions into a set of multiple Text tags?
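
To visualize the Question-type rule from the first bullet, here is a sketch of a small hierarchy fragment. The nested-dict notation is our own shorthand, not Nested Knowledge’s data model; note the orphan tag at the end, which the AI would skip:

    # Shorthand for illustration; not Nested Knowledge's data model.
    hierarchy = {
        "Prevalence of Diabetes": {"question_type": "Single Apply"},  # stand-alone: AI looks for this tag
        "Study Type": {
            "question_type": "Single Select",  # answered by choosing one child tag
            "answers": ["Randomized Controlled Trial", "Cohort Study", "Case Report"],
        },
        "Randomization Method": {},  # no Question attached: the AI will skip this tag!
    }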


How (and When) to Refresh:

  • Rerunning the AI without Refreshing will generate answers to any new tag Questions you’ve built while leaving pre-existing tags untouched. So, if you want to save your earlier work, do not refresh!
  • Running the AI and Refreshing will remove AI-applied tags or AI recommendations (whichever applies) and re-run all Questions. It will not remove or replace manually-applied tags!
  • Clearing is also an option, which will remove AI-applied or recommended tags without replacement.

In summary: When using Nested Knowledge, extracting contents from underlying abstracts and full texts is highly valuable, and the most powerful capability is the totally-customizable Adaptive Smart Tags. To get the most out of these tools in terms of both accuracy and time savings, it’s vital to (1) learn how to leverage hierarchies, Question types, and different content types, (2) give highly specific instructions on the inputs and outputs of interest, and (3) evaluate AI performance on a nest-by-nest basis, both curating the findings and, where needed, improving and refreshing the Adaptive Smart Tags. When in doubt, you can always consult the Adaptive Smart Tags guide, “Ask AI” for help, or contact support@nested-knowledge.com!
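
Finally, for a compact recap of the three run options above, here is a hypothetical model (our own pseudologic, not Nested Knowledge’s actual implementation) of which tags survive each kind of run:

    # Hypothetical model of the three run options; not the product's actual code.
    def run_ai(manual: set, prior_ai: set, fresh: set, mode: str) -> set:
        """Tags present after a run. Manually-applied tags are never touched."""
        if mode == "rerun":    # no refresh: keep prior AI tags; 'fresh' covers new Questions only
            return manual | prior_ai | fresh
        if mode == "refresh":  # drop prior AI tags/recommendations; 'fresh' covers all Questions
            return manual | fresh
        if mode == "clear":    # remove AI tags/recommendations without replacement
            return manual
        raise ValueError(f"Unknown mode: {mode}")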
