
Back in 2024, when the UK's NICE released its position statement on the use of artificial intelligence (AI) in health technology assessments (HTAs), Nested Knowledge responded quickly, showing how our platform already aligns with the responsible use of AI in evidence generation. Now, Canada's Drug Authority (CDA) has followed suit, releasing its own guidance on AI in systematic literature reviews (SLRs) and evidence synthesis. We welcome this clarity and oversight because we've designed Nested Knowledge to support trustworthy, transparent, and human-in-the-loop AI integration.
In this blog, we break down how Nested Knowledge supports adherence to the CDA’s new position statement, identify areas where user discretion is key, and look ahead to the future of AI in evidence-based medicine.
The CDA's position builds directly on the framework laid out by NICE, reinforcing a broader consensus across international regulatory bodies: AI can, and should, support evidence generation, but only when used with transparency, human oversight, and methodological rigor. The CDA's guidance on Systematic Review and Evidence Synthesis outlines a number of ways AI may enhance evidence workflows, including the use of large language models (LLMs) and machine learning for searching, screening, study classification, data extraction, and even meta-analysis. At the same time, it cautions that these tools must be applied carefully, with human reviewers remaining in control and all outputs clearly traceable and defensible.
Nested Knowledge’s platform is designed to meet these requirements. Across the evidence lifecycle—from literature search through to synthesis—our tools embed AI in ways that enhance reviewer productivity while preserving transparency and accountability. For evidence identification, Robot Screener uses a custom machine learning model to prioritize screening without ever making final decisions. For classification and tagging, Smart Tagging offers LLM-powered suggestions that can accelerate the categorization of study design, population, and interventions, but every label remains fully editable and tied to its source context.
Where CDA notes that AI-enabled data extraction is still emerging, Nested Knowledge provides semi-automated extraction via Smart Tag Recommendations and Smart MA Extraction, where the AI surfaces candidate values for reviewer confirmation. These approaches keep humans at the center of judgment while offering time-saving support. Even synthesis is guided by human-defined groupings, with automated meta-analyses and visualizations generated only after the data is confirmed and structured by the user.
Finally, CDA’s emphasis on transparency, evaluation tools, and adherence to emerging global standards is directly aligned with our own approach. We provide public documentation of each AI tool, including performance characteristics, intended use, and disclaimers. Users can inspect model outputs, verify AI-suggested actions, and fully control what is or isn’t included in the final evidence product.
In short, Nested Knowledge meets the CDA’s challenge head-on—not by automating away critical review tasks, but by responsibly integrating AI where it adds value and always keeping the reviewer in charge.
At Nested Knowledge, we adhere to three core principles when implementing any new AI feature:
Data Provenance: Where relevant, we provide the source and the specific data from that source that informs each finding. This ensures full traceability of information. In practice, this means that when our AI tools identify a relevant piece of information or make a recommendation, users can easily trace back to the original source document and language. This level of transparency is crucial for maintaining the integrity of the systematic review process and allows researchers to verify the accuracy of AI-generated recommendations.
Methodological Transparency: We offer complete methodological information on how our AI is trained and employed. Where applicable, we also provide validation and accuracy data on its performance. This transparency extends to the algorithms used, the training data sets, and any known biases or limitations in our AI models. By sharing this information, we enable users to make informed decisions about how to best utilize our AI tools within their research workflows. Additionally, this openness fosters trust and allows for continuous improvement based on user feedback and evolving industry standards.
Human Oversight: In tasks where AI takes the place of human effort, we ensure that AI outputs are placed into an oversight workflow so that all AI extractions can be reviewed by a human expert. This maintains the critical balance between efficiency and accuracy. Our AI tools are designed to augment human expertise, not replace it. For example, in the screening process, while our Robot Screener can rapidly process thousands of articles, the Robot’s recommendations are surfaced to a human Adjudicator for oversight. This dual-layer approach combines the speed and consistency of AI with the nuanced understanding and critical thinking of experienced researchers.
By following these principles, we provide an evidence synthesis solution whose AI enhancements are traceable to their sources, methodologically transparent, and subject to human review at every step.
Nested Knowledge offers AI tools designed to support systematic literature reviews and HTAs from end to end. Each feature is intentionally developed to keep human reviewers in control while enhancing efficiency, transparency, and traceability. Our AI is not a replacement for human expertise; it's a collaborator, purpose-built to help users meet rising evidentiary and regulatory standards like those set by Canada's Drug Authority.
Together, these tools cover the major stages of an evidence synthesis workflow: expanding the search, structuring the review, screening references, tagging key data, and preparing for quantitative analysis. At each step, AI serves as a transparent and verifiable assistant—not an autonomous decision-maker. All AI-assisted actions are logged, exportable, and backed by detailed documentation. For more on the algorithms behind each feature and our commitment to full disclosure, visit our AI Methods and AI Disclosures pages.
CDA closes its position by asking: Where is AI going next? We think the better question is: Where isn't it going? At Nested Knowledge, we are constantly evolving our platform to meet the growing evidentiary demands across the medical product lifecycle, from early pipeline exploration to regulatory submission to post-market surveillance.
We're currently expanding our capabilities to support each of these stages.
Nested Knowledge stands at the forefront of these expectations. Our software supports full end-to-end evidence synthesis, while ensuring that AI augments, not replaces, the human reviewer’s judgment.
Our platform is not just a review tool. It’s a full-stack ecosystem for creating living, collaborative, AI-accelerated syntheses that meet the expectations of regulators, payers, and patients alike.
If you would like a demonstration of our software, fill out the form below, and we will be happy to meet with you individually. Or, if you would prefer to explore our platform on your own, sign up and pilot these AI-assisted tools for free.
Yep, you read that right. We started making software for conducting systematic reviews because we like doing systematic reviews. And we bet you do too.
If you do, check out this featured post and come back often! We post all the time about best practices, new software features, and upcoming collaborations (that you can join!).
Better yet, subscribe to our blog, and get each new post straight to your inbox.