
How Smart Study Type Tags Are Reinventing Evidence Synthesis
One of the features of Core Smart Tags is Smart Study Type: an AI system that automatically categorises each record by study type.
The past year has seen a surge in global attention on the responsible use of AI in systematic reviews. With Cochrane, academic groups, and regulatory-adjacent bodies releasing clearer guidance, including new summaries of the RAISE framework, research organizations are being asked to show exactly how their AI works, how it is evaluated, and where its limits lie.
For years, AI-enhanced evidence workflows have accelerated screening, tagging, and extraction, but standards for transparency have lagged behind. Now, the expectations are explicit.
Below, we break down the core requirements emerging from the joint statement in which Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence align around the RAISE recommendations, and we describe how Nested Knowledge can be employed within these frameworks.
The first major expectation is straightforward:
Users and stakeholders need to know what the AI does, how it is designed to be used, and where humans stay firmly in the loop. This is not about exposing proprietary model weights. It’s about providing clarity around purpose, workflow, and appropriate use.
Nested Knowledge (NK) takes a human-in-the-loop approach by design. Every AI capability (Smart Search, Smart Screener, and Smart Tagging) operates as decision support, not decision replacement, and every process in NK can also be completed manually.
On the platform today, users can already see:
In addition, NK maintains transparent Terms & Conditions, a detailed feature glossary, and documentation that clearly differentiates manual, assisted, and AI-generated components of the workflow.
This aligns precisely with Cochrane’s call for public clarity on system behavior, scope, and intended use.
Cochrane and the RAISE framework emphasize that AI tools must be supported by publicly accessible, appropriately detailed evaluations covering:
Nested Knowledge already maintains:
This level of transparency aligns directly with the new standards, and it provides research organizations, HTA bodies, and regulatory sponsors with what they increasingly expect: verifiable evidence of AI performance in real review contexts.
The third expectation is perhaps the most important:
AI vendors must publicly state where their tools work well, where they do not, and how they mitigate bias. In evidence synthesis, a hidden blind spot or unreported limitation can lead to distortions in clinical conclusions, so transparency is essential.
Nested Knowledge takes a proactive stance in documenting strengths and limitations.
For example:
Furthermore, NK implements bias mitigation practices, including:
This satisfies the call for candid, public articulation of model performance boundaries.
AI already plays a central role in rapid evidence gathering, concept tagging, deduplication, and early extraction. As HTA bodies (including NICE, CADTH, and increasingly the JCA consortium) scrutinize evidence synthesis workflows, tools like Nested Knowledge must demonstrate transparent behavior, validated performance, and auditable, human-in-the-loop workflows.
The new Cochrane-aligned standards are accelerating this shift.
Nested Knowledge’s approach, which is transparent, validated, human-in-the-loop, and auditable, positions the platform not only to comply with these standards but to set the benchmark for responsible AI in systematic reviews.
For Cochrane and RAISE, “Fitness for use” means determining whether an AI system is appropriate for a specific evidence-synthesis task given its documented performance, domain applicability, limitations, and oversight requirements. It requires reviewers to judge, before deploying the tool, whether its behaviour and evaluation evidence support reliable use without compromising methodological rigour.
Nested Knowledge provides task-specific performance documentation, model-domain boundaries, and transparent descriptions of how each AI feature operates. Documentation also contains recommendations on which AI tools are appropriate for different review types (e.g. gap analysis, targeted review, fully publishable systematic review). Justification ultimately rests with the researcher, but tying fitness for use back to the intended interpretations and impact of the review will help.
The demands from Cochrane and RAISE can be summarized in three questions: how does the AI work, how is it evaluated, and where do its limits lie?
For any researcher undertaking an AI-assisted review, Nested Knowledge offers AI designed from the start around transparency, auditability, and rigorous human control. This is exactly the direction the industry must go, and we appreciate the clear guidance from Cochrane and RAISE.
Happy reviewing!
Yep, you read that right. We started making software for conducting systematic reviews because we like doing systematic reviews. And we bet you do too.
If you do, check out this featured post and come back often! We post all the time about best practices, new software features, and upcoming collaborations (that you can join!).
Better yet, subscribe to our blog, and get each new post straight to your inbox.
