Responsible AI in Evidence Synthesis: How Nested Knowledge Meets the New Standards from the Cochrane Joint Statement and RAISE Guidelines

The past year has seen a surge in global attention on the responsible use of AI in systematic reviews. With Cochrane, academic groups, and regulatory-adjacent bodies releasing clearer guidance, including new summaries of the RAISE framework, research organizations are being asked to show exactly how their AI works, how it is evaluated, and where its limits lie.

For years, AI-enhanced evidence workflows have accelerated screening, tagging, and extraction, but standards for transparency have lagged behind. Now, the expectations are explicit.

Below, we break down the core requirements emerging from “Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence [aligning] around the RAISE recommendations” in a joint statement, and describe how to employ Nested Knowledge within these frameworks.

1. Clear, Public Information About How the AI System Works

The first major expectation is straightforward:
Users and stakeholders need to know what the AI does, how it is designed to be used, and where humans stay firmly in the loop. This is not about exposing proprietary model weights. It’s about providing clarity around purpose, workflow, and appropriate use.

How Nested Knowledge Complies

Nested Knowledge (NK) takes a human-in-the-loop approach by design. Every AI capability (Smart Search, Smart Screener, and Smart Tagging) operates as decision support, not decision replacement, and every process in NK can also be completed manually.

On the platform today, users can already see:

  • What the AI is doing (e.g., suggesting tags, highlighting potentially relevant terms, surfacing likely inclusions).

  • Where human oversight is required (e.g., final inclusion/exclusion, tag confirmation, data extraction verification).

  • How evidence flows from source databases to Studies, Tags, and synthesis outputs.

In addition, NK maintains transparent Terms & Conditions, a detailed feature glossary, and documentation that clearly differentiates manual, assisted, and AI-generated components of the workflow.

This aligns precisely with Cochrane’s call for public clarity on system behavior, scope, and intended use.

2. Public Testing, Training, and Validation Information

Cochrane and the RAISE framework emphasize that AI tools must be supported by publicly accessible, appropriately detailed evaluations covering:

  • The scope and domain of the models
  • The processes used for training and validation (and characteristics of datasets)
  • Transparent metrics demonstrating performance, error modes, and limitations

How Nested Knowledge Complies

Nested Knowledge already maintains:

  • Execution logs showing how AI suggestions were generated and accepted or overridden (see the audit sketch below).

  • Deduplication documentation that explains every step of the heuristic and why counts may differ across interfaces (an illustrative sketch follows this list).

  • Feature-level validation summaries.
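To make the deduplication point concrete, here is a minimal sketch of the general kind of two-step heuristic such documentation describes (DOI match first, then normalized-title comparison). This is purely illustrative and not Nested Knowledge's actual algorithm; all field names are assumptions.

```python
import re

def normalize_title(title: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace so that
    trivially different titles compare equal."""
    title = re.sub(r"[^a-z0-9 ]", " ", title.lower())
    return " ".join(title.split())

def is_duplicate(rec_a: dict, rec_b: dict) -> bool:
    """Illustrative heuristic (NOT NK's actual algorithm): exact DOI
    match when both records carry a DOI, else normalized-title match."""
    doi_a, doi_b = rec_a.get("doi"), rec_b.get("doi")
    if doi_a and doi_b:
        return doi_a.lower() == doi_b.lower()
    return normalize_title(rec_a["title"]) == normalize_title(rec_b["title"])

a = {"doi": "10.1000/xyz123", "title": "A Trial of Drug X"}
b = {"doi": "10.1000/XYZ123", "title": "A trial of drug X."}
print(is_duplicate(a, b))  # True: DOIs match case-insensitively
```

Which step fires (DOI versus title) depends on what metadata each interface has loaded, which is one reason counts can differ across views.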

This level of transparency aligns directly with the new standards and gives research organizations, HTA bodies, and regulatory sponsors what they increasingly expect: verifiable evidence of AI performance in real review contexts.
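As a concrete example of the audit these records enable, the sketch below tallies how often reviewers accepted or overrode AI suggestions, per feature. The CSV export and its column names ("feature", "reviewer_action") are hypothetical stand-ins for whatever schema an actual NK export uses.

```python
import csv
from collections import Counter

def audit_ai_suggestions(log_path: str) -> Counter:
    """Tally reviewer responses to AI suggestions from an exported log.
    Assumes columns 'feature' and 'reviewer_action' (hypothetical schema)."""
    outcomes: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            outcomes[(row["feature"], row["reviewer_action"])] += 1
    return outcomes

# for (feature, action), n in sorted(audit_ai_suggestions("log.csv").items()):
#     print(f"{feature}: {action} = {n}")
```

An override rate that spikes for one feature or one topic area is exactly the kind of signal a validation summary should surface.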

3. Transparent Strengths, Limitations, and Potential Biases

The third expectation is perhaps the most important:
AI vendors must publicly state where their tools work well, where they do not, and how they mitigate bias. In evidence synthesis, a hidden blind spot or unreported limitation can lead to distortions in clinical conclusions, so transparency is essential.

How Nested Knowledge Complies

Nested Knowledge takes a proactive stance in documenting strengths and limitations.

For example:

  • Smart Tagging excels in high-volume, clearly keyworded biomedical areas, boosting recall and decreasing reviewer workload, but may require additional user training or sampling when dealing with rare diseases, emerging therapies, or ambiguous terminology (a worked metrics example follows this list).

  • Search Suggestions help expand concepts and reduce missed records, but are not replacements for expert-designed Boolean strategies.

  • Automated Extraction Helpers can rapidly flag relevant data points, but must always be checked by a trained reviewer prior to final synthesis.
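Claims like "boosting recall" are checkable. Given AI-suggested tags and final human-confirmed tags for a validation sample, recall and precision reduce to a few lines; the tag values below are hypothetical.

```python
def tag_metrics(ai_suggested: set, human_confirmed: set) -> dict:
    """Recall: share of human-confirmed tags the AI also suggested.
    Precision: share of AI suggestions that humans confirmed."""
    tp = len(ai_suggested & human_confirmed)
    return {
        "recall": tp / len(human_confirmed) if human_confirmed else 1.0,
        "precision": tp / len(ai_suggested) if ai_suggested else 1.0,
    }

# One study in a hypothetical rare-disease review, where Smart Tagging
# may need extra training examples:
ai = {"Pompe disease", "enzyme replacement", "adult-onset"}
human = {"Pompe disease", "enzyme replacement", "6MWT outcome"}
print(tag_metrics(ai, human))  # recall and precision of about 0.67 each
```

Low recall on a sampled subset is the trigger for the "additional user training or sampling" caveat above.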

Furthermore, NK implements bias mitigation practices, including:

  • Clear reviewer override controls

  • Sampling recommendations for excluded AI-suggested records (a minimal sampling sketch follows this list)

  • Metadata exports that allow audits and reproducibility checks

  • Ability for teams to layer in their own training examples, reducing domain misalignment
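As a minimal version of the sampling recommendation, the sketch below draws a fixed-seed random sample of AI-suggested exclusions for human re-review; the fixed seed makes the audit reproducible. The record IDs and 5% rate are illustrative assumptions.

```python
import random

def sample_for_qc(excluded_ids: list[str], rate: float = 0.05,
                  seed: int = 42) -> list[str]:
    """Draw a reproducible random sample of AI-excluded records so a
    human can verify none were wrongly screened out."""
    k = max(1, round(rate * len(excluded_ids)))
    return random.Random(seed).sample(excluded_ids, k)

excluded = [f"REC-{i:04d}" for i in range(1, 401)]  # hypothetical IDs
print(sample_for_qc(excluded))  # 20 IDs, identical on every run
```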

This satisfies the call for candid, public articulation of model performance boundaries.

Why This Matters for HEOR and HTA

AI already plays a central role in rapid evidence gathering, concept tagging, deduplication, and early extraction. As HTA bodies (including NICE, CADTH, and increasingly the JCA consortium) scrutinize evidence synthesis workflows, tools like Nested Knowledge must demonstrate:

  • Reliability

  • Auditability

  • Reproducibility

  • Human accountability

The new Cochrane-aligned standards are accelerating this shift.

Nested Knowledge’s approach (transparent, validated, human-in-the-loop, and auditable) positions the platform not only to comply, but to set the benchmark for responsible AI in systematic reviews.

Justifying Fitness for Use

For Cochrane and RAISE, “fitness for use” means determining whether an AI system is appropriate for a specific evidence-synthesis task given its documented performance, domain applicability, limitations, and oversight requirements. It requires reviewers to judge, before deploying the tool, whether its behavior and evaluation evidence support reliable use without compromising methodological rigor.

Nested Knowledge provides task-specific performance documentation, model-domain boundaries, and transparent descriptions of how each AI feature operates. Documentation also includes recommendations on which AI tools are appropriate for different review types (e.g., gap analysis, targeted review, fully publishable systematic review). Justification ultimately rests with the researcher, but tying fitness for use back to the intended interpretation and impact of the review makes that justification much easier.
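One way to make that judgment systematic is to encode it as an explicit pre-deployment gate and record the rationale in your protocol. The review types echo the documentation above, but the recall thresholds are purely illustrative assumptions; neither Cochrane, RAISE, nor NK prescribes specific numbers.

```python
# Illustrative thresholds only; set your own and justify them in your protocol.
MIN_RECALL = {
    "gap analysis": 0.80,
    "targeted review": 0.90,
    "systematic review": 0.95,  # fully publishable reviews demand the most rigor
}

def justify_fitness(feature: str, documented_recall: float,
                    review_type: str) -> str:
    """Return a protocol-ready sentence recording the fitness-for-use call."""
    threshold = MIN_RECALL[review_type]
    verdict = "fit" if documented_recall >= threshold else "NOT fit"
    return (f"{feature} (documented recall {documented_recall:.2f}) is "
            f"{verdict} for use in a {review_type} "
            f"(project threshold {threshold:.2f}).")

print(justify_fitness("Smart Tagging", 0.92, "targeted review"))
# Smart Tagging (documented recall 0.92) is fit for use in a
# targeted review (project threshold 0.90).
```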

Conclusion

The demands from Cochrane and RAISE can be summarized in three questions:

  1. Do you clearly explain how your AI works?
    ✔ Yes—Nested Knowledge publicly details AI roles, boundaries, and human oversight.

  2. Do you provide transparent validation of your AI?
    ✔ Yes—through execution histories, provenance, documentation, and past and forthcoming public Validation Reports.

  3. Do you openly state strengths, limitations, and potential biases?
    ✔ Yes—with feature-level transparency, bias mitigation strategies built into the software, and discussion of bias risks in methods, publications, and documentation.

For any researcher undertaking an AI-assisted review: Nested Knowledge has designed its AI from the start around transparency, auditability, and rigorous human control. This is exactly the direction the industry must go, and we appreciate the clear guidance from Cochrane and RAISE.

Happy reviewing!
