Nested Knowledge offers a web-based Software-as-a-Service (SaaS) application for secondary medical research that integrates Artificial Intelligence (AI) features. Nested Knowledge maintains compliance with AI legislation in all applicable countries. For an explanation of the AI used in the Nested Knowledge platform, please see the Disclosure of Artificial Intelligence (AI) Systems.
Note: This policy outlines compliance with laws specific to artificial intelligence tools in biomedical evidence synthesis applications, as interpreted through the reasonable efforts of the Nested Knowledge team. This overview constitutes neither legal advice nor a full disclosure of artificial intelligence methods.
European Union AI Act #
The Nested Knowledge application is a non-generalized AI system that does not impact any of the critical industries listed in the European Union’s AI Act. The application does interact with natural persons, and pursuant to the Act, we maintain compliance with all of its “Transparency Obligations”:
“Transparency Obligations: The AI system, the provider or the user must inform any person exposed to the system in a timely, clear manner when interacting with an AI system, unless obvious from context. Where appropriate and relevant include information on which functions are AI enabled, if there is human oversight, who is responsible for decision-making, and what the rights to object and seek redress are.”
- Nested Knowledge carries out these transparency obligations in several ways. First, our full Disclosure of AI Systems is available to all users. Second, all AI used in the application requires an affirmative opt-in by the user; when the user opts in to AI assistance, that choice is audited and traceable within each project. Third, Nested Knowledge recommends that users who employ AI in any step of a review disclose the use of AI systems in their written outputs, reporting which Nested Knowledge tools were used (e.g., if users employ Robot Screener in a dual screening process, this should be reported in the Screening methods of any report). This guidance is particularly stressed for users who are medical writers.
- Transparency is central to Nested Knowledge’s methods, as any scientific process requires. In addition to the AI Disclosure noted above, Nested Knowledge has released a full overview of our AI practices and philosophy in the context of new NICE guidance on the use of AI in the systematic literature review process. Nested Knowledge’s philosophy is that wherever an AI tool is employed, the tool will provide (1) transparency into how the AI works; (2) audit records of AI actions (e.g., Robot Screener decisions); and (3) where relevant, data provenance (e.g., Smart Tagging Recommendations trace each extraction back to the exact quote in the underlying full text).
- The most important protection against misuse of clinical data is that the tool is not used to extract patient health information (PHI) or personally identifiable information (PII). Because the platform handles only cohort-level data, we strictly limit the potential for either AI systems or malicious actors to obtain clinical data that is not already available in the published literature. Note also that, while high-risk AI tools carry additional constraints and rules regarding quality systems and enforcement, these are not necessary in the context of published data.
- Uploading documents containing personal health information into Nested Knowledge violates our Terms of Service, and we reserve the right to terminate the offending user’s account and remove the PHI. In addition, all full texts are restricted to the users and Organizations with whom the nest owner shares the project, preventing this information from appearing in Synthesis’ shareable outputs.
Risk analysis under Article 6:
- Based on a review of Article 6 and Annexes I and III, Nested Knowledge is not a high-risk application:
– Nested Knowledge is not used, alone or in combination with a product, as a safety component regulated by the Union harmonization legislation listed in Annex I.
– The system performs only a narrow procedural task (in the Act’s classification, as distinguished from general-purpose AI systems);
– Neither Nested Knowledge nor its users perform any Annex III activities: the tool does not collect biometric data, is unrelated to critical infrastructure, and contains no AI systems involved in education or vocational training, employment or worker management, access to essential private or public services (including financial services), law enforcement, migration, the administration of justice, or voting/democratic processes.
Bias and Discrimination:
- Bias and discrimination are vital topics, though certain aspects are not relevant to Nested Knowledge systems. In the EU AI Act, controls center on tools that involve biometrics or are used in systems that could lead to discrimination (e.g., screening resumes for employment). This does not apply to Nested Knowledge’s application.
- However, Nested Knowledge is committed to minimizing bias beyond these categories. The primary prevention system for AI bias in evidence synthesis is human curation (see the overview of AI practices above). AI systems, especially those trained on a limited set of data, can over-fit models or otherwise reach confident but incorrect conclusions. The best quality control is to have each decision assessed by a curator: in Robot Screener, inclusion and exclusion decisions are surfaced to an adjudicator; in Smart Tagging Recommendations, AI-extracted content is shown to a human extractor for confirmation; and in Search Exploration, AI-recommended terms are only adopted by the user, never added automatically by the AI.
- In addition, while meta-analytical statistics are not an AI system, bias can certainly limit the veracity of findings. Nested Knowledge offers Critical Appraisal systems for rating the risk of bias of all candidate studies, and provides I-squared statistics (see the conventional definition below) and funnel plot features to help identify outliers or potentially biasing studies and to assess evidence quality and heterogeneity. None of these fully prevent bias; any evidence synthesis process involves combining inherently disparate sources, which introduces bias with respect to study methods, included populations, and outcome reporting practices. Bias mitigation is nonetheless an integral part of both the systematic review process and Nested Knowledge’s tools in particular.
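For reader orientation, the I-squared statistic referenced above is conventionally computed from Cochran’s Q statistic over the k included studies (the standard Higgins-Thompson definition); this formula is offered as background and is not a disclosure of the platform’s specific implementation:

$$ I^2 = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\% $$

Values near 0% suggest that observed variation between studies is consistent with chance alone, while larger values indicate increasing between-study heterogeneity.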
Communication and Compliance #
This policy will be reviewed and updated annually, with regular leadership oversight to ensure its content remains up to date with artificial intelligence regulations. This policy will also be updated whenever any party identifies a relevant piece of legislation to leadership.
Revision History #
| Author | Date of Revision/Review | Comments |
|---|---|---|
| M. Williams | 12/11/2025 | Updated |
| K. Cowie | 10/04/2024 | Created |
| K. Kallmes | 10/04/2024 | Approved |