Tuesday, April 14, 2026
International AI Safety Report 2026: what matters for pharma (quick take)
This report (written with input from 100+ independent experts across many countries and organisations) synthesises what frontier/general-purpose AI systems can do, what risks they pose, and how those risks can be managed. For pharma companies deploying GenAI/LLMs (quality, PV, medical writing, knowledge systems), it is a practical reference for “known failure modes” (evaluation gaps, reliability issues, misuse risks) and governance expectations.
The International AI Safety Report 2026 (Feb 2026) is not pharma-specific guidance, but it contains several points that are highly relevant for pharmaceutical companies using GenAI/LLMs or “AI agents” in R&D, labs, medical information, PV, quality systems, and regulated documentation.
Notably, the report highlights the following pharma-relevant themes:
- AI as a scientific accelerator (with dual-use implications). Advanced AI can support tasks such as molecule/protein design and other scientific work that can accelerate drug discovery, while also raising dual-use concerns (bio/chem misuse risk).
- Growing capability of AI agents in laboratory contexts. The report discusses increasingly capable agents (“AI co-scientists”) that can assist with experimental protocols, troubleshooting, and interacting with biological tools—useful for legitimate research but also relevant for risk governance.
- Use of “benign proxy tasks” in bio/chem risk evaluation. Because direct weapons testing is constrained, the report notes that safer proxy tasks—explicitly including activities like pharmaceutical synthesis—are used to estimate how much AI can increase capability.
- The “evaluation gap” as a central governance warning. Models can look strong in benchmarks but behave differently in real-world settings. For pharma, this directly supports the need for realistic testing, monitoring, and lifecycle control before relying on AI outputs in high-stakes processes.
- Reliability risks in high-stakes domains like medicine. The report highlights known failure modes (e.g., hallucinations/out-of-distribution failures) and notes that these issues matter in medical contexts, reinforcing the need for boundaries and human oversight.
- Safety practices that map well to pharma governance. The report emphasizes practices such as red-teaming and monitoring/control approaches, which align well with regulated “fit-for-intended-use” thinking and ongoing oversight.
Practical takeaway: this report strengthens the argument that GenAI and agentic systems should be treated as probabilistic tools with real failure modes, requiring realistic evaluation, red-teaming, monitoring, and clear boundaries, especially when outputs influence regulated decisions (a minimal illustrative sketch follows).
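The sketch below shows one way such workflow-level testing could look: the model is exercised on cases drawn from the intended workflow rather than generic benchmarks, and every failure is routed to human review. It is a minimal sketch under stated assumptions: stub_model_fn, the EvalCase structure, and the string-matching acceptance criteria are hypothetical placeholders, not a validated harness.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str              # a realistic task from the target workflow
    must_contain: list[str]  # minimal acceptance criteria for this sketch

def evaluate(model_fn: Callable[[str], str], cases: list[EvalCase]) -> list[dict]:
    """Run domain-realistic cases; flag every failure for human review."""
    results = []
    for case in cases:
        output = model_fn(case.prompt)
        passed = all(term.lower() in output.lower() for term in case.must_contain)
        # Failures are never auto-used; they go to a human reviewer.
        results.append({"prompt": case.prompt, "output": output,
                        "passed": passed, "needs_human_review": not passed})
    return results

def stub_model_fn(prompt: str) -> str:
    # Stand-in for a real model call (hypothetical).
    return "Hepatic adverse reactions listed in the SmPC include ..."

# Cases should mirror real PV / medical-information queries so the score
# reflects the intended context of use, not leaderboard performance.
cases = [EvalCase(
    prompt="Summarise the known hepatic adverse reactions for product X per its SmPC.",
    must_contain=["hepatic", "SmPC"],
)]
print(evaluate(stub_model_fn, cases))
```

In a regulated setting, acceptance criteria would come from the documented context of use rather than simple string matching.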
Publication page: https://internationa … i-safety-report-2026
Direct PDF: https://internationa … fety-report-2026.pdf
Monday, April 6, 2026
This document is a practical roadmap for how the EU medicines network intends to strengthen data interoperability, analytics, and AI-enabled regulatory capabilities, with annual updates planned. For pharmaceutical companies, it signals where regulators are investing: DARWIN EU (RWE), more systematic clinical study data analytics, AI governance and tools, and interoperability standards aligned with EHDS.
Notably, the workplan outlines the following focus areas and deliverables:
- A clear long-term vision: “Trusted medicines by unlocking the value of data”, with an explicit commitment to using data within an ethical framework and in compliance with EU data legislation.
- A structured program across six workstreams: Strategy & governance; Data analytics; Artificial intelligence; Data interoperability; Stakeholder engagement & change management; Guidance & international initiatives.
- Alignment with major EU initiatives impacting data exchange and evidence generation, including European Health Data Space (EHDS/TEHDAS2), revised EU pharmaceutical legislation, IDMP-related work (UNIWIDE IDMP), and broader interoperability frameworks.
- Scaling Real-World Evidence via DARWIN EU, including onboarding data partners, executing studies, and planning further expansion (DARWIN EU 2).
- Transitioning clinical study data activities from pilots toward systematic submission and analysis of patient-level clinical study data for centrally authorised products (CAPs), supported by tools/process updates and training.
- An AI workstream that includes: publishing Guiding Principles for Good AI Practice, an AI glossary, annual AI Observatory updates, AI literacy training (linked to AI Act obligations), and deployment of internal AI tools (e.g., Scientific Explorer, SPC Reader/SPC Search, AI assistants) supported by an AI Tools Framework.
- Interoperability deliverables such as data catalogues and metadata management, defined data roles (e.g., trustees/stewards), and a medicines-regulation data glossary, all explicitly linked to EHDS readiness (a hypothetical catalogue-record sketch follows this list).
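As a purely illustrative aside, a single data-catalogue record with named data roles might look like the sketch below; the field names and values are assumptions for illustration, not an EMA or EHDS schema.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetCatalogueEntry:
    """One hypothetical record in an internal data catalogue."""
    dataset_id: str
    title: str
    data_steward: str    # named role, per the workplan's defined data roles
    data_trustee: str
    source_system: str
    idmp_aligned: bool   # whether product identifiers follow IDMP
    glossary_terms: list[str] = field(default_factory=list)

entry = DatasetCatalogueEntry(
    dataset_id="ds-0001",
    title="Patient-level clinical study data for CAPs",
    data_steward="Biostatistics data steward",
    data_trustee="Data Office",
    source_system="clinical-data-lake",
    idmp_aligned=True,
    glossary_terms=["CAP", "patient-level data"],
)
print(entry)
```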
Workplan PDF (EMA): https://www.ema.euro … es-regulation_en.pdf
Work plan PDF (EMA): https://www.ema.euro … working-group_en.pdf
Monday, March 9, 2026
EMA has published the event page and supporting documents for the HMA–EMA AI group meeting with industry stakeholders (February 2026) and has since posted summary notes. This is a concrete regulatory mechanism for aligning expectations on acceptable AI use, governance, and evidence, including in areas that can affect GMP/CMC and lifecycle data.
Notably, the current guidance development activities mentioned include:
- Guidance on AI in clinical development (a concept paper is expected before a full draft guideline).
- Guidance on AI in pharmacovigilance, to be developed jointly with PRAC as a Q&A-style document.
- EU GMP Annex 22 on AI in manufacturing: following public consultation (≈1,300 comments received), it is now under revision, with the final document expected by the end of the year.
- Several industry interventions were noted, including calls for a more flexible approach to the use of AI in GMP, extending to the potential use of generative AI and large language models (LLMs) in critical GMP applications, provided this is supported by a robust, risk-based framework (an illustrative sketch of such risk-based gating follows this list).
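To illustrate what such a risk-based framework could reduce to in practice, here is a minimal sketch mapping assessed risk tiers for LLM-assisted GMP tasks to minimum oversight controls. The tiers, examples, and controls are assumptions for illustration, not Annex 22 requirements.

```python
from enum import Enum

class GmpRisk(Enum):
    LOW = "low"        # e.g., drafting non-binding summaries
    MEDIUM = "medium"  # e.g., suggesting deviation classifications
    HIGH = "high"      # e.g., content feeding batch disposition

# Illustrative controls per tier; a real framework would come from QRM.
CONTROLS = {
    GmpRisk.LOW:    {"human_review": "sample-based", "second_reviewer": False},
    GmpRisk.MEDIUM: {"human_review": "always",       "second_reviewer": False},
    GmpRisk.HIGH:   {"human_review": "always",       "second_reviewer": True},
}

def required_controls(risk: GmpRisk) -> dict:
    """Map an assessed risk tier to the minimum oversight controls."""
    return CONTROLS[risk]

print(required_controls(GmpRisk.HIGH))
```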
Event page (EMA): https://www.ema.euro … olders-february-2026
Summary notes PDF (EMA): https://www.ema.euro … february-2026_en.pdf
Thursday, March 5, 2026
FDA deploys internal generative-AI tool “Elsa” to speed scientific reviews, with agency-wide rollout
FDA launched, and has since expanded, a generative AI tool to accelerate scientific reviews and related tasks; Reuters coverage describes the intended uses and rollout context.
FDA also issued its own announcement about completing an AI-assisted review pilot and scaling AI internally.
Why it matters for pharma: This affects how fast and how consistently regulators can process submissions, identify issues, and target inspections. Even if Elsa is “internal,” it changes the regulatory interface pharma works with.
Reuters coverage: https://www.reuters. … -reviews-2025-06-02/
FDA press announcement: https://www.fda.gov/ … ssive-agency-wide-ai
Wednesday, February 4, 2026
Artificial Intelligence (AI) has the potential to transform the way medicines are developed and evaluated, ultimately improving healthcare outcomes. In this context, the EMA and FDA issued joint guidance in January of this year outlining 10 international guiding principles. These principles identify areas where international regulators, standards organizations, and other collaborative bodies can work together to advance good practices in drug development.
Areas of collaboration include research, the development of educational tools and resources, international harmonization, and the establishment of consensus standards. These efforts may help inform regulatory policies and guidelines across different jurisdictions, in alignment with applicable legal and regulatory frameworks.
Further details can be found in the documents linked below. (In the joint paper, the term “drug” refers to drugs and biological products as defined in the United States and to medicinal products as defined in the European Union.) The 10 guiding principles themselves are worth highlighting:
1. Human-centric by design
The development and use of AI technologies align with ethical and human-centric values.
2. Risk-based approach
The development and use of AI technologies follow a risk-based approach with proportionate validation, risk mitigation, and oversight based on the context of use and determined model risk.
3. Adherence to standards
AI technologies adhere to relevant legal, ethical, technical, scientific, cybersecurity, and regulatory standards, including Good Practices (GxP).
4. Clear context of use
AI technologies have a well-defined context of use (the role and scope for which the technology is being used).
5. Multidisciplinary expertise
Multidisciplinary expertise covering both the AI technology and its context of use is integrated throughout the technology’s life cycle.
6. Data governance and documentation
Data source provenance, processing steps, and analytical decisions are documented in a detailed, traceable, and verifiable manner, in line with GxP requirements. Appropriate governance, including privacy and protection for sensitive data, is maintained throughout the technology’s life cycle.
7. Model design and development practices
The development of AI technologies follows best practices in model and system design and software engineering and leverages data that is fit-for-use, considering interpretability, explainability, and predictive performance. Good model and system development promotes transparency, reliability, generalizability, and robustness for AI technologies contributing to patient safety.
8. Risk-based performance assessment
Risk-based performance assessments evaluate the complete system including human-AI interactions, using fit-for-use data and metrics appropriate for the intended context of use, supported by validation of predictive performance through appropriately designed testing and evaluation methods.
9. Life cycle management
Risk-based quality management systems are implemented throughout the AI technologies’ life cycles, including processes to capture, assess, and address issues. The AI technologies undergo scheduled monitoring and periodic re-evaluation to ensure adequate performance (e.g., to address data drift; a drift-monitoring sketch follows this list).
10. Clear, essential information
Plain language is used to present clear, accessible, and contextually relevant information to the intended audience, including users and patients, regarding the AI technology’s context of use, performance, limitations, underlying data, updates, and interpretability or explainability.
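Principle 9’s mention of data drift can be made concrete with a standard drift metric. The sketch below computes a population stability index (PSI) between a reference distribution captured at validation time and live inputs; the thresholds in the docstring are common rules of thumb, not regulatory limits.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference distribution (validation data) and live inputs.

    Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant drift warranting re-evaluation.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid division by zero and log(0).
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

# Hypothetical usage: compare a model input feature at validation time vs. now.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # distribution at validation
live = rng.normal(0.3, 1.1, 5_000)       # shifted live distribution
print(f"PSI = {population_stability_index(baseline, live):.3f}")
```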
Documents published by EMA and FDA:
https://www.ema.euro … g-development_en.pdf
https://www.fda.gov/ … edia/189581/download