Tuesday, April 14, 2026

International AI Safety Report 2026

This report (written with input from 100+ independent experts across many countries and organisations) synthesises what frontier/general-purpose AI systems can do, what risks they pose, and how those risks can be managed. For pharma companies deploying GenAI/LLMs (quality, PV, medical writing, knowledge systems), it is a practical reference for “known failure modes” (evaluation gaps, reliability issues, misuse risks) and governance expectations.

International AI Safety Report 2026: what matters for pharma (quick take)

The International AI Safety Report 2026 (Feb 2026) is not pharma-specific guidance, but it contains several points that are highly relevant for pharmaceutical companies using GenAI/LLMs or “AI agents” in R&D, labs, medical information, PV, quality systems, and regulated documentation.

Notably, the report highlights the following pharma-relevant themes:

  • AI as a scientific accelerator (with dual-use implications). Advanced AI can support tasks such as molecule/protein design and other scientific work that can accelerate drug discovery, while also raising dual-use concerns (bio/chem misuse risk).
  • Growing capability of AI agents in laboratory contexts. The report discusses increasingly capable agents (“AI co-scientists”) that can assist with experimental protocols, troubleshooting, and interacting with biological tools—useful for legitimate research but also relevant for risk governance.
  • Use of “benign proxy tasks” in bio/chem risk evaluation. Because direct weapons testing is constrained, the report notes that safer proxy tasks—explicitly including activities like pharmaceutical synthesis—are used to estimate how much AI can increase capability.
  • The “evaluation gap” as a central governance warning. Models can look strong in benchmarks but behave differently in real-world settings. For pharma, this directly supports the need for realistic testing, monitoring, and lifecycle control before relying on AI outputs in high-stakes processes.
  • Reliability risks in high-stakes domains like medicine. The report highlights known failure modes (e.g., hallucinations/out-of-distribution failures) and notes that these issues matter in medical contexts, reinforcing the need for boundaries and human oversight.
  • Safety practices that map well to pharma governance. The report emphasises practices such as red-teaming and monitoring/control approaches, which align well with regulated “fit-for-intended-use” thinking and ongoing oversight.

Practical takeaway: this report strengthens the argument that GenAI and agentic systems should be treated as probabilistic tools with real failure modes—requiring realistic evaluation, red-teaming, monitoring, and clear boundaries, especially when outputs influence regulated decisions.


Publication page: https://internationa … i-safety-report-2026
Direct PDF: https://internationa … fety-report-2026.pdf

Monday, April 6, 2026

HMA and EMA have published the Network Data Steering Group (NDSG) Workplan 2026–2028 (Version 2.0, adopted February 2026).

This document is a practical roadmap for how the EU medicines network intends to strengthen data interoperability, analytics, and AI-enabled regulatory capabilities, with annual updates planned. For pharmaceutical companies, it signals where regulators are investing: DARWIN EU (RWE), more systematic clinical study data analytics, AI governance and tools, and interoperability standards aligned with EHDS.

Notably, the workplan outlines the following focus areas and deliverables:

  • A clear long-term vision: “Trusted medicines by unlocking the value of data”, with an explicit commitment to using data within an ethical framework and in compliance with EU data legislation.
  • A structured program across six workstreams: Strategy & governance; Data analytics; Artificial intelligence; Data interoperability; Stakeholder engagement & change management; Guidance & international initiatives.
  • Alignment with major EU initiatives impacting data exchange and evidence generation, including European Health Data Space (EHDS/TEHDAS2), revised EU pharmaceutical legislation, IDMP-related work (UNIWIDE IDMP), and broader interoperability frameworks.
  • Scaling Real-World Evidence via DARWIN EU, including onboarding data partners, executing studies, and planning further expansion (DARWIN EU 2).
  • Transitioning clinical study data activities from pilots toward systematic submission and analysis of patient-level clinical study data for centrally authorised products (CAPs), supported by tools/process updates and training.
  • An AI workstream that includes: publishing Guiding Principles for Good AI Practice, an AI glossary, annual AI Observatory updates, AI literacy training (linked to AI Act obligations), and deployment of internal AI tools (e.g., Scientific Explorer, SPC Reader/SPC Search, AI assistants) supported by an AI Tools Framework.
  • Interoperability deliverables such as data catalogues/metadata management, defined data roles (e.g., trustees/stewards), and a medicines-regulation data glossary—explicitly linked to EHDS readiness.

Workplan PDF (EMA): https://www.ema.euro … es-regulation_en.pdf
Workplan PDF (EMA): https://www.ema.euro … working-group_en.pdf