Wednesday, February 4, 2026

EMA and FDA have published Guiding principles of good AI practice in drug development

Artificial Intelligence (AI) has the potential to transform the way medicines are developed and evaluated, ultimately improving healthcare outcomes. In this context, the EMA and FDA issued joint guidance in January 2026 outlining 10 international guiding principles. These principles identify areas where international regulators, standards organizations, and other collaborative bodies can work together to advance good practices in drug development.
Areas of collaboration include research, the development of educational tools and resources, international harmonization, and the establishment of consensus standards. These efforts may help inform regulatory policies and guidelines across different jurisdictions, in alignment with applicable legal and regulatory frameworks.
Further details can be found in the relevant documents (links below). However, the 10 guiding principles are particularly worth highlighting:
1. Human-centric by design
The development and use of AI technologies align with ethical and human-centric values.
2. Risk-based approach
The development and use of AI technologies follow a risk-based approach with proportionate validation, risk mitigation, and oversight based on the context of use and determined model risk.
3. Adherence to standards
AI technologies adhere to relevant legal, ethical, technical, scientific, cybersecurity, and regulatory standards, including Good Practices (GxP).
4. Clear context of use
AI technologies have a well-defined context of use (the role and scope for which the technology is being used).
(Note: in the joint guidance, the term “drug” refers to drugs and biological products as defined in the United States of America, and to medicinal products as defined in the European Union.)
5. Multidisciplinary expertise
Multidisciplinary expertise covering both the AI technology and its context of use is integrated throughout the technology’s life cycle.
6. Data governance and documentation
Data source provenance, processing steps, and analytical decisions are documented in a detailed, traceable, and verifiable manner, in line with GxP requirements. Appropriate governance, including privacy and protection for sensitive data, is maintained throughout the technology’s life cycle.
7. Model design and development practices
The development of AI technologies follows best practices in model and system design and software engineering and leverages data that is fit-for-use, considering interpretability, explainability, and predictive performance. Good model and system development promotes transparency, reliability, generalizability, and robustness for AI technologies contributing to patient safety.
8. Risk-based performance assessment
Risk-based performance assessments evaluate the complete system including human-AI interactions, using fit-for-use data and metrics appropriate for the intended context of use, supported by validation of predictive performance through appropriately designed testing and evaluation methods.
9. Life cycle management
Risk-based quality management systems are implemented throughout the AI technologies’ life cycles, including processes to capture, assess, and address issues. The AI technologies undergo scheduled monitoring and periodic re-evaluation to ensure adequate performance (e.g., to address data drift).
10. Clear, essential information
Plain language is used to present clear, accessible, and contextually relevant information to the intended audience, including users and patients, regarding the AI technology’s context of use, performance, limitations, underlying data, updates, and interpretability or explainability.

Documents published by EMA and FDA:
https://www.ema.euro … g-development_en.pdf
https://www.fda.gov/ … edia/189581/download