The FDA’s Draft Guidance, Use of AI in Regulatory Decision-Making for Drug & Biological Products

Strategic Implications of FDA’s New Draft Guidance on Using AI

Executive Summary

Artificial intelligence (AI) is increasingly recognized as a powerful pharmaceutical research and development (R&D) catalyst, supporting processes from preclinical discovery to postmarket surveillance. AI can streamline regulatory pathways and improve patient outcomes by expediting target identification, optimizing clinical trial design, and automating quality control. However, the complexity of AI, particularly deep-learning models, also raises challenges related to validation, reproducibility, and explainability. To address these challenges, the U.S. Food and Drug Administration (FDA) has issued the draft guidance, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products (“Draft Guidance”), which introduces a seven-step, risk-based credibility assessment framework for AI models generating information used to inform regulatory decisions on safety, effectiveness, or product quality. Crucially, the guidance highlights data governance, risk stratification, and continuous life cycle management, requiring sponsors to consider the regulatory impact of AI at every developmental stage. Organizations must:

  • Define clear questions of interest:

    • Pinpoint the precise goal the AI model will address (e.g., forecasting adverse events or detecting manufacturing deviations).

    • Ensure the question aligns with regulatory and clinical/product-quality objectives.

  • Establish the AI model’s context of use (COU):

    • Identify exactly how, where, and by whom the AI output will be used.

    • Determine whether AI insights are the sole basis for decisions or one piece of a broader evidence set.

  • Classify risk:

    • Consider how significantly the AI model influences decisions.

    • Assess the potential impact if the model’s output is incorrect.

    • Combine these factors to determine the overall risk level (low, medium, or high).

  • Devise robust credibility plans:

    • Outline data sourcing, model architecture, and performance metrics.

    • Incorporate methods for continuous monitoring and updates.

    • Document all activities to meet regulatory standards for transparency and accountability.

These credibility plans encompass data collection, model training, validation protocols, and thorough documentation, all underpinned by transparency in AI model architectures and outputs. Executive leaders, board members, and investors will increasingly share responsibility for overseeing AI-driven initiatives and aligning them with FDA standards. AI vendors must likewise adjust product offerings to meet rising demands for data integrity, risk mitigation, and structured documentation. Although the guidance remains in draft form, it sketches a forward-thinking regulatory roadmap that prompts stakeholders to proactively engage the FDA, ensuring AI innovations flourish without compromising patient safety or scientific rigor.

Key Words

Artificial Intelligence (AI); U.S. Food and Drug Administration (FDA); Drug Development; Biologics; Draft Guidance; Regulatory Compliance; Risk-Based Framework; Data Governance; Investment Strategy; Corporate Governance; Model-Informed Drug and Biologic Development (MIDD)

Introduction

Artificial intelligence (AI) has emerged as a transformative force in pharmaceutical R&D, influencing nearly every stage of the drug development pipeline. By accelerating timelines and reducing costs, AI can enhance processes from discovery to postmarket surveillance:

  • Drug Discovery: AI aids in target identification, drug repurposing, and predicting a compound’s efficacy and safety.

  • Preclinical Development: AI refines lead compounds by forecasting pharmacokinetic and toxicological properties.

  • Clinical Trials: AI optimizes patient selection and recruitment and identifies novel biomarkers for study endpoints.

  • Manufacturing: AI enables production optimization and automated quality control to bolster product consistency.

  • Postmarket Surveillance: AI analyzes real-world data to detect adverse events and generate real-world evidence for safety monitoring.

Although AI introduces efficiencies, it also poses new oversight hurdles. Deep learning architectures, for instance, can be opaque, complicating validation and regulatory reviews. Recognizing these risks and opportunities, the FDA has committed to fostering innovative approaches while maintaining robust scientific and regulatory standards. According to FDA Commissioner Robert M. Califf, M.D., “With the appropriate safeguards in place, artificial intelligence has transformative potential to advance clinical research and accelerate medical product development to improve patient care.” The statement captures a dual reality: AI brings groundbreaking potential to improve patient outcomes, even as its complexity creates challenges in explainability and validation.

Ensuring AI models’ reliability, reproducibility, and interpretability is crucial, along with sound data governance and life cycle management. As AI evolves, sponsors employing AI to support regulatory decision-making must preserve rigorous quality standards. To that end, the FDA released the draft guidance titled Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products. Although not legally binding, this document reflects the Agency’s evolving expectations on data integrity, model risk assessment, and ongoing oversight. Executives, researchers, investors, and AI developers must be prepared to integrate these recommendations into their operations to harness AI’s promise in a regulated environment.

Guidance Status, Applicability, and Scope

The FDA’s draft guidance is directed specifically at the development of human and animal drugs and biological products when AI models generate data that inform regulatory decisions on safety, effectiveness, or quality. Figure 1 provides an at-a-glance depiction of how the guidance excludes AI uses focused solely on drug discovery or operational efficiencies that do not affect patient safety, product quality, or the reliability of study results.

FDA defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” A subset of AI, machine learning (ML), encompasses techniques that train algorithms to improve performance iteratively based on data. While ML is the most common technique in drug R&D, the guidance maintains a broader lens, acknowledging that future AI methods may demand similarly robust risk management and validation frameworks.

The FDA developed this draft guidance through multi-stakeholder engagement, including sponsors, technology developers, academia, and suppliers. Feedback came from FDA-led workshops, public comments, and the Agency’s direct experiences reviewing over 500 drug and biologic submissions with AI elements. As AI continues to evolve, the FDA’s core principles, focusing on data quality, transparency, and risk-based life cycle management, will likely influence drug approvals and other areas of health technology oversight.

Overview of FDA’s AI Guidance Provisions

The FDA’s draft guidance focuses on AI models whose outputs influence safety, effectiveness, or quality decisions. Figure 1 illustrates these higher-impact uses, contrasting them with AI tools for drug discovery or purely internal process improvements. Concentrating on applications linked to regulatory decisions ensures sponsors direct resources toward comprehensive evaluations, transparent practices, and well-defined oversight for AI models that can shape patient outcomes and product quality:

FIGURE 1: Avancer Group, Inc. All Rights Reserved.

Why the Use of AI Insights Is Different

Unlike traditional computational techniques, AI models in this context rely on large, multifaceted datasets and can adapt their internal parameters as data evolve. Figure 1 depicts several core features, such as systems that are not fully transparent (“black box”), susceptibilities to data changes, and the possibility of embedded biases that call for ongoing validation and monitoring across a product’s life cycle. These characteristics underscore the need for comprehensive approaches that consider everything from initial data curation to continual checks on performance and equity:

  1. Complexity of Deep Learning: Many advanced AI algorithms do not readily reveal how various data inputs shape their final predictions. In a regulated environment, this raises questions about accountability, especially for crucial decisions regarding treatment pathways or quality control processes.

  2. Evolving Data Landscapes: Over time, data may shift in ways that diverge from initial training conditions, reducing the model’s reliability if left unaddressed. Handling these shifts effectively requires structured methods for detecting and responding to changes in both clinical and manufacturing environments.

  3. Potential for Bias: Models built on existing datasets may carry forward historical or systemic patterns, leading to outcomes that do not optimally serve all patient groups or production scenarios. Identifying and addressing these patterns is pivotal for ensuring consistent, fair decision-making.

The guidance thus prompts sponsors to adopt dynamic, iterative processes when developing and applying AI models, maintaining vigilance from development through deployment in clinical or manufacturing settings (see Figure 1).
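To make the bias concern in point 3 concrete, the following minimal sketch compares a model’s accuracy across patient subgroups, the kind of disaggregated evaluation that can surface embedded bias. It is illustrative only: the guidance does not prescribe a specific metric, and the group labels, data, and threshold here are hypothetical.

```python
# Illustrative subgroup-performance check (not an FDA-prescribed method).
# A persistent accuracy gap between groups is a flag for further review,
# not proof of bias; acceptable gaps are context-dependent.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group_label, predicted, actual) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

if __name__ == "__main__":
    sample = [  # hypothetical predictions vs. observed outcomes
        ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
        ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
    ]
    per_group = accuracy_by_group(sample)
    print(per_group)
    gap = max(per_group.values()) - min(per_group.values())
    if gap > 0.10:  # illustrative threshold, not a regulatory value
        print("Subgroup performance gap exceeds threshold; review training data.")
```

In practice, the same disaggregation would be applied to every performance metric named in the credibility assessment plan, not just accuracy.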

AI Use in the Drug Product Life Cycle

1. Risk-Based Credibility Assessment Framework

A core component of the FDA’s approach is the risk-based credibility assessment framework, represented by the purple branch in Figure 1 (which also depicts the other two main areas: Life Cycle Maintenance and Early Engagement). This structure guides sponsors in determining whether an AI model is sufficiently reliable for its context of use (COU), whether that use involves predicting potential adverse events during trials or influencing final product release decisions in manufacturing:

  • Model Influence vs. Decision Consequence: Sponsors evaluate how heavily AI outputs steer safety or quality decisions and how serious the outcome might be if the model errs. Merging these factors yields an overall risk classification (low, medium, or high) that helps shape the necessary depth of testing, documentation, and post-deployment scrutiny.

  • Stepwise Process for Model Credibility: Following the numbered pathway shown in Figure 1, sponsors (1) define the precise question of interest the AI will answer; (2) establish the COU, including whether the model stands as a primary determinant or operates alongside other sources of evidence; (3) assess overall model risk; (4) develop a credibility assessment plan detailing data curation, training methodologies, performance targets, and techniques to address potential biases; (5) execute the plan; (6) compile the results into a credibility assessment report describing any discrepancies between anticipated and actual outcomes; and (7) determine whether the AI model meets acceptable criteria for the proposed use. If not, refinements or additional data may be required before proceeding with regulatory submissions or implementation.

The precise delineation of steps highlights the significance of each phase, from formulating the question of interest to final deployment decisions. This ensures comprehensive accountability as the model transitions from concept to practice.
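To illustrate how the first three steps might be operationalized, the sketch below encodes model influence, decision consequence, and an influence-by-consequence lookup in Python. The cell assignments and step labels are a paraphrase of the draft guidance’s framework, provided for illustration only; they are not an official FDA artifact, and the guidance expects a reasoned, documented justification rather than a mechanical lookup.

```python
# Illustrative sketch: the influence x consequence matrix and the seven
# credibility steps as plain Python. Cell values are our paraphrase of the
# draft guidance, not official FDA classifications.
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Model risk rises with how much the AI output drives the decision
# (influence) and how severe a wrong output would be (consequence).
RISK_MATRIX = {
    (Level.LOW, Level.LOW): "low",       (Level.LOW, Level.MEDIUM): "low",
    (Level.LOW, Level.HIGH): "medium",   (Level.MEDIUM, Level.LOW): "low",
    (Level.MEDIUM, Level.MEDIUM): "medium", (Level.MEDIUM, Level.HIGH): "high",
    (Level.HIGH, Level.LOW): "medium",   (Level.HIGH, Level.MEDIUM): "high",
    (Level.HIGH, Level.HIGH): "high",
}

CREDIBILITY_STEPS = [
    "1. Define the question of interest",
    "2. Define the context of use (COU)",
    "3. Assess AI model risk (influence x consequence)",
    "4. Develop a credibility assessment plan",
    "5. Execute the plan",
    "6. Document results and deviations from the plan",
    "7. Determine model adequacy for the COU",
]

def classify_model_risk(influence: Level, consequence: Level) -> str:
    """Return an overall model-risk label for a given COU."""
    return RISK_MATRIX[(influence, consequence)]

if __name__ == "__main__":
    # Example: AI output is one input among several (medium influence),
    # but an error could affect patient safety (high consequence).
    print(classify_model_risk(Level.MEDIUM, Level.HIGH))  # -> "high"
    print("\n".join(CREDIBILITY_STEPS))
```

A sponsor might use a structure like this internally to document why a given COU was classified as high risk and, therefore, why a deeper validation and monitoring plan was warranted.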

2. Life Cycle Maintenance of the Credibility of AI Model Outputs

In Figure 1 (blue branch), sponsors can see how life cycle maintenance provisions ensure that an AI model remains dependable throughout a drug product’s lifespan. As conditions shift, the model can face situations it was not originally trained to handle, prompting the need for consistent performance checks and well-orchestrated updates:

  • Ongoing Monitoring: Regular evaluations, such as retesting metrics or comparing new data against the model’s established benchmarks, support continuous alignment with the original performance standards. This is especially relevant when modifications occur in patient populations or manufacturing processes, two areas where small shifts can significantly affect model accuracy.

  • Structured Change Management: Sponsors integrate model updates into their pharmaceutical quality systems, recording adjustments so they can trace how each revision influences results. If an alteration meaningfully affects how the model performs for a higher-risk application, the process may involve additional regulatory review or post-approval filings to maintain transparency and reliability.

Through these efforts, sponsors check potential sources of drift or bias, aiming for consistent product quality and patient safety over time. Integrating these maintenance activities with other established quality processes creates a stable foundation for managing AI in a rapidly shifting environment.
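As one concrete example of the ongoing monitoring described above, the sketch below computes a population stability index (PSI) for a single input feature, comparing newly observed data against the training-time baseline. PSI and the 0.1/0.25 thresholds are common industry conventions rather than values taken from the guidance, and the patient-age scenario is hypothetical.

```python
# Minimal drift-check sketch for one numeric input feature.
# PSI thresholds (0.1 / 0.25) are industry conventions, not FDA values.
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               n_bins: int = 10) -> float:
    """Compare the current feature distribution against the training baseline."""
    # Fix bin edges from the baseline so both samples are bucketed identically.
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # Clip avoids division by zero / log of zero in empty bins.
    e = np.clip(expected / expected.sum(), 1e-6, None)
    a = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_ages = rng.normal(55, 10, 5_000)  # hypothetical training population
    new_ages = rng.normal(62, 12, 1_000)    # shifted postmarket population
    psi = population_stability_index(train_ages, new_ages)
    status = "stable" if psi < 0.1 else "monitor" if psi < 0.25 else "investigate"
    print(f"PSI = {psi:.3f} -> {status}")
```

In a full monitoring program, checks like this would run on a defined schedule across all key model inputs and outputs, with results and any triggered retraining recorded in the pharmaceutical quality system.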

3. Early Agency Engagement

Figure 1 (green branch) outlines the importance of early engagement with the FDA, which greatly benefits sponsors and regulators when AI systems introduce unique or unprecedented factors. Sponsors have several options, including Pre-Investigational New Drug (pre-IND) meetings and specialized programs, to lay out their AI methodologies and gather feedback on potential challenges:

  • Formal Meeting Opportunities: Sessions like the Initial Targeted Engagement for Regulatory Advice on CBER/CDER Products (INTERACT) allow sponsors to pose detailed questions about model design, data representativeness, or long-term oversight. This open dialogue often helps refine proposals before significant resources are committed.

  • Specialized Programs: Initiatives like the Complex Innovative Trial Design (CID) Meeting Program or the Emerging Drug Safety Technology Program (EDSTP) provide further avenues for discussing complex or innovative applications of AI. These interactions create a structured framework for sponsors to verify best practices, establish performance thresholds, or negotiate acceptable validation strategies aligned with regulatory expectations.

By fostering a transparent exchange early on, sponsors can anticipate regulatory perspectives, mitigate common pitfalls, and craft AI solutions that harmonize with public health objectives. This collaboration ultimately paves the way for a smoother journey from model design to real-world implementation, as illustrated in Figure 1.

Guidance Implications for Sponsors

The FDA’s draft guidance extends beyond scientific validation and affects numerous organizational dimensions, from corporate governance and budgeting to talent recruitment. Boards and executive leadership must acknowledge that deploying AI in a regulated setting brings new responsibilities in data stewardship and risk management. Although these responsibilities may increase costs or lengthen development timelines, they can also deliver more efficient trials, better success rates, and deeper insights into patient populations:

  1. Increased Development Costs and Timelines: The heightened focus on data integrity, algorithmic explainability, and post-approval monitoring can raise early-stage spending and extend development cycles. However, rigorous validation often leads to fewer regulatory setbacks, minimized product recalls, and reduced patient safety risks. Organizations that plan effectively may find that higher initial costs ultimately mitigate downstream liabilities.

  2. Talent Acquisition and Development: Deploying regulated AI demands cross-functional skills, combining machine learning, clinical research, manufacturing expertise, and regulatory affairs knowledge. This mix of competencies is scarce, intensifying competition for skilled personnel. Investing in robust training programs, interdisciplinary partnerships, and knowledge-sharing fosters in-house AI leadership capable of bridging clinical and technical insights.

  3. Data as a Strategic Asset: Data governance, quality, and traceability become strategic imperatives as AI-derived insights expedite clinical candidate selection or reveal manufacturing deviations. Adopting a holistic data strategy not only supports compliance but can also serve as a sustainable competitive advantage, guiding informed decision-making and facilitating seamless transitions when introducing new AI models or refining existing ones.

  4. Reassessing AI Initiatives and Prioritization: A risk-based lens indicates that AI projects differ in regulatory stakes. Prioritizing the most feasible or impactful applications, particularly lower-risk ones, allows organizations to refine processes before embarking on advanced or high-stakes models. This measured approach can streamline resource allocation and nurture a culture of continuous improvement.

  5. Importance of a Strong Regulatory Strategy: A robust, proactive plan for meeting FDA expectations can reduce ambiguities, prevent costly rework, and bolster corporate credibility. Detailed documentation of model development, thorough recordkeeping on data usage, and consistent engagement with regulators form the backbone of an effective regulatory strategy. By anticipating the FDA’s concerns, organizations can more smoothly navigate from development to approval, ensuring patient safety remains a priority.

Guidance Implications for Board Governance and Leadership

Adopting AI in a regulated environment requires strategic governance, prudent resource allocation, and cultural transformation. Board members and executive leaders must balance opportunities for rapid innovation with the obligation to maintain product integrity and patient safety. This balance is critical, given the FDA’s emphasis on risk-based approaches to ensure AI models align with regulatory standards for data integrity, transparency, and oversight:

  1. Establish AI Governance and Oversight: Boards should consider forming dedicated AI committees or integrating AI responsibilities into existing risk and compliance structures. This approach helps ensure real-time visibility into AI initiatives and effective escalation processes when potential issues arise. Regular reporting to the board on AI model performance and risk assessments can foster proactive decision-making and mitigate regulatory surprises.

  2. Develop an AI-Ready Culture: A supportive culture is essential for translating AI-driven strategies into practical outcomes. Leadership teams can promote AI literacy across departments, encourage open forums to discuss regulatory updates, and endorse best practices for data governance. Interdisciplinary collaboration, combining expertise from clinical research, manufacturing, regulatory affairs, and data science, facilitates robust compliance and continuous improvement.

  3. Build Strategic Partnerships: Partnerships with AI vendors, contract research organizations (CROs), or academic institutions can expedite technology validation, share regulatory burdens, and infuse diverse perspectives into the model development process. Well-defined agreements regarding data ownership, intellectual property rights, and postapproval responsibilities clarify each party’s role and help avoid disputes, particularly in highly regulated contexts.

  4. Prioritize Ethical Considerations: Fairness, bias mitigation, and privacy constitute fundamental ethical imperatives in AI-driven drug development. Approaches that address only the technical components risk overlooking moral dimensions influencing trust among regulators, patients, and the public. By embedding ethical reviews into model design and organizational governance, boards and executive leaders can safeguard the company’s reputation and uphold patient welfare.

These actions empower boards and executive teams to incorporate AI responsibly, complying with FDA expectations while maintaining a strong competitive advantage. Recognizing that AI can expedite clinical timelines and enhance manufacturing operations, boards must direct their organizations to manage AI’s complexities without losing sight of core compliance and safety obligations.

Guidance Implications for AI Developers and Vendors

Third-party AI developers serving pharmaceutical and biotechnology sponsors encounter a more structured regulatory environment that shapes product demand. Solutions that meet sponsors’ needs for data traceability, comprehensive validation, and transparent reporting often emerge as leaders in a crowded marketplace:

  1. Adapting Products and Services: AI platforms may need enhancements, such as automated validation logs, user-friendly data lineage tracking, and compliance-ready interfaces, to align with the FDA’s expectations. These refinements can streamline sponsor partnerships, as sponsors increasingly require explicit alignment with regulatory guidance.

  2. Opportunities for Market Differentiation: While advanced algorithms are valuable, vendors that provide “regulatory-ready” documentation, intuitive dashboards for risk analysis, and built-in workflows for model retraining can stand out. AI vendors can foster long-term customer loyalty and command premium pricing in niche markets by proactively anticipating sponsors’ compliance responsibilities.

  3. Building Trust and Transparency: Clear, accessible documentation of model architectures, data usage, and performance metrics underpins trust with sponsors and regulators. Regular software updates and open communication regarding patches or security fixes further mitigate uncertainty. A transparent corporate culture can foster enduring vendor-sponsor relationships in an environment where patient safety hinges on data reliability.

Guidance Implications for Investors

Investors in private equity, venture capital, mergers and acquisitions, and strategic licensing increasingly focus on companies incorporating AI into drug development for regulatory decision-making. The FDA’s draft guidance introduces additional layers of complexity, chiefly compliance readiness and data maturity, that shape both risk and potential returns:

  1. Due Diligence in the Age of AI Regulation: Purely financial or market-based assessments do not suffice. Investors must explore data integrity, validation protocols, and alignment with the FDA’s draft guidance. Specialized consultants in machine learning and regulatory affairs can identify hidden red flags, such as insufficient documentation or bias in the training data.

  2. Evaluating Investment Opportunities: Rigorous regulations can filter out lower-quality AI ventures, providing an advantage to companies that already uphold best practices. Firms with well-established compliance infrastructures may see quicker regulatory approvals and reduced attrition, improving potential returns. Investors adept at spotting these mature or adaptable enterprises can secure attractive portfolios.

  3. Portfolio Company Guidance and Monitoring: Active oversight of AI projects is critical once an investment is made. Investors can encourage board-level representation, synchronize performance indicators with AI governance, and foster cross-department collaboration to meet FDA expectations. Routine check-ins on model performance, data transparency, and regulatory updates minimize surprises that could threaten revenue or reputations.

  4. M&A Considerations in the AI-Driven Drug Development Space: Acquiring companies must evaluate the compatibility of AI infrastructures and data governance practices. Detailed audits of data pipelines, model training methods, and risk controls help clarify the potential synergy or friction between the buyer’s and seller’s AI strategies. Strategic alignment on compliance fosters smoother integrations post-merger.

  5. Strategic Licensing and Partnerships: Licensing negotiations may hinge on the credibility of AI models crucial for clinical or manufacturing processes. Clarifying roles for ongoing validation, data governance, and regulatory responsibilities streamlines collaboration. Investors and licensors should recognize that robust AI frameworks not only meet current FDA expectations but also position alliances for sustained value creation.

Conclusion and the Big Takeaways

The U.S. Food and Drug Administration’s (FDA’s) draft guidance outlines a risk-based, stepwise approach for integrating artificial intelligence (AI) into drug and biological product development when AI models produce data to inform regulatory decisions regarding safety, effectiveness, or quality. Although this framework specifies core criteria, such as rigorous data governance, transparency, and life cycle oversight, the bigger picture involves cultivating an organizational mindset that fuses innovation with robust governance. Beyond immediate compliance with the FDA’s expectations, stakeholders should consider the following overarching points:

  1. AI as a Catalyst for Culture Change: AI is more than a technological add-on; it often redefines organizational collaboration. Cross-functional teams (e.g., data scientists, clinicians, regulatory experts, manufacturing personnel) must cooperate to ensure model reliability, patient-centric practices, and ethical applications. Sponsors can address data integrity issues and align AI-driven projects with overarching regulatory priorities by fostering open communication.

  2. Risk Management Fueling Sustainable Innovation: The guidance’s risk-based assessment should not be seen as an impediment but as an enabler of methodical experimentation. When sponsors fully appraise model influence and decision consequences, they can comfortably explore cutting-edge algorithms while mitigating the chance of unsafe or ineffective outcomes. In this way, responsible risk-taking advances clinical research and accelerates product development without compromising safety or quality.

  3. Data Quality as a Strategic Differentiator: Although “fit-for-use data” is a compliance imperative, it is also a source of long-term competitive advantage. Clean, representative datasets enable faster innovation, more accurate predictions, and broader applicability of AI models. Robust data management, therefore, underpins the FDA’s requirements and organizational success in a rapidly evolving market.

  4. Transparency Fostering Trust: Deep learning techniques can be opaque, making it essential to document data sources, model architectures, and validation outcomes. Sponsors who communicate openly about AI decision-making build confidence among regulators, patients, and healthcare providers. This ethos of transparent reporting and interpretability reinforces public trust in next-generation therapies powered by AI.

  5. Ethical Considerations and Societal Impact: Maintaining patient safety and upholding scientific rigor include ethical dimensions such as bias mitigation and respect for patient privacy. By formally integrating ethical reviews into AI design and deployment, addressing fairness, potential societal impact, and privacy, stakeholders demonstrate patient-centric values and preserve the credibility of AI-driven solutions.

  6. Interdisciplinary Collaboration: The complexities of regulated AI demand collaboration among diverse skill sets: data science, clinical knowledge, manufacturing expertise, and regulatory affairs. Siloed approaches risk missing critical risk factors or underutilizing domain insights. When teams work cohesively, AI models are more robust, flexible, and resilient under real-world conditions.

  7. Proactive Engagement With the FDA: Early dialogue with regulators through formal programs such as pre-IND, Model-Informed Drug Development (MIDD), or the Emerging Technology Program (ETP) offers dual benefits. It reduces late-stage rejections or substantial rework and ensures FDA awareness of innovative AI methods. This cooperative model paves the way for efficient approvals when AI evidence is thoroughly validated.

  8. Evolving Regulatory Landscape: As AI technology advances swiftly, stakeholders should recognize that the current draft guidance is part of an ongoing regulatory process. Future science, global policy, and public perception changes may lead to additional modifications or updated guidelines. Agility and continued engagement in policy discussions position sponsors to adapt effectively to new standards.

  9. Long-Term Value in Responsible AI Adoption: Beyond the immediate objectives of risk-based compliance, organizational commitment to responsible, transparent AI fosters sustained industry leadership. This commitment establishes trust with regulators and patients, enabling smoother adoption of future AI initiatives and bolstering investor confidence. Establishing a culture of ethical, patient-centric innovation ultimately helps expedite the delivery of safe, effective therapies.

While the FDA’s draft guidance underscores regulatory best practices, the broader vision integrates compliance into a forward-looking AI culture. Through interdisciplinary teamwork, meticulous data governance, ethical considerations, and early regulatory engagement, sponsors and AI developers can create a robust ecosystem that fuels continuous innovation, safeguards patient interests, and aligns with the FDA’s core mission of delivering high-quality, high-value treatments.

References

Executive Order 14110 of October 30, 2023. (2023). Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Federal Register. https://www.federalregister.gov/d/2023-24283

Liu, Q., Huang, R., Hsieh, J., et al. (2023). Landscape analysis of the application of artificial intelligence and machine learning in regulatory submissions for drug and biologic development from 2016 to 2021. Clinical Pharmacology & Therapeutics, 113(4), 771–774. https://doi.org/10.1002/cpt.2668

U.S. Food and Drug Administration. (2023). Benefit-Risk Assessment for New Drug and Biological Products: Guidance for Industry. https://www.fda.gov/media/159960/download

U.S. Food and Drug Administration. (2024). Real-World Data: Assessing Electronic Health Records and Medical Claims Data to Support Regulatory Decision-Making for Drug and Biological Products: Guidance for Industry. https://www.fda.gov/media/152503/download

U.S. Food and Drug Administration. (2025). Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products: Guidance for Industry and Other Interested Parties (Draft). https://www.fda.gov/regulatory-information/search-fda-guidance-documents
