AI in Drug Development: Building Trust, Data Quality, and Collaboration for the Next Era of Innovation
Notes from the FDA and Clinical Trials Transformation Initiative (CTTI) 2025 Hybrid Public Workshop: Artificial Intelligence in Drug & Biological Product Development
Orizaba Solutions believes artificial intelligence offers unprecedented potential to accelerate discovery, improve clinical trial efficiency, and deepen real-world understanding of therapies. While AI is not a new concept, the rise of large language models (LLMs) marks a transformative leap in what’s possible. Responsible data innovation sits at the heart of healthcare transformation, and for AI to reach its promise, the healthcare ecosystem must prioritize data quality, transparency, and collaboration. These themes were front and center at the CTTI–FDA 2025 Hybrid Public Workshop: Artificial Intelligence in Drug and Biological Product Development, where regulators, scientists, and industry leaders explored how AI is reshaping the future of drug development.
The Urgency of Change
The first keynote speaker, Dr. Shantanu Nundy, a practicing physician and advisor on artificial intelligence to the FDA Commissioner’s Office, began with a sobering view of U.S. healthcare:
100 million Americans lack regular access to care
Medical error remains the third leading cause of death
U.S. life expectancy now trails other developed nations by four years
Dr. Nundy framed AI as a necessary catalyst to rebuild a healthcare system that too often underdelivers. He highlighted cases showing AI’s growing maturity — from deep learning models that detect structural heart disease better than cardiologists to the potential for AI to reduce animal testing and identify new clinical endpoints and biomarkers.
But his message was clear: the success of AI will hinge not on speed or sophistication, but on trust. As he wrote recently in JAMA, “AI will move drug discovery at the speed of trust.” Democratizing expertise, he argued, must go hand in hand with maintaining accountability and equity.
FDA’s Perspective: Predictability, Transparency, and Context
Dr. Khair ElZarrad, Director of the FDA’s Office of Medical Policy (OMP) in the Center for Drug Evaluation and Research (CDER) and also a keynote speaker, began by reminding the audience that the FDA is ultimately a consumer of data. With that in mind, he outlined five guiding principles the agency currently uses to achieve predictability and consistency in regulating AI as applied to medical products:
A risk-based approach to review and oversight
Increased engagements
Transparency
Data governance
Clear context of use
He stressed that transparency about what works and what does not will be essential to transforming this field.
These principles are detailed in FDA’s draft guidance released in January 2025, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products. Gabriel Innes, Assistant Director for Data Science and AI Policy in CDER’s Office of Medical Policy, walked through the draft guidance and summarized the comments received. The guidance introduces a risk-based credibility assessment framework, a seven-step process that can be used to establish and evaluate the trustworthiness of an AI model for a particular context of use. The document discusses the importance of life cycle maintenance of the credibility of AI model outputs over time, underscores early engagement with FDA, and emphasizes that documentation and validation should be commensurate with the AI model’s risk and context of use.
Industry Perspectives: From Discovery to Deployment
The first session, Where Are We Now?, highlighted how leading organizations are operationalizing AI across the drug development pipeline.
Greg Meyers, Chief Digital and Technology Officer at Bristol Myers Squibb (BMS), described AI as integral to every stage of drug development — from research to manufacturing. In research, AI/ML is being used to inform causal human biology, enabling scientists to infer biological mechanisms from massive, complex datasets. In clinical development, it helps identify disease subtypes, project dosing, and detect biomarkers earlier. He also discussed the potential to pair AI with real-world data (RWD) to characterize the heterogeneity of a comparator-arm population. In manufacturing, AI promises to identify where processes can be optimized.
Thomas Osborne, Chief Medical Officer of Microsoft Federal, framed AI as a tool for “cross-pollinating biomedical knowledge.” He called for breaking silos between laboratory research, clinical trials, and population health. From digital twins that simulate placebo control arms to AI models designing the optimal protocol, Osborne talked about how machine learning and large language models can transform every phase of evidence generation.
Dana Lewis, a patient turned independent researcher and the founder of OpenAPS, brought a crucial patient voice. Her work using AI to interpret real-world data exemplifies how individuals can identify and fill research gaps — if systems allow it. She urged regulators to create pathways for patient-initiated studies and mechanisms for patients to contribute anonymized health data to research safely. “AI will enable patients to do more with their own data,” she said, “but only if the infrastructure and incentives exist.”
Together, the panel underscored that the next breakthroughs in AI-enabled drug development won’t be technical alone — they’ll be cultural. Collaboration, data sharing, and regulatory clarity must evolve together. People, processes, data, and technology that is fit for purpose are all essential components.
Data Quality and the Foundations of Responsible AI
The second session, Data Quality, Reliability, Representativeness, and Access in AI-Driven Drug Development, turned to practice.
Wesley Anderson of the Critical Path Institute reminded attendees that 90% of drug candidates in clinical development still fail, and his organization is working to bring that percentage down by convening diverse stakeholders to identify blockers, building tools for industry use, and pursuing regulatory success. He described the “Four Pillars” needed for responsible AI-driven drug development: quality, reliability, representativeness, and access. Without clean, complete, and curated data that reflects real-world diversity and is properly governed, even the best algorithms will perpetuate bias and unreliability.
Anderson highlighted tangible progress, including AI-generated synthetic datasets that mirror real-world populations to support work to qualify a biomarker for clinical trial enrichment in Type 1 Diabetes. His message: the future of AI in drug development is inseparable from the future of data infrastructure.
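To make the idea of synthetic datasets concrete, here is a minimal, hypothetical sketch of one common approach — a Gaussian copula that preserves each feature’s marginal distribution and the correlations between features. This is not C-Path’s actual method; all names and values are invented for illustration.

```python
# Illustrative only: synthetic tabular data via a Gaussian copula.
# NOT the Critical Path Institute's method; everything here is hypothetical.
import numpy as np
from scipy import stats

def synthesize(real: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Draw synthetic rows whose marginals and correlations approximate
    those of `real` (shape: [n_rows, n_features])."""
    rng = np.random.default_rng(seed)
    n, d = real.shape

    # 1. Map each feature into standard-normal space via empirical ranks.
    ranks = stats.rankdata(real, axis=0) / (n + 1)
    z = stats.norm.ppf(ranks)

    # 2. Estimate the correlation structure in normal space.
    corr = np.corrcoef(z, rowvar=False)

    # 3. Sample correlated normals, then map back through each feature's
    #    empirical quantiles to recover realistic marginal distributions.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u_new = stats.norm.cdf(z_new)
    return np.column_stack(
        [np.quantile(real[:, j], u_new[:, j]) for j in range(d)]
    )

# Example: mirror a toy "real-world" cohort of age and biomarker values.
rng = np.random.default_rng(1)
real = np.column_stack([
    rng.normal(55, 12, 500),        # age
    rng.lognormal(1.0, 0.4, 500),   # biomarker level
])
fake = synthesize(real, n_samples=1000)
```

Real programs like the one Anderson described layer far more on top of this (privacy guarantees, clinical plausibility checks, regulatory-grade validation), but the core intuition — learn the statistical shape of a real cohort, then sample new records from it — is the same.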
Michelle Longmire, a physician and CEO of Medable, explored how agentic AI — AI that acts as a collaborative teammate — can relieve operational burdens in clinical research. Her example: an AI “Clinical Research Associate (CRA) agent” that extracts data from multiple systems, identifies site risks, and suggests follow-up actions. By automating the tactical, AI frees humans for the strategic. She also noted during the discussion that “we’ve reached the limit of human potential on drug development; look at the number of drug approvals over time. I am looking to unlock what we haven’t been able to with past technology, older AI, and the cloud.”
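As a rough illustration of the pattern Longmire described — not Medable’s implementation — the sketch below shows a minimal CRA-agent loop: gather site metrics from multiple (mocked) systems, flag risks with simple rules, and propose follow-up actions for a human to review. Every system name, field, and threshold is hypothetical.

```python
# Hypothetical sketch of an agentic CRA assistant. All systems, fields,
# and thresholds are invented; a production agent would call real EDC/CTMS
# APIs and might use an LLM or ML model instead of fixed rules.
from dataclasses import dataclass

@dataclass
class SiteMetrics:
    site_id: str
    open_queries: int            # e.g., from an EDC system
    days_since_last_visit: int   # e.g., from a CTMS
    protocol_deviations: int     # e.g., from an eTMF

def gather_metrics(site_id: str) -> SiteMetrics:
    """Stand-in for API calls that pull data from multiple systems."""
    fake_db = {
        "site-001": SiteMetrics("site-001", 42, 95, 3),
        "site-002": SiteMetrics("site-002", 4, 20, 0),
    }
    return fake_db[site_id]

def assess(m: SiteMetrics) -> list[str]:
    """Rule-based risk flags with suggested follow-up actions."""
    actions = []
    if m.open_queries > 25:
        actions.append("Schedule data-cleaning call; query backlog is high.")
    if m.days_since_last_visit > 90:
        actions.append("Prioritize an on-site or remote monitoring visit.")
    if m.protocol_deviations > 2:
        actions.append("Review deviations with the site coordinator.")
    return actions or ["No follow-up needed."]

for site in ("site-001", "site-002"):
    print(site, assess(gather_metrics(site)))
```

The key design point is the division of labor: the agent handles the tactical aggregation and triage, while the human CRA retains judgment over what action is actually taken.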
Sheraz Khan, Senior Director of Generative AI at Pfizer, discussed efforts to build a shared foundation model for digital health data. Unlike narrow, task-specific algorithms, foundation models can generalize across diverse datasets and devices — a critical step for scalability and fairness. He called for a precompetitive consortium model to develop these tools collectively, ensuring transparency, interpretability, and regulatory alignment.
Moving Forward: AI at the Speed of Trust
The discussions revealed the tremendous potential of AI to accelerate drug and biological product development and reduce costs — but they also underscored the equally important need to advance the standards that ensure safety, efficacy, and quality. Several participants noted that AI is often held to a higher standard than humans. One (non-FDA) speaker observed that society tolerates tens of thousands of traffic fatalities each year, yet a single self-driving car accident can dominate public perception. Whether fair or not, trust in AI — particularly in healthcare — is essential for its adoption. After all, there won’t be just one self-driving vehicle on the road. Like autonomous vehicles, AI in drug development will not exist in isolation; each model, method, and dataset influences the broader ecosystem. Progress, therefore, requires pairing the desire to move rapidly with the responsibility not to break things (especially when the “thing” is a human life), asking not only what is possible, but how we can ensure AI is accurate, reliable, and appropriately generalizable. This is possible: when I moved to Austin, TX, in 2023, Waymo’s autonomous vehicles were already on the roads there.
Dr. Shantanu Nundy’s framing remains a fitting conclusion: AI will move drug discovery at the speed of trust. That trust must be earned — through data integrity, regulatory collaboration, and a shared belief that technology can serve both science and society.
At Orizaba Solutions, we help organizations build that trust by strengthening the data foundations of innovation. We ensure that AI models, clinical evidence, and regulatory submissions are powered by data governed by processes that make it reliable and fit for purpose. As the FDA and industry move toward a more AI-augmented future, our mission remains clear: data is the foundation upon which AI algorithms learn, adapt, and generate outcomes, so the quality and relevance of that data are critical for ensuring accurate, unbiased, and reliable results.