TO BE PUBLISHED July 2026
Starting from:
£99 + VAT
Format: DOWNLOADABLE PDF
This conference will explore next steps for the use of artificial intelligence in life sciences. Delegates will consider priorities for supporting the safe, effective, and responsible adoption of AI across research, clinical care, and commercial life sciences.
Following the publication of the UK’s Life Sciences Sector Plan, AI Opportunities Action Plan, and the UKRI AI Strategy, as well as the establishment of the National Commission into the Regulation of AI in Healthcare, the conference will bring together stakeholders and policymakers to consider next steps as the UK moves from strategy to delivery. Discussion will focus on strategic and practical approaches for the integration of AI across research, clinical care, and commercial life sciences, while maintaining patient safety, earning public trust, and providing regulatory predictability.
The agenda will consider questions around operational delivery of the Health Data Research Service (HDRS), regulatory readiness for AI as a medical device and for AI-enabled drug discovery, scaling AI beyond pilots in the NHS, and the skills, governance, and commercial frameworks required to support TechBio growth in the UK.
Access to data, responsible use & risk mitigation
Sessions will examine access models and timelines for SMEs, academic teams, and other research partners, exploring ways to unlock longitudinal and real-time data while mitigating risks such as ransomware, misuse of synthetic data, and inequities in outputs. Delegates will consider what consent frameworks, public engagement strategies, and ethical standards need to be established to support responsible data use, in line with the Information Commissioner’s Office sandbox guidance and forthcoming statutory AI codes expected this year.
Regulation, governance & equity in deployment
The agenda will consider how regulatory frameworks can be clarified and made more predictable for AI in healthcare, including expectations for the anticipated recommendations from the Medicines and Healthcare products Regulatory Agency (MHRA) National Commission, insights from the MHRA AI Airlock sandbox, and what further guidance is required for sector-specific rules on medical devices, pharmacovigilance, and clinical governance. Sessions will also look at validation of adaptive models, standards for synthetic and Retrieval-Augmented Generation-grounded systems, liability allocation, and options for ensuring equity in AI deployment.
Commercial considerations, collaboration & workforce development
Further discussion will explore adoption in NHS and commercial settings, including through digital integration, procurement approaches, and evidence standards that enable evaluation, investment, and wider access. Delegates will assess priorities for workforce development to address gaps in data engineering, regulatory science, clinician-AI leadership, and commercial AI skills. The agenda will discuss collaboration needed between regulators, industry, and research organisations, assessing priorities for the Regulatory Innovation Corridor to establish clearer routes to commercial adoption, alongside UKRI’s AI Strategy emphasis on translational research, human-in-the-loop systems, and sustainable AI deployment.
Overview of areas for discussion
- data governance:
  - key considerations for the HDRS - access tiers for SMEs and academic teams, timelines, and cost expectations
  - strategies for improving data consistency, quality, and interoperability - ensuring datasets are sufficient for AI research and development
  - approaches for responsible use of synthetic and longitudinal data - privacy, cybersecurity, equity, maintaining public trust, and safeguards against misuse
- regulation:
  - assessing the potential impact of MHRA reform, the AI Airlock sandbox, the National Commission, and the regulatory corridor on AI deployment and innovation
  - strategies for validating adaptive and learning AI models, addressing AI hallucinations, and allocating liability when AI informs decisions
  - identifying ways to provide regulatory clarity, predictability, and proportionate oversight while supporting patient safety and innovation
- clinical deployment and patient safety:
  - defining minimum standards for clinical deployment - clinician training and workflow considerations
  - establishing ongoing monitoring and feedback mechanisms in real-world settings, aligned with existing surveillance systems
  - considering bias, equity, and clarity as part of deployment and decision-making processes
- commercialisation and implementation:
  - strategies to move AI from trials to wider adoption - IT integration, procurement, and pathway redesign
  - key considerations for integration blueprints, prototype procurement templates, and reusable technology stacks - reducing bespoke IT work and supporting scalable adoption
  - identifying approaches to manage uncertainty around commercial outcomes and system performance
  - priorities for the Regulatory Innovation Corridor - assessing entry points and eligibility, regulatory coherence, and commercial prioritisation
- skills and capability:
  - addressing skills gaps across clinical, regulatory, and AI roles - training pilots, fellowships, placements, and blended upskilling programmes to improve workforce capability and adoption readiness
  - priorities for workforce development - ethics, bias mitigation, clarity, and regulatory understanding to enable responsible AI use