Noor’s Newsletter
This isn’t a recap of the past two weeks; it’s an attempt to understand the forces reshaping how we live, govern, and evolve.
Issue #2: From Potential to Practice: The Rise of the Regulatory Stack
If early June was all about building sovereign AI capabilities (check the previous issue), the past two weeks have been defined by the messier reality of implementation. The conversation has focused on the practical challenges of deploying AI tools safely and effectively. It’s a sign of a fast-evolving industry, where even the world’s most powerful model is worthless if it can’t clear a path to the patient.
A new, critical layer is emerging: the Regulatory & Operational Stack. This is more than traditional compliance—it’s a proactive, technology-enabled framework being constructed in real-time by governments, regulators, and industry leaders. They’re finally acknowledging that AI-as-a-Medical-Device (AIaMD) can’t simply be forced through pipelines built for traditional software.
This edition explores three pillars of that new stack: the shift from adversarial to collaborative regulation, the operational reality of deploying AI in clinical trials, and the practical taming of generative AI as a research partner.
The Regulatory Sandbox: From Gatekeeper to Partner
The classic image of a regulator has always been that of a final, formidable gatekeeper. But a new model is emerging—one that looks far more like a collaborative partner throughout the development lifecycle.
The clearest signal came on June 23, when the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) announced the expansion of its “AI Airlock” programme. It’s a regulatory sandbox: a supervised environment where companies can pressure-test AI models against real regulatory standards during development rather than waiting until the end. The first cohort spans small AI startups working on areas like oncology decision support and larger companies trialling clinical risk prediction tools, illustrating the range of technologies and use cases entering the sandbox. Source.
This represents a fundamental rewiring of the regulator-innovator relationship. By engaging early, the MHRA helps innovators understand safety and efficacy requirements from the outset while learning itself about the nuances of these complex technologies. For startups, it derisks the entire development process, reducing the chance of a costly failure at the final hurdle. For healthcare systems, it promises to deliver safe, effective AI tools to patients faster.
The AI Airlock offers a glimpse of how future AI regulation might look: iterative, collaborative, and tailored to a world where technology moves faster than traditional rule-making.
The Operational Stack: AI Beyond the Algorithm
While finding the right patient cohort is still a core challenge, the conversation is expanding. It’s no longer just about whether AI can find patients—it’s about what new infrastructure is required to run trials that are continuously optimised by AI.
This was a central theme at the DIA (Drug Information Association) Annual Meeting last week. Sessions focused on reimagining operational workflows for AI-driven trials and highlighted the need for a dedicated tech stack. This includes AI-powered simulation models to predict disruptions and optimise site selection, automated data validation pipelines for quality assurance, and orchestration layers that deliver real-time insights to trial managers. Source.
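To make the “automated data validation pipeline” idea concrete, here is a minimal sketch in Python. Everything in it (the record fields, the plausibility ranges, the flagging logic) is a hypothetical illustration, not anything presented at DIA:

```python
from dataclasses import dataclass

# Hypothetical incoming trial record; real schemas come from the study protocol.
@dataclass
class TrialRecord:
    patient_id: str
    site_id: str
    systolic_bp: int  # mmHg
    visit_day: int

def validate(record: TrialRecord) -> list[str]:
    """Return human-readable issues; an empty list means the record passes."""
    issues = []
    if not record.patient_id:
        issues.append("missing patient_id")
    if not 60 <= record.systolic_bp <= 250:  # illustrative plausibility range
        issues.append(f"implausible systolic_bp: {record.systolic_bp}")
    if record.visit_day < 0:
        issues.append(f"negative visit_day: {record.visit_day}")
    return issues

if __name__ == "__main__":
    batch = [
        TrialRecord("P-001", "SITE-07", 128, 14),
        TrialRecord("", "SITE-07", 400, -3),  # deliberately broken record
    ]
    for rec in batch:
        problems = validate(rec)
        print(rec.patient_id or "<no id>", "OK" if not problems else "; ".join(problems))
```

In a real deployment, rules like these would be derived from the study protocol and feed the orchestration layer that surfaces flagged records to trial managers in real time.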
This signals a crucial maturation of the field. It recognises that algorithms alone are insufficient. To deliver on AI’s promise of faster, more efficient trials, companies must invest in the surrounding operational and data infrastructure. True innovation isn’t merely the model itself, but the entire system that enables it to work reliably in the real world.
The Hypothesis Engine: Grounding Generative AI in the Scientific Method
Meanwhile, in the lab, scientists continue to grapple with how to harness the creative power of generative AI without succumbing to its flaws.
A new paper from the University of Cambridge offers a compelling answer. The researchers tasked GPT-4 with acting as an “AI scientist,” searching for novel drug combinations for breast cancer, with a focus on repurposing affordable, already-approved drugs. Crucially, they didn’t treat the AI’s outputs as facts but as hypotheses to be tested. Notably, three of the twelve drug combinations suggested by GPT-4 worked better than current breast cancer drugs. The LLM then learned from these tests and suggested a further four combinations, three of which also showed promising results. Source.
This illustrates how hallucinations can sometimes be productive: combined with human judgment and a closed-loop laboratory validation system, they spark new ideas worth testing, a setup well suited to drug discovery. In this setting, the model serves as a tireless, well-read research partner whose proposals are immediately funneled into rigorous lab testing.
This grounds generative AI firmly within the scientific method, transforming it from an oracle into a hypothesis engine. It leverages the model’s strengths—pattern recognition across massive datasets and combinatorial creativity—while traditional laboratory techniques validate what’s real. It’s the most realistic and promising pathway for integrating large language models into drug discovery R&D.
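As a loose sketch of that closed-loop pattern (not the Cambridge team’s actual pipeline), the Python below stubs out both halves of the loop: propose() stands in for the LLM prompt, assay() stands in for the wet-lab test, and the drug list is a placeholder:

```python
import random

# Placeholder pool of approved drugs; the real study drew on repurposing candidates.
APPROVED_DRUGS = ["drug_a", "drug_b", "drug_c", "drug_d"]

def propose(history):
    """Stand-in for the LLM: suggest an untried drug pair given prior results."""
    tried = {combo for combo, _ in history}
    candidates = [(a, b)
                  for i, a in enumerate(APPROVED_DRUGS)
                  for b in APPROVED_DRUGS[i + 1:]
                  if (a, b) not in tried]
    return random.choice(candidates) if candidates else None

def assay(combo):
    """Stand-in for the lab test: return a synergy score (random noise here)."""
    return random.random()

history = []  # (combination, score) pairs fed back into each new proposal
for round_num in range(1, 4):
    combo = propose(history)
    if combo is None:
        break  # every pair has been tested
    score = assay(combo)
    history.append((combo, score))
    print(f"round {round_num}: {combo} -> synergy {score:.2f}")
```

The key design point, as in the paper, is that model output is never trusted directly: every suggestion is routed through validation before it can influence the next round.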
-Noor
Signals to Watch
Regulatory Landscape:
- UK: The AI Bill has been delayed for another year, raising concerns about the ongoing lack of regulation. Ministers chose to wait and align with the future US administration, fearing that regulation might weaken the UK’s attractiveness to AI companies. Source.
- China: The NMPA released new rules for AI medical devices, detailing classification standards, documentation requirements for algorithm performance, and expectations for real-world evidence. The rules mark significant progress beyond previous general guidelines and reflect China’s determination to integrate AI into clinical practice faster and more decisively than current regulatory efforts in the US and UK. Source.
- EU: The EU clarified how the AI Act applies to medical applications, specifying risk categories for different kinds of AI systems, transparency obligations, and requirements for technical documentation and human oversight in high-risk medical use cases. This marks a significant shift toward a clearer but more demanding regulatory environment for AI innovators operating in Europe. Source.
Industry Moves:
- Reinforcing the view of biotech as a strategic asset, the NATO Innovation Fund co-led a $35M Series A in Portal Biotech, a company focused on single-molecule protein sequencing technology. Source.
- The Getty Images vs. Stability AI trial, now active in the UK High Court, is a fundamental test of intellectual property in the generative era. Disputes over training data have grown since generative AI models began training on vast libraries of copyrighted images without explicit licenses; Getty has also sued Stability AI in the US, accusing it of scraping millions of its images, with no final judgment yet. The UK outcome will reverberate through the entire AI ecosystem, forcing a potential reckoning on data licensing. Source.
- Tempus AI announced a $400M convertible notes offering to refinance debt and fund strategic growth. The move comes amid debate over its high valuation versus its current lack of profitability, highlighting the market’s intense focus on balancing undeniable innovation with a clear path to financial sustainability. Source.