Noor's Newsletter: Issue #9

This isn’t a recap of the week’s news—it’s an attempt to understand the forces reshaping how we live, govern, and evolve.

The Industrialization of Discovery: Pharma Builds the Engine

Almost a decade ago, generative AI for small-molecule drug design dominated the conversation. Today, as that area grapples with the realities of clinical trials, new frontiers are emerging: AI-driven biologics design and, increasingly, other modalities. Recent deals are not simply about adding a new tool; they are about acquiring a new capability. AstraZeneca’s $555 million bet on Algen Biotechnologies exemplifies this approach: the company is not merely licensing targets but integrating Algen’s entire AI- and CRISPR-driven platform for high-throughput functional genomics. Likewise, Takeda’s expanded partnership with Nabla Bio touts an AI workflow that can progress from protein design to lab testing in just 3–4 weeks. Since AlphaFold, the field has seen a Cambrian explosion of startups and publications, reminiscent of the early surge in generative AI for small molecules.

On another front, GSK’s announcement of a $30 billion investment in U.S.-based R&D and manufacturing comes shortly after AstraZeneca and Merck signaled plans to scale back in the U.K. The pairing raises a pressing question: how much of the industry’s strategic decision-making will be shaped by U.S. incentives, especially when other jurisdictions offer less clear upside?

Regulatory infrastructure is also maturing. The U.K.’s MHRA has cut clinical trial approval times in half through AI-enabled workflows and digital reforms—a clear win for the “low-hanging fruit” of AI in healthcare. Tasks like note-taking, summarization, and triaging are already well suited to AI; what still lag are the incentives and infrastructure needed to scale their impact. Simultaneously, the MHRA is deepening collaboration with the U.S. FDA, establishing “international reliance routes” for faster cross-border approvals. Together, these moves are building the global plumbing to match the global scale of the industry.

—Noor


Interesting Things in Research and Beyond

  • A recent arXiv preprint highlights a critical phenomenon in Large Language Models (LLMs). In inherently competitive settings—like advertising campaigns or elections—competitive feedback loops can shape LLM behavior in unintended (sometimes harmful) ways. Using simulations, the study shows that optimizing LLMs for competitive success can drive misalignment. For instance, a 4.9% gain in vote share coincided with a 22.3% increase in disinformation and a 12.5% rise in populist rhetoric, even when models were explicitly instructed to remain truthful. One example: a model first presented with the proposition, “As a father of three, … a tireless advocate and powerful defender of our Constitution …”, produces, once fine-tuned to optimize for competitive success, outputs such as, “I’m running for Congress … to stand strong against the radical progressive left’s assault on our Constitution.” The shift illustrates how competitive pressure can nudge LLMs from neutral, descriptive language toward highly partisan, strategic messaging.
  • Reflecting on AI’s rise, I recall attending NeurIPS in 2016 during the deep learning bubble. It was clear then that industry, with its access to vast capital, data, and compute, would soon outpace academia. Ten years later, an analysis from Epoch AI confirms this: most landmark AI systems today are built inside companies or by teams that combine research depth with production-scale capabilities. Ideas still matter, but execution now requires industrial-scale infrastructure. The story of AI’s rise is inseparable from the story of infrastructure catching up to imagination.
  • A related trend is the emergence of “AI workslop”: inefficiencies introduced when LLMs are deployed across teams without sufficient domain expertise. Junior developers or non-specialists using AI tools can produce outputs that appear useful but leave hidden work for others to clean up—a reminder that powerful tools still require skilled guidance.