Noor's Newsletter - Issue #7

This isn’t a recap of the past two weeks; it’s an attempt to understand the forces reshaping how we live, govern, and evolve.

Data, Deals, and Geopolitics

For the last year, we've watched the rise of massive, centralized AI models. The logical next step, many assumed, was that big pharma would simply become a customer, plugging into these powerful engines to do drug discovery. But the news from the past two weeks shows us a much more complex and interesting reality taking shape. Pharma isn't just buying AI; it's actively trying to build and shape the ecosystem itself. The new strategic imperative isn't just to have the best science, but to control the best platform.

My view has always been that AI models, no matter how complex or powerful, will eventually become democratized as the technology accelerates. The true, defensible asset—the gem—lies in the unique, proprietary data that feeds these models and the biological insights derived from it. The recent news shows this hypothesis playing out.

Look at Eli Lilly. They launched TuneLab, a platform that gives smaller biotech companies access to their proprietary AI models for ADMET prediction and antibody engineering. The platform uses federated learning to keep partner data private while sharing the models securely, but applying ADMET predictors without insight into how they were trained can be limiting. In my experience, model certainty and a clear picture of the scaffolds represented in the training data, and how far the test compounds sit from them, matter more than headline accuracy.
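
To make that concrete, here is a minimal sketch of the kind of check I have in mind, assuming RDKit and two placeholder SMILES lists standing in for a model's training set and a partner's test compounds. The nearest training scaffold's Tanimoto similarity gives a rough read on whether a prediction is an interpolation or an extrapolation.

```python
# A rough "how far is this from the training data" check for an ADMET model.
# Assumes RDKit is installed; the SMILES lists below are placeholders.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.Chem.Scaffolds import MurckoScaffold

train_smiles = ["CCOc1ccccc1C(=O)N", "c1ccc2[nH]ccc2c1", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
test_smiles = ["COc1ccc2[nH]c(C)cc2c1", "O=C(N)c1ccncc1"]

def scaffold_fp(smiles):
    """Morgan fingerprint of the Bemis-Murcko scaffold of a molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    scaffold = MurckoScaffold.GetScaffoldForMol(mol)
    return AllChem.GetMorganFingerprintAsBitVect(scaffold, 2, nBits=2048)

train_fps = [fp for fp in (scaffold_fp(s) for s in train_smiles) if fp is not None]

for smi in test_smiles:
    fp = scaffold_fp(smi)
    if fp is None:
        continue
    # Nearest-neighbour Tanimoto similarity to any training scaffold:
    # a low value suggests the prediction is an extrapolation and should
    # lean on the model's uncertainty estimate, not its headline accuracy.
    nn_sim = max(DataStructs.BulkTanimotoSimilarity(fp, train_fps))
    print(f"{smi}: nearest training-scaffold similarity = {nn_sim:.2f}")
```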

Lilly's move signals that the models themselves are becoming instruments of ecosystem strategy: a way to attract innovative startups and gain visibility into the valuable assets they generate.

This trend is also reflected in where smart money is flowing. The sector's newest unicorn, Enveda Biosciences, which uses AI to mine "nature's medicine chest," just raised a major new funding round. Investors are not just backing a clever algorithm; they are backing platform companies that generate unique, proprietary data assets. This reinforces the argument that the most valuable companies in this space will be those that own a distinct data-generating engine, creating a portfolio of therapeutic assets.

The UK's Unraveling Life Sciences Strategy

The simmering tensions between the pharmaceutical industry and the UK government have boiled over into a public crisis, threatening the country's ambition to be a science superpower. What began as warnings has now escalated into concrete action, with major players pausing significant investments.

The most definitive blow came last week, when Merck announced it was abandoning its plans for a £1 billion R&D center in London, a site that was meant to be a cornerstone of the UK's life sciences ecosystem.

This move follows AstraZeneca's decision to pause all new capital investment in the country and Eli Lilly's confirmation that it too was pausing future investments. The message from the industry is direct and unambiguous: a country cannot expect to host the R&D for next-generation medicines if its own healthcare system is unwilling to purchase them and its fiscal policies penalize success.

For the UK, which takes pride in its life sciences sector, this is a fundamental crisis. Life sciences contribute around 9% of GDP and more than 250,000 jobs, forming one of the country’s most globally competitive industries. The UK simply cannot afford to lose momentum here—especially after already falling behind in the global race for AI leadership.

This is industrial policy in its hardest form: the flow of capital being used as a lever to force a national reckoning on the true cost of innovation. If the UK fails to respond, it risks eroding not just a pillar of its economy but also its credibility as a home for cutting-edge industries.

Interesting Things in Research and Beyond

One of the fundamental—but often overlooked—problems holding back enterprise adoption of generative AI is consistency. A model that gives a brilliant answer one minute and a subtly wrong one the next is still a research tool, not a reliable industrial process.

That’s why the launch of Thinking Machines Lab, a $2B seed-funded startup founded by ex-OpenAI team members, is so interesting. Their entire focus is on making AI models more consistent and predictable. In a recent blog post (Defeating Nondeterminism in LLM Inference), they argue that instead of chasing ever-larger models, the real frontier is reliability: solving the problem that the same prompt, run twice through the same model, can yield different results.
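
Their post digs into inference kernels and batch invariance, but the underlying numerical effect is easy to see for yourself: floating-point addition isn’t associative, so the same values reduced in a different order (which is what a change in batch size or kernel scheduling amounts to) give slightly different results, and near-tied logits can then resolve to a different token. A toy illustration in NumPy, nothing from their stack:

```python
# Toy illustration: float32 addition is not associative, so the same
# values reduced in a different order give slightly different sums.
# In LLM inference, reduction order can change with batch size or kernel
# choice, which is one route by which "identical" requests diverge.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)

sum_forward = np.float32(0.0)
for v in x:                      # left-to-right accumulation
    sum_forward += v

sum_reversed = np.float32(0.0)
for v in x[::-1]:                # same values, reversed order
    sum_reversed += v

sum_pairwise = x.sum()           # NumPy's pairwise reduction

print(sum_forward, sum_reversed, sum_pairwise)
# The three results typically differ in the last few bits; after a softmax
# and an argmax over near-tied logits, differences like that can flip a token.
```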

This approach makes sense for many scientific domains. You cannot build a diagnostic tool, for example, on an engine that is not robust and reproducible. At the same time, I’ve always seen hallucination and nondeterminism as creative features of generative systems—especially when creativity is an asset rather than a flaw. In drug discovery or materials science, when the goal is to explore uncharted areas of chemical space, you actually want the system to be unpredictable.

The point is: there is a place for both. What’s encouraging is that we’re finally moving beyond the obsession with sheer model size, and towards specialized approaches that make generative AI fit-for-purpose—whether that purpose is creative exploration or industrial-scale reliability.

Signals to Watch

  • The Rise of Alternative Financing: As traditional M&A remains cautious, alternative deal structures are gaining significant traction. A new report from P05 highlights that the biopharma royalty market has now crossed $14 billion in annual deal flow. Companies are increasingly using royalty financing as a less dilutive way to fund late-stage development, a sign of growing financial sophistication in the sector; a quick sketch of the mechanics follows below this list.
  • Europe Focuses on Risk and Compliance: The European Commission has launched a public consultation on AI transparency guidelines under the EU's Artificial Intelligence Act, aimed at helping providers detect and label AI-generated content so users know when they are interacting with AI systems. The consultation, open until October 2, 2025, seeks input from AI providers, researchers, civil society, and citizens on how to implement the transparency obligations, which will require disclosure when people interact with an AI system, are subject to emotion recognition or biometric categorization, or view AI-generated content. These requirements become applicable from August 2026, part of the EU's broader effort to ensure responsible and trustworthy AI development.
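
On the royalty point above, the appeal comes down to simple arithmetic: the developer sells a slice of projected future sales for cash today, with no new shares issued. A back-of-the-envelope sketch, with entirely made-up numbers:

```python
# Hypothetical royalty deal: an investor pays cash up front in exchange
# for a fixed percentage of a drug's future net sales. All figures below
# are made up for illustration.
royalty_rate = 0.08          # 8% of net sales
discount_rate = 0.11         # investor's required annual return
projected_sales = [200, 450, 700, 850, 900, 800, 650, 500]  # $M per year

# Present value of the royalty stream = roughly what the investor pays today.
pv = sum(
    royalty_rate * sales / (1 + discount_rate) ** year
    for year, sales in enumerate(projected_sales, start=1)
)
print(f"Upfront payment supported by the royalty stream: ~${pv:.0f}M")
# The developer receives that amount today, non-dilutive, repaid only out of sales.
```

The discount rate is where the negotiation lives: the riskier the asset, the less the same royalty stream is worth up front.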