Noor’s Newsletter — Issue #10
This isn’t a recap of the past couple of weeks; it’s an attempt to understand the forces reshaping how we live, govern, and evolve.
I believe in two things: AI is here to stay — and AI will transform how we treat diseases.
By now, I think we all agree on the former, not just those who work in AI, as was the case a decade ago. The latter is still debated, but it is gathering believers, increasingly among people whose opinions matter: pharma leaders.
Big bets over the past couple of weeks underscore that shift. Eli Lilly announced a partnership with Nvidia to build what will likely be the world’s most powerful supercomputer operated by a pharmaceutical company (Reuters, BioSpace, FT).
The system, powered by Nvidia’s new Blackwell GPUs — designed specifically for AI — will live inside Lilly’s Indianapolis data centre and run TuneLab, its recently launched federated platform for drug discovery. Thomas Fuchs appears to be spearheading Lilly’s AI strategy after a long career at Paige.AI, which was absorbed by Tempus earlier this year.
Elsewhere, the regulatory landscape is quickly catching up with the technology.
Harrison.ai secured FDA Breakthrough Device designation for three CT triage models and full marketing authorisation — one of only two radiology AIs so far to become eligible for Medicare NTAP (New Technology Add-on Payment). It is one of the few examples of an algorithm becoming a billable clinical service. Reimbursement is what truly transforms AI from demonstration to infrastructure — and what sustains AI companies. Similar reimbursement logic is now being discussed for pathology and surgical AI. Things are happening, though not as fast as companies and patients hope.
Lila Sciences, a company combining AI, robotics, and automated lab experiments, raised $115 million (Series A extension) led by Nvidia, pushing its valuation above $1.3 billion. The company operates fully automated wet labs in Cambridge, MA — “AI-driven science factories” that generate proprietary data to train and validate models. Kenneth Stanley, who pioneered open-ended evolution (a philosophy I’ve always seen as an orthogonal complement to current approaches in AI), sits on the board.
Lila’s model is yet another example of how companies are challenging traditional CROs and CDMOs by turning laboratory capacity itself into a cloud service — and introducing intelligence into how experiments are run.
In the UK, Constructive Bio joined Larry Ellison’s Generative Biology Institute in Oxford, linking its genetic-code-reprogramming platform to a privately financed research campus (Ellison Institute). Ellison recently announced that he is increasing his investment in the institute by £890 million (on top of the initial $1bn committed about a year ago), part of a growing pattern of privately owned bio-innovation infrastructure. This partnership seems to be the first of many that will quietly transfer national research capacity into private hands.
The Institute is positioning biology as a general-purpose technology spanning health, food, and climate — and offering shared compute, wet-lab, and IP pipelines reminiscent of a semiconductor foundry, all presumably running on Oracle infrastructure.
For UK bio-AI startups, this provides a significant source of capital; for policymakers, it raises new sovereignty questions — especially after Nvidia’s recent $2bn investment in the UK AI ecosystem.
The wave of AI-native startups continues to diversify. Boltz Bio launched BoltzGen, an open-source foundation model for molecular design trained on 1.4 billion protein sequence-structure pairs. BoltzGen is released under a permissive licence — for both scientific and commercial use — inviting global labs to fine-tune and extend it. This could become the Hugging Face moment for generative biology if community models begin outperforming proprietary ones.
Beyond biology and GPUs, the compute layer itself continues to evolve.
Google Research announced Quantum Echoes, claiming verifiable quantum advantage on its Willow processor. If confirmed, this would be the first reproducible quantum result validated by classical simulation.
For biotech, it hints at the next frontier: quantum-accelerated molecular simulation — the logical successor to today’s GPU-based design workflows.
Exciting, but still some distance away.
And on the investor side, Menlo Ventures published its 2025 State of AI in Healthcare, a sharp snapshot of where capital is flowing. Menlo reports that healthcare AI investment reached $28 billion this year, with diagnostic imaging, drug discovery, and workflow automation leading the pack. Notably, AI adoption in healthcare continues to outpace most other sectors — a result, Menlo argues, of structural incentives: regulation, reimbursement, and the chronic productivity gap in medicine. In short, the industry has made AI a strategic necessity.
—Noor
Interesting Things in Research and Beyond
- The debate about how useful AI is in clinical settings — and whether it will ever replace clinicians — remains heated. A recent review of empathy in medical communication (based on text-only interactions) found that AI chatbots were rated as more empathic than human healthcare professionals in 12 of 13 included studies. Empathy, long recognised as essential to improving patient outcomes and traditionally viewed as uniquely human, may no longer be an exclusively human domain.
- I'm increasingly interested in studying the economy, and this Financial Times piece stood out to me. On the surface it’s a clear, almost quiet explanation of what actually drives growth: new ideas, new technologies, new innovations — not just “build more roads” or “spend more money.” More importantly, it hints at something deeper: countries behave a lot like companies operating at national scale. They compete on innovation, defend their moat (talent, IP, infrastructure), and either compound capability or fall behind. The FT draws on recent Nobel economics work to argue that long-term prosperity doesn’t come from squeezing more output out of the same machine; it comes from inventing a better machine in the first place. Which is exactly why AI, bioengineering, compute, and translational capacity are no longer just “tech stories”; they’re industrial policy.