Noor’s Newsletter — Issue #5
This isn’t a recap of the past few weeks; it’s an attempt to understand the forces reshaping how we live, govern, and evolve.
AI in Bio Is Becoming Plumbing (and That’s When the Strategy Gets Interesting)
The last few weeks in AI and bio have been contradictory. On one hand, you have massive, billion-dollar deals and national strategies. On the other, you have small, focused papers that rein in the hype.
This is the classic sign that a technology is moving from a demo to actual infrastructure. People are no longer arguing about whether it works, but about the pipes, pricing, and how to wire it all together.
On the "bigger" side, we've seen some huge supply-chain decisions. Eli Lilly committed up to $1.3 billion with Superluminal for a partnership focused on GPCR targets for cardiometabolic disease and obesity. This isn't a pilot; it's Lilly betting its future pipeline on Superluminal's AI-driven discovery platform. Similarly, XtalPi and DoveTree announced a massive collaboration valued at up to $5.99 billion, pairing AI and robotics with target-finding expertise. These are not proof-of-concept projects; they are how major companies plan to fill their pipelines for the next decade. These deals represent two of the largest commitments to date for AI-driven pharmaceutical R&D
But as AI becomes plumbing, it forces trade-offs. A recent paper in Nature Methods found that complex, deep single-cell foundation models didn't outperform simple linear baselines at predicting gene-perturbation effects. That's a crucial reality check: the power of AI isn't in its complexity alone, but in how, and to what problem, it's applied. Meanwhile, the UK's NHS is piloting an AI tool that automatically drafts discharge paperwork, not to prove AI is magic, but to free up beds by freeing up clinicians. Again, this is infrastructure: a tool that solves a specific workflow problem.
And that’s the bigger story. Once AI moves from demo to plumbing, the questions shift. The real contest is no longer about the models themselves, but about who controls the flows around them—the data pipelines, the integration points, and the economics at the edges. That’s where IP regimes, reimbursement policies, tariffs, and geopolitics come into play. In other words: the next phase won’t be won on clever architectures, but on how well the pipes are owned, priced, and defended.
Signals That Matter
New knobs to turn in modality space.
- A recent paper in Nature Biotechnology introduced PepMLM, a method for designing linear peptides using masked language modeling, with no structural data required. If this proves generalizable, it's a huge step toward faster development of "programmable" intracellular therapies (a rough sketch of the masked-LM idea follows below).
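Since the mechanics are easy to miss in a one-liner: the core move is to append masked positions to a target protein sequence and let a protein masked LM fill them in as a candidate binder. Here's a minimal Python sketch of that idea; it is not the paper's code, and the ESM-2 checkpoint and single greedy decoding pass are illustrative assumptions.

```python
# Minimal sketch of masked-LM peptide design in the spirit of PepMLM.
# NOT the published method: the checkpoint and the one-shot greedy
# decoding here are simplifying assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# A small ESM-2 protein masked LM from the HuggingFace Hub.
MODEL = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

def design_peptide(target_seq: str, peptide_len: int = 12) -> str:
    """Append masked positions after a target sequence and fill them
    all in with a single greedy forward pass of the masked LM."""
    masked = target_seq + tokenizer.mask_token * peptide_len
    inputs = tokenizer(masked, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Indices of the mask tokens in the tokenized input.
    mask_idx = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    pred_ids = logits[0, mask_idx].argmax(dim=-1)
    return tokenizer.decode(pred_ids, skip_special_tokens=True).replace(" ", "")

# Toy target fragment; real use would sample and rank many candidates
# rather than trusting one greedy fill.
print(design_peptide("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```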
Antibiotics are getting a generative jolt.
- An MIT study generated over 36 million candidate molecules and identified new ones active against drug-resistant bacteria like N. gonorrhoeae and MRSA. This matters because it expands the reachable chemical space for finding new mechanisms of action, which is the key to fighting antimicrobial resistance (a sketch of the triage step such a pipeline implies follows below).
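A number like 36 million only works if most candidates can be discarded cheaply before anyone predicts activity or orders synthesis. The sketch below shows that kind of triage with RDKit; it is not the MIT pipeline, and the property filters and thresholds are placeholder assumptions.

```python
# Illustrative triage of generated SMILES strings; not the MIT study's
# pipeline. Filters and thresholds are placeholder assumptions.
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen

def triage(smiles_iter, max_mw=500.0, max_logp=5.0):
    """Keep valid, deduplicated molecules within crude property bounds."""
    seen = set()
    for smi in smiles_iter:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:              # drop syntactically invalid output
            continue
        canon = Chem.MolToSmiles(mol)
        if canon in seen:            # dedupe on canonical SMILES
            continue
        seen.add(canon)
        if Descriptors.MolWt(mol) > max_mw or Crippen.MolLogP(mol) > max_logp:
            continue                 # crude drug-likeness gates
        yield canon

# A generative model emitting millions of SMILES would stream through
# a filter like this before any activity prediction or synthesis planning.
candidates = ["CCO", "CCO", "c1ccccc1O", "not_a_smiles"]
print(list(triage(candidates)))
```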
Reality checks (the good kind).
- Deep FMs vs. linear baselines — The Nature Methods paper highlighted that simple models sometimes beat complex ones. The lesson? Invest in data quality and problem framing before you stack more parameters (see the baseline sketch after this list).
- Skill drift — A new study suggested that routine reliance on AI could reduce clinicians' diagnostic accuracy by about 20%. These are preliminary results from a short experiment, but the takeaway isn't that AI is bad; it's that we need to evaluate how humans adapt.
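On the baselines point, the sanity check is simple enough to write down. The sketch below fits a ridge regression as the "simple linear baseline" on synthetic stand-in data; the shapes, encoding, and metric are assumptions, but the discipline is the real thing: any foundation model's score belongs next to this number.

```python
# Minimal "beat the linear baseline first" check for perturbation-effect
# prediction. Synthetic data stands in for a real single-cell dataset;
# the ridge model is an illustrative baseline, not the paper's setup.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_perturbations, n_features, n_genes = 500, 64, 200

X = rng.normal(size=(n_perturbations, n_features))   # perturbation encodings
W = rng.normal(size=(n_features, n_genes))
Y = X @ W + 0.5 * rng.normal(size=(n_perturbations, n_genes))  # expression shifts

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

baseline = Ridge(alpha=1.0).fit(X_tr, Y_tr)
print(f"linear baseline R^2: {r2_score(Y_te, baseline.predict(X_te)):.3f}")
# If a deep model doesn't clear this bar, the fix is usually better data
# and problem framing, not more layers.
```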
The stack is consolidating.
- Abridge (now valued at $5.3B) is planning to spend most of its cash on expanding its platform beyond scribes—into claims and care management. This shows that the real moat is owning the workflow and data at the edges.
Where policy meets pricing power.
- A Tax Foundation analysis argues that proposed pharma tariffs of up to 250% would likely raise, not lower, US drug prices.
- In the defense press, the US-China race to militarize biotech is framed as a matter of national capability, with a proposed $15B federal boost explicitly tied to AI and data infrastructure.
So What? (How to read this if you're building or buying)
- Treat AI discovery like capacity planning. These massive deals are simply procurement for hit-rate on hard targets. If you're a platform, prove your throughput. If you're pharma, diversify your suppliers by modality and target class.
- Prioritize benchmarks over demos. The Nature Methods paper is a perfect example. If a simple linear model beats your complex one, your model isn't the bottleneck—your data and problem definition are.
- Own the integration edge. The NHS discharge pilot and Abridge's expansion show that the real advantage is at the workflow boundary, where notes turn into orders, codes, and revenue.
- Model geopolitics into your costs. Tariffs, data localization, and national bio programs will reprice everything from APIs (active pharmaceutical ingredients) to compute power. It's a risk that needs to be budgeted for.
Outside (but related) Interests
Sometimes the most interesting signals for biotech and healthcare come from outside the domain. Two stories caught my eye this week:
- Every company as an AI trainer. Clement Delangue (Hugging Face CEO) argued that every tech company can—and should—train its own models, rather than treating AI as a monolithic supplier industry. The resonance for bio is obvious: will pharma and health systems outsource AI “capacity” indefinitely, or start owning training loops tied to their unique data?
- Small is the new big. A recent arXiv preprint argues that small language models—when carefully trained and integrated with external tools—can outperform frontier-scale models in practical “agentic” tasks. The implications for healthcare are direct: instead of waiting for trillion-parameter generalists, hospitals and pharma may benefit more from compact, domain-specific models that are cheaper to run, easier to govern, and better at fitting into workflows.