Noor’s Newsletter — Issue #12

This isn’t a recap of the week’s news—it’s an attempt to understand the forces reshaping how we live, govern, and evolve.

Replacing Animal Testing: A New Regulatory Era

Animal testing in drug discovery has long been the subject of multifaceted debate. On one hand, it has been — until now — the primary mechanism for assessing a drug’s safety before it reaches humans. On the other, the ethical discomfort is obvious: sacrificing dogs, pigs, monkeys, and mice for human benefit is increasingly hard to justify. And finally, despite the value of animal studies in flagging efficacy signals and potential toxicities, they fail to model the full complexity of human biology. In some therapeutic areas — neurodegenerative diseases such as Alzheimer’s being the archetypal example — animal models have been almost entirely uninformative.

It’s no surprise, then, that governments and funders have spent decades chasing more accurate and ethical replacements. For years, organs-on-a-chip were hailed as the ultimate alternative. With AI, that horizon is now expanding dramatically. AI is arguably the first technology capable of even beginning to model human biology at the required scale and granularity. Meanwhile, we’ve built an ecosystem of data modalities — genomics, proteomics, multiplexed assays, high-resolution tissue profiling — that capture multidimensional complexity. Taken together, these inputs offer an increasingly plausible route to predicting what actually happens inside a human body when exposed to a drug.

Regulators are moving quickly. On 11 November 2025, the UK government published a detailed strategy to accelerate the replacement of animal tests with alternative methods — with AI front and center. Their plan:

  • By end of 2026: eliminate animal testing for skin/eye irritation and sensitization.
  • By 2027: phase out key mouse-based assays (e.g., botulinum toxin potency tests).
  • By 2030: significantly reduce dog and primate pharmacokinetic (PK) studies.

This follows the trajectory set by U.S. regulators. Earlier in 2025, the FDA published a roadmap outlining its intent to phase out specific animal tests, especially for monoclonal antibody therapeutics, in favor of new approach methodologies, including computational and AI-based models. This builds on the FDA Modernization Act 2.0 (2022), which removed the statutory requirement for animal testing in certain investigational drug submissions, opening the door to organoids, in vitro platforms, and increasingly sophisticated simulations.


AI in Cell & Gene Therapy: CAR-T Design Breaks New Ground

AI is no stranger to drug design. Its contributions to small-molecule discovery are widely recognized (though the percentages are hotly debated), and over the past year it has rapidly expanded into antibody engineering. But its role in CAR-T and broader cell and gene therapies is still at an early stage.

That may be changing. New work from St. Jude Children’s Research Hospital demonstrates how AI — specifically protein-structure modeling with AlphaFold — enabled the design of a bispecific (tandem) CAR-T receptor targeting two antigens. This led to improved expression, stronger anti-tumor potency, and the tantalizing possibility of extending CAR-T interventions into solid, heterogeneous tumors.


Governance & Biosecurity: Dual-Use Risks, Faster Than Regulation

The risks of AI in the biosciences are not new. A widely cited paper from three years ago showed that non-chemists could use generative models to design theoretically weaponizable small molecules. The latest work continues in this direction: a new perspective on arXiv warns that generative AI applied to biology introduces qualitatively new dual-use risks — including de novo synthesis of toxins, viruses, and engineered harmful proteins.

The authors call for multi-layered governance, including data filtering, real-time monitoring of model outputs, alignment mechanisms tuned specifically for biological misuse, and secure-by-design model architectures.

Related frameworks have been emerging across the literature — including “whack-a-mole” governance models emphasizing rapid, modular interventions to keep pace with fast-evolving AI-synthetic biology systems.

Risk-assessment tools are also appearing. One recent publication proposes a structured methodology for evaluating biosecurity risks arising from AI + synthetic biology systems, aiming to give institutions something more rigorous than intuition-based red-teaming.


Techbio Ecosystem

Profluent — one of the leading generative-AI-for-protein-design companies — raised $106 million in a round co-led by Altimeter Capital and Bezos Expeditions, bringing its total funding to roughly $150M. Profluent’s business model so far appears to be a platform/technology offering. Its track record includes using large language models to generate functional proteins (Nature Biotech, 2023), de novo design of CRISPR systems (Nature, 2025), and showing that scaling laws — the core concept driving LLM progress — apply to protein design (NeurIPS, 2025). The company is also known for OpenCRISPR-1, its open-source gene-editing system — a notable example of open science in a field where models are mostly treated as proprietary IP.
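For readers unfamiliar with the term, a “scaling law” just means that a model’s loss falls as a power law in model (or data) size, so the exponent shows up as a straight-line slope in log-log space. Here is a minimal, hypothetical sketch — the sizes and losses below are synthetic, not Profluent’s numbers — of how such an exponent is fit:

```python
import math

# Hypothetical illustration: a scaling law posits L(N) = a * N^(-b),
# i.e. loss falls as a power law in model size N. In log-log space this
# is a line whose slope is -b, so fitting the slope recovers the exponent.
sizes = [1e6, 1e7, 1e8, 1e9]               # synthetic model sizes (parameters)
losses = [2.0 * n ** -0.1 for n in sizes]  # synthetic losses obeying the law

# Ordinary least-squares slope of log(loss) vs. log(size).
xs = [math.log(n) for n in sizes]
ys = [math.log(l) for l in losses]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(f"fitted exponent b = {-slope:.3f}")  # recovers b = 0.100
```

The practical payoff of such a fit is predictability: once the exponent is known, you can estimate how much a larger model or dataset should improve results before spending the compute — which is what makes the claim that these laws carry over to protein design interesting.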


Interesting Things in Research and Beyond

Google released the new Gemini model, and it immediately climbed to the top of the LLM leaderboard. Of course, these victories are fleeting; another model will soon overtake it and the cycle will repeat. What genuinely stood out to me this round is the update to the Nano Banana model, which is startlingly capable for its size. The examples from Nano Banana Pro are almost hard to believe, and it feels inevitable that the internet will soon be saturated with AI-generated content that eclipses the volume of human-created data. A few minutes scrolling through TikTok or Instagram already gives you a sense of how quickly this is accelerating. I love that these tools are opening the gates to unbounded creativity for anyone who wants to make something, yet there’s a growing tension around information quality and integrity. We are already drowning in dubious content, and the next wave will only make discerning truth from noise more challenging.