Noor's Newsletter: Issue #4
This isn’t a recap of the past two weeks; it’s an attempt to understand the forces reshaping how we live, govern, and evolve.
Every truly foundational technology eventually transitions from a technical question to a political one. For AI, that slow transition from the abstract to the concrete is now becoming undeniable. The period of chaotic, permissionless innovation has been gradually ceding ground to a more structured, geopolitical reality. And in the last two weeks, we've seen the blueprints of that new reality take solid form in Washington, Brussels, and London, as each bloc lays down the foundational rules for its respective economic ecosystem.
This isn't just about regulation; it's about industrial strategy. The EU, US, and UK are each defining a distinct path, creating a complex global landscape where the rules of competition are being formalized in real-time.
The most significant move came from the White House with the release of "America's AI Action Plan", a comprehensive industrial strategy document that frames AI leadership as a national imperative. But beyond the high-level goals, its core strategy for biotech and healthcare rests on building a powerful, open innovation engine. The plan aims to democratize access to AI tools for academic labs and startups, preventing the concentration of power in a few hands. This is underpinned by a push for a "try-first" culture, designed to accelerate experimentation with AI solutions in historically slow-moving sectors like healthcare. Crucially, however, the plan recognizes that innovation without trust is useless. It places unprecedented emphasis on model interpretability, directly addressing the "black box" problem to build the confidence clinicians and regulators need for real-world adoption. It is the US codifying its intent not just to set the pace of innovation, but to define the terms of its application in high-stakes fields.
Meanwhile, the European Union's AI Act has moved into its critical implementation phase. As of this week, rules for general-purpose AI models are now in effect. The European Commission immediately followed by publishing a template for how model providers must summarize the data used for training. This is the EU's blueprint in action: a clear, if demanding, focus on transparency and safety. They are not just setting principles; they are defining the exact format of the compliance paperwork.
Finally, we have the UK's long-term vision, detailed in its "Fit for the Future: 10-Year Health Plan". The plan emphasizes leveraging technology and data as core pillars for transforming the NHS. While this positions the NHS as a potential goldmine, a unique, single-payer system for research and innovation, the plan remains vague on the actual policies needed to make that a reality. A clear, long-term strategy centered on this century's core technologies could make the NHS one of the world's most powerful healthcare systems, not only in the services it delivers but also as a unique enabler of discovery.
These are not just policy papers; they are the architectural drawings for three competing global stacks. The US is building for speed and scale, the EU is building for trust and safety, and the UK is trying to leverage its unique healthcare data asset. For any company operating in this space, navigating the overlaps—and the inevitable conflicts—between these emerging rulebooks is now the central strategic challenge.
- Noor
Interesting Things in Research and Beyond
The gap between a high-level political directive and the technical reality of implementing it is often vast. A great example is the recent executive order targeting so-called "woke AI." New Scientist has a sharp analysis of why such an order may be technically impossible to follow. It explores the deep challenges of defining and eliminating "bias" in models trained on vast, unfiltered text from the internet, where human biases are inherently reflected.
Signals to Watch
The Regulatory Landscape:
Europe's AI Strategy: The European Medicines Agency (EMA) continues to build out its formal strategy for AI, emphasizing a risk-based approach and the need for human-centric governance across the entire lifecycle of a medicinal product.
The Battle over Patents: A radical new proposal being discussed within the Commerce Department is creating significant debate over the future of the US patent system. This "novel approach" would move away from the current system of fixed filing and maintenance fees and instead tax patent holders a percentage of their patent's overall value. While framed as a revenue-raising measure, it is effectively a tax on innovation itself. The AAF rightly points out the immense practical challenges—how do you possibly value one of hundreds of patents in a complex product like an iPhone?—and warns that such a move would put the US out of step with global norms, creating a risk of a "patent exodus" and ultimately deterring the very innovation it's meant to fund.
This would be uniquely damaging to the pharmaceutical business model, which relies on the massive revenue from a few blockbuster drugs to fund the vast R&D costs of countless failures. Consider a drug like Merck’s Keytruda, which generates over $25 billion in annual revenue. The value of its core patents is directly tied to that revenue stream. Even a modest hypothetical levy of 1%, assuming the patents' assessed value roughly tracks a single year's attributable revenue, would create a new annual tax liability of over $250 million for a single drug; a back-of-envelope sketch of that arithmetic follows below. For an industry built on long, high-risk, and expensive development cycles, a tax that falls this heavily on its few successful outcomes would cripple the very economic model that funds future innovation.
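To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The revenue figure comes from the paragraph above; the assumption that a patent's taxable value is approximated by one year of attributable revenue, and the 1% rate, are hypotheticals for illustration, not features of any actual proposal.

```python
# Back-of-envelope sketch of the hypothetical value-based patent tax
# discussed above. All inputs are illustrative assumptions, not details
# of any actual Commerce Department proposal.

def annual_patent_tax(annual_revenue: float,
                      value_multiple: float = 1.0,
                      tax_rate: float = 0.01) -> float:
    """Estimate the yearly liability under a value-based patent levy.

    annual_revenue : revenue attributable to the patent, in dollars.
    value_multiple : assumed ratio of assessed patent value to one
                     year's revenue (1.0 reproduces the math above).
    tax_rate       : hypothetical tax as a fraction of assessed value.
    """
    assessed_value = annual_revenue * value_multiple
    return assessed_value * tax_rate


if __name__ == "__main__":
    keytruda_revenue = 25e9  # over $25 billion per year, per the text
    liability = annual_patent_tax(keytruda_revenue)
    print(f"Hypothetical annual liability: ${liability:,.0f}")
    # -> Hypothetical annual liability: $250,000,000
```

Note that if a patent were assessed at a multiple of its annual revenue, as a cash-flow-based valuation might imply, the liability would scale up proportionally; that sensitivity to valuation methodology is exactly the practical problem flagged above.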
Patient & Public Trust:
The Doctor's Time: A new survey highlighted in Ophthalmology Times provides a crucial data point on patient perception. A majority of patients support the use of AI in healthcare, but with a key condition: its primary purpose must be to free up doctors' time, allowing for more direct human interaction. This underscores that, for patients, the value of AI lies not in replacing clinicians but in augmenting them.