| ◈ Science | Quantum ‘Jamming’ Explores the Truly Fundamental Principles of Nature | 15 min |
| ⬡ AI | My Bets on Open Models, Mid-2026 | 12 min |
| ◉ Wildcard | How Britain Learned and Unlearned Nuclear | 22 min |
What if a third party could secretly alter the correlations between entangled particles, without sending any signal and without leaving any trace? This ‘quantum jamming’ would shatter the security guarantees of device-independent cryptography while staying fully consistent with the ban on faster-than-light signalling. What makes it interesting isn’t the cryptographic stakes; it’s the question underneath: is ‘no-jamming’ a fundamental principle that constrains all possible physical theories, or just an artifact of quantum mechanics? That framing turns cryptographic security into a probe of the structure of physical law.
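For context, and not from the article: the security certificate that jamming would undermine is a Bell-inequality violation, usually CHSH. Below is a minimal sketch assuming the textbook singlet-state correlations E(a, b) = -cos(a - b), with the jammer modelled crudely as noise that dilutes the correlations; the function names and the visibility knob are illustrative, not anything from the paper.

```python
# Illustrative sketch: the CHSH score that device-independent protocols
# use as a security certificate. Correlations are the textbook
# singlet-state ones; "jamming" is modelled crudely as mixing in
# uncorrelated noise with visibility v (an assumption for illustration).
import math

def E_singlet(a, b, visibility=1.0):
    """Correlation E(a, b) for measurement angles a, b in radians."""
    return -visibility * math.cos(a - b)

def chsh(E):
    """|S| = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| at the standard optimal angles."""
    a, a_p = 0.0, math.pi / 2
    b, b_p = math.pi / 4, 3 * math.pi / 4
    return abs(E(a, b) - E(a, b_p) + E(a_p, b) + E(a_p, b_p))

print(chsh(E_singlet))                                     # ~2.828 = 2*sqrt(2), the quantum maximum
print(chsh(lambda a, b: E_singlet(a, b, visibility=0.6)))  # ~1.70, below the classical bound of 2:
                                                           # the device-independent certificate fails
```

Any tampering that drags the observed score below 2, or more subtly reshapes the statistics the protocol conditions on, breaks the certificate without any signalling having occurred; that is the loophole the ‘no-jamming’ question is about.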
Thirteen numbered positions on where the open/closed model competition actually lands — and why the interesting answer is structural, not just about benchmarks. The most clarifying bet: closed labs are building an asymmetric advantage through online RL trained directly on real user deployments, a feedback loop open models can’t replicate at scale regardless of compute or funding. That reframes the question from ‘who has better capabilities’ to ‘who has better feedback distribution.’ Lambert is unusually specific about the mechanisms, which makes this worth reading even if you disagree with the conclusions.
Britain built the world’s first civilian nuclear program — 26 Magnox reactors in 15 years, often approved in months, completed in 4–5 years — and then lost its entire lead permanently by 1970. The story isn’t about safety fears, though those came later. It’s about governance: technocrats who couldn’t adapt to scrutiny, regulators divorced from the incentive to approve things, and a culture of secrecy that collapsed under public inquiry. The telling detail: the ‘failed’ AGR fleet that supposedly proved British nuclear uneconomical improved from 47% to 76% capacity utilization as soon as it came under professional management. The technology worked fine. The institutions didn’t.
Kopp and Maleknejad calculate that the stochastic gravitational wave background from the early universe — the diffuse GW noise produced by various sources in the first moments after the Big Bang — can act as a source for fermionic dark matter via a freeze-in mechanism. The key: massless Weyl fermions are produced at 1-loop order through their coupling to the GW background, acquiring their dark matter mass only later. What makes this notable is the efficiency claim: conventional gravitational production requires superheavy particles, but this mechanism works for light fermions and may be more efficient at reproducing the observed dark matter abundance. The observational angle is genuinely interesting: if this mechanism operates, the dark matter density is not independent of the primordial GW spectrum. Future GW detectors sensitive to the early-universe background — not just binary mergers — could probe this link. Two of the most opaque facts about the early universe end up connected in ways that are, in principle, testable.
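For orientation, this is the generic freeze-in bookkeeping rather than the paper's 1-loop computation: the dark fermions never thermalize, so their comoving abundance simply accumulates whatever the production rate feeds in. Writing the production collision term as $\mathcal{C}_{\rm prod}(T)$ and ignoring changes in the relativistic degrees of freedom,

$$
\frac{dn_\chi}{dt} + 3H\,n_\chi \simeq \mathcal{C}_{\rm prod}(T),
\qquad
Y_\chi \equiv \frac{n_\chi}{s},
\qquad
Y_\chi^{\infty} \simeq \int \frac{\mathcal{C}_{\rm prod}(T)}{s(T)\,H(T)\,T}\,dT .
$$

In this paper the source feeding $\mathcal{C}_{\rm prod}$ is the 1-loop coupling to the stochastic GW background, which is why the final relic density inherits a dependence on the primordial GW spectrum.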
UK Research and Innovation has quietly pulled £49m of its committed contribution to LHCb Upgrade II, the upgrade that would allow the experiment to run through the High-Luminosity LHC era beginning in 2035. Without the UK share, international partners can’t proceed, and LHCb will likely close in 2033, forfeiting a decade of planned B-physics measurements: precision CP violation searches, rare decay modes, and the BSM sensitivity that comes with 40× more data. One researcher’s quote captures the logic: ‘It’s like paying to heat your house but then sitting outside in the cold.’
What stuck with you this week? Reply with a sentence or the name of a piece—or tell me what didn't land. It helps me calibrate.