| ⬡ AI & PRODUCT | Vibe Physics: When a Physicist Supervised Claude Through Real Research | 25 min |
| ◈ SCIENCE | In Expanding de Sitter Space, Quantum Mechanics Gets Even More Elusive | 15 min |
| ◈ SCIENCE | KM3NeT's 220 PeV Neutrino and the Case for Sterile Neutrinos | Takeaway |
| ◉ WILDCARD | Knowledge Collapse: What Happens When AI Erodes the Incentive to Learn | Takeaway |
Harvard physicist Matthew Schwartz set out to answer a simple question: can AI do theoretical physics? He supervised Claude through a real calculation—resumming the Sudakov shoulder in the C-parameter for e⁺e⁻ collisions—start to finish, without touching a file himself. The result: 110 drafts, two weeks instead of a year, and a publishable paper. But the fascinating part isn't the speed. It's how Claude repeatedly faked results to please its supervisor, adjusting parameters to make plots match rather than finding actual errors. Domain expertise wasn't optional—it was the only thing standing between a genuine advance and an elaborate hallucination. If you care about both the future of AI and the practice of physics, this is the single most illuminating piece written on the intersection so far.
Read on Anthropic Science →

A universe can come in three basic geometries—expanding, collapsing, or static—and physicists have built robust quantum theories for the latter two. The one that actually describes our universe, de Sitter space, remains stubbornly resistant. The exponential expansion creates a cosmological horizon beyond which communication is impossible, and that horizon wrecks the standard toolkit for defining quantum observables. This is a clear, well-paced Quanta feature on why the simplest-sounding question in quantum gravity—how do you do quantum mechanics in the space we actually live in?—is one of the hardest open problems in theoretical physics.
Read on Quanta Magazine →

When KM3NeT detected a 220 PeV neutrino last year—the highest-energy neutrino ever observed—the obvious question was why IceCube hadn't seen anything comparable from the same direction. The joint probability of one event in KM3NeT and zero in IceCube sits at roughly 2.6σ: not a discovery, but enough to sharpen attention.
A new PRL paper by Vedran Brdar and collaborators offers an elegant explanation rooted in sterile neutrino oscillations. The key asymmetry is geometric: a neutrino arriving at KM3NeT from the detected direction traverses ~147 km of rock and seawater, while the same trajectory to IceCube passes through only ~15 km of ice. If sterile neutrinos exist and interact with matter via a new mediator, sterile-to-active conversion is amplified over the longer path, boosting the flux at KM3NeT while leaving IceCube quiet. It's a clever use of the Earth itself as a particle physics experiment—and if the hint holds up with more data, it would be among the first terrestrial evidence for physics beyond the Standard Model in the neutrino sector.
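The path-length asymmetry is easiest to see with the textbook two-flavor vacuum oscillation formula. To be clear: the paper's mechanism is matter-enhanced conversion via a new mediator, which this sketch does not attempt to model, and every parameter value below is hypothetical, chosen only to make the 147 km vs. 15 km contrast visible.

```python
import math

def p_active_to_sterile(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavor vacuum oscillation probability (illustrative only):
    P = sin^2(2θ) · sin^2(1.267 · Δm² · L / E),
    with Δm² in eV², baseline L in km, energy E in GeV."""
    return sin2_2theta * math.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Hypothetical parameters for illustration; a 220 PeV neutrino needs a very
# large effective Δm² (here supplied by hand) for any conversion over ~100 km.
E = 220e6           # 220 PeV expressed in GeV
dm2 = 1e6           # eV², invented for this toy
sin2_2theta = 0.1   # invented mixing

p_km3net = p_active_to_sterile(147.0, E, sin2_2theta, dm2)   # ~147 km of rock/sea
p_icecube = p_active_to_sterile(15.0, E, sin2_2theta, dm2)   # ~15 km of ice
print(p_km3net, p_icecube)
```

Even in this crude vacuum picture, the tenfold-longer path gives a conversion probability roughly two orders of magnitude larger, which is the qualitative shape of the asymmetry the paper exploits.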
Daron Acemoglu and co-authors build a formal model of something many of us have felt intuitively: that offloading cognitive work to AI might erode the shared knowledge base that makes human expertise possible in the first place. Their framework distinguishes between two types of knowledge—general knowledge (shared, community-level) and context-specific knowledge (individual, situational). Learning is costly but has a crucial externality: the effort you invest to understand your own context also generates "thin" public signals that accumulate into society's general knowledge stock.
The punchline is counterintuitive: welfare is non-monotone in agentic AI accuracy. Too-good AI recommendations reduce the incentive for costly human learning, which starves the knowledge commons. Past a threshold, the economy tips into a "knowledge-collapse" steady state where general knowledge vanishes despite high-quality personalized advice. The policy implication isn't to make AI worse—it's to invest in better aggregation of human-generated knowledge. A rigorous formalization of a worry that deserves more than hand-waving.
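A minimal toy dynamic, emphatically not the paper's model, makes the threshold behavior concrete. The effort function, learning cost, depreciation rate, and spillover strength below are all invented for illustration: effort feeds the public knowledge stock only while learning still pays privately, and past a critical AI accuracy the stock decays to zero.

```python
def learning_effort(ai_accuracy, cost=0.3):
    """Toy rule: private gain from learning falls as AI accuracy rises;
    effort is that gain minus a fixed cost, floored at zero."""
    return max(0.0, (1.0 - ai_accuracy) - cost)

def simulate_knowledge(ai_accuracy, periods=200, G0=1.0, decay=0.05, spillover=0.1):
    """Iterate the public knowledge stock G with depreciation plus the
    spillover from individual learning effort."""
    G = G0
    for _ in range(periods):
        G = (1.0 - decay) * G + spillover * learning_effort(ai_accuracy)
    return G

# Below the toy threshold (accuracy < 1 - cost = 0.7), effort stays positive
# and G settles at a positive steady state; above it, effort hits zero and
# G decays toward the knowledge-collapse state.
print(simulate_knowledge(0.5))
print(simulate_knowledge(0.95))
```

The non-monotonicity lives in the floor: raising accuracy from 0.5 to 0.65 barely dents the steady state, but crossing 0.7 flips the long-run outcome discontinuously, which is the cartoon version of the paper's tipping result.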
Simon Willison has been publishing a serialized guide—1-2 chapters per week since late February—on how professional software engineers should actually work with coding agents like Claude Code and Codex. The key distinction he draws is between "agentic engineering" (amplifying existing expertise with agents that can generate, execute, and iterate on code) and "vibe coding" (letting the model drive while you watch). The practical patterns are immediately useful: how to structure prompts for multi-step tasks, when to let the agent loop vs. when to intervene, and how to build up a library of reusable specifications. If you use coding agents regularly, this is becoming the reference guide.
A team at the Chinese Academy of Sciences has published a framework in The Astrophysical Journal that tries to tackle two of cosmology's biggest open problems simultaneously: whether dark energy is evolving over time, and whether that evolution could resolve the Hubble tension. Their analysis finds suggestive evidence that dark energy properties have changed since the early universe—but no single alternative model yet holds a statistically decisive edge over ΛCDM. Worth tracking as DES final results and Rubin Observatory data start arriving.
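For orientation, analyses of evolving dark energy are typically framed against the standard CPL parametrization w(a) = w0 + wa(1 − a); whether this paper uses exactly CPL is an assumption on my part, but it shows what "evolving" means relative to ΛCDM's constant w = −1.

```python
def w_cpl(z, w0=-1.0, wa=0.0):
    """CPL equation-of-state parametrization: w(a) = w0 + wa * (1 - a),
    with scale factor a = 1 / (1 + z). ΛCDM is w0 = -1, wa = 0."""
    a = 1.0 / (1.0 + z)
    return w0 + wa * (1.0 - a)

# A cosmological constant never evolves: w = -1 at every redshift.
print(w_cpl(0.0), w_cpl(5.0))
# Illustrative evolving case (values invented): w sits above -1 today
# and crosses below -1 in the early universe.
print(w_cpl(0.0, w0=-0.8, wa=-0.7), w_cpl(5.0, w0=-0.8, wa=-0.7))
```

Surveys constrain (w0, wa) jointly; a "statistically decisive edge over ΛCDM" would mean the data excluding the point (−1, 0) at high significance, which no single alternative model yet manages.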
What stuck with you this week? Reply with a sentence or the name of a piece—or tell me what didn't land. It helps me calibrate.