| ◈ Science | Why Our Expanding Universe Breaks Quantum Mechanics | 10 min |
| ⬡ AI | GAN-Inspired Harness Design for Autonomous Coding Agents | 10 min |
| ◉ Econ | $264 Billion In, Almost Nothing Out: The 2025 Tariff Experiment | Takeaway |
Physicists can make quantum field theory work in a static universe and even a collapsing one — but an expanding universe, the one we actually live in, remains stubbornly hostile to the framework. This piece traces why de Sitter space is so resistant to standard quantum treatments, and how researchers are mining black hole physics for clues. If you've ever taken the reconciliation of QM and cosmology for granted, this will productively unsettle you.
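For orientation, a standard textbook sketch of the object in question (background fact, not drawn from the piece): de Sitter space is the maximally symmetric expanding spacetime, and a static observer in it sees a horizon with its own temperature, which is where the black hole parallel enters.

```latex
% de Sitter metric in flat slicing, with expansion rate H:
\[
  ds^2 = -\,dt^2 + e^{2Ht}\left(dx^2 + dy^2 + dz^2\right)
\]
% A static-patch observer sees a horizon at r = 1/H, with the
% Gibbons--Hawking temperature (natural units):
\[
  T_{\mathrm{dS}} = \frac{H}{2\pi}
\]
```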
Read at Quanta →

The core insight here is borrowed from GANs: separating generation from evaluation produces dramatically better results than self-critique, because models reliably inflate their own assessments. Rajasekaran documents a three-agent system — planner, generator, evaluator — that turns Claude into a substantially more capable autonomous developer. The practical tradeoff is clear: harness runs are computationally expensive, but the capability jump is real, and the architecture generalizes beyond coding to any domain where iterative refinement matters.
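The post describes the architecture rather than shipping code, but the role separation reduces to a small loop. A minimal sketch, assuming a generic `call_llm` client and an evaluator that leads with a numeric score; every name here is a placeholder, not Anthropic's API:

```python
import re

def call_llm(system: str, prompt: str) -> str:
    """Placeholder: wire in any chat-completion client here."""
    raise NotImplementedError

def parse_score(review: str) -> int:
    """Pull the first integer from the evaluator's reply; 0 if none found."""
    m = re.search(r"\d+", review)
    return int(m.group()) if m else 0

def run_harness(task: str, max_rounds: int = 5, accept_score: int = 8) -> str:
    # Planner: decompose the task before any code is written.
    plan = call_llm("You are a planner. Produce a step-by-step plan.", task)
    # Generator: draft a first candidate from the plan.
    candidate = call_llm("You are a code generator. Follow the plan exactly.",
                         f"Task: {task}\n\nPlan:\n{plan}")
    for _ in range(max_rounds):
        # Evaluator: a separate role grades the work (the GAN-discriminator
        # analogy); the generator never scores its own output.
        review = call_llm("You are a strict evaluator. Reply with a 1-10 score "
                          "on the first line, then list concrete defects.",
                          f"Task: {task}\n\nCandidate:\n{candidate}")
        if parse_score(review) >= accept_score:
            return candidate
        # Generator revises against the evaluator's defect list.
        candidate = call_llm("You are a code generator. Revise the candidate "
                             "to fix every listed defect.",
                             f"Task: {task}\n\nCandidate:\n{candidate}\n\n"
                             f"Review:\n{review}")
    return candidate
```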
Read at Anthropic →

The 2025 tariffs marked a historic break: U.S. average duties jumped from 2.4% to 9.6%, the highest level in 80 years. Yet the macro impact was almost comically muted. The Brookings BPEA paper puts the net welfare effect somewhere between −0.13% and +0.1% of GDP, essentially a rounding error. Revenue tripled to $264 billion, but roughly 90% of tariff costs passed straight through to importers (and from there, to consumers). No evidence yet of the promised manufacturing job creation or reduced trade deficits.
The one clear structural shift: China's share of U.S. imports collapsed from 23% (2017) to 7%, accelerating a decoupling that was already underway. The paper is useful less for its policy prescriptions than for its disciplined measurement of what actually happened vs. what was promised — a case study in how large policy interventions can generate enormous revenue flows while barely moving the needle on the metrics they were designed to change.
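To see how enormous gross flows can coexist with a near-zero net effect, a back-of-envelope calculation from the figures above (the GDP denominator is a rough assumption, not a number from the paper):

```python
# Back-of-envelope from the figures quoted above; illustrative only,
# not the BPEA paper's welfare model.
revenue_bn = 264        # 2025 tariff revenue, $ billions
pass_through = 0.90     # share of tariff cost reaching importers/consumers
us_gdp_bn = 29_000      # rough 2025 U.S. nominal GDP, $ billions (assumption)

domestic_burden_bn = revenue_bn * pass_through
print(f"Borne domestically: ~${domestic_burden_bn:.0f}B "
      f"(~{domestic_burden_bn / us_gdp_bn:.2%} of GDP)")
# ~$238B in gross transfers, ~0.8% of GDP -- yet the net welfare effect is
# an order of magnitude smaller, because tariff revenue is a transfer to
# the government, not a pure loss.
```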
Sam Rose's interactive essay is probably the best visual explanation of LLM quantization you'll find. It builds from first principles — how floating point numbers are represented in binary — up through the surprising importance of "super weights," rare outlier values that a model can't survive without (remove a single one and you get gibberish). The practical takeaway: 16-bit to 8-bit costs you almost nothing; 16-bit to 4-bit gets you ~90% quality at a fraction of the memory. If you've ever used a quantized model without understanding what you were trading away, this fills the gap beautifully.
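The essay is interactive, but its core mechanic fits in a few lines. A minimal sketch of symmetric absmax round-to-nearest quantization, one common scheme (the essay's exact method may differ):

```python
import numpy as np

def quantize(w: np.ndarray, bits: int):
    """Symmetric absmax quantization: map floats onto a signed integer grid."""
    qmax = 2 ** (bits - 1) - 1               # 127 for 8-bit, 7 for 4-bit
    scale = np.abs(w).max() / qmax           # one scale for the whole tensor
    q = np.round(w / scale).astype(np.int8)  # 4-bit values also fit in int8
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096).astype(np.float32)
for bits in (8, 4):
    q, scale = quantize(w, bits)
    err = np.abs(w - dequantize(q, scale)).mean()
    print(f"int{bits}: mean abs reconstruction error {err:.4f}")
# One huge outlier (a "super weight") stretches the absmax scale and
# coarsens the grid for every other value -- one reason a handful of
# outliers can dominate quantization quality.
```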
Thompson reverses his earlier bubble-friendly stance with a straightforward argument: agents change the economics. When compute improvements are exponential, the deployment barrier keeps dropping, and the returns show up in revenue (not just cost savings), the standard bubble framework doesn't hold. The most interesting move is framing Anthropic and OpenAI as emerging integration points in the value chain — the position that historically captures disproportionate value. Worth reading alongside the Anthropic harness piece above; they're describing the same shift from different altitudes.