The Daily Edition
Wednesday, April 1, 2026
The Index
Science · Antimatter Takes to the Road: CERN's First Mobile Antiproton Trap · 6 min
Science · DESI DR2 Tightens the Case Against a Cosmological Constant · Takeaway
AI & Product · The LiteLLM Supply Chain Attack: A Cascading Compromise · 10 min
Wildcard · Do LLMs Inherit Our Cognitive Biases? · Takeaway
◈ Science
Antimatter Takes to the Road: CERN's First Mobile Antiproton Trap
Nature News · 6 min read

The BASE collaboration loaded 92 antiprotons into a portable cryogenic Penning trap, disconnected it from the Antiproton Decelerator, put it on a truck, and drove it across CERN’s campus—with the particles still trapped. The mobile system, BASE-STEP, combines a superconducting magnet, ultra-high vacuum, and cryogenic cooling at 8.2 K into an autonomous platform that runs for hours. The endgame: transport antiprotons to magnetically quieter labs like HHU Düsseldorf, where precision CPT symmetry tests could improve by a factor of 100–1,000.
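For orientation (standard Penning-trap physics, not detail from the article): BASE-style CPT tests compare the proton and antiproton charge-to-mass ratios through their cyclotron frequencies in the same magnetic field, which is why a magnetically quieter destination lab translates directly into precision.

```latex
% Cyclotron frequency of a particle of charge q and mass m in field B:
\omega_c = \frac{qB}{m}
% Measuring \omega_c for p and \bar{p} in the same trap field yields the
% CPT-sensitive ratio
R = \frac{(q/m)_{\bar{p}}}{(q/m)_{p}}
  = \frac{\omega_{c,\bar{p}}}{\omega_{c,p}},
% which CPT symmetry predicts to have magnitude exactly 1
% (equal magnitude, opposite sign of charge).
```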

Read in Nature →
Also: CERN press release with technical details

◈ Science · Takeaway
DESI DR2 Tightens the Case Against a Cosmological Constant
Nature Astronomy & Phys. Rev. D · Papers from March 2026

The Dark Energy Spectroscopic Instrument’s second data release—three years of survey data—doubles the precision of its baryon acoustic oscillation measurements to ~0.24% statistical uncertainty. The headline: combined with supernova and CMB data, the preference for a time-varying dark energy equation of state w(z) does not weaken relative to DR1. It strengthens. The tension with ΛCDM now sits around 3σ, with multiple independent datasets pointing to the same mild, oscillatory departure from a cosmological constant.
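For context, a note on the parametrization behind headlines like this: dynamical-dark-energy analyses of this kind typically use the CPL (Chevallier–Polarski–Linder) form, in which a cosmological constant is a single fixed point in parameter space.

```latex
% CPL parametrization of the dark energy equation of state,
% with a = 1/(1+z) the scale factor:
w(a) = w_0 + w_a\,(1 - a)
\quad\Longleftrightarrow\quad
w(z) = w_0 + w_a\,\frac{z}{1+z}
% \Lambda CDM is the fixed point (w_0, w_a) = (-1, 0); the DESI-style
% preference has been for w_0 > -1 today with w_a < 0, i.e. dark energy
% that weakens over time.
```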

The caveat is real: some of this signal may reflect tensions between the CMB, BAO, and supernova datasets rather than new physics. But the fact that the evidence grows with more data rather than washing out is exactly what you’d expect if something genuine is happening. If Λ really is dynamical, this is how you’d first see it—not with a bang, but with a slow, persistent statistical accumulation across independent probes.

Worth watching: DESI’s full five-year dataset should be decisive.

Papers: Nature Astronomy · arXiv: 2503.14738

⬥ AI & Product
The LiteLLM Supply Chain Attack: A Cascading Compromise
Datadog Security Labs · 10 min read

On March 24, a threat actor called TeamPCP published two compromised versions of LiteLLM to PyPI—one of the most widely used Python libraries for routing LLM API calls. The attack chain is what makes this worth understanding: the attackers first compromised Trivy, an open-source security scanner, which LiteLLM’s CI/CD pipeline pulled from apt without a pinned version. The poisoned scanner exfiltrated PyPI publish tokens from GitHub Actions. From there, the malicious packages dropped a .pth file that executed on every Python process startup, harvesting environment variables, SSH keys, cloud credentials, and Kubernetes configs. The packages racked up over 40,000 downloads in a three-hour window before PyPI quarantined them. The attack was discovered when one was pulled as a transitive dependency by an MCP plugin running inside Cursor.
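The .pth mechanism is worth seeing concretely. Python's site module treats any line in a .pth file that begins with "import " as code to execute, not as a path entry—at interpreter startup for real site directories, or whenever site.addsitedir() is called. A harmless demonstration (illustrative only, not attack code):

```python
# Demo of the .pth execution mechanism the attack abused: the site module
# exec()s any .pth line starting with "import ". Real malware hides its
# payload after the semicolon and lands the file in site-packages so it
# runs on every Python process startup.
import os
import site
import tempfile

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "demo.pth"), "w") as f:
    # This whole line is executed as Python, not treated as a path.
    f.write("import os; os.environ['PTH_DEMO'] = 'executed'\n")

site.addsitedir(tmpdir)  # processes demo.pth, running the import line
print(os.environ.get("PTH_DEMO"))  # -> executed
```

This is also why the payload survives virtualenvs and needs no import of the trojaned package: the interpreter itself runs the hook.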

Read Datadog’s analysis →
Also: Simon Willison’s minute-by-minute response timeline

◉ Wildcard · Takeaway
Do LLMs Inherit Our Cognitive Biases?
NBER Working Paper 34745 · January 2026

Researchers ran the classic behavioral economics experiments—the ones designed to document human irrationality in the Kahneman-Tversky tradition—on large language models across multiple model families and scales. The split result is the interesting part: on preference-based tasks (risk aversion, loss aversion, framing effects), models become more human-like as they scale up. On belief-based tasks (probability estimation, Bayesian updating), larger models tend toward rationality.

The implication is unsettling and fascinating: scaling language models on human text doesn’t converge on some platonic rationality. It converges on us—biases included—at least in domains where human preferences are the training signal. The practical upside: explicitly prompting models to “be rational” significantly reduces these biases. The deeper question: if these models are increasingly used for financial and economic decision support, whose irrationality are they inheriting?
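To make the preference-side probes concrete, here is a hypothetical sketch of a framing-effect test in the Kahneman–Tversky style (my illustration, not the paper's code): present logically equivalent gain- and loss-framed choices and compare the model's picks. `query_model` is a stand-in for a real LLM call.

```python
# Hypothetical framing-effect probe: the two prompts describe identical
# gambles, so a rational agent's choice rate should not depend on framing.
GAIN_FRAME = (
    "600 people are at risk. Program A saves 200 people for sure. "
    "Program B saves all 600 with probability 1/3. Answer A or B."
)
LOSS_FRAME = (
    "600 people are at risk. Under Program A, 400 people die for sure. "
    "Under Program B, all 600 die with probability 2/3. Answer A or B."
)

def framing_gap(query_model, n_trials=20):
    """Return P(choose A | gain frame) - P(choose A | loss frame).

    A rational agent scores ~0 (the frames are equivalent); humans
    classically prefer the sure thing under the gain frame, giving a
    positive gap.
    """
    def rate_a(prompt):
        picks = [query_model(prompt) for _ in range(n_trials)]
        return sum(p.strip().upper().startswith("A") for p in picks) / n_trials
    return rate_a(GAIN_FRAME) - rate_a(LOSS_FRAME)

# Stub of a maximally human-like (biased) responder, for demonstration:
def humanlike_stub(prompt):
    return "A" if "saves" in prompt else "B"

print(framing_gap(humanlike_stub))  # -> 1.0 for this deterministic stub
```

Swapping the stub for a real model call—with and without a "be rational" system prompt—is the shape of the experiment the paper's scaling comparison runs across model families.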

Paper: NBER Working Paper 34745

The Rabbit Hole
In Expanding de Sitter Space, Quantum Mechanics Gets Even More Elusive
Quanta Magazine · 12 min

The holographic principle has been spectacularly productive in anti-de Sitter space. Our actual universe, however, is expanding—it’s de Sitter, not AdS—and the tools that work so well in the negative-curvature case keep breaking in the positive-curvature one. This Quanta piece walks through the specific ways expansion undermines holographic dualities and what recent work suggests about whether a de Sitter version is even possible. If you care about quantum gravity beyond the toy models, this is where the hard problems live.

Thoughts on Slowing the Fuck Down
Simon Willison · 5 min

Willison—who has been one of the most prolific and thoughtful practitioners writing about AI-assisted development—steps back to consider the pace itself. The timing is pointed: the post lands days after the LiteLLM supply chain attack exposed how fast the AI tooling ecosystem is moving without adequate security hygiene. A useful counterweight if you’ve been in “ship fast” mode.


The Daily Edition · groundstate.ink
Curated for one reader. Quality over quantity.