
The 2025 Peregrine Report

AI Safety Research Policy

Say you are unconstrained by money and can recruit all the talent in the world: what are the top interventions that would have a substantial impact over the next two years? These should be projects that make you feel substantially better about humanity’s trajectory with transformative AI, that ‘we are on track’.

This was the question we put to 48 experts at OpenAI, Anthropic, Google DeepMind, the EU AI Office, multiple AI Safety Institutes, METR, RAND, and others.

When frontier-lab CEOs started publicly predicting transformative AI by 2026–2027, we noticed a gap: there was no comprehensive resource mapping which interventions might actually work under these compressed timelines. So we asked the people closest to the problem.

The result: 208 concrete project proposals across eight domains, from technical alignment research to international coordination to crisis preparedness.

What surprised me most was the sheer breadth. This isn’t a field waiting for theoretical breakthroughs. There are many dials to turn, and people want to turn them.

My role

I designed the methodology, led most of the interviews, and drove the synthesis. The project originated as preparation for a Halcyon Futures retreat, where senior participants stress-tested and prioritized the proposals. MxSchons GmbH handled the operational backend.

Collaborators

Samuel Härgestam (coordination, report finalization), Gavin Leech (interview processing), Raymund Bermejo (operations).

Where it went

The report was shared with the OpenAI Foundation, Coefficient Giving, and other funders as a planning resource for their AI safety portfolios.


Read the full report at riskmitigation.ai