The AI Futures Project is a 501(c)(3) nonprofit research organization (EIN 99-4320292). We have developed a scenario forecast detailing the development of artificial general intelligence (AGI) and artificial superintelligence (ASI). Based on our research, we’ve come to believe that we might cross the thresholds to AGI and ASI within just a few years—and that plausible outcomes include unprecedented concentration of power, global catastrophe, or progress and flourishing.

The world is not ready for such a radical transformation. We hope to help policymakers, AI companies, AI entrepreneurs, AI safety researchers, and the general public make sense of what’s coming, so they can take action and steer us in a better direction. In addition to our scenario forecast, we are developing policy recommendations and tabletop exercises (wargames) for decisionmakers.

Our scenario is highly specific, and any forecast this specific will inevitably be inaccurate in places. We nonetheless strive to make it as accurate as possible. Based on trend extrapolations, expert opinions, and informed guesses, we provide concrete, tangible stories that we can use to reason intuitively about what the future might look like.

Scenario forecast

Our scenario forecast will be published in Q1 2025. We are currently seeking feedback on the scenario; if you are interested in reviewing it, please reach out to us.

Tabletop exercise

Our tabletop exercise explores a scenario where artificial general intelligence (AGI) is developed in the year 2027.

It is based on trend extrapolations, expert opinions, and informed guesses about how AGI might be developed in the coming years. The scenario begins in early 2027 with a US AGI company having achieved the “weak autonomous remote worker” milestone. The exercise is a matrix game, meaning it is very free-form: it can be thought of as a structured way for a group of people to answer the question “what do we think would actually happen next, if this scenario were to obtain?” Actions consist simply of saying what you think your actor would do and what effect you think this would have; if other players disagree, there is a brief out-of-character discussion and the facilitators make a ruling.

More than 100 people have played this game, including D.C. policymakers and researchers at OpenAI and Anthropic. We’ve found it helpful for informing our own thinking about how AGI might go, and so have many of our participants: the average participant rating is above 8 out of 10. We can provide the instructions and materials used to run the exercise; we recommend you reach out to us if you plan to run it professionally.

We’ve written up two “scenario endings” that continue the story all the way through 2028 and beyond. They represent our current, very tentative best guess at what would happen if the starting scenario obtains, informed by what tends to happen in the exercises we’ve run.

Publications

  • 4 Ways to Advance Transparency in Frontier AI Development. Dean Ball & Daniel Kokotajlo. Time, October 15, 2024.
  • AI 2027: A Scenario Forecast. Daniel Kokotajlo, Eli Lifland, Thomas Larsen. Forthcoming.
  • AI Goals. Daniel Kokotajlo, Eli Lifland, Thomas Larsen. Forthcoming.
  • Training AGI in Secret Would Be Unsafe and Unethical. Daniel Kokotajlo. Forthcoming.
  • High-level priorities for a US government-affiliated AGI project. Daniel Kokotajlo. Forthcoming.