Ensuring AI is aligned with our values is the most important issue facing humanity
Hi, we're the Nonlinear Fund.
We research, fund, and seed AI Safety interventions because failure could lead to extinction or worse.
We are thinkers and builders. We systematically search for high-impact interventions, and when we identify a promising opportunity after hundreds of hours of diligence, we make it happen through advocacy, incubation, and funding.
If you need help with your intervention, we'd love to hear what you're working on.