Anteater Effective Altruism at UC Irvine

Most of us in the EA community have come to realise just how serious a threat future artificial general intelligence could pose to our world.

At UC Irvine, we've formed a group of researchers keen to tackle problems in AI alignment.

We meet every two weeks. Please contact Neil if you're interested in joining our group.

Our goals are:

  1. To foster a community for people interested in working on AI alignment
  2. To improve our understanding of AI alignment
  3. To share with each other our ongoing AI alignment research
  4. To encourage other talented individuals to work on AI alignment