Experts launch exam to assess expert-level AI intelligence
A team of experts has launched "Humanity's Last Exam," a call for challenging questions designed to determine when AI systems reach expert-level intelligence. The initiative follows strong performances by recent AI models, such as OpenAI's o1, on standard benchmarks. Organized by the Center for AI Safety and Scale AI, the project aims to collect at least 1,000 difficult questions, with submissions due by November 1. Authors of winning questions will receive prizes and co-authorship. The exam will focus on abstract reasoning and will exclude questions about weapons. Organizers hope to ensure that AI responses cannot be drawn from memorized answers in existing datasets.