The Grand Prize remains unclaimed.
Read the official 2024 Technical Report.
All of the solutions and papers below are open source, and their scores are reproducible.
"Combining Induction and Transduction for Abstract Reasoning".
Li et al.
"The Surprising Effectiveness of Test-Time Training for Abstract Reasoning".
Akyürek et al.
"Searching Latent Program Spaces".
Bonnet & Macfarlane
"The LLM ARChitect: Solving ARC-AGI Is a Matter of Perspective".
Franzen et al.
"Mini-ARC: Solving Abstraction and Reasoning Puzzles with Small Transformer Models".
Fletcher-Hill
"Towards Efficient Neurally-Guided Program Induction for ARC-AGI".
Ouellette
| Name | ARC-AGI Semi-Private Eval | ARC-AGI Public Eval | Links |
| --- | --- | --- | --- |
| o3 (coming soon) | 75.7% | 82.8% | |
| Jeremy Berman | 53.6% | 58.5% | Code, Paper |
| MARA(BARC) + MIT | 47.5% | 62.8% | Code, Paper |
| Ryan Greenblatt | 43% | 42% | Code, Paper |
| o1-preview | 18% | 21% | Code |
| Claude 3.5 Sonnet | 14% | 21% | Code |
| GPT-4o | 5% | 9% | Code |
| Gemini 1.5 | 4.5% | 8% | Code |
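For reference, the percentages above follow the usual ARC-AGI scoring convention: an output counts as solved only if a submitted attempt matches the target grid exactly, and the reported score is the fraction of evaluation outputs solved. A minimal sketch of that rule is below; the function names, data layout, and example grids are illustrative assumptions, not the official evaluation harness.

```python
from typing import List

Grid = List[List[int]]  # an ARC grid: rows of color values 0-9


def output_solved(attempts: List[Grid], target: Grid) -> bool:
    """An output counts as solved only if some attempt equals the target
    grid exactly: same dimensions, same cell values."""
    return any(attempt == target for attempt in attempts)


def arc_score(predictions: List[List[Grid]], targets: List[Grid]) -> float:
    """Fraction of evaluation outputs solved (shown as a percentage on the
    leaderboard). predictions[i] holds the allowed attempts for output i."""
    solved = sum(output_solved(p, t) for p, t in zip(predictions, targets))
    return solved / len(targets)


# Hypothetical example: two outputs, the first solved on the second attempt.
targets = [[[1, 1], [0, 0]], [[2]]]
predictions = [
    [[[0, 0], [0, 0]], [[1, 1], [0, 0]]],  # second attempt matches -> solved
    [[[3]]],                               # no attempt matches -> unsolved
]
print(f"{arc_score(predictions, targets):.1%}")  # 50.0%
```

Solutions differ only in how they produce the attempts; the exact-match criterion is what makes the leaderboard numbers comparable.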
We aspire to grow ARC Prize from its experimental origins into a durable north star for AGI.
The 2025 edition of the competition will offer a more diverse set of incentives, designed to serve academics, independent researchers, startups, and big labs alike.
Alongside the competition launch, expect to see ARC-AGI-2: same format, better benchmark.
We'll announce more competition details early next year. Stay tuned!