AGI progress has stalled.
New ideas are needed.

Presented by Infinite Monkey & Lab42
June 27, 2024: ARC-AGI-Pub launched to measure the AGI progress of frontier AI models.

ARC Prize

ARC Prize is a $1,000,000+ public competition to beat and open source a solution to the ARC-AGI benchmark.

Hosted by Mike Knoop (Co-founder, Zapier) and François Chollet (Creator of ARC-AGI, Keras).

Start here


Most AI benchmarks measure skill. But skill is not intelligence. General intelligence is the ability to efficiently acquire new skills. Chollet's Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI), introduced in 2019 and still unbeaten, is the only formal benchmark of AGI.

It's easy for humans, but hard for AI.


Try ARC-AGI. Given the examples, identify the pattern, solve the test puzzle.
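The three steps above (study the examples, infer the rule, apply it to the test input) can be sketched in code. This is a minimal illustration, not the official format or a real task: the grids, the `candidate_rule` function, and the transformation rule itself are invented for this example. ARC-AGI tasks are grids of integers 0-9 (colors), split into training pairs and a held-out test input.

```python
# Hypothetical ARC-style task: "train" holds input/output example pairs,
# "test" holds an input whose output must be predicted. Each grid is a
# 2D list of color codes 0-9. (Illustrative data, not a real ARC task.)
task = {
    "train": [
        {"input": [[1, 0], [0, 1]], "output": [[0, 1], [1, 0]]},
        {"input": [[2, 2], [0, 2]], "output": [[0, 0], [2, 0]]},
    ],
    "test": [{"input": [[3, 0], [3, 3]]}],
}

def candidate_rule(grid):
    """A guessed rule for this example: invert the mask, so filled cells
    become empty and empty cells take the grid's dominant color."""
    color = max(v for row in grid for v in row)  # dominant nonzero color
    return [[0 if v else color for v in row] for row in grid]

# Step 2: the inferred rule must reproduce every training output exactly.
assert all(candidate_rule(p["input"]) == p["output"] for p in task["train"])

# Step 3: apply the verified rule to the test input.
prediction = candidate_rule(task["test"][0]["input"])
print(prediction)  # [[0, 3], [0, 0]]
```

What makes the benchmark hard for AI is that every task uses a different, previously unseen rule, so no fixed `candidate_rule` generalizes; the solver must synthesize a new rule per task from two or three examples.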




[Figure: AI benchmark saturation over time]


Progress toward artificial general intelligence (AGI) has stalled. LLMs are trained on unimaginably vast amounts of data, yet they remain unable to adapt to simple problems they haven't been trained on, or make novel inventions, no matter how basic.

Strong market incentives have pushed frontier AI research to go closed source. Research attention and resources are being pulled toward a dead end. You can change that.

Defining AGI

Consensus, but wrong: AGI is a system that can automate the majority of economically valuable work.
Correct: AGI is a system that can efficiently acquire new skills and solve open-ended problems.

Definitions are important. We turn them into benchmarks to measure progress toward AGI.

Without AGI, we will never have systems that can invent and discover alongside humans.






Grand Prize goal: 85%

Current leaderboard:
MindsAI: 39
alijs: 32
Lyrialtus: 28
Alexander Larko: 27


Win Prizes

Total: $1,100,000
Grand Prize: $500,000
2024 Progress Prizes: $100,000
ARC-AGI-Pub Verification: $150,000
To Be Announced: $350,000

ARC Prize 2024 is live on Kaggle.

Learn more
