ARC Prize remains undefeated.
New ideas still needed.

The ARC Prize 2024 Technical Report is here. Get it now.

ARC Prize

ARC Prize is a $1,000,000+ public competition to beat and open source a solution to the ARC-AGI benchmark.

Hosted by Mike Knoop (Co-founder, Zapier) and François Chollet (Creator of ARC-AGI, Keras).

ARC Prize 2025 coming soon.

See 2024 winners

ARC-AGI

Most AI benchmarks measure skill. But skill is not intelligence. General intelligence is the ability to efficiently acquire new skills. Chollet's unbeaten 2019 Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) is the only formal benchmark of AGI progress.

It's easy for humans, but hard for AI.

Play

Try ARC-AGI. Given the examples, identify the pattern and solve the test puzzle.

Examples


Test

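Under the hood, each ARC-AGI task is a small JSON object: a "train" list of input/output grid pairs demonstrating a hidden transformation, and a "test" list of inputs to solve. Grids are rows of integer color codes 0-9. The toy task and mirror rule below are hypothetical illustrations, not taken from the real dataset:

```python
# Hypothetical toy task in the ARC-AGI JSON format:
# "train" holds demonstration pairs, "test" holds the puzzle to solve.
# Each grid is a list of rows; each cell is an integer color 0-9.
task = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 0, 0], [0, 4, 0]], "output": [[0, 0, 3], [0, 4, 0]]},
    ],
    "test": [
        {"input": [[5, 0], [0, 6]]},
    ],
}

def mirror(grid):
    """Candidate rule for this toy task: mirror each row left-to-right."""
    return [row[::-1] for row in grid]

# A rule is only plausible if it reproduces every training output.
assert all(mirror(p["input"]) == p["output"] for p in task["train"])

# Apply the induced rule to the test input to produce a prediction.
prediction = mirror(task["test"][0]["input"])
print(prediction)  # [[0, 5], [6, 0]]
```

Solving a task means inducing the transformation from just a few examples, then applying it to the test grid, which is exactly the "efficient skill acquisition" the benchmark is built to measure.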
[Chart: AI benchmark saturation over time]

AGI

LLMs are trained on unimaginably vast amounts of data, yet they remain unable to adapt to simple problems they haven't been trained on or to make even basic novel inventions.

Strong market incentives have pushed frontier AI research to go closed source. Research attention and resources are being pulled toward a dead end.

ARC Prize is designed to inspire researchers to discover new technical approaches that push open AGI progress forward.

Defining AGI

Consensus but wrong:
AGI is a system that can automate the majority of economically valuable work.
Correct:
AGI is a system that can efficiently acquire new skills and solve open-ended problems.

Definitions are important. We turn them into benchmarks to measure progress toward AGI.

Without AGI, we will never have systems that can invent and discover alongside humans.

Team

Sponsors

Advisors

ARC-AGI SOTA Scores
