The ARC Prize Foundation is a nonprofit organization dedicated to open scientific progress through enduring AI benchmarks.
We build tools that provide empirical data about machine intelligence; this data guides critical industry decisions about research, safety, and policy.
ARC Prize benchmarks are designed to measure AI progress, not to serve as a litmus test for AGI. ARC Prize tasks are not economically useful to target; instead, they are a measure of AI capability. Our goal is to provide an objective assessment of model capabilities rather than to influence model training through repeated, iterative testing.
We provide transparency via three initiatives:
Providing open source datasets (ARC-AGI-1 and ARC-AGI-2) and software for benchmarking model capabilities (see the task-format sketch after this list).
Maintaining an up-to-date leaderboard of state-of-the-art model performance on the ARC-AGI "Semi-Private" Evaluation dataset. This leaderboard has no limitations on internet access or compute, and is intended to test selected state-of-the-art models and bespoke solutions.
Hosting competitions on an additional "Private" Evaluation dataset of ARC-AGI tasks for open-source models with constraints including bounded compute and no internet access.
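As an illustration of the first initiative, here is a minimal sketch of loading one public task. The file path and task ID are placeholders, but the JSON structure of train/test pairs of integer grids matches the published dataset format.

```python
import json

# Load one public task; the path and task ID below are placeholders for
# any JSON file in the open-source dataset repositories.
with open("data/training/0a1d4ef5.json") as f:
    task = json.load(f)

# Each task holds demonstration ("train") pairs and held-out ("test")
# pairs; every grid is a list of rows of integers 0-9.
for pair in task["train"]:
    h, w = len(pair["input"]), len(pair["input"][0])
    print(f"demo: {h}x{w} input -> "
          f"{len(pair['output'])}x{len(pair['output'][0])} output")

# A solver must predict the output grid for each test input.
first_test_input = task["test"][0]["input"]
```

Both public datasets follow this layout, so the same loader works for ARC-AGI-1 and ARC-AGI-2.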
Self-reported or third-party ARC-AGI result figures often vary in dataset curation, prompting methods, and many other factors, which prevents an apples-to-apples comparison of results. This causes confusion in the market and ultimately detracts from our goal of measuring frontier AI progress.
ARC Prize Verified submissions receive official badge assets that may optionally be displayed alongside reported ARC-AGI scores. Badge usage guidelines can be found below.
The ARC Prize Verified program ensures the integrity of benchmark results through third-party academic oversight. The program is currently overseen by an independent academic panel, including Todd Gureckis (Professor of Psychology, NYU), Guy Van den Broeck (Professor of Computer Science, UCLA), Melanie Mitchell (Professor at the Santa Fe Institute), and Vishal Misra (Vice Dean of Computing and AI at Columbia), which provides an audit and academic validation of our verification method.
ARC Prize accepts general-support donations from individuals, foundations, and AI labs. Sponsors receive no privileged access to our Private or Semi-Private Evaluation datasets, nor any special influence over the development of our benchmarks, roadmap, or methodologies.
Cash, in-kind donations (e.g., API/compute credits), or other contributions have no influence over what we test, how we test, or when we publish. We do not withhold, edit, or delay testing results at any sponsor's request, and we publish results on a standard cadence after evaluations are complete or the model is publicly released (see "Publication Timing" below).
No sponsor, regardless of contribution level, gains access to proprietary information, including but not limited to unpublished evaluation data, testing methodologies, or future benchmark designs. Our nonprofit mission and neutrality remain unchanged. Results are reproducible via the ARC-AGI Benchmarking repository.
A core design principle of ARC-AGI as an intelligence benchmark is that the test taker must not know what the test will be. Fluid intelligence cannot be hard-coded. To rigorously evaluate whether a system is truly learning and adapting - rather than merely recalling known solutions - it's essential that the Semi-Private and Private Evaluation ARC-AGI datasets remain secure. These hidden sets enable authoritative measurement of generalization and are critical for validating state-of-the-art claims without risk of overfitting to the training distribution.
For this reason, we are extremely selective about which submissions we choose to verify.
There are two types of submissions ARC Prize will currently consider for verification. Selection criteria are subject to change at any time given input from our independent academic panel and board. Not all previously verified submissions are guaranteed to meet the following criteria.
Solutions that meet the constraints of an active or upcoming ARC Prize competition should be submitted via competition.
We do not verify untrusted systems that are neither open-source nor commercial.
The ARC Prize Verified program is not intended to certify all proprietary AI solutions. We're aware that many startups and researchers see value in endorsement from ARC Prize. However, we are a small nonprofit organization, and it is not possible for us to fully vet sources and certify results for every submission.
We collaborate with selected (at our discretion) open-source and commercial model providers to test released and unreleased models for the benefit of the community.
A model is considered unreleased if its weights are neither open nor available via a public API or service. We will only test unreleased models intended for public launch.
Our approach to testing unreleased models ensures that ARC Prize remains an independent evaluator of frontier AI capabilities.
Verified results from selected models, including model outputs, evaluation durations, costs, and individual task scores, are shared alongside the overall model score on HuggingFace.
Many researchers and companies develop custom solutions to ARC-AGI. For those that have open sourced all parts of their solution - apart from API calls to third-party services (see constraints below) - we will consider verifying new and plausible high-score claims. Consideration does not guarantee selection for verification.
Only submissions that score at least 1% higher than the current Leaderboard scores on both the ARC-AGI-1 and ARC-AGI-2 Public Evaluation datasets may be selected for verification.
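As a concrete reading of this threshold, the sketch below checks a claim against hypothetical leaderboard values; all numbers are placeholders, and "1% higher" is treated as one full point on each dataset.

```python
# Hypothetical current leaderboard bests and a claimed result, as
# percentage scores on the Public Evaluation datasets.
leaderboard = {"ARC-AGI-1": 75.0, "ARC-AGI-2": 25.0}  # placeholder values
claim = {"ARC-AGI-1": 77.2, "ARC-AGI-2": 26.4}        # placeholder values

# Eligible for consideration only if the claim exceeds the leaderboard
# score by at least 1 point on BOTH datasets.
eligible = all(claim[d] >= leaderboard[d] + 1.0 for d in leaderboard)
print("may be selected for verification:", eligible)
```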
Please note that at any given time, there might be a queue of submissions with varying score claims. We reserve the right to refuse selection for any submission.
If you have a solution that is expensive to run, we encourage you to test by randomly sampling 100 tasks from the public evaluation tasks of your selected benchmark version and holding out the remainder for private validation. This can build confidence in your overall score before you incur the significant cost of running the full task dataset.
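A minimal sketch of that sampling procedure follows; the directory layout is illustrative, and `solve_and_check` is a stub standing in for your own solver harness.

```python
import json
import random
from pathlib import Path

def solve_and_check(task: dict) -> bool:
    """Run your solver on one task and return True if every test output
    is predicted exactly. Stub: replace with your own harness."""
    raise NotImplementedError

# All public evaluation task files for your chosen benchmark version
# (the directory layout here is illustrative).
tasks = sorted(Path("data/evaluation").glob("*.json"))

rng = random.Random(0)               # fixed seed keeps the split reproducible
sample = rng.sample(tasks, 100)      # cheap subset for building confidence
holdout = set(tasks) - set(sample)   # keep the rest for private validation

results = [solve_and_check(json.loads(p.read_text())) for p in sample]
print(f"estimated score on 100-task sample: {sum(results) / len(results):.1%}")
```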
Submit an ARC Prize Verified high-score claim.
Compute and/or provider costs can be significant to run solutions against evaluation sets. To help support those contributing to this initiative, we’ve set up a verification fund.
For each new verified high-score reproduction, we will reimburse up to $2,500.
This fund is a work in progress, and we reserve the right to make changes at any time or to refuse reimbursement requests upon consideration by the ARC Prize team.
To increase the credibility of verified submissions and maintain trust in the ARC Prize brand and its associated benchmarks, the following ARC Prize Verified badge guidelines apply.
Places where badges, appropriately associated with verified scores, may be displayed:
Here are the badge assets available for download.
We will continuously add new models and unlist old ones. It is not feasible to add every possible model, given the cost and scalability limits of our evaluation process. Our reasoning for being extremely selective about exposure to our Semi-Private and Private Evaluation datasets is explained at the top of this page.
We are interested in assessing performance across different levels of reasoning. To do this, we will often repeat model tests at varied reasoning levels.
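As a sketch of what that looks like in practice, the loop below repeats one evaluation across the reasoning levels a provider might expose; `run_benchmark` is a hypothetical harness, and the model and level names are illustrative.

```python
def run_benchmark(model: str, reasoning_level: str) -> float:
    """Evaluate `model` on a task set at the given reasoning level and
    return the score. Stub: replace with your own evaluation harness."""
    raise NotImplementedError

# Repeat the same evaluation at each reasoning level the provider exposes.
for level in ("low", "medium", "high"):
    score = run_benchmark("example-model", reasoning_level=level)
    print(f"{level}: {score:.1%}")
```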
No. The leaderboard is open to all model types.
If a model selected for verification is open-source and not available via API by the model creator, we will use another public model provider.
Cost is a critical factor in model evaluation, and whenever possible, we will use retail pricing to assess cost efficiency. For model providers, we will base cost calculations on publicly available retail rates — typically measured in price per million tokens — rather than a provider's internal margins or raw cost of goods. Costs are generally shared on an average per-test-pair-attempt basis.
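As a worked example of that calculation, the sketch below derives an average per-test-pair-attempt cost from retail per-million-token rates; all prices and token counts are placeholders.

```python
# Retail per-million-token rates (hypothetical values, USD).
PRICE_IN_PER_M = 3.00    # per 1M input tokens
PRICE_OUT_PER_M = 15.00  # per 1M output tokens

# Token usage recorded for each test-pair attempt during an evaluation
# (placeholder counts).
attempts = [
    {"input_tokens": 4_200, "output_tokens": 1_900},
    {"input_tokens": 3_800, "output_tokens": 2_400},
]

def attempt_cost(a: dict) -> float:
    """Cost of one attempt at retail rates."""
    return (a["input_tokens"] * PRICE_IN_PER_M
            + a["output_tokens"] * PRICE_OUT_PER_M) / 1_000_000

avg = sum(attempt_cost(a) for a in attempts) / len(attempts)
print(f"average cost per test-pair attempt: ${avg:.4f}")
```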
We are a nonprofit that seeks to provide transparency in our testing. We invite the community to reproduce our results. Our independent academic panel also provides external oversight of our testing process.
You are free to test on public data and self-report your scores to the community. Please state clearly the data you tested on, how you tested, and that your results are not verified.
The ARC Prize Foundation is a nonprofit funded by donations, including support from individuals, foundations, and AI labs. We also accept in-kind service credits. Sponsor status does not affect verification eligibility, methods, scoring, publication timing, or access to Semi-Private/Private evaluations.
We publicly disclose lab donations and in-kind support. We do not withhold or delay results at any sponsor's request. Our commitment is scientific rigor, transparency, and impartiality.
If you’d like to support our work, please visit our Donation page.
Feel free to contact us at: team@arcprize.org