Easy Benchmarks: LLM model index

OpenAI

o1

o1 is an OpenAI language-model profile in the Easy Benchmarks snapshot. Use this page to compare its measured Artificial Analysis scores, output speed, time to first token, pricing, and relative ranking against the other models in the local catalog.

Operational Metrics

Output Speed: 103.3 tok/s
First Token: 19.20s
Blended Price: $26.25/M
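
The blended price is consistent with a usage-weighted mix of the input and output prices listed further down the page. A minimal sketch of that arithmetic, assuming a 3:1 input-to-output token weighting (an assumption about the upstream Artificial Analysis methodology, not stated on this page):

```python
# Blended price as a weighted average of input and output token prices.
# The 3:1 input:output token ratio is an assumed weighting; verify it
# against the upstream Artificial Analysis methodology.

def blended_price(input_usd_per_m: float, output_usd_per_m: float,
                  input_ratio: float = 3.0, output_ratio: float = 1.0) -> float:
    total = input_ratio + output_ratio
    return (input_ratio * input_usd_per_m + output_ratio * output_usd_per_m) / total

# o1 prices from this page: $15.00/M input, $60.00/M output.
print(blended_price(15.00, 60.00))  # 26.25 -> matches the $26.25/M shown above
```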

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Dec 5, 2024
Context Window: n/a
Modalities: n/a
API fields: release_date
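
This card is filled from fields on the upstream model payload; release_date is the only field present for o1, which is why the context window and modalities show n/a. A minimal sketch of how such a payload might be read, assuming a plain JSON object with optional keys (the payload shape and every field name other than release_date are assumptions for illustration):

```python
import json

# Hypothetical upstream payload for o1; only release_date is populated,
# so the remaining metadata fields fall back to "n/a".
payload = json.loads('{"name": "o1", "release_date": "2024-12-05"}')

def field(p: dict, key: str, default: str = "n/a") -> str:
    value = p.get(key)
    return str(value) if value not in (None, "") else default

print("Release:", field(payload, "release_date"))           # 2024-12-05
print("Context Window:", field(payload, "context_window"))  # n/a
print("Modalities:", field(payload, "modalities"))          # n/a
```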

Strength: MMLU-Pro
84.1% (rank #42 across 345 models)

Strength: MATH-500
97.0% (rank #27 across 201 models)

Strength: AIME
72.3% (rank #35 across 194 models)

Watch Area: Input $
$15.00/M (rank #320 across 325 models)

Watch Area: Value
1.2 (rank #316 across 323 models)

Watch Area: Output $
$60.00/M (rank #316 across 325 models)

Strength Profile
Percentile score by analysis domain (chart). Cost is inverted: lower input, output, and blended prices rank higher.

Benchmark Percentiles
Per-benchmark percentile placement (chart); higher bars mean stronger relative placement.
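
The bars plot each metric's rank as a percentile of the catalog it was ranked within. A minimal sketch of one way to derive that placement from the rank and model count shown on this page (the exact formula the site uses is an assumption); cost metrics need no extra inversion here because their ranks are already assigned with cheaper models first:

```python
# Convert "rank #r across n models" into a 0-100 percentile, where 100
# means best-ranked. This is one common convention, assumed rather than
# taken from the site's code.

def rank_to_percentile(rank: int, total: int) -> float:
    if total <= 1:
        return 100.0
    return 100.0 * (total - rank) / (total - 1)

# Figures taken from the cards above.
print(round(rank_to_percentile(42, 345), 1))   # MMLU-Pro -> ~88.1
print(round(rank_to_percentile(27, 201), 1))   # MATH-500 -> ~87.0
print(round(rank_to_percentile(320, 325), 1))  # Input $  -> ~1.5
```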

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 30.8 | #139
Artificial Analysis Coding Index | coding | 20.5 | #191
MMLU-Pro | reasoning | 84.1% | #42
GPQA | reasoning | 74.7% | #162
Humanity's Last Exam | reasoning | 7.7% | #194
LiveCodeBench | coding | 67.9% | #83
SciCode | coding, reasoning | 35.8% | #178
MATH-500 | math | 97.0% | #27
AIME | math | 72.3% | #35
Output Speed | speed | 103.3 tok/s | #116
Time to First Token | speed | 19.20s | #270
Blended Price | cost | $26.25/M | #315
Input Price | cost | $15.00/M | #320
Output Price | cost | $60.00/M | #316
Value Index | cost, overall | 1.2 | #316
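
The Strength and Watch Area cards above read like the best- and worst-placed rows of this table relative to each metric's catalog size. A minimal sketch of that selection, using figures from the cards and table on this page; the cutoff thresholds and the selection rule itself are illustrative assumptions, not the page's actual generation logic:

```python
# Pick "strengths" (best relative rank) and "watch areas" (worst relative
# rank) from the per-metric rankings. Thresholds are assumed for illustration.

rows = [
    # (metric, rank, total_models) -- totals per metric come from the cards above
    ("MMLU-Pro", 42, 345),
    ("MATH-500", 27, 201),
    ("AIME", 35, 194),
    ("Input Price", 320, 325),
    ("Output Price", 316, 325),
    ("Value Index", 316, 323),
]

def relative_rank(rank: int, total: int) -> float:
    return rank / total  # 0 = top of the catalog, 1 = bottom

strengths = [m for m, r, n in rows if relative_rank(r, n) <= 0.25]
watch_areas = [m for m, r, n in rows if relative_rank(r, n) >= 0.75]

print("Strengths:", strengths)      # ['MMLU-Pro', 'MATH-500', 'AIME']
print("Watch areas:", watch_areas)  # ['Input Price', 'Output Price', 'Value Index']
```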