
OpenAI

o3-pro

o3-pro is one of OpenAI's reasoning-focused models, built for harder multi-step tasks where deliberate problem solving matters more than simple chat completion. The benchmark snapshot highlights how that reasoning emphasis translates into scores, latency, and value versus general-purpose models.

Introducing o3 and o4-mini

Operational Metrics

- Output Speed: 16.9 tok/s
- Time to First Token: 98.15 s
- Blended Price: $35.00/M tokens

Model Metadata

Queryable facts extracted from the upstream model payload.

- Release: Jun 10, 2025
- Context Window: n/a
- Modalities: n/a

API fields: release_date
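The metadata card above shows "n/a" where the upstream payload lacks a field. A minimal sketch of that extraction step, assuming hypothetical field names (`release_date`, `context_window`, `modalities`) in a JSON-like payload dict:

```python
# Pull display fields from an upstream model payload, falling back to
# "n/a" when a field is absent. Field names here are assumptions for
# illustration; only release_date is confirmed by the page.
def metadata_card(payload: dict) -> dict:
    return {
        "Release": payload.get("release_date", "n/a"),
        "Context Window": payload.get("context_window", "n/a"),
        "Modalities": payload.get("modalities", "n/a"),
    }

# o3-pro's payload exposes only a release date, so the other
# two fields render as "n/a", matching the card above.
print(metadata_card({"release_date": "Jun 10, 2025"}))
```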

Strength: GPQA — 84.5%, rank #58 of 478 models.

Strength: Overall — 40.7, rank #71 of 500 models.

Watch Area: Output Price — $80.00/M, rank #323 of 325 models.

Watch Area: Blended Price — $35.00/M, rank #322 of 325 models.

Watch Area: Input Price — $20.00/M, rank #322 of 325 models.
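The blended price above is consistent with a 3:1 input-to-output token weighting of the two per-million-token prices (the weighting Artificial Analysis uses for its blended figure). A quick check against the listed prices:

```python
def blended_price(input_usd_per_m: float, output_usd_per_m: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Weighted average of input and output prices per million tokens."""
    total = input_weight + output_weight
    return (input_weight * input_usd_per_m + output_weight * output_usd_per_m) / total

# o3-pro: $20.00/M input, $80.00/M output
print(blended_price(20.00, 80.00))  # 35.0, matching the $35.00/M card
```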

Strength Profile

[Chart: percentile score by analysis domain. Cost is inverted: lower input, output, and blended prices rank higher.]

Benchmark Percentiles

[Chart: higher bars mean stronger relative placement.]
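The percentile bars can be derived from the rank columns. A sketch, assuming the straightforward formula percentile = 100 × (total − rank) / total, i.e. the share of models this one outranks (the exact formula used by the page is an assumption):

```python
def rank_to_percentile(rank: int, total: int) -> float:
    """Share of models ranked below this one, as a percentage.
    Ranks are assumed to already incorporate cost inversion, so
    an expensive model simply carries a high (bad) price rank."""
    return round(100 * (total - rank) / total, 1)

# GPQA: rank #58 of 478 -> a tall bar
print(rank_to_percentile(58, 478))   # 87.9
# Blended Price: rank #322 of 325 -> a very short bar
print(rank_to_percentile(322, 325))  # 0.9
```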

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 40.7 | #71 |
| GPQA | reasoning | 84.5% | #58 |
| Output Speed | speed | 16.9 tok/s | #290 |
| Time to First Token | speed | 98.15 s | #289 |
| Blended Price | cost | $35.00/M | #322 |
| Input Price | cost | $20.00/M | #322 |
| Output Price | cost | $80.00/M | #323 |
| Value Index | cost, overall | 1.2 | #317 |