OpenAI
o4-mini (high) is one of OpenAI's reasoning-focused models, built for harder multi-step tasks where deliberate problem solving matters more than simple chat completion. The benchmark snapshot highlights how that reasoning emphasis translates into scores, latency, and value versus general-purpose models.
Introducing o3 and o4-mini

Queryable facts extracted from the upstream model payload.
[Chart: per-metric rank placements; cohort sizes vary by metric (194 to 343 models). Individual ranks appear in the table below.]
Chart: percentile score by analysis domain; higher bars mean stronger relative placement. Cost is inverted: lower input, output, and blended prices rank higher.
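The rank-to-percentile conversion and the cost inversion described above can be sketched as follows. The percentile formula and the model names/prices are assumptions for illustration; the snapshot does not state the exact methodology.

```python
def rank_to_percentile(rank: int, cohort: int) -> float:
    """Map a 1-based rank within a cohort to a 0-100 percentile.

    Assumed formula (not stated in the source): rank #1 maps to 100,
    last place maps to 0, linearly in between.
    """
    if cohort < 2:
        raise ValueError("cohort must contain at least two models")
    return (cohort - rank) / (cohort - 1) * 100

# Hypothetical placement: rank #30 in a cohort of 300 models.
print(f"{rank_to_percentile(30, 300):.1f}")

# Cost metrics are inverted: rank models by ascending price so the
# cheapest model gets rank #1 and therefore the highest percentile.
prices = {"model_a": 1.10, "model_b": 0.50, "model_c": 2.00}  # hypothetical $/M
ranked = sorted(prices, key=prices.get)           # cheapest first
ranks = {m: i + 1 for i, m in enumerate(ranked)}  # 1-based ranks
print(ranks["model_b"])  # 1 (cheapest -> best rank)
```

The linear mapping is only one plausible convention; percentile charts sometimes use score distributions rather than rank positions.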
| Metric | Domain | Value | Rank |
|---|---|---|---|
| Artificial Analysis Intelligence Index | overall | 33.1 | #118 |
| Artificial Analysis Coding Index | coding | 25.6 | #144 |
| Artificial Analysis Math Index | math | 90.7 | #24 |
| MMLU-Pro | reasoning | 83.2% | #58 |
| — | reasoning | 78.4% | #117 |
| Humanity's Last Exam | reasoning | 17.5% | #82 |
| LiveCodeBench | coding | 85.9% | #12 |
| SciCode | coding, reasoning | 46.5% | #33 |
| MATH-500 | math | 98.9% | #7 |
| AIME | math | 94.0% | #3 |
| Output Speed | speed | 124.5 tok/s | #88 |
| Time to First Token | speed | 17.05s | #268 |
| Blended Price | cost | $1.93/M | #221 |
| Input Price | cost | $1.10/M | #223 |
| Output Price | cost | $4.40/M | #221 |
| Value Index | cost, overall | 17.2 | #201 |
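The blended price in the table is consistent with a common 3:1 input-to-output token weighting: (3 × $1.10 + 1 × $4.40) / 4 = $1.925/M ≈ $1.93/M. The 3:1 ratio is an assumption; the snapshot does not state which weighting produced the figure. A minimal check:

```python
def blended_price(input_price: float, output_price: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted average of per-million-token prices.

    The 3:1 input:output default is an assumption, not taken from
    the snapshot, though it reproduces the listed $1.93/M.
    """
    total = input_ratio + output_ratio
    return (input_ratio * input_price + output_ratio * output_price) / total

# Input $1.10/M and output $4.40/M from the table above.
print(f"${blended_price(1.10, 4.40):.2f}/M")  # $1.93/M
```

A different traffic mix (say 10:1 for retrieval-heavy workloads) would yield a noticeably lower blend, which is why blended figures should be read alongside the raw input and output rates.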