OpenAI
GPT-4.1 nano is an OpenAI language-model profile in the Easy Benchmarks snapshot. Use this page to compare its measured Artificial Analysis scores, output speed, time to first token, pricing, and relative ranking against other models in the local catalog.
Queryable facts extracted from the upstream model payload.
Headline rankings (rank / models ranked), matched to the metrics in the table below:

* Input Price: #33 of 325 models
* Time to First Token: #34 of 293 models
* Blended Price: #51 of 325 models
* Humanity's Last Exam: #419 of 474 models
* Math Index: #201 of 269 models
* Intelligence Index: #361 of 500 models
Chart: percentile score by analysis domain. Higher bars mean stronger relative placement; cost is inverted, so lower input, output, and blended prices rank higher.
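The percentile placements behind the chart can be derived from the rank figures above. As a minimal sketch, assuming the page computes a percentile as the share of ranked models placed at or below this one (the exact formula is an assumption, not stated on the page):

```python
def percentile_from_rank(rank: int, total: int) -> float:
    """Percentile placement for a 1-indexed rank among `total` models.

    Rank 1 (best) maps toward the 100th percentile; the last rank maps to 0.
    """
    return round(100.0 * (total - rank) / total, 1)

# Example using the Intelligence Index figure from this page: rank #361 of 500.
print(percentile_from_rank(361, 500))
```

Under this assumption, rank #361 of 500 corresponds to roughly the 27.8th percentile.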
| Metric | Domain | Value | Rank |
|---|---|---|---|
| Artificial Analysis Intelligence Index | overall | 13.0 | #361 |
| Artificial Analysis Coding Index | coding | 11.2 | #295 |
| Artificial Analysis Math Index | math | 24.0 | #201 |
| MMLU-Pro | reasoning | 65.7% | #249 |
| | reasoning | 51.2% | #335 |
| Humanity's Last Exam | reasoning | 3.9% | #419 |
| LiveCodeBench | coding | 32.6% | #209 |
| SciCode | coding, reasoning | 25.9% | #308 |
| MATH-500 | math | 84.8% | #96 |
| AIME | math | 23.7% | #96 |
| Output Speed | speed | 125.2 tok/s | #84 |
| Time to First Token | speed | 0.46s | #34 |
| Blended Price | cost | $0.175/M | #51 |
| Input Price | cost | $0.100/M | #33 |
| Output Price | cost | $0.400/M | #53 |
| Value Index | cost, overall | 74.3 | #75 |
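The blended price in the table is consistent with a weighted average of the input and output prices. As a minimal sketch, assuming a 3:1 input-to-output token ratio (a common convention for blended pricing, not confirmed by this page):

```python
def blended_price(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    """Blend per-million-token prices at an assumed input:output ratio.

    With the default 3:1 ratio: (3 * input + 1 * output) / 4.
    """
    return round((ratio * input_per_m + output_per_m) / (ratio + 1.0), 3)

# Input $0.100/M and output $0.400/M from the table above.
print(blended_price(0.100, 0.400))
```

With the table's $0.100/M input and $0.400/M output prices, this yields $0.175/M, matching the Blended Price row.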