OpenAI

GPT-4.1 nano

GPT-4.1 nano is an OpenAI language-model profile in the Easy Benchmarks snapshot. Use this page to compare its measured Artificial Analysis scores, output speed, time to first token, pricing, and relative ranking against other models in the local catalog.

Operational Metrics

Output Speed: 125.2 tok/s
First Token: 0.46s
Blended Price: $0.175/M
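
The blended figure is consistent with weighting input and output token prices 3:1, a common convention for blended pricing; that this convention applies here is an inference from the numbers on this page, not something the page states: (3 × $0.100 + 1 × $0.400) / 4 = $0.175 per million tokens. A minimal sketch:

```python
import math

# Sketch of a 3:1 input:output blended price. The 3:1 weighting is an
# assumed convention; it happens to reproduce the $0.175/M shown above.
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted-average price per million tokens."""
    total_weight = input_weight + output_weight
    return (input_weight * input_per_m + output_weight * output_per_m) / total_weight

assert math.isclose(blended_price(0.100, 0.400), 0.175)  # the $0.175/M above
```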

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Apr 14, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
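
A sketch of how a fact like release_date might be pulled from the upstream JSON payload. Only the release_date field is confirmed by the note above; the payload shape and every other key are assumptions for illustration, with missing fields falling back to n/a, as Context Window and Modalities do here:

```python
import json

# Hypothetical payload: only release_date is confirmed by the "API fields"
# note above; the other keys are illustrative.
raw = '{"name": "GPT-4.1 nano", "release_date": "2025-04-14"}'
payload = json.loads(raw)

def fact(payload: dict, field: str) -> str:
    """Return a queryable fact, or 'n/a' when the payload omits the field."""
    value = payload.get(field)
    return str(value) if value is not None else "n/a"

print(fact(payload, "release_date"))    # 2025-04-14
print(fact(payload, "context_window"))  # n/a (absent, as on this page)
```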

Strengths

Input $: $0.100/M, rank #33 across 325 models.
TTFT: 0.46s, rank #34 across 293 models.
Blended $: $0.175/M, rank #51 across 325 models.

Watch Areas

HLE: 3.9%, rank #419 across 474 models.
Math: 24.0, rank #201 across 269 models.
Overall: 13.0, rank #361 across 500 models.
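
These cards read as the model's best and worst relative placements. A sketch of that selection logic, assuming the page simply converts each rank into a percentile and takes the extremes (the actual derivation is not documented here):

```python
# (rank, total models ranked) pairs taken from the cards above.
ranks = {
    "Input $":   (33, 325),
    "TTFT":      (34, 293),
    "Blended $": (51, 325),
    "HLE":       (419, 474),
    "Math":      (201, 269),
    "Overall":   (361, 500),
}

def percentile(rank: int, total: int) -> float:
    """Share of ranked models this one places at or above (rank 1 -> 100)."""
    return 100.0 * (total - rank) / (total - 1)

ordered = sorted(ranks, key=lambda m: percentile(*ranks[m]), reverse=True)
strengths, watch_areas = ordered[:3], ordered[-3:]
print(strengths)    # ['Input $', 'TTFT', 'Blended $']
print(watch_areas)  # ['Overall', 'Math', 'HLE']
```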

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
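
The footnote implies that price metrics are ranked ascending before percentiles are computed, so cheap models land near the top. A minimal sketch of that inversion, using made-up prices rather than figures from this catalog:

```python
# Illustrative prices (not from this catalog); ranking ascending by price
# implements the "cost is inverted" footnote: cheaper models rank higher.
prices_per_m = {"model-a": 0.100, "model-b": 0.400, "model-c": 2.500}

for rank, model in enumerate(sorted(prices_per_m, key=prices_per_m.get), start=1):
    print(f"#{rank} {model} at ${prices_per_m[model]:.3f}/M")
# #1 model-a at $0.100/M  <- lowest price, best rank
```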

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 13.0 | #361
Artificial Analysis Coding Index | coding | 11.2 | #295
Artificial Analysis Math Index | math | 24.0 | #201
MMLU-Pro | reasoning | 65.7% | #249
GPQA | reasoning | 51.2% | #335
Humanity's Last Exam | reasoning | 3.9% | #419
LiveCodeBench | coding | 32.6% | #209
SciCode | coding, reasoning | 25.9% | #308
MATH-500 | math | 84.8% | #96
AIME | math | 23.7% | #96
Output Speed | speed | 125.2 tok/s | #84
Time to First Token | speed | 0.46s | #34
Blended Price | cost | $0.175/M | #51
Input Price | cost | $0.100/M | #33
Output Price | cost | $0.400/M | #53
Value Index | cost, overall | 74.3 | #75