OpenAI

GPT-4.1

GPT-4.1 is an OpenAI language-model profile in the Easy Benchmarks snapshot. Use this page to compare its measured Artificial Analysis scores, output speed, time to first token, pricing, and relative ranking against other models in the local catalog.

Operational Metrics

Output Speed: 86.4 tok/s
First Token: 0.69s
Blended Price: $3.50/M
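
The two speed figures above can be combined into a rough end-to-end latency estimate for a response of a given length. The sketch below assumes total latency is simply time to first token plus decode time at the measured constant rate, which is a simplification of real serving behaviour:

```python
# Rough end-to-end latency estimate from the operational metrics above.
# Assumes decoding proceeds at a constant rate after the first token.

OUTPUT_SPEED_TOK_S = 86.4     # measured output speed
TIME_TO_FIRST_TOKEN_S = 0.69  # measured time to first token

def estimated_latency_s(output_tokens: int) -> float:
    """Approximate wall-clock time to stream `output_tokens` tokens."""
    return TIME_TO_FIRST_TOKEN_S + output_tokens / OUTPUT_SPEED_TOK_S

if __name__ == "__main__":
    for n in (100, 500, 2000):
        print(f"{n:>5} tokens: ~{estimated_latency_s(n):.1f} s")
```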

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Apr 14, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
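
The metadata card is populated from the upstream model payload, with absent fields rendered as "n/a". A minimal sketch of that extraction, assuming a hypothetical payload shape in which only release_date is present (the field names other than release_date are illustrative, not documented):

```python
# Sketch of populating the metadata card from an upstream payload.
# Only `release_date` is documented above; `context_window` and
# `modalities` are hypothetical keys used to show the "n/a" fallback.
from datetime import date

payload = {"release_date": "2025-04-14"}  # example upstream payload

def field(payload: dict, key: str) -> str:
    value = payload.get(key)
    return str(value) if value not in (None, "", []) else "n/a"

release = date.fromisoformat(payload["release_date"]).strftime("%b %d, %Y")
print("Release:", release)                                    # Apr 14, 2025
print("Context Window:", field(payload, "context_window"))    # n/a
print("Modalities:", field(payload, "modalities"))            # n/a
```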

Watch Area: Value
Value Index 7.5 (rank #264 of 323 models)

Watch Area: Blended $
Blended price $3.50/M (rank #264 of 325 models)

Watch Area: Input $
Input price $2.00/M (rank #264 of 325 models)
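
The blended price shown here and in the table below is consistent with a weighted average of the input and output prices at a 3:1 input-to-output token ratio: $2.00/M and $8.00/M blend to $3.50/M under that weighting. The exact weighting used upstream is an assumption; the sketch below only reproduces the arithmetic:

```python
# Blended price as a weighted average of input and output prices.
# The 3:1 input-to-output token weighting is an assumption that happens
# to reproduce the $3.50/M figure shown on this page.

def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    total = input_weight + output_weight
    return (input_weight * input_per_m + output_weight * output_per_m) / total

print(f"${blended_price(2.00, 8.00):.2f}/M")  # $3.50/M
```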

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
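
A percentile placement can be derived from a metric's rank and the number of models ranked, with cost metrics inverted so that cheaper models earn better ranks. The helper below is a sketch of that convention, not the site's actual implementation; under it, rank #264 of 325 models lands at roughly the 19th percentile.

```python
# Sketch of the rank/percentile convention described above: cost metrics
# are ranked ascending (lower price = better rank), everything else
# descending. Not the site's actual implementation.

def rank_and_percentile(value: float, all_values: list[float],
                        lower_is_better: bool = False) -> tuple[int, float]:
    """Rank `value` within `all_values` (1 = best) and give its percentile."""
    ordered = sorted(all_values, reverse=not lower_is_better)
    rank = ordered.index(value) + 1
    n = len(ordered)
    percentile = 100.0 * (n - rank) / (n - 1) if n > 1 else 100.0
    return rank, percentile

# Example: a $3.50/M blended price ranked against hypothetical prices.
prices = [1.0, 2.5, 3.5, 6.0, 12.0]
print(rank_and_percentile(3.5, prices, lower_is_better=True))  # (3, 50.0)
```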

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 26.3 | #175
Artificial Analysis Coding Index | coding | 21.8 | #182
Artificial Analysis Math Index | math | 34.7 | #175
MMLU-Pro | reasoning | 80.6% | #104
GPQA | reasoning | 66.6% | #233
Humanity's Last Exam | reasoning | 4.6% | #339
LiveCodeBench | coding | 45.7% | #165
SciCode | coding, reasoning | 38.1% | #133
MATH-500 | math | 91.3% | #67
AIME | math | 43.7% | #67
Output Speed | speed | 86.4 tok/s | #148
Time to First Token | speed | 0.69s | #87
Blended Price | cost | $3.50/M | #264
Input Price | cost | $2.00/M | #264
Output Price | cost | $8.00/M | #246
Value Index | cost, overall | 7.5 | #264