Easy Benchmarks: LLM model index

OpenAI

GPT-4o (Nov '24)

GPT-4o (Nov '24) is an OpenAI language-model profile in the Easy Benchmarks snapshot. Use this page to compare its measured Artificial Analysis scores, output speed, time to first token, pricing, and relative ranking against the other models in the local catalog.


Operational Metrics

Output Speed: 107.3 tok/s
First Token: 0.47s
Blended Price: $4.38/M
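The blended figure is consistent with a 3:1 input:output token mix over the listed per-token prices ($2.50/M input, $10.00/M output). That ratio is an assumption about how the upstream index weights the two prices, not something this page states; the sketch below just shows that it reproduces the $4.38/M value.

```python
def blended_price(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    """Weighted average price per million tokens for a given input:output token ratio.

    The 3:1 default is an assumption about the upstream methodology.
    """
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

# (3 * 2.50 + 10.00) / 4 = 4.375, displayed as $4.38/M
print(f"${blended_price(2.50, 10.00):.2f}/M")  # $4.38/M
```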

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Nov 20, 2024
Context Window: n/a
Modalities: n/a
API fields: release_date

Strength: TTFT (0.47s), rank #40 across 293 models.

Watch Area: HLE (3.3%), rank #462 across 474 models.

Watch Area: Value (4.0), rank #299 across 323 models.

Watch Area: Math (6.0), rank #245 across 269 models.

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
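One plausible way to turn the "rank #r across n models" figures above into the percentiles this section charts is the fraction of models ranked at or below a given model. The exact formula the site uses is not stated, so this is an illustrative sketch:

```python
def rank_to_percentile(rank: int, total: int) -> float:
    """Percent of other models this model outranks (rank 1 -> 100, last -> 0).

    Illustrative assumption; the page does not state its percentile formula.
    For cost metrics, ranks would first be inverted so cheaper models rank higher.
    """
    return (total - rank) / (total - 1) * 100

print(rank_to_percentile(40, 293))   # TTFT strength: high percentile
print(rank_to_percentile(462, 474))  # HLE watch area: low percentile
```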

Benchmark Percentiles

Higher percentiles mean stronger relative placement.

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 17.3 | #277
Artificial Analysis Coding Index | coding | 16.7 | #221
Artificial Analysis Math Index | math | 6.0 | #245
MMLU-Pro | reasoning | 74.8% | #181
GPQA | reasoning | 54.3% | #312
Humanity's Last Exam | reasoning | 3.3% | #462
LiveCodeBench | coding | 30.9% | #217
SciCode | coding, reasoning | 33.3% | #210
MATH-500 | math | 75.9% | #125
AIME | math | 15.0% | #112
Output Speed | speed | 107.3 tok/s | #111
Time to First Token | speed | 0.47s | #40
Blended Price | cost | $4.38/M | #273
Input Price | cost | $2.50/M | #279
Output Price | cost | $10.00/M | #261
Value Index | cost, overall | 4.0 | #299
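The "Strength" and "Watch Area" cards earlier on the page look like a percentile cut over these ranks. The thresholds below (top 15% = strength, bottom 15% = watch area) are illustrative assumptions that happen to reproduce the four cards shown; they are not the site's documented rule.

```python
def classify(rank: int, total: int, strong: float = 0.85, weak: float = 0.15) -> str:
    """Label a metric by rank percentile (higher fraction = stronger placement).

    Thresholds are illustrative, not the site's actual selection rule.
    """
    pct = (total - rank) / (total - 1)
    if pct >= strong:
        return "strength"
    if pct <= weak:
        return "watch area"
    return "mid-pack"

# The four summary cards from this page:
cards = {
    "TTFT": (40, 293),
    "HLE": (462, 474),
    "Value": (299, 323),
    "Math": (245, 269),
}
for name, (rank, total) in cards.items():
    print(name, classify(rank, total))
```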