
Meta

Llama 4 Maverick

Llama 4 Maverick belongs to Meta's Llama 4 family, an open-weight generation focused on multimodal intelligence and broad developer deployment. This page shows how the benchmarked variant ranks against hosted frontier models and other open-weight systems on quality, speed, and price.

Announcement: Introducing Llama 4

Operational Metrics

Output Speed: 115.2 tok/s
Time to First Token: 0.68 s
Blended Price: $0.475 per 1M tokens

Model Metadata

Queryable facts extracted from the upstream model payload.

Release Date: Apr 5, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
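A minimal sketch of how display values like the ones above could be pulled from an upstream model payload. The payload shape and the `fact` helper are assumptions for illustration, not the site's actual extraction code; the only field the page confirms is `release_date`.

```python
def fact(payload: dict, key: str) -> str:
    """Return a displayable value, falling back to 'n/a' for absent fields."""
    value = payload.get(key)
    return str(value) if value is not None else "n/a"

# Hypothetical payload: only release_date is present, matching this page.
payload = {"release_date": "2025-04-05"}

print(fact(payload, "release_date"))    # 2025-04-05
print(fact(payload, "context_window"))  # n/a
```

The `n/a` fallback explains why Context Window and Modalities render as "n/a" above: the upstream payload simply omits them.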

Watch Area: Math

Rank #211 of 269 models. Math index score: 19.3.

Watch Area: HLE (Humanity's Last Exam)

Rank #320 of 474 models. HLE score: 4.8%.

Strength Profile

Percentile score by analysis domain.

* Cost percentiles are inverted: lower input, output, and blended prices rank higher.
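The inversion noted above can be sketched as follows. This is a hypothetical percentile function, not the site's actual scoring code: for quality metrics a higher score beats more of the field, while for cost metrics a lower price does.

```python
def percentile(values: list[float], score: float, lower_is_better: bool = False) -> float:
    """Fraction of the field this score beats, scaled to 0-100."""
    if lower_is_better:
        beaten = sum(v > score for v in values)  # cheaper models outrank pricier ones
    else:
        beaten = sum(v < score for v in values)
    return 100.0 * beaten / len(values)

# Cheapest of four hypothetical blended prices lands at the 75th percentile.
print(percentile([0.35, 0.60, 1.10, 2.40], 0.35, lower_is_better=True))  # 75.0
```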

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
|---|---|---|---|
| Artificial Analysis Intelligence Index | overall | 18.4 | #264 |
| Artificial Analysis Coding Index | coding | 15.6 | #235 |
| Artificial Analysis Math Index | math | 19.3 | #211 |
| MMLU-Pro | reasoning | 80.9% | #96 |
| GPQA | reasoning | 67.1% | #227 |
| Humanity's Last Exam | reasoning | 4.8% | #320 |
| LiveCodeBench | coding | 39.7% | #185 |
| SciCode | coding, reasoning | 33.1% | #217 |
| MATH-500 | math | 88.9% | #78 |
| AIME | math | 39.0% | #72 |
| Output Speed | speed | 115.2 tok/s | #102 |
| Time to First Token | speed | 0.68 s | #83 |
| Blended Price | cost | $0.475/M | #113 |
| Input Price | cost | $0.350/M | #147 |
| Output Price | cost | $0.850/M | #103 |
| Value Index | cost, overall | 38.7 | #117 |
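The blended price can be reproduced from the input and output prices in the table. The page does not state the weighting, but a 3:1 input:output token blend (a common convention for price indices) exactly yields the listed figure:

```python
# Input/output prices from the table above ($ per 1M tokens).
input_price, output_price = 0.350, 0.850

# Assumed 3:1 input:output token weighting -- not confirmed by this page,
# but it reproduces the listed blended price exactly.
blended = 0.75 * input_price + 0.25 * output_price

print(f"${blended:.3f}/M")  # $0.475/M, matching the Blended Price row
```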