Meta

Llama 4 Scout

Llama 4 Scout is part of Meta's Llama 4 family, the company's open-weight generation focused on multimodal intelligence and broad developer deployment. This page shows how the benchmarked variant ranks against hosted frontier models and other open-weight systems on quality, speed, and price.

Introducing Llama 4

Operational Metrics

Output Speed: 109.2 tok/s
First Token: 0.52s
Blended Price: $0.292/M
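The blended price appears consistent with the common 3:1 input:output token weighting over the per-million-token prices listed further down ($0.170/M input, $0.660/M output). A minimal sketch, assuming that ratio:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of per-1M-token prices (assumed 3:1 input:output mix)."""
    total_weight = input_weight + output_weight
    return (input_weight * input_per_m + output_weight * output_per_m) / total_weight

# Llama 4 Scout prices from the All Benchmarks table on this page.
blended = blended_price(0.170, 0.660)  # ~0.2925, matching the listed $0.292/M
```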

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Apr 5, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
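Pulling these facts out of the payload with an "n/a" fallback for absent fields could look like the sketch below. Only `release_date` is confirmed by the "API fields" note; the payload shape and other field names are illustrative assumptions.

```python
# Hypothetical payload shape -- only `release_date` is a confirmed API field.
payload = {"name": "Llama 4 Scout", "release_date": "2025-04-05"}

def fact(payload: dict, field: str) -> str:
    """Return a queryable fact, falling back to 'n/a' when the field is absent."""
    value = payload.get(field)
    return str(value) if value is not None else "n/a"

release = fact(payload, "release_date")     # "2025-04-05"
context = fact(payload, "context_window")   # "n/a", as shown in the metadata above
```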

Strength: TTFT

Rank #53 across 293 models.

0.52s

Strength: Input $

Rank #74 across 325 models.

$0.170/M

Strength: Blended $

Rank #79 across 325 models.

$0.292/M

Watch Area: Coding

Rank #353 across 410 models.

6.7

Watch Area: Math

Rank #221 across 269 models.

14.0

Watch Area: SciCode

Rank #386 across 472 models.

17.0%

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
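One way the percentile bars could be derived from the rank cards above is to take the share of the field a model outranks. This is an assumed convention, not documented on the page; note that the cost inversion is already baked into the ranks (cheaper models hold better ranks), so the same formula applies to every domain.

```python
def percentile(rank: int, total: int) -> float:
    """Map rank #1 of N to 100 and rank #N to 0 (assumed convention).

    Cost metrics need no special casing here: lower prices already
    receive better (smaller) rank numbers upstream.
    """
    return (total - rank) / (total - 1) * 100

ttft_pct = percentile(53, 293)     # strength card: TTFT, rank #53 of 293 (~82)
coding_pct = percentile(353, 410)  # watch area: Coding, rank #353 of 410 (~14)
```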

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 13.5 | #353
Artificial Analysis Coding Index | coding | 6.7 | #353
Artificial Analysis Math Index | math | 14.0 | #221
MMLU-Pro | reasoning | 75.2% | #175
GPQA | reasoning | 58.7% | #286
Humanity's Last Exam | reasoning | 4.3% | #372
LiveCodeBench | coding | 29.9% | #222
SciCode | coding, reasoning | 17.0% | #386
MATH-500 | math | 84.4% | #98
AIME | math | 28.3% | #88
Output Speed | speed | 109.2 tok/s | #110
Time to First Token | speed | 0.52s | #53
Blended Price | cost | $0.292/M | #79
Input Price | cost | $0.170/M | #74
Output Price | cost | $0.660/M | #90
Value Index | cost, overall | 46.2 | #104