Anthropic

Claude 4.5 Haiku (Reasoning)

Claude 4.5 Haiku (Reasoning) is the reasoning-enabled model in Anthropic's Claude Haiku tier, the faster and lighter Claude line designed for responsive assistants, automation, and high-volume workflows. This benchmark view shows how much quality it retains relative to larger Claude variants, alongside its speed and pricing tradeoffs.


Operational Metrics

Output Speed: 103.8 tok/s
Time to First Token: 12.26 s
Blended Price: $2.00/M tokens
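
These three numbers compose into a rough cost and latency picture. A minimal sketch follows, assuming the blended price is a 3:1 input:output token weighting (an assumption, though it reproduces the listed $2.00/M exactly from the $1.00/M input and $5.00/M output prices in the table below) and that wall-clock response time is time-to-first-token plus output tokens divided by streaming speed:

```python
# Sketch: derive blended price and end-to-end latency from the listed metrics.
# Assumption: blended price uses a 3:1 input:output token weighting, which
# reproduces the page's $2.00/M from its $1.00/M input and $5.00/M output prices.

INPUT_PRICE = 1.00      # $/M tokens (from the All Benchmarks table)
OUTPUT_PRICE = 5.00     # $/M tokens
TTFT_S = 12.26          # time to first token, seconds
OUTPUT_SPEED = 103.8    # tokens per second

def blended_price(input_price: float, output_price: float, ratio: float = 3.0) -> float:
    """Weighted price per 1M tokens, at `ratio` input tokens per output token."""
    return (ratio * input_price + output_price) / (ratio + 1)

def response_time(output_tokens: int) -> float:
    """Rough wall-clock seconds: wait for the first token, then stream the rest."""
    return TTFT_S + output_tokens / OUTPUT_SPEED

print(blended_price(INPUT_PRICE, OUTPUT_PRICE))  # 2.0 -> matches the listed $2.00/M
print(round(response_time(500), 1))              # ~17.1 s for a 500-token reply
```

The latency estimate makes the TTFT watch area below concrete: at 12.26 s to first token, the wait dominates short replies regardless of the healthy 103.8 tok/s streaming rate.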

Model Metadata

Queryable facts extracted from the upstream model payload.

Release Date: Oct 15, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
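
A minimal sketch of how such metadata might be read out of an upstream model payload. Only `release_date` is confirmed by this page; the payload shape and the `context_window` and `modalities` field names are hypothetical, shown to illustrate why absent fields render as "n/a":

```python
# Sketch: extract queryable metadata from an upstream model payload.
# Only `release_date` is a confirmed field on this page; `context_window`
# and `modalities` are hypothetical names used for illustration.
import json

payload = json.loads("""
{
  "name": "Claude 4.5 Haiku (Reasoning)",
  "release_date": "2025-10-15"
}
""")

def field(data: dict, key: str) -> str:
    """Return the field if the upstream payload carries it, else 'n/a'."""
    value = data.get(key)
    return str(value) if value is not None else "n/a"

print("Release:", field(payload, "release_date"))           # 2025-10-15
print("Context Window:", field(payload, "context_window"))  # n/a (absent upstream)
print("Modalities:", field(payload, "modalities"))          # n/a (absent upstream)
```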

Strengths

SciCode: 43.3% (rank #53 of 472 models)
Overall: 37.1 (rank #93 of 500 models)
Math: 83.7 (rank #52 of 269 models)

Watch Areas

Time to First Token: 12.26 s (rank #262 of 293 models)
Output Price: $5.00/M (rank #228 of 325 models)
Blended Price: $2.00/M (rank #223 of 325 models)

Strength Profile (chart): percentile score by analysis domain. Cost is inverted: lower input, output, and blended prices rank higher.

Benchmark Percentiles (chart): higher bars mean stronger relative placement.
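
One plausible reading of these charts, as a sketch: a percentile can be derived from each metric's rank and model count, with cost metrics sorted cheapest-first before ranking so that lower prices place higher. The page does not state its exact formula, so the one below is an assumption:

```python
# Sketch: rank -> percentile score, using the ranks and model counts listed
# on this page. The page's exact formula is not stated; this is one common
# convention (share of models ranked at or below this one, scaled to 0-100).

def percentile(rank: int, n_models: int) -> float:
    """Fraction of the field this model outranks, as a 0-100 score."""
    return 100.0 * (n_models - rank) / (n_models - 1)

print(round(percentile(93, 500), 1))   # Overall: ~81.6
print(round(percentile(52, 269), 1))   # Math: ~81.0
print(round(percentile(262, 293), 1))  # TTFT: ~10.6 (a watch area)
print(round(percentile(223, 325), 1))  # Blended Price: ~31.5
```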

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 37.1 | #93
Artificial Analysis Coding Index | coding | 32.6 | #99
Artificial Analysis Math Index | math | 83.7 | #52
MMLU-Pro | reasoning | 76.0% | #166
GPQA | reasoning | 67.2% | #225
Humanity's Last Exam | reasoning | 9.7% | #158
LiveCodeBench | coding | 61.5% | #114
SciCode | coding, reasoning | 43.3% | #53
Output Speed | speed | 103.8 tok/s | #115
Time to First Token | speed | 12.26 s | #262
Blended Price | cost | $2.00/M | #223
Input Price | cost | $1.00/M | #214
Output Price | cost | $5.00/M | #228
Value Index | cost, overall | 18.6 | #196

Prices are USD per 1M tokens.