Easy Benchmarks: LLM model index

Anthropic

Claude Sonnet 4.6 (Non-reasoning, Low Effort)

Claude Sonnet 4.6 (Non-reasoning, Low Effort) is an Anthropic model in the Claude Sonnet tier, the balanced tier aimed at strong coding, reasoning, and agent workflows at practical latency and cost. The profile below shows whether this specific configuration is a better fit for quality-, speed-, or value-sensitive use cases.

Operational Metrics

Output Speed: 51.5 tok/s
First Token: 1.09s
Blended Price: $6.00/M

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Feb 17, 2026
Context Window: n/a
Modalities: n/a
API fields: release_date
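A fallback-to-"n/a" rendering like the rows above can be sketched as follows. The payload shape, helper name, and the `context_window` / `modalities` field names are assumptions for illustration; only `release_date` is confirmed by the page.

```python
def metadata_row(payload: dict, field: str, label: str) -> str:
    """Render one metadata line, falling back to 'n/a' when the
    upstream model payload omits the field (hypothetical payload shape)."""
    value = payload.get(field)
    return f"{label}: {value if value is not None else 'n/a'}"

# Only release_date is present upstream; the other fields render as n/a.
payload = {"release_date": "Feb 17, 2026"}
print(metadata_row(payload, "release_date", "Release"))          # Release: Feb 17, 2026
print(metadata_row(payload, "context_window", "Context Window")) # Context Window: n/a
```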

Strength: Coding

Rank #33 across 410 models.

43.0

Strength: SciCode

Rank #47 across 472 models.

44.1%

Strength: Overall

Rank #56 across 500 models.

42.6

Watch Area: Blended $

Rank #297 across 325 models.

$6.00/M

Watch Area: Input $

Rank #297 across 325 models.

$3.00/M

Watch Area: Output $

Rank #295 across 325 models.

$15.00/M
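The $6.00/M blended figure is consistent with a 3:1 input:output token weighting over the $3.00/M input and $15.00/M output prices. A minimal sketch, assuming that ratio (the site's exact weighting is not stated on this page):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of per-million-token prices.
    The 3:1 input:output ratio is an assumed convention."""
    total = input_weight + output_weight
    return (input_per_m * input_weight + output_per_m * output_weight) / total

# (3 * $3.00 + 1 * $15.00) / 4 = $6.00/M, matching the listed blended price.
print(blended_price(3.00, 15.00))  # 6.0
```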

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
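One way to turn the ranks above into percentiles, with the cost inversion applied at the ranking step, can be sketched as follows. The exact percentile formula is an assumption; the page does not specify it.

```python
def rank_to_percentile(rank: int, total: int) -> float:
    """Map a 1-based, best-first rank among `total` models to a 0-100
    percentile, higher meaning stronger placement (assumed formula)."""
    if total < 2:
        return 100.0
    return 100.0 * (total - rank) / (total - 1)

def cost_rank(prices: list[float], price: float) -> int:
    """Rank a price among peers with the inversion applied:
    cheaper models sort first, so lower price earns a better rank."""
    return sorted(prices).index(price) + 1

# Coding rank #33 of 410 lands near the top; blended-price rank #297
# of 325 lands near the bottom, matching its "Watch Area" label.
print(round(rank_to_percentile(33, 410), 1))   # 92.2
print(round(rank_to_percentile(297, 325), 1))  # 8.6
```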

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 42.6 | #56 |
| Artificial Analysis Coding Index | coding | 43.0 | #33 |
| GPQA | reasoning | 79.7% | #102 |
| Humanity's Last Exam | reasoning | 10.8% | #138 |
| SciCode | coding, reasoning | 44.1% | #47 |
| Output Speed | speed | 51.5 tok/s | #218 |
| Time to First Token | speed | 1.09s | #150 |
| Blended Price | cost | $6.00/M | #297 |
| Input Price | cost | $3.00/M | #297 |
| Output Price | cost | $15.00/M | #295 |
| Value Index | cost, overall | 7.1 | #267 |