
Anthropic

Claude Opus 4.5 (Non-reasoning)

Claude Opus 4.5 (Non-reasoning) is the non-reasoning profile of Anthropic's Claude Opus 4.5, the high-capability end of the Claude lineup for difficult coding, agentic, research, and computer-use tasks. This page compares the exact benchmarked variant against other Claude and non-Claude models on capability, runtime, and price.

Operational Metrics

Output Speed: 50.3 tok/s
First Token: 1.24s
Blended Price: $10.00/M
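The blended figure is consistent with a 3:1 input:output token weighting applied to the per-million prices listed further down this page ($5.00/M input, $25.00/M output). Assuming that weighting convention, a minimal sketch:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of per-million-token prices.

    The 3:1 input:output weighting is an assumption inferred from the
    numbers on this page, not a documented formula.
    """
    total_weight = input_weight + output_weight
    return (input_weight * input_per_m + output_weight * output_per_m) / total_weight

# Prices from this page: $5.00/M input, $25.00/M output
print(blended_price(5.00, 25.00))  # → 10.0
```

With these inputs, (3 × 5.00 + 1 × 25.00) / 4 = 10.00, matching the blended price shown above.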

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Nov 24, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date

Strength: MMLU-Pro

Rank #5 across 345 models.

88.9%

Strength: SciCode

Rank #27 across 472 models.

47.0%

Strength: Coding

Rank #35 across 410 models.

42.9

Watch Area: Blended $

Rank #303 across 325 models.

$10.00/M

Watch Area: Output $

Rank #303 across 325 models.

$25.00/M

Watch Area: Input $

Rank #302 across 325 models.

$5.00/M

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
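One plausible reading of how these percentiles are derived (an assumption, not a documented method) is that each metric's 1-based rank is mapped onto a 0–100 scale within that metric's model count, with cost metrics already ranked cheapest-first so that the inversion is baked into the rank:

```python
def rank_to_percentile(rank: int, total: int) -> float:
    """Map a 1-based rank (1 = best) to a 0-100 percentile.

    Assumes cost metrics are pre-inverted upstream (cheapest model = rank 1),
    so no separate inversion flag is needed here.
    """
    if total < 2:
        return 100.0
    return 100.0 * (total - rank) / (total - 1)

print(round(rank_to_percentile(5, 345), 1))    # MMLU-Pro, rank #5 of 345 → 98.8
print(round(rank_to_percentile(303, 325), 1))  # Blended Price, rank #303 of 325 → 6.8
```

Under this assumption, the model's MMLU-Pro strength lands near the top of the scale while its blended price lands near the bottom, matching the strength/watch-area split above.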

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

Metric                                   Domain             Value        Rank
Artificial Analysis Intelligence Index   overall            43.1         #51
Artificial Analysis Coding Index         coding             42.9         #35
Artificial Analysis Math Index           math               62.7         #112
MMLU-Pro                                 reasoning          88.9%        #5
GPQA                                     reasoning          81.0%        #94
Humanity's Last Exam                     reasoning          12.9%        #112
LiveCodeBench                            coding             73.8%        #51
SciCode                                  coding, reasoning  47.0%        #27
Output Speed                             speed              50.3 tok/s   #225
Time to First Token                      speed              1.24s        #176
Blended Price                            cost               $10.00/M     #303
Input Price                              cost               $5.00/M      #302
Output Price                             cost               $25.00/M     #303
Value Index                              cost, overall      4.3          #296