Data source: Artificial Analysis

Google

Gemma 4 26B A4B (Reasoning)

Gemma 4 26B A4B (Reasoning) is an open-weight model profile in Google's Gemma family, aimed at developers who want deployable Google-developed models rather than hosted, Gemini-only access. The benchmark data is most useful for comparing its efficiency and capability against similarly sized open and open-weight alternatives.

Operational Metrics

Output Speed: n/a
First Token: n/a
Blended Price: $0.198/M

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Apr 2, 2026
Context Window: n/a
Modalities: n/a
API fields: release_date
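The metadata above comes from an upstream JSON payload; the page names `release_date` as the one confirmed API field. A minimal sketch of reading such a payload, where the payload shape and the other keys are assumptions and missing fields fall back to the "n/a" the page displays:

```python
import json
from datetime import datetime

# Hypothetical payload; only the release_date field is confirmed by the page.
payload = json.loads("""
{
  "name": "Gemma 4 26B A4B (Reasoning)",
  "release_date": "2026-04-02"
}
""")

def field(data: dict, key: str) -> str:
    """Return a display value, falling back to 'n/a' for absent keys."""
    value = data.get(key)
    return "n/a" if value is None else str(value)

release = field(payload, "release_date")
context_window = field(payload, "context_window")  # absent key -> "n/a"

# Render the release date the way the page does, e.g. "Apr 2, 2026".
dt = datetime.strptime(release, "%Y-%m-%d")
pretty = f"{dt:%b} {dt.day}, {dt.year}"
```

The fallback mirrors the "n/a" entries shown for Context Window and Modalities above.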

Strength: Value

Rank #26 across 323 models, scoring 157.6.

Strength: Output $

Rank #50 across 325 models, at $0.400/M.

Strength: HLE

Rank #79 across 474 models, scoring 18.3%.

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
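The inversion in the footnote can be illustrated with a rank-based percentile: for cost metrics, a *lower* price should land in a *higher* percentile. A sketch with made-up prices (the population below is hypothetical, not Artificial Analysis data):

```python
def percentile(value: float, population: list[float], invert: bool = False) -> float:
    """Percent of the population this value beats (0-100).

    With invert=True, smaller values beat more of the population,
    matching the note that lower input, output, and blended
    prices rank higher.
    """
    if invert:
        beaten = sum(1 for v in population if value < v)
    else:
        beaten = sum(1 for v in population if value > v)
    return 100.0 * beaten / len(population)

prices = [0.10, 0.198, 0.50, 1.25, 3.00]  # hypothetical blended $/M prices
p = percentile(0.198, prices, invert=True)  # beats the 3 costlier models
```

Here $0.198/M beats three of the five hypothetical prices, so it lands at the 60th percentile despite being a small number.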

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

Metric                                   Domain              Value      Rank
Artificial Analysis Intelligence Index   overall             31.2       #134
Artificial Analysis Coding Index         coding              22.4       #173
GPQA                                     reasoning           79.2%      #108
Humanity's Last Exam                     reasoning           18.3%      #79
SciCode                                  coding, reasoning   40.0%      #96
Blended Price                            cost                $0.198/M   #57
Input Price                              cost                $0.130/M   #57
Output Price                             cost                $0.400/M   #50
Value Index                              cost, overall       157.6      #26
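The three price rows are mutually consistent if the blended price is a 3:1 input:output token mix, a common convention for blended pricing (an assumption here; the page does not state the weighting):

```python
input_price = 0.130   # $/M input tokens, from the table
output_price = 0.400  # $/M output tokens, from the table

# Assumed 3:1 input:output token weighting for the blended figure.
blended = (3 * input_price + output_price) / 4
# -> 0.1975, which matches the table's $0.198/M after rounding
```

If the check failed, the blend would use a different token ratio; with these numbers the 3:1 assumption reproduces the listed blended price exactly.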