
Cohere

Command A

This page profiles Command A, a Cohere model aimed at enterprise language, retrieval, multilingual, and tool-using workflows. The benchmark data below compares the model's practical quality, latency, and price against both frontier chat models and other enterprise-oriented alternatives.

Cohere model releases

Operational Metrics

Output Speed: 50.7 tok/s
First Token: 0.44s
Blended Price: $4.38/M
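The blended figure can be reproduced from the input and output prices listed further down, assuming the common 3:1 input:output token weighting (the page does not state which ratio it actually uses):

```python
def blended_price(input_usd_per_m: float, output_usd_per_m: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted-average price per million tokens.

    The 3:1 input:output mix is an assumption, not documented on
    this page; adjust the ratios to match your own workload.
    """
    total = input_ratio + output_ratio
    return (input_ratio * input_usd_per_m
            + output_ratio * output_usd_per_m) / total

# Command A: $2.50/M input, $10.00/M output
print(round(blended_price(2.50, 10.00), 2))  # 4.38
```

A 3:1 mix gives (3 × 2.50 + 1 × 10.00) / 4 = $4.38/M, matching the figure shown above.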

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Mar 13, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date

Strength: TTFT

Rank #28 across 293 models (0.44s).

Watch Area: Value

Rank #303 across 323 models (index 3.1).

Watch Area: Input $

Rank #277 across 325 models ($2.50/M).

Watch Area: Math

Rank #227 across 269 models (index 13.0).

Strength Profile

[Chart: percentile score by analysis domain. Cost is inverted: lower input, output, and blended prices rank higher.]
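That cost inversion can be sketched as a percentile computation over the model population; the function and the sample prices below are illustrative assumptions, not the site's documented method:

```python
def percentile(value: float, population: list[float],
               invert: bool = False) -> float:
    """Percentage of the population this value beats (0-100).

    With invert=True, lower values score higher, which is how
    cost metrics are treated on this page. A sketch only; the
    site's exact percentile formula is not documented here.
    """
    if invert:
        beaten = sum(1 for v in population if v > value)
    else:
        beaten = sum(1 for v in population if v < value)
    return 100.0 * beaten / len(population)

# Hypothetical blended prices for a five-model population
prices = [1.00, 2.50, 4.38, 8.00, 20.00]
print(percentile(4.38, prices, invert=True))  # 40.0: beats the two pricier models
```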

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 13.5 | #352
Artificial Analysis Coding Index | coding | 9.9 | #318
Artificial Analysis Math Index | math | 13.0 | #227
MMLU-Pro | reasoning | 71.2% | #213
GPQA | reasoning | 52.7% | #322
Humanity's Last Exam | reasoning | 4.6% | #336
LiveCodeBench | coding | 28.7% | #234
SciCode | coding, reasoning | 28.1% | #274
MATH-500 | math | 81.9% | #104
AIME | math | 9.7% | #134
Output Speed | speed | 50.7 tok/s | #223
Time to First Token | speed | 0.44s | #28
Blended Price | cost | $4.38/M | #271
Input Price | cost | $2.50/M | #277
Output Price | cost | $10.00/M | #257
Value Index | cost, overall | 3.1 | #303