
Mistral

Ministral 3 3B

This page profiles Ministral 3 3B, a compact model from Mistral AI, whose lineup spans the small Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The data here makes its multilingual, coding, cost, and latency tradeoffs easier to compare against models from other providers.


Operational Metrics

Output Speed: 287.6 tok/s
First Token: 0.30s
Blended Price: $0.100/M
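The blended price combines the input and output per-million-token prices into one figure. A minimal sketch of that computation, assuming a 3:1 input:output token weighting (the ratio commonly used by Artificial Analysis; this page does not state the ratio it uses):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of per-million-token prices.

    The 3:1 input:output weighting is an assumption, not something
    stated on this page.
    """
    total = input_weight + output_weight
    return (input_per_m * input_weight + output_per_m * output_weight) / total

# Ministral 3 3B lists $0.100/M for both input and output, so any
# weighting yields the same blended figure.
print(f"${blended_price(0.100, 0.100):.3f}/M")  # prints "$0.100/M"
```

Because input and output prices are identical here, the blended figure is insensitive to the chosen ratio; the weighting only matters for models with asymmetric pricing.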

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Dec 2, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date

Strength: TTFT
0.30s (rank #3 of 293 models)

Strength: Speed
287.6 tok/s (rank #6 of 293 models)

Strength: Output $
$0.100/M (rank #7 of 325 models)

Watch Area: Coding
4.8 (rank #366 of 410 models)

Watch Area: GPQA
35.8% (rank #407 of 478 models)

Watch Area: SciCode
14.4% (rank #398 of 472 models)

Strength Profile

[Chart: percentile score by analysis domain. Cost is inverted: lower input, output, and blended prices rank higher.]

Benchmark Percentiles

[Chart: percentile placement per benchmark; higher bars mean stronger relative placement.]
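The percentile placements can be reconstructed from the rank figures on this page. A hypothetical sketch, assuming percentile means the share of the field the model outranks, so rank #1 of N maps to 100 and rank #N maps to 0 (the site's exact formula is not stated):

```python
def percentile(rank: int, total: int) -> float:
    """Convert a 1-based rank among `total` models into a 0-100 percentile.

    Assumption: linear mapping where rank #1 -> 100.0 and rank #total -> 0.0.
    For cost metrics, ranks are already computed on inverted prices
    (lower price = better rank), so no extra inversion is needed here.
    """
    return 100.0 * (total - rank) / (total - 1)

# Figures from this page, under the assumed mapping:
print(round(percentile(3, 293), 1))    # TTFT strength (rank #3 of 293)
print(round(percentile(407, 478), 1))  # GPQA watch area (rank #407 of 478)
```

Under this mapping the TTFT rank lands near the top of the field while GPQA lands in the bottom fifth, matching the strength/watch-area split above.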

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 11.2 | #397
Artificial Analysis Coding Index | coding | 4.8 | #366
Artificial Analysis Math Index | math | 22.0 | #205
MMLU-Pro | reasoning | 52.4% | #289
GPQA | reasoning | 35.8% | #407
Humanity's Last Exam | reasoning | 5.3% | #267
LiveCodeBench | coding | 24.7% | #255
SciCode | coding, reasoning | 14.4% | #398
Output Speed | speed | 287.6 tok/s | #6
Time to First Token | speed | 0.30s | #3
Blended Price | cost | $0.100/M | #21
Input Price | cost | $0.100/M | #42
Output Price | cost | $0.100/M | #7
Value Index | cost, overall | 112.0 | #37
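Rows like these lend themselves to programmatic filtering, e.g. pulling every coding-domain metric. A minimal sketch (the row structure and field names are my own; the data is transcribed from a few rows of this table):

```python
from dataclasses import dataclass

@dataclass
class BenchmarkRow:
    """One row of the benchmark table: metric name, domain tags, value, rank."""
    metric: str
    domains: tuple[str, ...]
    value: str
    rank: int

# A subset of the table above, transcribed by hand.
ROWS = [
    BenchmarkRow("Artificial Analysis Coding Index", ("coding",), "4.8", 366),
    BenchmarkRow("LiveCodeBench", ("coding",), "24.7%", 255),
    BenchmarkRow("SciCode", ("coding", "reasoning"), "14.4%", 398),
    BenchmarkRow("Output Speed", ("speed",), "287.6 tok/s", 6),
]

def by_domain(rows: list[BenchmarkRow], domain: str) -> list[BenchmarkRow]:
    """Return the rows tagged with the given analysis domain."""
    return [r for r in rows if domain in r.domains]

for row in by_domain(ROWS, "coding"):
    print(row.metric, row.value, f"#{row.rank}")
```

Tagging rows with a tuple of domains (rather than a single string) keeps dual-domain metrics like SciCode, which the table files under both coding and reasoning, queryable from either side.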