
Mistral

Magistral Small 1

Magistral Small 1 is a Mistral AI model from a lineup that spans compact Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The local data makes its multilingual, coding, cost, and latency tradeoffs easier to compare against models from other providers.


Operational Metrics

Output Speed: n/a
First Token: n/a
Blended Price: -

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Jun 10, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
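The page states that only `release_date` is confirmed in the upstream payload. A minimal sketch of pulling these queryable fields out of that payload, assuming a flat JSON object; the `context_window` and `modalities` field names are hypothetical:

```python
def extract_metadata(payload: dict) -> dict:
    """Pull the queryable fields shown above, defaulting missing ones to "n/a".

    Only release_date is confirmed by the page ("API fields: release_date");
    the other two key names are assumptions for illustration.
    """
    return {
        "release_date": payload.get("release_date", "n/a"),
        "context_window": payload.get("context_window", "n/a"),  # hypothetical field
        "modalities": payload.get("modalities", "n/a"),          # hypothetical field
    }

meta = extract_metadata({"release_date": "2025-06-10"})
print(meta["release_date"])    # 2025-06-10
print(meta["context_window"])  # n/a
```

This mirrors what the page renders: the one confirmed field is filled, and everything else falls back to "n/a".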

Strength: MATH-500

96.3% (rank #33 of 201 models)

Strength: AIME

71.3% (rank #37 of 194 models)

Watch Area: Coding

11.1 (rank #298 of 410 models)

Watch Area: SciCode

24.1% (rank #325 of 472 models)

Strength Profile

Percentile score by analysis domain.

Benchmark Percentiles

Higher bars mean stronger relative placement.
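The percentile bars can be derived from each card's rank and cohort size. A sketch under one plausible convention (the share of models this model outranks; the page does not state its exact formula, so this is an assumption):

```python
def rank_to_percentile(rank: int, total: int) -> float:
    """Percent of the cohort ranked below this model.

    One plausible convention, not necessarily the one the page uses.
    """
    return (total - rank) / total * 100

# Cards above: MATH-500 rank #33 of 201, AIME #37 of 194, Coding #298 of 410.
print(round(rank_to_percentile(33, 201), 1))   # 83.6
print(round(rank_to_percentile(37, 194), 1))   # 80.9
print(round(rank_to_percentile(298, 410), 1))  # 27.3
```

Under this convention the math strengths land in the low-to-mid 80s while coding sits in the high 20s, which matches the strength/watch-area split above.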

All Benchmarks

| Metric | Domain | Value | Rank |
|---|---|---|---|
| Artificial Analysis Intelligence Index | overall | 16.8 | #283 |
| Artificial Analysis Coding Index | coding | 11.1 | #298 |
| Artificial Analysis Math Index | math | 41.3 | #155 |
| MMLU-Pro | reasoning | 74.6% | #183 |
| GPQA | reasoning | 64.1% | #249 |
| Humanity's Last Exam | reasoning | 7.2% | #205 |
| LiveCodeBench | coding | 51.4% | #145 |
| SciCode | coding, reasoning | 24.1% | #325 |
| MATH-500 | math | 96.3% | #33 |
| AIME | math | 71.3% | #37 |
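For querying the table programmatically, the rows can be modeled as simple records and filtered by domain. A sketch with the values transcribed from the table above (percent signs dropped; `best_in_domain` is an illustrative helper, not part of any site API):

```python
from typing import NamedTuple

class Benchmark(NamedTuple):
    metric: str
    domain: str
    value: float
    rank: int

# Rows transcribed from the benchmark table above.
ROWS = [
    Benchmark("Artificial Analysis Intelligence Index", "overall", 16.8, 283),
    Benchmark("Artificial Analysis Coding Index", "coding", 11.1, 298),
    Benchmark("Artificial Analysis Math Index", "math", 41.3, 155),
    Benchmark("MMLU-Pro", "reasoning", 74.6, 183),
    Benchmark("GPQA", "reasoning", 64.1, 249),
    Benchmark("Humanity's Last Exam", "reasoning", 7.2, 205),
    Benchmark("LiveCodeBench", "coding", 51.4, 145),
    Benchmark("SciCode", "coding, reasoning", 24.1, 325),
    Benchmark("MATH-500", "math", 96.3, 33),
    Benchmark("AIME", "math", 71.3, 37),
]

def best_in_domain(domain: str) -> Benchmark:
    """Return the row with the best (lowest) rank within a domain."""
    return min((r for r in ROWS if domain in r.domain), key=lambda r: r.rank)

print(best_in_domain("math").metric)    # MATH-500
print(best_in_domain("coding").metric)  # LiveCodeBench
```

Sorting by rank rather than raw value keeps percentage metrics and index scores comparable within a domain.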