Artificial Analysis data

Mistral

Mistral 7B Instruct

Mistral 7B Instruct is one model profile in Mistral AI's lineup, which spans compact Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The local data makes its multilingual, coding, cost, and latency tradeoffs easier to compare against models from other providers.


Operational Metrics

Output Speed: 156.9 tok/s
First Token: 0.36s
Blended Price: $0.250/M
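A blended price is a weighted average of the input and output per-million-token prices. The sketch below assumes a 3:1 input:output token weighting, which is a common convention but not stated on this page; with input and output both at $0.250/M here, any weighting gives the same result.

```python
def blended_price(input_price: float, output_price: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of per-million-token prices.

    The 3:1 input:output weighting is an assumption about the upstream
    methodology, not a fact taken from this page.
    """
    total = input_weight + output_weight
    return (input_price * input_weight + output_price * output_weight) / total

# Input and output are both $0.250/M for this model, so the blend is $0.250/M.
print(blended_price(0.250, 0.250))
```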

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Sep 27, 2023
Context Window: n/a
Modalities: n/a
API fields: release_date
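The metadata above is extracted from an upstream model payload; missing fields render as "n/a". A minimal sketch of that extraction, assuming a hypothetical JSON payload shape (only the `release_date` field name is confirmed by this page):

```python
import json

# Hypothetical payload; only "release_date" is named as an API field above.
payload = json.loads("""{
  "name": "Mistral 7B Instruct",
  "release_date": "2023-09-27"
}""")

def field_or_na(payload: dict, key: str) -> str:
    # Fields absent from the payload display as "n/a", as on this page.
    value = payload.get(key)
    return str(value) if value is not None else "n/a"

print(field_or_na(payload, "release_date"))    # present in the payload
print(field_or_na(payload, "context_window"))  # absent, so "n/a"
```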

Strength: TTFT (0.36s), rank #14 of 293 models.
Strength: Output $ ($0.250/M), rank #33 of 325 models.
Strength: Speed (156.9 tok/s), rank #50 of 293 models.

Watch Area: GPQA (17.7%), rank #476 of 478 models.
Watch Area: MATH-500 (12.1%), rank #200 of 201 models.
Watch Area: SciCode (2.4%), rank #463 of 472 models.

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
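The cost inversion above means a lower price yields a higher percentile. One way to sketch that, assuming percentile is simply the share of models with a strictly higher price (an illustrative formula, not necessarily the site's exact one):

```python
def cost_percentile(price: float, all_prices: list[float]) -> float:
    """Percentile where *lower* prices rank higher (inverted cost axis).

    Illustrative only: score is the fraction of models that are more
    expensive than this one, scaled to 0..100.
    """
    more_expensive = sum(1 for p in all_prices if p > price)
    return 100.0 * more_expensive / len(all_prices)

prices = [0.10, 0.25, 0.50, 1.00, 2.00]
print(cost_percentile(0.25, prices))  # cheaper than 3 of 5 models -> 60.0
```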

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

Metric                                    Domain             Value         Rank
Artificial Analysis Intelligence Index    overall            7.4           #480
MMLU-Pro                                  reasoning          24.5%         #338
GPQA                                      reasoning          17.7%         #476
Humanity's Last Exam                      reasoning          4.3%          #375
LiveCodeBench                             coding             4.6%          #334
SciCode                                   coding, reasoning  2.4%          #463
MATH-500                                  math               12.1%         #200
AIME                                      math               0.0%          #190
Output Speed                              speed              156.9 tok/s   #50
Time to First Token                       speed              0.36s         #14
Blended Price                             cost               $0.250/M      #67
Input Price                               cost               $0.250/M      #113
Output Price                              cost               $0.250/M      #33
Value Index                               cost, overall      29.6          #157