Easy Benchmarks: LLM model index
Data source: Artificial Analysis

Meta

Llama 3.2 Instruct 3B

Llama 3.2 Instruct 3B is a 3B-parameter instruction-tuned model from Meta's Llama 3.2 release, part of Meta's widely deployed open-weight Llama series used for assistants, retrieval, coding, and self-hosted applications. This profile compares this specific size and instruction variant against newer open and proprietary models in the local snapshot.


Operational Metrics

Output Speed: 52.2 tok/s
First Token: 0.66s
Blended Price: $0.150/M
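The page does not state how the blended price is computed; a minimal sketch, assuming the common 3:1 input:output token weighting (the `ratio` parameter and function name are illustrative, not from the upstream API):

```python
def blended_price(input_price: float, output_price: float, ratio: float = 3.0) -> float:
    """Blend per-million-token prices with an assumed 3:1 input:output token mix."""
    return (ratio * input_price + output_price) / (ratio + 1.0)

# Both prices are $0.150/M for this model, so any weighting yields $0.150/M.
print(f"${blended_price(0.150, 0.150):.3f}/M")  # -> $0.150/M
```

With equal input and output prices the weighting is irrelevant, which is why all three price metrics on this page show the same $0.150/M figure.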

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Sep 25, 2024
Context Window: n/a
Modalities: n/a
API fields: release_date
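Extraction of these fields from the upstream payload might look like the sketch below. The payload shape and field names other than `release_date` are assumptions for illustration; only `release_date` is confirmed by the API-fields note above.

```python
def extract_metadata(payload: dict) -> dict:
    """Pull the queryable fields out of a model payload, defaulting to 'n/a'."""
    fields = ("release_date", "context_window", "modalities")
    return {field: payload.get(field) or "n/a" for field in fields}

# Hypothetical payload: upstream only populates release_date for this model.
meta = extract_metadata({"release_date": "2024-09-25"})
print(meta)  # {'release_date': '2024-09-25', 'context_window': 'n/a', 'modalities': 'n/a'}
```

Missing or empty fields fall back to "n/a", matching how the metadata card above renders them.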

Strengths (pricing):

* Output $: Rank #13 across 325 models ($0.150/M)
* Blended $: Rank #33 across 325 models ($0.150/M)
* Input $: Rank #68 across 325 models ($0.150/M)

Watch Areas (benchmarks):

* GPQA: Rank #463 across 478 models (25.5%)
* MMLU-Pro: Rank #328 across 345 models (34.7%)
* Math Index: Rank #255 across 269 models (3.3)

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
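The page does not state how a rank becomes a percentile bar; a minimal sketch under one plausible convention (linear interpolation from rank and model count, with the cost inversion already baked into the price ranks):

```python
def percentile(rank: int, total: int) -> float:
    """Percent of ranked models this one places at or above (rank #1 -> 100.0)."""
    return 100.0 * (total - rank) / (total - 1)

# Watch area: GPQA rank #463 of 478 lands near the bottom.
print(round(percentile(463, 478), 1))  # -> 3.1
# Strength: output price rank #13 of 325 lands near the top
# (price ranks are already inverted, so cheaper models rank better).
print(round(percentile(13, 325), 1))   # -> 96.3
```

Because price ranks are computed on inverted values, a low price yields a high percentile without any extra handling here.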

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 9.7 | #427 |
| Artificial Analysis Math Index | math | 3.3 | #255 |
| MMLU-Pro | reasoning | 34.7% | #328 |
| GPQA | reasoning | 25.5% | #463 |
| Humanity's Last Exam | reasoning | 5.2% | #277 |
| LiveCodeBench | coding | 8.3% | #323 |
| SciCode | coding, reasoning | 5.2% | #445 |
| MATH-500 | math | 48.9% | #174 |
| AIME | math | 6.7% | #149 |
| Output Speed | speed | 52.2 tok/s | #216 |
| Time to First Token | speed | 0.66s | #80 |
| Blended Price | cost | $0.150/M | #33 |
| Input Price | cost | $0.150/M | #68 |
| Output Price | cost | $0.150/M | #13 |
| Value Index | cost, overall | 64.7 | #83 |