Meta

Llama 3.2 Instruct 1B

Llama 3.2 Instruct 1B is a 1B-parameter instruction-tuned model in Meta's Llama 3 family, the company's widely deployed open-weight LLM series for assistants, retrieval, coding, and self-hosted applications. This profile compares this specific size and instruction variant against newer open and proprietary models in the local snapshot.

Announcement: Introducing Meta Llama 3

Operational Metrics

Output Speed: 97.7 tok/s
First Token: 0.65 s
Blended Price: $0.100/M

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Sep 25, 2024
Context Window: n/a
Modalities: n/a
API fields: release_date
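The metadata above comes from an upstream JSON payload in which only release_date is populated here. A minimal sketch of rendering such a payload, assuming a hypothetical field layout (the exact schema is not shown on this page):

```python
import json

# Hypothetical payload shape; field names beyond release_date are assumptions.
payload = json.loads("""
{
  "name": "Llama 3.2 Instruct 1B",
  "release_date": "2024-09-25",
  "context_window": null,
  "modalities": null
}
""")

def fmt(value, fallback="n/a"):
    """Render a possibly-missing payload field for display."""
    return fallback if value is None else str(value)

print("Release:", fmt(payload.get("release_date")))        # Release: 2024-09-25
print("Context Window:", fmt(payload.get("context_window")))  # Context Window: n/a
print("Modalities:", fmt(payload.get("modalities")))          # Modalities: n/a
```

Missing fields render as "n/a", matching the Context Window and Modalities rows above.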

Strength: Output Price
$0.100/M (rank #6 of 325 models)

Strength: Blended Price
$0.100/M (rank #20 of 325 models)

Strength: Input Price
$0.100/M (rank #36 of 325 models)

Watch Area: GPQA
19.6% (rank #475 of 478 models)

Watch Area: Math Index
0.0 (rank #267 of 269 models)

Watch Area: LiveCodeBench
1.9% (rank #340 of 343 models)

Strength Profile

[Chart: percentile score by analysis domain. Cost is inverted: lower input, output, and blended prices rank higher.]
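The rank-to-percentile placement behind these charts can be illustrated with a simple conversion, where rank 1 of N maps to the 100th percentile; this is a sketch of the idea, not the site's confirmed formula:

```python
def percentile_from_rank(rank: int, total: int) -> float:
    """Convert a 1-based rank (1 = best) into a 0-100 percentile.

    For inverted cost metrics, a lower price already yields a better
    rank, so the same conversion applies unchanged.
    """
    return 100.0 * (total - rank) / (total - 1)

# Blended Price rank #20 of 325: low price -> good rank -> high percentile.
print(round(percentile_from_rank(20, 325), 1))   # 94.1
# GPQA rank #475 of 478: near-bottom placement.
print(round(percentile_from_rank(475, 478), 1))  # 0.6
```

This matches the profile's pattern: tall bars on the cost metrics, short bars on the reasoning benchmarks.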

Benchmark Percentiles

[Chart: percentile per benchmark; higher bars mean stronger relative placement.]

All Benchmarks

Metric                                    Domain             Value        Rank
Artificial Analysis Intelligence Index    overall            6.3          #492
Artificial Analysis Coding Index          coding             0.6          #406
Artificial Analysis Math Index            math               0.0          #267
MMLU-Pro                                  reasoning          20.0%        #340
GPQA                                      reasoning          19.6%        #475
Humanity's Last Exam                      reasoning          5.3%         #266
LiveCodeBench                             coding             1.9%         #340
SciCode                                   coding, reasoning  1.7%         #466
MATH-500                                  math               14.0%        #199
AIME                                      math               0.0%         #189
Output Speed                              speed              97.7 tok/s   #124
Time to First Token                       speed              0.65 s       #79
Blended Price                             cost               $0.100/M     #20
Input Price                               cost               $0.100/M     #36
Output Price                              cost               $0.100/M     #6
Value Index                               cost, overall      63.0         #86
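Blended price is typically a weighted average of input and output token prices; a common convention weights input 3:1 over output, though this page does not state its weighting. A sketch under that assumption:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of input and output token prices ($/M tokens).

    The 3:1 input:output weighting is an assumed convention, not
    confirmed by this profile.
    """
    total = input_weight + output_weight
    return (input_per_m * input_weight + output_per_m * output_weight) / total

# With input and output both at $0.100/M, any weighting yields $0.100/M,
# which is why all three cost rows above show the same figure.
print(f"${blended_price(0.100, 0.100):.3f}/M")  # $0.100/M
```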