Artificial Analysis data

NVIDIA

Llama 3.1 Nemotron Nano 4B v1.1 (Reasoning)

Llama 3.1 Nemotron Nano 4B v1.1 (Reasoning) is an NVIDIA model derived from Meta's Llama 3.1 family: a compact open-weight reasoning variant produced by compressing (pruning and distilling) a larger Llama 3.1 base model. The profile helps compare this specific size and reasoning variant with newer open and proprietary models in the local snapshot.


Operational Metrics

Output Speed: n/a
First Token: n/a
Blended Price: n/a

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: May 20, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
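Since the page notes these facts are queryable from the upstream model payload, here is a minimal sketch of reading the one confirmed field, `release_date`. The payload shape and field fallback are assumptions; only the `release_date` key is attested by the page.

```python
import json

# Hypothetical upstream payload; only the release_date field is
# confirmed by this page ("API fields: release_date").
payload = json.loads("""
{
  "name": "Llama 3.1 Nemotron Nano 4B v1.1 (Reasoning)",
  "release_date": "2025-05-20"
}
""")

def get_release_date(model: dict) -> str:
    """Return the model's release date, or 'n/a' when the field is absent."""
    return model.get("release_date", "n/a")

print(get_release_date(payload))  # 2025-05-20
```

Missing fields fall back to "n/a", matching how the page renders the unavailable Context Window and Modalities entries.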

Strength: AIME

Rank #38 across 194 models.

70.7%

Strength: MATH-500

Rank #44 across 201 models.

94.7%

Watch Area: SciCode

Rank #422 across 472 models.

10.1%

Watch Area: MMLU-Pro

Rank #284 across 345 models.

55.6%

Watch Area: GPQA

Rank #389 across 478 models.

40.8%

Strength Profile

Percentile score by analysis domain.

Benchmark Percentiles

Higher bars mean stronger relative placement.
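A percentile can be derived from the rank/total pairs in the cards above. The exact convention the site uses is an assumption; this sketch treats the percentile as the share of models placed at or below this one (rank 1 = best).

```python
def percentile_from_rank(rank: int, total: int) -> float:
    """Share of ranked models this model places at or above,
    assuming rank 1 is best. One plausible convention; the site
    may compute its bars slightly differently."""
    return round(100 * (total - rank) / total, 1)

# Figures from the Strength / Watch Area cards above.
print(percentile_from_rank(38, 194))   # AIME strength -> 80.4
print(percentile_from_rank(422, 472))  # SciCode watch area -> 10.6
```

Under this convention, a Strength like AIME lands near the 80th percentile while a Watch Area like SciCode sits near the 11th, which matches the "higher bars mean stronger placement" reading.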

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 14.4 | #336
Artificial Analysis Math Index | math | 50.0 | #140
MMLU-Pro | reasoning | 55.6% | #284
GPQA | reasoning | 40.8% | #389
Humanity's Last Exam | reasoning | 5.1% | #291
LiveCodeBench | coding | 49.3% | #153
SciCode | coding, reasoning | 10.1% | #422
MATH-500 | math | 94.7% | #44
AIME | math | 70.7% | #38
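The benchmark rows above can be grouped by their domain tags to summarize placement per domain. This is a minimal sketch using the table's own figures; the per-domain "best rank" summary is an illustration, not something the site computes.

```python
from collections import defaultdict

# Rows from the All Benchmarks table (value strings kept as shown).
rows = [
    ("Artificial Analysis Intelligence Index", "overall", "14.4", 336),
    ("Artificial Analysis Math Index", "math", "50.0", 140),
    ("MMLU-Pro", "reasoning", "55.6%", 284),
    ("GPQA", "reasoning", "40.8%", 389),
    ("Humanity's Last Exam", "reasoning", "5.1%", 291),
    ("LiveCodeBench", "coding", "49.3%", 153),
    ("SciCode", "coding, reasoning", "10.1%", 422),
    ("MATH-500", "math", "94.7%", 44),
    ("AIME", "math", "70.7%", 38),
]

# Group by individual domain tag (SciCode counts toward both coding
# and reasoning) and keep the best, i.e. lowest, rank per domain.
best = defaultdict(lambda: (None, float("inf")))
for metric, domains, value, rank in rows:
    for domain in (d.strip() for d in domains.split(",")):
        if rank < best[domain][1]:
            best[domain] = (metric, rank)

for domain, (metric, rank) in sorted(best.items()):
    print(f"{domain}: best rank #{rank} ({metric})")
```

The grouping reflects the table's split-domain rows: math placement is carried by AIME, while reasoning is capped by MMLU-Pro despite weaker GPQA and SciCode results.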