
OpenAI

gpt-oss-120B (low)

gpt-oss-120B (low) is part of OpenAI's gpt-oss open-weight model family, aimed at developers who want strong language-model capability in local or self-hosted deployments. This profile is useful for checking how a deployment-friendly model compares with hosted frontier systems on quality, throughput, and price.


Operational Metrics

Output Speed: 216.3 tok/s
First Token: 0.50s
Blended Price: $0.263/M

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Aug 5, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
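The "n/a" entries above come from fields missing in the upstream payload. A minimal sketch of that lookup, assuming a JSON-like dict in which only `release_date` is present (the one field name the page confirms; the other keys are hypothetical):

```python
# Sketch: extracting queryable metadata from an upstream model payload.
# Only "release_date" is confirmed by this page; other keys are assumptions.
payload = {"release_date": "2025-08-05"}  # context window / modalities absent

def field(payload: dict, key: str) -> str:
    """Return the field value as a string, or "n/a" when the payload omits it."""
    value = payload.get(key)
    return "n/a" if value is None else str(value)

print("Release:", field(payload, "release_date"))          # → Release: 2025-08-05
print("Context Window:", field(payload, "context_window"))  # → Context Window: n/a
```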

Strength: Speed (rank #14 of 293 models, 216.3 tok/s)

Strength: TTFT (rank #47 of 293 models, 0.50s)

Strength: Value (rank #54 of 323 models, 93.2)

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
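The percentile placements charted here can be sketched with a linear rank-to-percentile mapping; the site's exact formula is not stated, so this mapping is an assumption:

```python
# Sketch: converting a 1-based rank into a 0-100 percentile, higher = better.
# The site's exact formula is unstated; this linear mapping is an assumption.
def percentile(rank: int, total: int) -> float:
    return round(100 * (total - rank) / (total - 1), 1)

# Speed: rank #14 of 293 models sits near the top of the distribution.
print(percentile(14, 293))  # → 95.5

# Cost metrics use the same formula unchanged, because price ranks are
# already sorted cheapest-first ("cost is inverted" at ranking time).
print(percentile(71, 293))  # → 76.0
```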

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
|---|---|---|---|
| Artificial Analysis Intelligence Index | overall | 24.5 | #194 |
| Artificial Analysis Coding Index | coding | 15.5 | #237 |
| Artificial Analysis Math Index | math | 66.7 | #103 |
| MMLU-Pro | reasoning | 77.5% | #149 |
| GPQA | reasoning | 67.2% | #226 |
| Humanity's Last Exam | reasoning | 5.2% | #274 |
| LiveCodeBench | coding | 70.7% | #66 |
| SciCode | coding, reasoning | 36.0% | #171 |
| Output Speed | speed | 216.3 tok/s | #14 |
| Time to First Token | speed | 0.50s | #47 |
| Blended Price | cost | $0.263/M | #71 |
| Input Price | cost | $0.150/M | #67 |
| Output Price | cost | $0.600/M | #83 |
| Value Index | cost, overall | 93.2 | #54 |
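The blended price above is consistent with a 3:1 input:output token weighting of the listed per-token prices. A minimal sketch of that arithmetic; the 3:1 ratio is an assumption, not stated on this page:

```python
# Sketch: deriving the blended price from the listed per-token prices.
# Assumes a 3:1 input:output token weighting, which is common for blended
# price indices but is not stated on this page.
input_price = 0.150   # $/M input tokens
output_price = 0.600  # $/M output tokens

blended = (3 * input_price + 1 * output_price) / 4
print(f"${blended:.4f}/M")  # ≈ $0.2625/M, matching the listed $0.263/M
```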