
Nous Research

Hermes 4 - Llama-3.1 70B (Reasoning)

This page is the Easy Benchmarks snapshot profile for Hermes 4 - Llama-3.1 70B (Reasoning), a Nous Research language model. Use it to compare the model's measured Artificial Analysis scores, output speed, time to first token, pricing, and relative ranking against other models in the local catalog.

Operational Metrics

Output Speed: 78.6 tok/s
Time to First Token: 0.59s
Blended Price: $0.198/M
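
The blended figure is consistent with a 3:1 input:output token weighting of the listed per-million prices: (3 × $0.130 + 1 × $0.400) / 4 ≈ $0.198/M. A minimal sketch of that calculation, assuming the 3:1 weighting (the upstream methodology may differ):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of per-million-token prices.

    The 3:1 input:output ratio is an assumption that reproduces the
    listed blended price; Artificial Analysis may weight differently.
    """
    total = input_weight + output_weight
    return (input_weight * input_per_m + output_weight * output_per_m) / total

# Hermes 4 - Llama-3.1 70B (Reasoning): $0.130/M input, $0.400/M output
print(round(blended_price(0.130, 0.400), 3))  # 0.198
```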

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Aug 27, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
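
A minimal sketch of reading these facts out of the upstream model payload, assuming it is a JSON object; only the release_date key is confirmed by the "API fields" note above, and the other key names are hypothetical stand-ins for fields that surface as "n/a" when absent:

```python
import json

# Hypothetical payload; only the release_date key is attested above.
payload = json.loads(
    '{"name": "Hermes 4 - Llama-3.1 70B (Reasoning)",'
    ' "release_date": "2025-08-27"}'
)

def field(p: dict, key: str) -> str:
    """Render a payload field, falling back to n/a when it is missing."""
    value = p.get(key)
    return str(value) if value is not None else "n/a"

print("Release:", field(payload, "release_date"))           # 2025-08-27
print("Context Window:", field(payload, "context_window"))  # n/a
print("Modalities:", field(payload, "modalities"))          # n/a
```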

Strengths (rank across 325 models)

| Strength  | Price    | Rank |
| --------- | -------- | ---- |
| Output $  | $0.400/M | #58  |
| Blended $ | $0.198/M | #59  |
| Input $   | $0.130/M | #59  |

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
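
A minimal sketch of one plausible rank-to-percentile conversion, assuming percentiles are derived from a model's rank among the 325 catalog models; the exact upstream formula is not stated, and for the price metrics the inversion is already folded into the ranking (rank #59 on blended price means only 58 models are cheaper):

```python
def percentile(rank: int, total: int = 325) -> float:
    """Share of the catalog placed at or below this model (rank 1 = best)."""
    return 100.0 * (total - rank) / (total - 1)

print(round(percentile(59), 1))   # blended price (cost-inverted) -> 82.1
print(round(percentile(297), 1))  # intelligence index            -> 8.6
```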

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 16.0 | #297 |
| Artificial Analysis Coding Index | coding | 14.4 | #250 |
| Artificial Analysis Math Index | math | 68.7 | #99 |
| MMLU-Pro | reasoning | 81.1% | #93 |
| GPQA | reasoning | 69.9% | #201 |
| Humanity's Last Exam | reasoning | 7.9% | #188 |
| LiveCodeBench | coding | 65.3% | #95 |
| SciCode | coding, reasoning | 34.1% | #201 |
| Output Speed | speed | 78.6 tok/s | #161 |
| Time to First Token | speed | 0.59s | #65 |
| Blended Price | cost | $0.198/M | #59 |
| Input Price | cost | $0.130/M | #59 |
| Output Price | cost | $0.400/M | #58 |
| Value Index | cost, overall | 80.8 | #68 |