
Value Index

Overall intelligence divided by blended price: higher values mean more benchmark score per dollar.

Value Index is derived inside Easy Benchmarks. It divides the Artificial Analysis Intelligence Index by the blended price per million tokens, making it useful for finding inexpensive models with strong broad benchmark performance.

Test type: derived price-quality metric.
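The division above can be sketched in a few lines of Python. This is a minimal sketch, not the site's actual implementation: the `value_index` helper and the intelligence-index input of 10.5 are assumptions, chosen because dividing 10.5 by the leader's listed $0.020/M blended price reproduces its listed Value of 525.0.

```python
def value_index(intelligence_index: float, blended_price_per_m: float) -> float:
    """Derived Value Index: broad benchmark score per dollar.

    blended_price_per_m is the blended USD price per million tokens,
    as shown in the leaderboard's Blended Price column.
    """
    if blended_price_per_m <= 0:
        raise ValueError("blended price must be positive")
    return intelligence_index / blended_price_per_m

# Hypothetical input: an intelligence index of 10.5 at $0.020/M
# reproduces the table leader's 525.0 under this formula.
print(round(value_index(10.5, 0.020), 1))  # 525.0
```

Note the consequence of the formula: a very cheap model with a modest intelligence score can outrank a far stronger but pricier one, which is why small models dominate the top of this leaderboard.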

Coverage

323 models have this metric.

Current leader: Qwen3.5 0.8B (Reasoning), with a Value Index of 525.0.

Project links

This is a local derived metric, not an upstream Artificial Analysis benchmark. See the Artificial Analysis methodology for how the underlying Intelligence Index is constructed.

Top Value Models

The top models, ranked by Value Index.

Leaderboard

| Rank | Model | Creator | Value | Speed | Blended Price |
|------|-------|---------|-------|-------|---------------|
| #1 | Qwen3.5 0.8B (Reasoning) | Alibaba | 525.0 | n/a | $0.020/M |
| #2 | Qwen3.5 0.8B (Non-reasoning) | Alibaba | 495.0 | 273.6 tok/s | $0.020/M |
| #3 | Qwen3.5 4B (Reasoning) | Alibaba | 451.7 | 204.8 tok/s | $0.060/M |
| #4 | Qwen3.5 2B (Reasoning) | Alibaba | 407.5 | n/a | $0.040/M |
| #5 | Qwen3.5 4B (Non-reasoning) | Alibaba | 376.7 | 200.2 tok/s | $0.060/M |
| #6 | Qwen3.5 2B (Non-reasoning) | Alibaba | 367.5 | 227 tok/s | $0.040/M |
| #7 | Qwen3.5 9B (Reasoning) | Alibaba | 286.7 | 62.9 tok/s | $0.113/M |
| #8 | MiMo-V2-Flash (Feb 2026) | Xiaomi | 276.7 | 120.6 tok/s | $0.150/M |
| #9 | DeepSeek V4 Flash (Reasoning, Max Effort) | DeepSeek | 265.7 | 77.4 tok/s | $0.175/M |
| #10 | MiMo-V2-Flash (Reasoning) | Xiaomi | 261.3 | 118.8 tok/s | $0.150/M |
| #11 | DeepSeek V4 Flash (Reasoning, High Effort) | DeepSeek | 256.6 | n/a | $0.175/M |
| #12 | Gemma 3n E4B Instruct | Google | 256.0 | 15.3 tok/s | $0.025/M |
| #13 | NVIDIA Nemotron 3 Nano 30B A3B (Reasoning) | NVIDIA | 253.1 | 154.8 tok/s | $0.096/M |
| #14 | Step 3.5 Flash | StepFun | 252.0 | 123.6 tok/s | $0.150/M |
| #15 | gpt-oss-20B (high) | OpenAI | 245.0 | 242.3 tok/s | $0.100/M |
| #16 | NVIDIA Nemotron Nano 9B V2 (Reasoning) | NVIDIA | 211.4 | 121.6 tok/s | $0.070/M |
| #17 | MiMo-V2-Flash (Non-reasoning) | Xiaomi | 202.7 | 116.7 tok/s | $0.150/M |
| #18 | LFM2 24B A2B | Liquid AI | 201.9 | 196.9 tok/s | $0.052/M |
| #19 | GLM-4.7-Flash (Reasoning) | Z AI | 198.0 | 110.5 tok/s | $0.152/M |
| #20 | Granite 4.1 8B | IBM | 196.8 | 134.6 tok/s | $0.063/M |
| #21 | GPT-5 nano (high) | OpenAI | 194.2 | 136 tok/s | $0.138/M |
| #22 | gpt-oss-20B (low) | OpenAI | 192.6 | 249.7 tok/s | $0.108/M |
| #23 | GPT-5 nano (medium) | OpenAI | 187.7 | 150.3 tok/s | $0.138/M |
| #24 | Ling 2.6 Flash | InclusionAI | 174.7 | 206 tok/s | $0.150/M |
| #25 | Nova Micro | Amazon | 168.9 | 332.1 tok/s | $0.061/M |
| #26 | Gemma 4 26B A4B (Reasoning) | Google | 157.6 | n/a | $0.198/M |
| #27 | NVIDIA Nemotron Nano 9B V2 (Non-reasoning) | NVIDIA | 153.5 | 153.3 tok/s | $0.086/M |
| #28 | GLM-4.7-Flash (Non-reasoning) | Z AI | 145.4 | 89.6 tok/s | $0.152/M |
| #29 | Grok 4.1 Fast (Reasoning) | xAI | 140.4 | 140.9 tok/s | $0.275/M |
| #30 | Qwen2.5 Turbo | Alibaba | 137.9 | 77.7 tok/s | $0.087/M |
| #31 | DeepSeek V3.2 (Reasoning) | DeepSeek | 132.4 | n/a | $0.315/M |
| #32 | Grok 4 Fast (Reasoning) | xAI | 127.6 | 76.2 tok/s | $0.275/M |
| #33 | gpt-oss-120B (high) | OpenAI | 126.6 | 212.3 tok/s | $0.263/M |
| #34 | Gemini 2.5 Flash-Lite Preview (Sep '25) (Reasoning) | Google | 123.4 | n/a | $0.175/M |
| #35 | Nova Lite | Amazon | 121.0 | 186.8 tok/s | $0.105/M |
| #36 | Llama 3.1 Instruct 8B | Meta | 118.0 | 164.4 tok/s | $0.100/M |
| #37 | Ministral 3 3B | Mistral | 112.0 | 287.6 tok/s | $0.100/M |
| #38 | Gemini 2.5 Flash-Lite Preview (Sep '25) (Non-reasoning) | Google | 110.9 | n/a | $0.175/M |
| #39 | Llama Nemotron Super 49B v1.5 (Reasoning) | NVIDIA | 106.9 | 50.8 tok/s | $0.175/M |
| #40 | Mistral Small 4 (Reasoning) | Mistral | 105.7 | 149.5 tok/s | $0.263/M |
| #41 | DeepSeek V3.2 Exp (Reasoning) | DeepSeek | 104.4 | n/a | $0.315/M |
| #42 | DeepSeek V3.2 (Non-reasoning) | DeepSeek | 101.9 | n/a | $0.315/M |
| #43 | Devstral Small (Jul '25) | Mistral | 101.3 | 194.2 tok/s | $0.150/M |
| #44 | Granite 4.0 H Small | IBM | 100.9 | 238.9 tok/s | $0.107/M |
| #45 | Mistral Small 3.2 | Mistral | 100.7 | 153.8 tok/s | $0.150/M |
| #46 | Gemini 2.5 Flash-Lite (Reasoning) | Google | 100.6 | 243.6 tok/s | $0.175/M |
| #47 | GPT-5 nano (minimal) | OpenAI | 100.0 | 139.1 tok/s | $0.138/M |
| #48 | Ministral 3 8B | Mistral | 98.7 | 157.6 tok/s | $0.150/M |
| #49 | Llama 2 Chat 7B | Meta | 97.0 | 99.7 tok/s | $0.100/M |
| #50 | Mistral Small 3.1 | Mistral | 96.7 | 138.8 tok/s | $0.150/M |
| #51 | GPT-5.4 nano (xhigh) | OpenAI | 95.0 | 160.3 tok/s | $0.463/M |
| #52 | MiniMax-M2.7 | MiniMax | 94.5 | 43.9 tok/s | $0.525/M |
| #53 | Qwen3.5 Omni Flash | Alibaba | 94.2 | 190.4 tok/s | $0.275/M |
| #54 | gpt-oss-120B (low) | OpenAI | 93.2 | 216.3 tok/s | $0.263/M |
| #55 | Grok 3 mini Reasoning (high) | xAI | 91.7 | 215.5 tok/s | $0.350/M |
| #56 | Llama 3 Instruct 8B | Meta | 91.4 | 82.2 tok/s | $0.070/M |
| #57 | DeepSeek V3.2 Exp (Non-reasoning) | DeepSeek | 90.2 | n/a | $0.315/M |
| #58 | Mercury 2 | Inception | 87.5 | 820.2 tok/s | $0.375/M |
| #59 | NVIDIA Nemotron 3 Super 120B A12B (Reasoning) | NVIDIA | 87.4 | 162.5 tok/s | $0.412/M |
| #60 | Grok 4.1 Fast (Non-reasoning) | xAI | 85.8 | 112.1 tok/s | $0.275/M |
| #61 | Mistral Small 3 | Mistral | 84.7 | 135.9 tok/s | $0.150/M |
| #62 | Grok 4 Fast (Non-reasoning) | xAI | 84.0 | 77.4 tok/s | $0.275/M |
| #63 | Seed-OSS-36B-Instruct | ByteDance Seed | 84.0 | 40 tok/s | $0.300/M |
| #64 | Llama Nemotron Super 49B v1.5 (Non-reasoning) | NVIDIA | 83.4 | 51.3 tok/s | $0.175/M |
| #65 | KAT Coder Pro V2 | KwaiKAT | 83.4 | 110.7 tok/s | $0.525/M |
| #66 | Granite 3.3 8B (Non-reasoning) | IBM | 82.4 | 410.5 tok/s | $0.085/M |
| #67 | GPT-5.4 nano (medium) | OpenAI | 82.3 | 153.4 tok/s | $0.463/M |
| #68 | Hermes 4 - Llama-3.1 70B (Reasoning) | Nous Research | 80.8 | 78.6 tok/s | $0.198/M |
| #69 | Trinity Large Thinking | Arcee AI | 80.8 | 124.6 tok/s | $0.395/M |
| #70 | Ministral 3 14B | Mistral | 80.0 | 121.6 tok/s | $0.200/M |
| #71 | MiniMax-M2.5 | MiniMax | 79.8 | 79.7 tok/s | $0.525/M |
| #72 | Solar Mini | Upstage | 79.3 | 41.7 tok/s | $0.150/M |
| #73 | Qwen3.6 35B A3B (Reasoning) | Alibaba | 78.1 | 191.8 tok/s | $0.557/M |
| #74 | MiniMax-M2.1 | MiniMax | 75.0 | 84.8 tok/s | $0.525/M |
| #75 | GPT-4.1 nano | OpenAI | 74.3 | 125.2 tok/s | $0.175/M |
| #76 | Gemini 2.5 Flash-Lite (Non-reasoning) | Google | 72.6 | 239.9 tok/s | $0.175/M |
| #77 | Mistral Small 4 (Non-reasoning) | Mistral | 70.7 | 139.5 tok/s | $0.263/M |
| #78 | Gemini 2.0 Flash (Feb '25) | Google | 70.3 | n/a | $0.263/M |
| #79 | MiniMax-M2 | MiniMax | 68.8 | 83.5 tok/s | $0.525/M |
| #80 | KAT-Coder-Pro V1 | KwaiKAT | 68.6 | 117.1 tok/s | $0.525/M |
#26Gemma 4 26B A4B (Reasoning)Google157.6n/a$0.198/M
#27NVIDIA Nemotron Nano 9B V2 (Non-reasoning)NVIDIA153.5153.3 tok/s$0.086/M
#28GLM-4.7-Flash (Non-reasoning)Z AI145.489.6 tok/s$0.152/M
#29Grok 4.1 Fast (Reasoning)xAI140.4140.9 tok/s$0.275/M
#30Qwen2.5 TurboAlibaba137.977.7 tok/s$0.087/M
#31DeepSeek V3.2 (Reasoning)DeepSeek132.4n/a$0.315/M
#32Grok 4 Fast (Reasoning)xAI127.676.2 tok/s$0.275/M
#33gpt-oss-120B (high)OpenAI126.6212.3 tok/s$0.263/M
#34Gemini 2.5 Flash-Lite Preview (Sep '25) (Reasoning)Google123.4n/a$0.175/M
#35Nova LiteAmazon121.0186.8 tok/s$0.105/M
#36Llama 3.1 Instruct 8BMeta118.0164.4 tok/s$0.100/M
#37Ministral 3 3BMistral112.0287.6 tok/s$0.100/M
#38Gemini 2.5 Flash-Lite Preview (Sep '25) (Non-reasoning)Google110.9n/a$0.175/M
#39Llama Nemotron Super 49B v1.5 (Reasoning)NVIDIA106.950.8 tok/s$0.175/M
#40Mistral Small 4 (Reasoning)Mistral105.7149.5 tok/s$0.263/M
#41DeepSeek V3.2 Exp (Reasoning)DeepSeek104.4n/a$0.315/M
#42DeepSeek V3.2 (Non-reasoning)DeepSeek101.9n/a$0.315/M
#43Devstral Small (Jul '25)Mistral101.3194.2 tok/s$0.150/M
#44Granite 4.0 H SmallIBM100.9238.9 tok/s$0.107/M
#45Mistral Small 3.2Mistral100.7153.8 tok/s$0.150/M
#46Gemini 2.5 Flash-Lite (Reasoning)Google100.6243.6 tok/s$0.175/M
#47GPT-5 nano (minimal)OpenAI100.0139.1 tok/s$0.138/M
#48Ministral 3 8BMistral98.7157.6 tok/s$0.150/M
#49Llama 2 Chat 7BMeta97.099.7 tok/s$0.100/M
#50Mistral Small 3.1Mistral96.7138.8 tok/s$0.150/M
#51GPT-5.4 nano (xhigh)OpenAI95.0160.3 tok/s$0.463/M
#52MiniMax-M2.7MiniMax94.543.9 tok/s$0.525/M
#53Qwen3.5 Omni FlashAlibaba94.2190.4 tok/s$0.275/M
#54gpt-oss-120B (low)OpenAI93.2216.3 tok/s$0.263/M
#55Grok 3 mini Reasoning (high)xAI91.7215.5 tok/s$0.350/M
#56Llama 3 Instruct 8BMeta91.482.2 tok/s$0.070/M
#57DeepSeek V3.2 Exp (Non-reasoning)DeepSeek90.2n/a$0.315/M
#58Mercury 2Inception87.5820.2 tok/s$0.375/M
#59NVIDIA Nemotron 3 Super 120B A12B (Reasoning)NVIDIA87.4162.5 tok/s$0.412/M
#60Grok 4.1 Fast (Non-reasoning)xAI85.8112.1 tok/s$0.275/M
#61Mistral Small 3Mistral84.7135.9 tok/s$0.150/M
#62Grok 4 Fast (Non-reasoning)xAI84.077.4 tok/s$0.275/M
#63Seed-OSS-36B-InstructByteDance Seed84.040 tok/s$0.300/M
#64Llama Nemotron Super 49B v1.5 (Non-reasoning)NVIDIA83.451.3 tok/s$0.175/M
#65KAT Coder Pro V2KwaiKAT83.4110.7 tok/s$0.525/M
#66Granite 3.3 8B (Non-reasoning)IBM82.4410.5 tok/s$0.085/M
#67GPT-5.4 nano (medium)OpenAI82.3153.4 tok/s$0.463/M
#68Hermes 4 - Llama-3.1 70B (Reasoning)Nous Research80.878.6 tok/s$0.198/M
#69Trinity Large ThinkingArcee AI80.8124.6 tok/s$0.395/M
#70Ministral 3 14BMistral80.0121.6 tok/s$0.200/M
#71MiniMax-M2.5MiniMax79.879.7 tok/s$0.525/M
#72Solar MiniUpstage79.341.7 tok/s$0.150/M
#73Qwen3.6 35B A3B (Reasoning)Alibaba78.1191.8 tok/s$0.557/M
#74MiniMax-M2.1MiniMax75.084.8 tok/s$0.525/M
#75GPT-4.1 nanoOpenAI74.3125.2 tok/s$0.175/M
#76Gemini 2.5 Flash-Lite (Non-reasoning)Google72.6239.9 tok/s$0.175/M
#77Mistral Small 4 (Non-reasoning)Mistral70.7139.5 tok/s$0.263/M
#78Gemini 2.0 Flash (Feb '25)Google70.3n/a$0.263/M
#79MiniMax-M2MiniMax68.883.5 tok/s$0.525/M
#80KAT-Coder-Pro V1KwaiKAT68.6117.1 tok/s$0.525/M