Overall intelligence divided by blended price. Higher means more benchmark score per dollar.
Value Index is derived within Easy Benchmarks: it divides the Artificial Analysis Intelligence Index by blended price, making it useful for finding inexpensive models with strong broad benchmark performance.
Test type: Derived price-quality metric. Higher values mean more broad benchmark score per dollar.
323 models have this metric.
Current leader: Qwen3.5 0.8B (Reasoning)
This is a local derived metric, not an upstream Artificial Analysis benchmark.
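As a minimal sketch of the division described above, assuming blended price is quoted in USD per million tokens as in the table below; the helper name and the example numbers are hypothetical, not taken from the upstream data:

```python
# Sketch of the Value Index: Artificial Analysis Intelligence Index
# divided by blended price (USD per million tokens). Higher is better.

def value_index(intelligence_index: float, blended_price_per_m: float) -> float:
    """Broad benchmark score per dollar."""
    if blended_price_per_m <= 0:
        raise ValueError("blended price must be positive")
    return intelligence_index / blended_price_per_m

# Hypothetical example: a score of 10.5 at $0.020/M blended tokens
# yields a Value Index of 525.0.
print(value_index(10.5, 0.020))  # → 525.0
```

Because the metric is a plain ratio, halving a model's blended price doubles its Value Index at constant benchmark score, which is why small cheap models dominate the top of this table.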
Top models ranked by Value.
| Rank | Model | Creator | Value | Speed | Blended Price |
|---|---|---|---|---|---|
| #1 | Qwen3.5 0.8B (Reasoning) | Alibaba | 525.0 | n/a | $0.020/M |
| #2 | | Alibaba | 495.0 | 273.6 tok/s | $0.020/M |
| #3 | Qwen3.5 4B (Reasoning) | Alibaba | 451.7 | 204.8 tok/s | $0.060/M |
| #4 | Qwen3.5 2B (Reasoning) | Alibaba | 407.5 | n/a | $0.040/M |
| #5 | Qwen3.5 4B (Non-reasoning) | Alibaba | 376.7 | 200.2 tok/s | $0.060/M |
| #6 | Qwen3.5 2B (Non-reasoning) | Alibaba | 367.5 | 227.0 tok/s | $0.040/M |
| #7 | Qwen3.5 9B (Reasoning) | Alibaba | 286.7 | 62.9 tok/s | $0.113/M |
| #8 | MiMo-V2-Flash (Feb 2026) | Xiaomi | 276.7 | 120.6 tok/s | $0.150/M |
| #9 | DeepSeek V4 Flash (Reasoning, Max Effort) | DeepSeek | 265.7 | 77.4 tok/s | $0.175/M |
| #10 | MiMo-V2-Flash (Reasoning) | Xiaomi | 261.3 | 118.8 tok/s | $0.150/M |
| #11 | DeepSeek V4 Flash (Reasoning, High Effort) | DeepSeek | 256.6 | n/a | $0.175/M |
| #12 | Gemma 3n E4B Instruct | Google | 256.0 | 15.3 tok/s | $0.025/M |
| #13 | NVIDIA Nemotron 3 Nano 30B A3B (Reasoning) | NVIDIA | 253.1 | 154.8 tok/s | $0.096/M |
| #14 | Step 3.5 Flash | StepFun | 252.0 | 123.6 tok/s | $0.150/M |
| #15 | gpt-oss-20B (high) | OpenAI | 245.0 | 242.3 tok/s | $0.100/M |
| #16 | NVIDIA Nemotron Nano 9B V2 (Reasoning) | NVIDIA | 211.4 | 121.6 tok/s | $0.070/M |
| #17 | MiMo-V2-Flash (Non-reasoning) | Xiaomi | 202.7 | 116.7 tok/s | $0.150/M |
| #18 | LFM2 24B A2B | Liquid AI | 201.9 | 196.9 tok/s | $0.052/M |
| #19 | GLM-4.7-Flash (Reasoning) | Z AI | 198.0 | 110.5 tok/s | $0.152/M |
| #20 | Granite 4.1 8B | IBM | 196.8 | 134.6 tok/s | $0.063/M |
| #21 | GPT-5 nano (high) | OpenAI | 194.2 | 136.0 tok/s | $0.138/M |
| #22 | gpt-oss-20B (low) | OpenAI | 192.6 | 249.7 tok/s | $0.108/M |
| #23 | GPT-5 nano (medium) | OpenAI | 187.7 | 150.3 tok/s | $0.138/M |
| #24 | Ling 2.6 Flash | InclusionAI | 174.7 | 206.0 tok/s | $0.150/M |
| #25 | Nova Micro | Amazon | 168.9 | 332.1 tok/s | $0.061/M |
| #26 | Gemma 4 26B A4B (Reasoning) | Google | 157.6 | n/a | $0.198/M |
| #27 | NVIDIA Nemotron Nano 9B V2 (Non-reasoning) | NVIDIA | 153.5 | 153.3 tok/s | $0.086/M |
| #28 | GLM-4.7-Flash (Non-reasoning) | Z AI | 145.4 | 89.6 tok/s | $0.152/M |
| #29 | Grok 4.1 Fast (Reasoning) | xAI | 140.4 | 140.9 tok/s | $0.275/M |
| #30 | Qwen2.5 Turbo | Alibaba | 137.9 | 77.7 tok/s | $0.087/M |
| #31 | DeepSeek V3.2 (Reasoning) | DeepSeek | 132.4 | n/a | $0.315/M |
| #32 | Grok 4 Fast (Reasoning) | xAI | 127.6 | 76.2 tok/s | $0.275/M |
| #33 | gpt-oss-120B (high) | OpenAI | 126.6 | 212.3 tok/s | $0.263/M |
| #34 | Gemini 2.5 Flash-Lite Preview (Sep '25) (Reasoning) | Google | 123.4 | n/a | $0.175/M |
| #35 | Nova Lite | Amazon | 121.0 | 186.8 tok/s | $0.105/M |
| #36 | Llama 3.1 Instruct 8B | Meta | 118.0 | 164.4 tok/s | $0.100/M |
| #37 | Ministral 3 3B | Mistral | 112.0 | 287.6 tok/s | $0.100/M |
| #38 | Gemini 2.5 Flash-Lite Preview (Sep '25) (Non-reasoning) | Google | 110.9 | n/a | $0.175/M |
| #39 | Llama Nemotron Super 49B v1.5 (Reasoning) | NVIDIA | 106.9 | 50.8 tok/s | $0.175/M |
| #40 | Mistral Small 4 (Reasoning) | Mistral | 105.7 | 149.5 tok/s | $0.263/M |
| #41 | DeepSeek V3.2 Exp (Reasoning) | DeepSeek | 104.4 | n/a | $0.315/M |
| #42 | DeepSeek V3.2 (Non-reasoning) | DeepSeek | 101.9 | n/a | $0.315/M |
| #43 | Devstral Small (Jul '25) | Mistral | 101.3 | 194.2 tok/s | $0.150/M |
| #44 | Granite 4.0 H Small | IBM | 100.9 | 238.9 tok/s | $0.107/M |
| #45 | Mistral Small 3.2 | Mistral | 100.7 | 153.8 tok/s | $0.150/M |
| #46 | Gemini 2.5 Flash-Lite (Reasoning) | Google | 100.6 | 243.6 tok/s | $0.175/M |
| #47 | GPT-5 nano (minimal) | OpenAI | 100.0 | 139.1 tok/s | $0.138/M |
| #48 | Ministral 3 8B | Mistral | 98.7 | 157.6 tok/s | $0.150/M |
| #49 | Llama 2 Chat 7B | Meta | 97.0 | 99.7 tok/s | $0.100/M |
| #50 | Mistral Small 3.1 | Mistral | 96.7 | 138.8 tok/s | $0.150/M |
| #51 | GPT-5.4 nano (xhigh) | OpenAI | 95.0 | 160.3 tok/s | $0.463/M |
| #52 | MiniMax-M2.7 | MiniMax | 94.5 | 43.9 tok/s | $0.525/M |
| #53 | Qwen3.5 Omni Flash | Alibaba | 94.2 | 190.4 tok/s | $0.275/M |
| #54 | gpt-oss-120B (low) | OpenAI | 93.2 | 216.3 tok/s | $0.263/M |
| #55 | Grok 3 mini Reasoning (high) | xAI | 91.7 | 215.5 tok/s | $0.350/M |
| #56 | Llama 3 Instruct 8B | Meta | 91.4 | 82.2 tok/s | $0.070/M |
| #57 | DeepSeek V3.2 Exp (Non-reasoning) | DeepSeek | 90.2 | n/a | $0.315/M |
| #58 | Mercury 2 | Inception | 87.5 | 820.2 tok/s | $0.375/M |
| #59 | NVIDIA Nemotron 3 Super 120B A12B (Reasoning) | NVIDIA | 87.4 | 162.5 tok/s | $0.412/M |
| #60 | Grok 4.1 Fast (Non-reasoning) | xAI | 85.8 | 112.1 tok/s | $0.275/M |
| #61 | Mistral Small 3 | Mistral | 84.7 | 135.9 tok/s | $0.150/M |
| #62 | Grok 4 Fast (Non-reasoning) | xAI | 84.0 | 77.4 tok/s | $0.275/M |
| #63 | Seed-OSS-36B-Instruct | ByteDance Seed | 84.0 | 40.0 tok/s | $0.300/M |
| #64 | Llama Nemotron Super 49B v1.5 (Non-reasoning) | NVIDIA | 83.4 | 51.3 tok/s | $0.175/M |
| #65 | KAT Coder Pro V2 | KwaiKAT | 83.4 | 110.7 tok/s | $0.525/M |
| #66 | Granite 3.3 8B (Non-reasoning) | IBM | 82.4 | 410.5 tok/s | $0.085/M |
| #67 | GPT-5.4 nano (medium) | OpenAI | 82.3 | 153.4 tok/s | $0.463/M |
| #68 | Hermes 4 - Llama-3.1 70B (Reasoning) | Nous Research | 80.8 | 78.6 tok/s | $0.198/M |
| #69 | Trinity Large Thinking | Arcee AI | 80.8 | 124.6 tok/s | $0.395/M |
| #70 | Ministral 3 14B | Mistral | 80.0 | 121.6 tok/s | $0.200/M |
| #71 | MiniMax-M2.5 | MiniMax | 79.8 | 79.7 tok/s | $0.525/M |
| #72 | Solar Mini | Upstage | 79.3 | 41.7 tok/s | $0.150/M |
| #73 | Qwen3.6 35B A3B (Reasoning) | Alibaba | 78.1 | 191.8 tok/s | $0.557/M |
| #74 | MiniMax-M2.1 | MiniMax | 75.0 | 84.8 tok/s | $0.525/M |
| #75 | GPT-4.1 nano | OpenAI | 74.3 | 125.2 tok/s | $0.175/M |
| #76 | Gemini 2.5 Flash-Lite (Non-reasoning) | Google | 72.6 | 239.9 tok/s | $0.175/M |
| #77 | Mistral Small 4 (Non-reasoning) | Mistral | 70.7 | 139.5 tok/s | $0.263/M |
| #78 | Gemini 2.0 Flash (Feb '25) | Google | 70.3 | n/a | $0.263/M |
| #79 | MiniMax-M2 | MiniMax | 68.8 | 83.5 tok/s | $0.525/M |
| #80 | KAT-Coder-Pro V1 | KwaiKAT | 68.6 | 117.1 tok/s | $0.525/M |