Artificial Analysis aggregate LLM intelligence score.
The Intelligence Index is an Artificial Analysis aggregate for comparing broad language model capability. Its March 2026 methodology describes a text-only English suite weighted across agents, coding, general tasks, and scientific reasoning.
Test type: Aggregate evaluation suite with standardized prompting and task-specific scoring.
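An aggregate of this kind can be pictured as a weighted mean over per-category scores. A minimal sketch, assuming hypothetical equal weights (the actual March 2026 weighting is defined by Artificial Analysis and not reproduced here):

```python
# Illustrative weighted aggregate over evaluation categories.
# The weights below are placeholders, NOT the official methodology.
WEIGHTS = {
    "agents": 0.25,
    "coding": 0.25,
    "general": 0.25,
    "science": 0.25,
}

def intelligence_index(scores: dict[str, float]) -> float:
    """Weighted mean of per-category scores on a 0-100 scale."""
    total_weight = sum(WEIGHTS[c] for c in scores)
    return sum(scores[c] * WEIGHTS[c] for c in scores) / total_weight

# Hypothetical category scores for one model.
print(intelligence_index(
    {"agents": 62.0, "coding": 58.0, "general": 61.0, "science": 60.0}
))  # 60.25
```

Changing the weights shifts the ranking, which is why two aggregate indices over the same benchmarks can order models differently.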
500 models report this metric.
Current leader: GPT-5.5 (xhigh)
Scores come from the Artificial Analysis LLM snapshot committed in this app.
Top models, ranked by overall Intelligence Index score.
| Rank | Model | Creator | Value | Speed | Blended Price |
|---|---|---|---|---|---|
| #1 | GPT-5.5 (xhigh) | OpenAI | 60.2 | 66.1 tok/s | $11.25/M |
| #2 | n/a | OpenAI | 58.9 | 59.3 tok/s | $11.25/M |
| #3 | Claude Opus 4.7 (Adaptive Reasoning, Max Effort) | Anthropic | 57.3 | 51.8 tok/s | $10.00/M |
| #4 | Gemini 3.1 Pro Preview | Google | 57.2 | 131.2 tok/s | $4.50/M |
| #5 | GPT-5.4 (xhigh) | OpenAI | 56.8 | 93.5 tok/s | $5.63/M |
| #6 | GPT-5.5 (medium) | OpenAI | 56.7 | 57.5 tok/s | $11.25/M |
| #7 | Kimi K2.6 | Kimi | 53.9 | 29.1 tok/s | $1.71/M |
| #8 | MiMo-V2.5-Pro | Xiaomi | 53.8 | 59.9 tok/s | $1.50/M |
| #9 | GPT-5.3 Codex (xhigh) | OpenAI | 53.6 | 87.1 tok/s | $4.81/M |
| #10 | Claude Opus 4.6 (Adaptive Reasoning, Max Effort) | Anthropic | 53.0 | 49.9 tok/s | $10.00/M |
| #11 | Muse Spark | Meta | 52.1 | n/a | - |
| #12 | Claude Opus 4.7 (Non-reasoning, High Effort) | Anthropic | 51.8 | 43 tok/s | $10.00/M |
| #13 | Qwen3.6 Max Preview | Alibaba | 51.8 | 33.2 tok/s | $2.93/M |
| #14 | Claude Sonnet 4.6 (Adaptive Reasoning, Max Effort) | Anthropic | 51.7 | 68 tok/s | $6.00/M |
| #15 | DeepSeek V4 Pro (Reasoning, Max Effort) | DeepSeek | 51.5 | 34.3 tok/s | $2.18/M |
| #16 | GLM-5.1 (Reasoning) | Z AI | 51.4 | 45.7 tok/s | $2.15/M |
| #17 | GPT-5.2 (xhigh) | OpenAI | 51.3 | 71.8 tok/s | $4.81/M |
| #18 | GPT-5.5 (low) | OpenAI | 50.8 | 56.8 tok/s | $11.25/M |
| #19 | Qwen3.6 Plus | Alibaba | 50.0 | 53.1 tok/s | $1.13/M |
| #20 | DeepSeek V4 Pro (Reasoning, High Effort) | DeepSeek | 49.8 | 32.9 tok/s | $2.18/M |
| #21 | GLM-5 (Reasoning) | Z AI | 49.8 | 64.5 tok/s | $1.55/M |
| #22 | Claude Opus 4.5 (Reasoning) | Anthropic | 49.7 | 57 tok/s | $10.00/M |
| #23 | MiniMax-M2.7 | MiniMax | 49.6 | 43.9 tok/s | $0.525/M |
| #24 | Grok 4.20 0309 v2 (Reasoning) | xAI | 49.3 | 89.3 tok/s | $3.00/M |
| #25 | MiMo-V2-Pro | Xiaomi | 49.2 | n/a | - |
| #26 | GPT-5.2 Codex (xhigh) | OpenAI | 49.0 | 87.7 tok/s | $4.81/M |
| #27 | MiMo-V2.5 | Xiaomi | 49.0 | n/a | - |
| #28 | GPT-5.4 mini (xhigh) | OpenAI | 48.9 | 158.9 tok/s | $1.69/M |
| #29 | Grok 4.20 0309 (Reasoning) | xAI | 48.5 | 87.8 tok/s | $3.00/M |
| #30 | Gemini 3 Pro Preview (high) | Google | 48.4 | 128.7 tok/s | $4.50/M |
| #31 | GPT-5.4 (low) | OpenAI | 47.9 | 59.1 tok/s | $5.63/M |
| #32 | GPT-5.1 (high) | OpenAI | 47.7 | 123.3 tok/s | $3.44/M |
| #33 | GLM-5-Turbo | Z AI | 46.8 | n/a | - |
| #34 | Kimi K2.5 (Reasoning) | Kimi | 46.8 | 31.6 tok/s | $1.20/M |
| #35 | GPT-5.2 (medium) | OpenAI | 46.6 | n/a | $4.81/M |
| #36 | Claude Opus 4.6 (Non-reasoning, High Effort) | Anthropic | 46.5 | 42 tok/s | $10.00/M |
| #37 | DeepSeek V4 Flash (Reasoning, Max Effort) | DeepSeek | 46.5 | 77.4 tok/s | $0.175/M |
| #38 | Gemini 3 Flash Preview (Reasoning) | Google | 46.4 | 193.2 tok/s | $1.13/M |
| #39 | Qwen3.6 27B (Reasoning) | Alibaba | 45.8 | 64.1 tok/s | $1.35/M |
| #40 | Qwen3.5 397B A17B (Reasoning) | Alibaba | 45.0 | 50.4 tok/s | $1.35/M |
| #41 | DeepSeek V4 Flash (Reasoning, High Effort) | DeepSeek | 44.9 | n/a | $0.175/M |
| #42 | MiMo-V2-Omni-0327 | Xiaomi | 44.9 | n/a | - |
| #43 | GPT-5 (high) | OpenAI | 44.6 | 84.2 tok/s | $3.44/M |
| #44 | GPT-5 Codex (high) | OpenAI | 44.6 | 166.8 tok/s | $3.44/M |
| #45 | Claude Sonnet 4.6 (Non-reasoning, High Effort) | Anthropic | 44.4 | 48.3 tok/s | $6.00/M |
| #46 | GPT-5.4 nano (xhigh) | OpenAI | 44.0 | 160.3 tok/s | $0.463/M |
| #47 | GLM-5.1 (Non-reasoning) | Z AI | 43.8 | 41.5 tok/s | $2.15/M |
| #48 | KAT Coder Pro V2 | KwaiKAT | 43.8 | 110.7 tok/s | $0.525/M |
| #49 | Qwen3.6 35B A3B (Reasoning) | Alibaba | 43.5 | 191.8 tok/s | $0.557/M |
| #50 | MiMo-V2-Omni | Xiaomi | 43.4 | n/a | - |
| #51 | Claude Opus 4.5 (Non-reasoning) | Anthropic | 43.1 | 50.3 tok/s | $10.00/M |
| #52 | GPT-5.1 Codex (high) | OpenAI | 43.1 | 162.7 tok/s | $3.44/M |
| #53 | Claude 4.5 Sonnet (Reasoning) | Anthropic | 43.0 | 43.8 tok/s | $6.00/M |
| #54 | Kimi K2.6 (Non-reasoning) | Kimi | 43.0 | n/a | - |
| #55 | GLM 5V Turbo (Reasoning) | Z AI | 42.9 | n/a | - |
| #56 | Claude Sonnet 4.6 (Non-reasoning, Low Effort) | Anthropic | 42.6 | 51.5 tok/s | $6.00/M |
| #57 | GLM-4.7 (Reasoning) | Z AI | 42.1 | 90.3 tok/s | $1.00/M |
| #58 | Qwen3.5 27B (Reasoning) | Alibaba | 42.1 | 87 tok/s | $0.825/M |
| #59 | Claude 4.1 Opus (Reasoning) | Anthropic | 42.0 | 35.8 tok/s | $30.00/M |
| #60 | GPT-5 (medium) | OpenAI | 42.0 | 82.3 tok/s | $3.44/M |
| #61 | Hy3-preview (Reasoning) | Tencent | 41.9 | 86.4 tok/s | - |
| #62 | MiniMax-M2.5 | MiniMax | 41.9 | 79.7 tok/s | $0.525/M |
| #63 | DeepSeek V3.2 (Reasoning) | DeepSeek | 41.7 | n/a | $0.315/M |
| #64 | Qwen3.5 122B A10B (Reasoning) | Alibaba | 41.6 | 139.9 tok/s | $1.10/M |
| #65 | Grok 4 | xAI | 41.5 | 50.3 tok/s | $6.00/M |
| #66 | MiMo-V2-Flash (Feb 2026) | Xiaomi | 41.5 | 120.6 tok/s | $0.150/M |
| #67 | Gemini 3 Pro Preview (low) | Google | 41.3 | n/a | $4.50/M |
| #68 | GPT-5 mini (high) | OpenAI | 41.2 | 85.7 tok/s | $0.688/M |
| #69 | GPT-5.5 (Non-reasoning) | OpenAI | 40.9 | 51.3 tok/s | $11.25/M |
| #70 | Kimi K2 Thinking | Kimi | 40.9 | 99 tok/s | $1.08/M |
| #71 | o3-pro | OpenAI | 40.7 | 16.9 tok/s | $35.00/M |
| #72 | GLM-5 (Non-reasoning) | Z AI | 40.6 | 59.6 tok/s | $1.55/M |
| #73 | Qwen3.5 397B A17B (Non-reasoning) | Alibaba | 40.1 | 52.5 tok/s | $1.35/M |
| #74 | Qwen3 Max Thinking | Alibaba | 39.9 | 34.3 tok/s | $2.40/M |
| #75 | MiniMax-M2.1 | MiniMax | 39.4 | 84.8 tok/s | $0.525/M |
| #76 | DeepSeek V4 Pro (Non-reasoning) | DeepSeek | 39.3 | n/a | - |
| #77 | Gemma 4 31B (Reasoning) | Google | 39.2 | 34.8 tok/s | - |
| #78 | GPT-5 (low) | OpenAI | 39.2 | 65.8 tok/s | $3.44/M |
| #79 | MiMo-V2-Flash (Reasoning) | Xiaomi | 39.2 | 118.8 tok/s | $0.150/M |
| #80 | Claude 4 Opus (Reasoning) | Anthropic | 39.0 | 36.8 tok/s | $30.00/M |
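The Value and Blended Price columns can be combined into a rough price-performance view. A sketch using a few rows transcribed from the table above (omitting models without a listed price); "points per dollar" here is a crude illustrative proxy, not an official Artificial Analysis metric:

```python
# (model, Intelligence Index value, blended price in $ per million tokens),
# transcribed from a few rows of the leaderboard table.
rows = [
    ("GPT-5.5 (xhigh)", 60.2, 11.25),
    ("Gemini 3.1 Pro Preview", 57.2, 4.50),
    ("Kimi K2.6", 53.9, 1.71),
    ("MiniMax-M2.7", 49.6, 0.525),
    ("DeepSeek V4 Flash (Reasoning, Max Effort)", 46.5, 0.175),
]

# Sort by score per dollar: cheap-but-capable models float to the top.
by_value = sorted(rows, key=lambda r: r[1] / r[2], reverse=True)
for name, score, price in by_value:
    print(f"{name}: {score / price:.1f} pts per $/M")
```

Under this proxy the cheapest capable models dominate; the leaderboard's own ranking is by raw score, so the two orderings differ substantially.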