Gemini 2.0 Pro Experimental (Feb '25) is a Google language-model profile in the Easy Benchmarks snapshot. Use this page to compare its measured Artificial Analysis scores, output speed, time to first token, pricing, and relative ranking against other models in the local catalog.
Gemini model releases
Queryable facts extracted from the upstream model payload.
This model has too few comparable metrics for automatic insights.
[Chart: percentile score by analysis domain; higher bars indicate stronger relative placement.]
| Metric | Domain | Value | Rank |
|---|---|---|---|
| Artificial Analysis Intelligence Index | overall | 18.1 | #268 |
| Artificial Analysis Coding Index | coding | 25.5 | #145 |
| MMLU-Pro | reasoning | 80.5% | #107 |
| GPQA | reasoning | 62.2% | #260 |
| Humanity's Last Exam | reasoning | 6.8% | #214 |
| LiveCodeBench | coding | 34.7% | #200 |
| SciCode | coding, reasoning | 31.2% | #232 |
| MATH-500 | math | 92.3% | #64 |
| AIME | math | 36.0% | #74 |