Data source: Artificial Analysis

OpenAI

GPT-5.5 (xhigh)

GPT-5.5 (xhigh) is an OpenAI GPT-5.5 reasoning profile in this benchmark snapshot, part of the newest GPT-5.5 generation, which OpenAI describes as its most capable model line for ChatGPT and Codex work. Use this page to compare this reasoning setting against sibling GPT-5.5 variants on intelligence, coding, speed, latency, and price.


Operational Metrics

Output Speed: 66.1 tok/s
Time to First Token: 40.81 s
Blended Price: $11.25/M
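The blended price can be reproduced from the input ($5.00/M) and output ($30.00/M) prices listed in the table further down, assuming the common 3:1 input-to-output token weighting. The snapshot does not state which ratio it uses, so this is a sketch under that assumption:

```python
def blended_price(input_price: float, output_price: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Weighted average of per-million-token prices (assumed 3:1 weighting)."""
    total_weight = input_weight + output_weight
    return (input_weight * input_price + output_weight * output_price) / total_weight

# $5.00/M input and $30.00/M output at a 3:1 weighting:
print(blended_price(5.00, 30.00))  # 11.25, matching the $11.25/M shown above
```

A 3:1 weighting reflects workloads that send roughly three input tokens for every output token; a different assumed ratio would shift the blended figure accordingly.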

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Apr 23, 2026
Context Window: n/a
Modalities: n/a
API fields: release_date
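The "queryable facts" above come from an upstream model payload in which only `release_date` is populated for this model; fields without a value fall back to "n/a". A minimal sketch of that extraction, using a hypothetical payload shape (the real upstream schema is not shown on this page):

```python
from datetime import date

# Hypothetical upstream payload: only release_date is exposed for this model.
payload = {"name": "GPT-5.5 (xhigh)", "release_date": "2026-04-23"}

def field(payload: dict, key: str) -> str:
    """Return a payload field as text, or 'n/a' when it is absent."""
    value = payload.get(key)
    return str(value) if value is not None else "n/a"

release = date.fromisoformat(payload["release_date"])
print(release.strftime("%b %d, %Y"))     # Apr 23, 2026
print(field(payload, "context_window"))  # n/a
print(field(payload, "modalities"))      # n/a
```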

Strength (Overall): 60.2, rank #1 across 500 models
Strength (Coding): 59.1, rank #1 across 410 models
Strength (GPQA): 93.5%, rank #2 across 478 models
Watch Area (Output Price): $30.00/M, rank #314 across 325 models
Watch Area (Time to First Token): 40.81 s, rank #283 across 293 models
Watch Area (Blended Price): $11.25/M, rank #313 across 325 models
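The "rank #N across M models" placements above map to percentiles. One plausible convention (the page does not state how it converts ranks, so treat this as an assumption) is the share of the field placed at or below the model:

```python
def rank_to_percentile(rank: int, n_models: int) -> float:
    """Percentage of the field at or below this rank (assumed convention)."""
    return 100 * (n_models - rank) / n_models

print(rank_to_percentile(1, 500))    # 99.8 for the overall index
print(rank_to_percentile(283, 293))  # roughly 3.4 for time to first token
```

Under this convention the strength ranks sit near the top of the distribution while the cost and latency ranks sit near the bottom, which is what the watch-area labels signal.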

Strength Profile
Percentile score by analysis domain (chart omitted). Cost is inverted: lower input, output, and blended prices rank higher.

Benchmark Percentiles
Higher bars mean stronger relative placement (chart omitted).
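The cost inversion noted above can be sketched as follows: for most metrics a model's percentile counts how many models it beats by scoring higher, but for price metrics a model beats those that charge more. This is an illustrative sketch, not the page's published formula:

```python
def percentile(value: float, population: list[float], invert: bool = False) -> float:
    """Share of the population this value beats; invert=True for cost metrics,
    where a lower value (cheaper price) beats a higher one."""
    if invert:
        beaten = sum(1 for v in population if v > value)  # cheaper beats pricier
    else:
        beaten = sum(1 for v in population if v < value)
    return 100 * beaten / len(population)

# Toy blended prices in $/M; GPT-5.5 (xhigh) at $11.25/M beats only the $50/M model:
prices = [1.0, 2.0, 11.25, 50.0]
print(percentile(11.25, prices, invert=True))  # 25.0
```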

All Benchmarks

| Metric | Domain | Value | Rank |
|---|---|---|---|
| Artificial Analysis Intelligence Index | overall | 60.2 | #1 |
| Artificial Analysis Coding Index | coding | 59.1 | #1 |
| GPQA | reasoning | 93.5% | #2 |
| Humanity's Last Exam | reasoning | 44.3% | #2 |
| SciCode | coding, reasoning | 56.1% | #4 |
| Output Speed | speed | 66.1 tok/s | #180 |
| Time to First Token | speed | 40.81 s | #283 |
| Blended Price | cost | $11.25/M | #313 |
| Input Price | cost | $5.00/M | #313 |
| Output Price | cost | $30.00/M | #314 |
| Value Index | cost, overall | 5.4 | #283 |