Posts · 5/4/2026 · Justin Summerville

GPT-5.5 Price Increase: What It Actually Costs

We replicated the cost analysis we did on Opus for the new GPT-5.5 model. GPT-5.5 launched at 2x the price of GPT-5.4: input tokens went from $2.50/M to $5.00/M and output tokens from $15/M to $30/M. OpenAI has also noted that the model is less verbose, producing shorter completions for the same tasks. As with Opus 4.7, we wanted to know the net cost impact on users, so we analyzed usage that shifted from GPT-5.4 to GPT-5.5.

We observed cost increases of 49-92%. The price increase is partially offset by the model generating 19-34% fewer completion tokens on longer prompts.

Methodology: Same Switcher Cohort Approach

We used the same approach as our Opus 4.7 analysis. We identified users whose top model by request count was GPT-5.4 prior to the 5.5 launch, who then switched to GPT-5.5 as their top model. This "switcher cohort" gives us a controlled before-and-after comparison of the same user base across model versions.

Since GPT-5.4 and 5.5 use the same tokenizer family, we don't need to control for tokenizer differences. The comparison is direct: same users, same workflows, different model version.
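The cohort selection can be sketched as follows. This is a minimal illustration, not OpenRouter's actual pipeline; the per-user request format and model ID strings are assumptions.

```python
from collections import Counter

def top_model(requests_by_user):
    """Map each user to their most-requested model.
    requests_by_user: {user_id: [model_id, ...]} (hypothetical schema)."""
    return {
        user: Counter(models).most_common(1)[0][0]
        for user, models in requests_by_user.items()
    }

def switcher_cohort(pre_launch, post_launch):
    """Users whose top model was GPT-5.4 before launch and GPT-5.5 after."""
    pre_top = top_model(pre_launch)
    post_top = top_model(post_launch)
    return {
        user
        for user, model in pre_top.items()
        if model == "openai/gpt-5.4"
        and post_top.get(user) == "openai/gpt-5.5"
    }

# User "a" switched; user "b" never had GPT-5.4 as their top model.
pre = {"a": ["openai/gpt-5.4", "openai/gpt-5.4", "other"], "b": ["other", "other"]}
post = {"a": ["openai/gpt-5.5"], "b": ["openai/gpt-5.5"]}
print(switcher_cohort(pre, post))  # → {'a'}
```

Requiring the *top* model on both sides, rather than any usage, keeps casual experimentation with the new model from diluting the before-and-after comparison.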

GPT-5.5 Is Less Verbose, But Only for Longer Prompts

Using OpenRouter's consistent token counts, we measured how completion lengths changed between models:

| Prompt Size | Median Completion (5.4) | Median Completion (5.5) | Change |
|---|---|---|---|
| < 2K tokens | 121 | 129 | +7% |
| 2K – 10K | 140 | 213 | +52% |
| 10K – 25K | 211 | 143 | -32% |
| 25K – 50K | 185 | 150 | -19% |
| 50K – 128K | 188 | 136 | -28% |
| 128K+ | 215 | 143 | -34% |

For prompts above 10K tokens, GPT-5.5 produces 19-34% fewer completion tokens. For shorter prompts, the pattern reverses: under 2K tokens, completions are roughly the same length (+7%), and in the 2K-10K range they are 52% longer.
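The bucketing behind the table above can be sketched like this. The bucket boundaries come from the table; the request tuple format is an assumption.

```python
import statistics

# Prompt-size buckets matching the tables in this post: [lo, hi) in tokens.
BUCKETS = [
    ("< 2K", 0, 2_000),
    ("2K – 10K", 2_000, 10_000),
    ("10K – 25K", 10_000, 25_000),
    ("25K – 50K", 25_000, 50_000),
    ("50K – 128K", 50_000, 128_000),
    ("128K+", 128_000, float("inf")),
]

def bucket_of(prompt_tokens):
    """Return the bucket name containing this prompt length."""
    for name, lo, hi in BUCKETS:
        if lo <= prompt_tokens < hi:
            return name

def median_completion_by_bucket(requests):
    """requests: iterable of (prompt_tokens, completion_tokens) pairs."""
    grouped = {}
    for prompt, completion in requests:
        grouped.setdefault(bucket_of(prompt), []).append(completion)
    return {name: statistics.median(vals) for name, vals in grouped.items()}

def pct_change(old, new):
    """Rounded percent change between the two models' medians."""
    return round((new - old) / old * 100)

print(pct_change(211, 143))  # → -32, matching the 10K – 25K row
```

Comparing medians per bucket, rather than one overall median, is what separates the "shorter completions" effect (long prompts) from the opposite effect on short prompts.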

Actual Cost Impact

Using billed costs from requests in the switcher cohort, we calculated the average cost per million OpenRouter tokens. This normalizes for prompt length, allowing a direct comparison of cost efficiency.

| Prompt Size | Avg $/M OR Tokens (5.4) | Avg $/M OR Tokens (5.5) | Change |
|---|---|---|---|
| < 2K tokens | $4.89 | $9.37 | +92% |
| 2K – 10K | $2.25 | $3.81 | +69% |
| 10K – 25K | $1.42 | $2.15 | +51% |
| 25K – 50K | $1.02 | $1.65 | +62% |
| 50K – 128K | $0.74 | $1.10 | +49% |
| 128K+ | $0.71 | $1.31 | +85% |

Our analysis shows that actual GPT-5.5 costs increased 49% to 92%. For prompts over 10K tokens, the price increase was partially offset by shorter completions. For prompts under 10K tokens, where completions did not get shorter, the cost increase was not offset at all.
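One reasonable reading of the normalization above, treating "average cost per million OpenRouter tokens" as an aggregate ratio rather than a per-request mean, is a sketch like this (field layout is an assumption):

```python
def cost_per_million_or_tokens(requests):
    """requests: iterable of (billed_cost_usd, prompt_tokens, completion_tokens),
    with tokens as counted by OpenRouter. Returns total billed cost per
    million total tokens, which normalizes for prompt length."""
    total_cost = sum(cost for cost, _, _ in requests)
    total_tokens = sum(p + c for _, p, c in requests)
    return total_cost / total_tokens * 1_000_000
```

As a sanity check against GPT-5.4 list prices, a 1,000-token prompt with a 121-token completion bills 1,000 x $2.50/M + 121 x $15/M ≈ $0.0043, or roughly $3.85 per million total tokens, the same order as the < 2K row.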

Methodology Details

  • Source: OpenRouter's request logs
  • Cohort: Users whose top model by request count was GPT-5.4, who then switched to GPT-5.5 as their top model
  • Sample size: Text-only, non-cancelled requests split across 5.4 and 5.5
  • Windows: GPT-5.4: April 21-23, 2026 (pre-launch); GPT-5.5: April 25-28, 2026 (post-launch, launch day excluded)
  • Normalization: Cost per million OpenRouter tokens, bucketed by prompt token count. OpenRouter counts tokens independently from OpenAI, providing a consistent baseline across model versions.
  • Controls: Excluded media (images, files, audio, video), cancelled requests, and zero-token requests
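The windows and controls above can be expressed as a simple filter. The date windows come from the bullets; the request field names are hypothetical.

```python
from datetime import date

# Analysis windows from the methodology; launch day (4/24) is excluded.
PRE_WINDOW = (date(2026, 4, 21), date(2026, 4, 23))   # GPT-5.4
POST_WINDOW = (date(2026, 4, 25), date(2026, 4, 28))  # GPT-5.5

def in_window(day, window):
    start, end = window
    return start <= day <= end

def keep(request):
    """Apply the controls: no media, not cancelled, non-zero tokens.
    `request` is a dict with hypothetical field names."""
    return (
        not request["cancelled"]
        and not request["has_media"]
        and (request["prompt_tokens"] + request["completion_tokens"]) > 0
    )
```

Excluding launch day avoids counting the burst of one-off试 requests as steady-state usage, and dropping media requests keeps the token accounting purely text-based.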
OpenRouter
© 2026 OpenRouter, Inc
