Highlights

  • Claude Sonnet 4 now supports 1M tokens of context. Gemini and OpenAI both have million-token models, so it’s good to see Anthropic catching up. This is 5x the previous 200,000-token context limit of the various Claude Sonnet models.
  • This is also the first time I’ve seen Anthropic use prices that vary depending on context length:
      • Prompts ≤ 200K: $3/million input, $15/million output
      • Prompts > 200K: $6/million input, $22.50/million output
    Gemini have been doing this for a while: Gemini 2.5 Pro is $1.25/$10 below 200,000 tokens and $2.50/$15 above 200,000.
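
To make the tiered pricing concrete, here is a minimal Python sketch of the cost arithmetic, with the rates hard-coded from the quote above. The `sonnet_4_cost` helper is hypothetical, and it assumes the tier is chosen by prompt (input) size and that the chosen tier’s rates apply to the whole request:

```python
def sonnet_4_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one Claude Sonnet 4 request.

    Hypothetical helper; rates are hard-coded from the prices quoted
    above. Assumes the tier is picked by prompt size and applies to
    every token in the request.
    """
    if input_tokens <= 200_000:
        input_rate, output_rate = 3.00, 15.00    # $/million tokens, base tier
    else:
        input_rate, output_rate = 6.00, 22.50    # $/million tokens, long-context premium
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A 150K-token prompt stays on the base tier...
print(f"${sonnet_4_cost(150_000, 2_000):.2f}")  # $0.48
# ...while a 500K-token prompt pays the premium rate on everything.
print(f"${sonnet_4_cost(500_000, 2_000):.2f}")  # $3.04
```

Note how crossing the 200K boundary more than doubles the effective input rate, which is why the prompt-size cutoff matters for anyone batching large documents into a single request.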