News
Anthropic recently upgraded its Claude Sonnet 4 AI model to support up to 1 million tokens of context, thereby ...
Anthropic has upgraded Claude Sonnet 4 with a 1M token context window, competing with OpenAI's GPT-5 and Meta's Llama 4.
Anthropic's popular coding model just became a little more enticing for developers with a million-token context window.
Anthropic has expanded the capabilities of its Claude Sonnet 4 AI model to handle up to one million tokens of context, five ...
Anthropic upgrades Claude Sonnet 4 to a 1M token context window and adds memory, enabling full codebase analysis, long ...
Claude Sonnet 4 can now support up to one million tokens of context, marking a fivefold increase from the prior 200,000, ...
To account for the extra computing power required for large requests, Anthropic will increase the cost for Claude Sonnet 4 ...
Anthropic’s Claude Sonnet 4 now supports a 1 million token context window, enabling AI to process entire codebases and complex documents in a single request—redefining software development and ...
Anthropic AI has increased the context window for their Claude Sonnet 4 model to 1 million tokens, which is 5 times more than ...
Anthropic has expanded Claude Sonnet 4’s context window to 1 million tokens, matching OpenAI’s GPT-4.1 and enhancing its ability to process large code bases and document sets in one request.
The company today revealed that Claude Sonnet 4 now supports up to 1 million tokens of context in the Anthropic API — a five-fold increase over the previous limit.
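For developers, the practical change lands in the API: a single messages request can now carry a much larger input. The following is a minimal sketch using the Anthropic Python SDK; the model ID, the beta header value, and the input file are assumptions for illustration and should be checked against Anthropic's current documentation.

```python
# Minimal sketch: sending a very large prompt to Claude Sonnet 4 via the
# Anthropic Python SDK. The model ID and the "anthropic-beta" header value
# below are assumptions based on the announcement, not confirmed identifiers.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical file containing a concatenated codebase or document set.
with open("large_codebase_dump.txt") as f:
    codebase = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed Sonnet 4 model ID
    max_tokens=4096,
    extra_headers={
        "anthropic-beta": "context-1m-2025-08-07"  # assumed long-context beta flag
    },
    messages=[
        {
            "role": "user",
            "content": f"Summarize the architecture of this codebase:\n\n{codebase}",
        }
    ],
)

print(response.content[0].text)
```

The long-context capacity matters here because the entire codebase is passed as a single user message rather than being chunked across multiple requests.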
Hosted on MSN · 2 months ago
Anthropic's new Claude 4 models promise the biggest AI brains ever
Claude Sonnet 4 is the smaller model, but it's still a major upgrade in power from the earlier Sonnet 3.7. Anthropic claims Sonnet 4 is much better at following instructions and coding.