Claude AI Now Supports Longer Prompts

Anthropic Expands Claude's Context Window for Enterprise Users
Anthropic is working to broaden Claude's appeal to developers by increasing the amount of data enterprise customers can send to the AI in a single prompt.
Increased Token Capacity with Claude Sonnet 4
For customers using Anthropic’s API, the Claude Sonnet 4 model now offers a 1 million token context window, allowing it to process requests equivalent to roughly 750,000 words.
To put that in perspective, the new window exceeds the length of the entire “Lord of the Rings” trilogy and can hold about 75,000 lines of code. It is roughly five times larger than Claude’s previous 200,000 token limit.
Furthermore, this expanded capacity surpasses the 400,000 token context window currently offered by OpenAI’s GPT-5.
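For developers, the change surfaces through the same Messages API they already use. The sketch below is purely illustrative: the model identifier, the file name, and the beta header assumed to enable the 1 million token window are not confirmed by this announcement and should be checked against Anthropic’s documentation.

```python
# Illustrative sketch only: sending a very large prompt to Claude Sonnet 4 via
# Anthropic's Python SDK. The model ID, the input file, and the beta header that
# enables the 1M-token window are assumptions, not details from the article.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# e.g. tens of thousands of lines of source concatenated into one file
with open("whole_codebase.txt") as f:
    codebase = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",                            # assumed model identifier
    max_tokens=4096,
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},   # assumed long-context flag
    messages=[
        {
            "role": "user",
            "content": f"Here is the project source:\n\n{codebase}\n\n"
                       "Add input validation to the signup flow.",
        }
    ],
)
print(response.content[0].text)
```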
Availability Through Cloud Partners
This extended context capability will also be accessible via Anthropic’s cloud partnerships, specifically through Amazon Bedrock and Google Cloud’s Vertex AI.
Competition with GPT-5
Anthropic has established a significant enterprise presence among AI model developers, primarily through sales of Claude to AI coding platforms like Microsoft’s GitHub Copilot, Windsurf, and Anysphere’s Cursor.
While Claude currently holds a favored position among developers, GPT-5 presents a potential challenge with its competitive pricing and strong coding performance. Notably, Anysphere CEO Michael Truell appeared at the GPT-5 launch announcement, and GPT-5 has since become the default model for new Cursor users.
Anthropic’s Response and Outlook
Brad Abrams, Anthropic’s product lead for the Claude platform, indicated in a TechCrunch interview that AI coding platforms are anticipated to derive considerable advantages from this update.
When asked whether GPT-5 had affected Claude’s API usage, Abrams downplayed the concern, saying he was pleased with the growth trajectory of the API business.
Business Model Focus
Unlike OpenAI, which primarily generates revenue through consumer subscriptions to ChatGPT, Anthropic’s business model revolves around selling AI models to enterprises via an API. This makes AI coding platforms a crucial customer segment for Anthropic.
Rolling out new features like this is likely a strategic move to attract and retain those customers as competition from GPT-5 intensifies.
Recent Model Updates
Just last week, Anthropic released an updated version of its most powerful AI model, Claude Opus 4.1, further enhancing the company’s AI coding capabilities.
Benefits of Larger Context Windows
Generally, AI models demonstrate improved performance across various tasks when provided with a larger context. This is particularly true for complex software engineering challenges.
For instance, an AI tasked with developing a new feature for an application is likely to achieve better results when it has access to the entire project codebase, rather than just a limited portion.
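To make “the entire project codebase” concrete, the hypothetical helper below walks a repository and concatenates its source files into a single prompt string; the function name, paths, and file filters are assumptions for the sake of the example.

```python
# Hypothetical helper (not from the article): flatten a whole repository into one
# prompt string so the model can see every file at once instead of a fragment.
from pathlib import Path

def collect_codebase(root: str, suffixes: tuple = (".py", ".ts", ".go")) -> str:
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            # Label each file so the model can reference it by path.
            parts.append(f"### {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

prompt = collect_codebase("./my_app") + "\n\nImplement the new export feature."
```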
Enhanced Agentic Coding
Abrams also told TechCrunch that the expanded context window improves Claude’s performance on long-horizon agentic coding tasks, in which the AI works autonomously on a problem over extended periods.
The larger context allows Claude to retain memory of its previous actions throughout these long-horizon tasks.
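A simplified agent loop makes the point concrete: each turn is appended to the running conversation, so a larger window means more of the model’s earlier actions stay in view. Everything in the sketch below, including the model identifier, the stop condition, and the simulated tool output, is assumed for illustration and is not Anthropic’s implementation.

```python
# Simplified long-horizon agent loop (illustrative assumption, not Anthropic's
# implementation). The growing `history` list is what the larger context window
# lets the model keep "remembering" across many autonomous steps.
import anthropic

client = anthropic.Anthropic()
history = [{"role": "user",
            "content": "Refactor the payments module step by step. Say DONE when finished."}]

for step in range(20):  # a long-horizon task spanning many autonomous turns
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model identifier
        max_tokens=2048,
        messages=history,
    )
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    if "DONE" in text:
        break
    # Feed back the (simulated) result of executing the model's proposed action.
    history.append({"role": "user", "content": "Tool output: tests passed. Continue."})
```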
Context Window Comparisons
While Anthropic has significantly increased Claude’s context window, other companies are pushing the boundaries even further. Google’s Gemini 2.5 Pro offers a 2 million token context window, and Meta’s Llama 4 Scout provides a 10 million token context window.
Effectiveness of Large Context Windows
Research suggests that there may be limitations to the effectiveness of extremely large context windows, as AI models can struggle to process such massive prompts.
Anthropic’s research team has focused on increasing not only the context window size but also the “effective context window,” implying that its AI can effectively understand a greater proportion of the information it receives. However, specific techniques remain undisclosed.
API Pricing Adjustments
For Claude Sonnet 4 prompts exceeding 200,000 tokens, Anthropic will charge API users higher rates: $6 per million input tokens and $22.50 per million output tokens, up from $3 and $15 respectively.
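As a rough back-of-the-envelope check, assuming the higher rate applies to the full request once the prompt crosses the 200,000-token threshold (a simplification; actual billing rules may differ), the cost of a long-context call can be estimated like this:

```python
# Rough cost estimate for a Claude Sonnet 4 request, using the rates quoted above.
# Assumption: the long-context rate applies to the whole request once the prompt
# exceeds 200,000 input tokens; real billing details may differ.
def sonnet4_cost_usd(input_tokens: int, output_tokens: int) -> float:
    if input_tokens > 200_000:
        in_rate, out_rate = 6.00, 22.50   # USD per million tokens, long-context tier
    else:
        in_rate, out_rate = 3.00, 15.00   # USD per million tokens, standard tier
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# e.g. an 800,000-token prompt with a 10,000-token reply:
# 0.8 * $6.00 + 0.01 * $22.50 = $5.025
print(f"${sonnet4_cost_usd(800_000, 10_000):.3f}")
```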