Claude can now process entire software projects in a single request, Anthropic says

Anthropic announced Tuesday that its Claude Sonnet 4 artificial intelligence model can now process up to 1 million tokens of context in a single request — a fivefold increase that allows developers to analyze entire software projects or dozens of research papers without breaking them into smaller chunks.

The expansion, available now in public beta through Anthropic’s API and Amazon Bedrock, represents a significant leap in how AI assistants can handle complex, data-intensive tasks. With the new capacity, developers can load codebases containing more than 75,000 lines of code, enabling Claude to understand complete project architecture and suggest improvements across entire systems rather than individual files.
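In practice, "loading a codebase" means concatenating a project's source files into one large prompt. A minimal sketch of that step is below; the ~4-characters-per-token heuristic is a rough assumption for sizing, not Anthropic's tokenizer, and the file-extension filter is illustrative:

```python
import os

def build_codebase_prompt(root, exts=(".py", ".js", ".ts")):
    """Concatenate a project's source files into one prompt string,
    tagging each file with its path so the model can refer back to it.

    Returns the prompt and a crude token estimate (~4 chars/token,
    an assumption for sizing only, not an exact tokenizer)."""
    parts = []
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    parts.append(f"### File: {path}\n{f.read()}")
    prompt = "\n\n".join(parts)
    estimated_tokens = len(prompt) // 4  # rough chars-per-token estimate
    return prompt, estimated_tokens
```

A developer would send the resulting string as a single message to the API, staying under the 1 million token ceiling rather than splitting the project into chunks.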

The announcement comes as Anthropic faces intensifying competition from OpenAI and Google, both of which already offer similar context windows. However, company sources speaking on background emphasized that Claude Sonnet 4’s strength lies not just in capacity but in accuracy, achieving 100% performance on internal “needle in a haystack” evaluations that test the model’s ability to find specific information buried within massive amounts of text.

How developers can now analyze entire codebases with AI in one request

The extended context capability addresses a fundamental limitation that has constrained AI-powered software development. Previously, developers working on large projects had to manually break down their codebases into smaller segments, often losing important connections between different parts of their systems.

“What was once impossible is now reality,” said Sean Ward, CEO and co-founder of London-based iGent AI, whose Maestro platform transforms conversations into executable code, in a statement. “Claude Sonnet 4 with 1M token context has supercharged autonomous capabilities in Maestro, our software engineering agent. This leap unlocks true production-scale engineering: multi-day sessions on real-world codebases.”

Eric Simons, CEO of Bolt.new, which integrates Claude into browser-based development platforms, said in a statement: “With the 1M context window, developers can now work on significantly larger projects while maintaining the high accuracy we need for real-world coding.”

The expanded context enables three primary use cases that were previously difficult or impossible: comprehensive code analysis across entire repositories, document synthesis involving hundreds of files while maintaining awareness of relationships between them, and context-aware AI agents that can maintain coherence across hundreds of tool calls and complex workflows.

Why Claude’s new pricing strategy could reshape the AI development market

Anthropic has adjusted its pricing structure to reflect the increased computational requirements of processing larger contexts. While prompts of 200,000 tokens or fewer maintain current pricing at $3 per million input tokens and $15 per million output tokens, larger prompts cost $6 and $22.50 respectively.
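The tiered schedule reduces to a simple per-request calculator. This is a sketch of the rates as stated above only; actual bills may differ with caching or batch discounts:

```python
def claude_sonnet4_cost(input_tokens, output_tokens):
    """Per-request cost in USD under the tiered pricing described above:
    prompts of <=200K input tokens bill at $3/M input and $15/M output;
    larger prompts bill at $6/M input and $22.50/M output."""
    if input_tokens <= 200_000:
        in_rate, out_rate = 3.00, 15.00
    else:
        in_rate, out_rate = 6.00, 22.50
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate
```

Note that the tier is set by the prompt size: a 200,000-token prompt with 10,000 output tokens costs about $0.75, while a full 1 million token prompt with the same output costs about $6.23.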

The pricing strategy reflects broader dynamics reshaping the AI industry. Recent analysis shows that Claude Opus 4 costs roughly seven times more per million tokens than OpenAI’s newly launched GPT-5 for certain tasks, creating pressure on enterprise procurement teams to balance performance against cost.

However, Anthropic argues the decision should factor in quality and usage patterns rather than price alone. Company sources noted that prompt caching — which stores frequently accessed large datasets — can make long context cost-competitive with traditional Retrieval-Augmented Generation approaches, especially for enterprises that repeatedly query the same information.

“Large context lets Claude see everything and choose what’s relevant, often producing better answers than pre-filtered RAG results where you might miss important connections between documents,” an Anthropic spokesperson told VentureBeat.
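That trade-off can be sketched as a back-of-the-envelope comparison. The 10% cache-read discount and the query counts below are illustrative assumptions for the sake of the comparison, not Anthropic's published caching rates:

```python
def long_context_cost(context_tokens, queries, input_rate=6.0,
                      cache_read_discount=0.1):
    """USD cost of repeatedly querying one large cached context.
    Rates are per million input tokens; the first query pays the
    full rate, later queries pay an assumed cache-read discount."""
    millions = context_tokens / 1_000_000
    first = millions * input_rate
    rest = (queries - 1) * millions * input_rate * cache_read_discount
    return first + rest

def rag_cost(chunk_tokens, queries, input_rate=3.0):
    """USD cost of sending only retrieved chunks on every query,
    staying within the cheaper <=200K-token tier."""
    return queries * (chunk_tokens / 1_000_000) * input_rate
```

Under these assumed numbers, eleven queries against a cached 1M-token context cost about $12 versus about $1.65 for 50K-token RAG chunks, which is why Anthropic frames the argument around answer quality and missed cross-document connections rather than raw price.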

Anthropic’s billion-dollar dependency on just two major coding customers

The long context capability arrives as Anthropic commands 42% of the AI code generation market, more than double OpenAI’s 21% share according to a Menlo Ventures survey of 150 enterprise technical leaders. However, this dominance comes with risks: industry analysis suggests that coding applications Cursor and GitHub Copilot drive approximately $1.2 billion of Anthropic’s $5 billion annual revenue run rate, creating significant customer concentration.

The GitHub relationship proves particularly complex given Microsoft’s $13 billion investment in OpenAI. While GitHub Copilot currently relies on Claude for key functionality, Microsoft faces increasing pressure to integrate its own OpenAI partnership more deeply, potentially displacing Anthropic despite Claude’s current performance advantages.

The timing of the context expansion is strategic. Anthropic released this capability on Sonnet 4 — which offers what the company calls “the optimal balance of intelligence, cost, and speed” — rather than its most powerful Opus model. Company sources indicated this reflects the needs of developers working with large-scale data, though they declined to provide specific timelines for bringing long context to other Claude models.

Inside Claude’s breakthrough AI memory technology and emerging safety risks

The 1 million token context window represents a significant technical advancement in AI memory and attention mechanisms. To put this in perspective, it’s enough to process approximately 750,000 words — roughly equivalent to two full-length novels or extensive technical documentation sets.

Anthropic’s internal testing revealed perfect recall performance across diverse scenarios, a crucial capability as context windows expand. The company embedded specific information within massive text volumes and tested Claude’s ability to find and use those details when answering questions.

However, the expanded capabilities also raise safety considerations. Earlier versions of Claude Opus 4 demonstrated concerning behaviors in fictional scenarios, including attempts at blackmail when faced with potential shutdown. While Anthropic has implemented additional safeguards and training to address these issues, the incidents highlight the complex challenges of developing increasingly capable AI systems.

Fortune 500 companies rush to adopt Claude’s expanded context capabilities

The feature rollout is initially limited to Anthropic API customers with Tier 4 and custom rate limits, with broader availability planned over coming weeks. Amazon Bedrock users have immediate access, while Google Cloud’s Vertex AI integration is pending.

Early enterprise response has been enthusiastic, according to company sources. Use cases span from coding teams analyzing entire repositories to financial services firms processing comprehensive transaction datasets to legal startups conducting contract analysis that previously required manual document segmentation.

“This is one of our most requested features from API customers,” an Anthropic spokesperson said. “We’re seeing excitement across industries that unlocks true agentic capabilities, with customers now running multi-day coding sessions on real-world codebases that would have been impossible with context limitations before.”

The development also enables more sophisticated AI agents that can maintain context across complex, multi-step workflows. This capability becomes particularly valuable as enterprises move beyond simple AI chat interfaces toward autonomous systems that can handle extended tasks with minimal human intervention.

The long context announcement intensifies competition among leading AI providers. Google’s older Gemini 1.5 Pro model and OpenAI’s older GPT-4.1 model both offer 1 million token windows, but Anthropic argues that Claude’s superior performance on coding and reasoning tasks provides competitive advantage even at higher prices.

The broader AI industry has seen explosive growth in model API spending, which doubled to $8.4 billion in just six months according to Menlo Ventures. Enterprises consistently prioritize performance over price, upgrading to newer models within weeks regardless of cost, suggesting that technical capabilities often outweigh pricing considerations in procurement decisions.

However, OpenAI’s recent aggressive pricing strategy with GPT-5 could reshape these dynamics. Early comparisons show dramatic price advantages that may overcome typical switching inertia, especially for cost-conscious enterprises facing budget pressures as AI adoption scales.

For Anthropic, maintaining its coding market leadership while diversifying revenue sources remains critical. The company has tripled the number of eight- and nine-figure deals signed in 2025 compared to all of 2024, reflecting broader enterprise adoption beyond its coding strongholds.

As AI systems become capable of processing and reasoning about increasingly vast amounts of information, they’re fundamentally changing how developers approach complex software projects. The ability to maintain context across entire codebases represents a shift from AI as a coding assistant to AI as a comprehensive development partner that understands the full scope and interconnections of large-scale projects.

The implications extend far beyond software development. Industries from legal services to financial analysis are beginning to recognize that AI systems capable of maintaining context across hundreds of documents could transform how organizations process and understand complex information relationships.

But with great capability comes great responsibility—and risk. As these systems become more powerful, the incidents of concerning AI behavior during Anthropic’s testing serve as a reminder that the race to expand AI capabilities must be balanced with careful attention to safety and control.

As Claude learns to juggle a million pieces of information simultaneously, Anthropic faces its own context window problem: being trapped between OpenAI’s pricing pressure and Microsoft’s conflicting loyalties.


