Anthropic Opus 4.6 Launch Adds Multi-Agent Teams at $350B Valuation
The post Anthropic Opus 4.6 Launch Adds Multi-Agent Teams at $350B Valuation appeared on BitcoinEthereumNews.com.
Zach Anderson
Feb 05, 2026 18:43
Anthropic releases Claude Opus 4.6 with 1M token context window and agent teams feature, days after $350B tender offer valuation shook tech stocks.
Anthropic dropped Claude Opus 4.6 on Thursday, a model upgrade that quintuples its context window to one million tokens and introduces autonomous multi-agent collaboration—arriving just one day after the company’s $350 billion tender offer valuation triggered a selloff across tech stocks.

The timing matters. Investors spooked by AI competition dumped shares on February 4, and now Anthropic is showing exactly why it commands that valuation: Opus 4.6 outperforms OpenAI’s GPT-5.2 by 144 Elo points on GDPval-AA, a benchmark measuring economically valuable knowledge work in finance, legal, and technical domains.

What Actually Changed

Three upgrades stand out for enterprise users.

The 1M token context window (in beta) represents a 5x increase from Opus 4.5’s 200,000 tokens. On MRCR v2—a needle-in-a-haystack retrieval test—Opus 4.6 scores 76% compared to Sonnet 4.5’s 18.5%. That’s not incremental improvement; that’s a different capability class for document-heavy workflows.

Agent Teams in Claude Code lets developers spin up multiple AI agents working in parallel. Early access partner Invariant Labs reported Opus 4.6 “autonomously closed 13 issues and assigned 12 issues to the right team members in a single day, managing a ~50-person organization across 6 repositories.” The model handled both product and organizational decisions while knowing when to escalate to humans.

For enterprise integration, Anthropic added PowerPoint support (research preview) alongside upgraded Excel capabilities. The model can now ingest unstructured data, infer structure without guidance, and execute multi-step changes in one pass.

Benchmark Performance

Opus 4.6 leads on Terminal-Bench 2.0 for agentic coding and tops Humanity’s Last Exam, a multidisciplinary reasoning test. It also beat every other model on OpenAI’s BrowseComp,…
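For a sense of what the 144-point Elo gap actually means in practice, the standard Elo expected-score formula can translate a rating difference into a head-to-head win rate. This is a sketch under the assumption that GDPval-AA uses conventional Elo scaling (the article only reports the point difference; the function name is ours):

```python
# Translate an Elo rating gap into an expected head-to-head score,
# using the standard Elo formula: E = 1 / (1 + 10^(-diff/400)).
# Assumption: GDPval-AA's ratings use this conventional 400-point scale.

def elo_expected_score(rating_diff: float) -> float:
    """Expected score (win probability, with draws counted as half)
    for the higher-rated side, given its rating advantage."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

print(round(elo_expected_score(144), 3))  # ≈ 0.696, i.e. roughly a 70% win rate
```

Under that assumption, a 144-point lead corresponds to winning roughly seven of ten head-to-head comparisons on the benchmark's tasks.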
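The practical impact of the 5x context jump (200K to 1M tokens) can be sketched with a simple fit check. This is a rough illustration only: the ~4-characters-per-token heuristic is a common rule of thumb, not an official Anthropic tokenizer figure, and the helper names are ours:

```python
# Rough sketch: does a document fit in a given context window?
# Uses the common ~4 chars/token heuristic for English prose (an
# assumption, not an exact tokenizer count). Window sizes follow the
# article: 200K tokens for Opus 4.5, 1M tokens for Opus 4.6 (beta).

def estimated_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return len(text) // 4

def fits_in_window(text: str, window_tokens: int,
                   reserve_for_output: int = 4096) -> bool:
    """True if the text, plus a reserved output budget, fits."""
    return estimated_tokens(text) + reserve_for_output <= window_tokens

doc = "x" * 3_200_000  # ~800K estimated tokens of input

print(fits_in_window(doc, 200_000))    # 200K window (Opus 4.5) -> False
print(fits_in_window(doc, 1_000_000))  # 1M window (Opus 4.6)  -> True
```

A corpus of that size would previously have required chunking and retrieval scaffolding; the larger window lets it be sent in a single request, which is what makes the MRCR v2 retrieval scores relevant for document-heavy workflows.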
Filed under: News - @ February 7, 2026 7:29 am