Anthropic Deploys AI Agent Teams to Automate Code Reviews

Anthropic has released a new "Code Review" feature for its Claude Code tool, now available in a research preview. The system uses teams of AI agents working in parallel to automatically check developer pull requests for bugs and other potential issues. [4, 5] The feature aims to accelerate development cycles and catch errors that human reviewers might overlook. When a pull request is submitted, multiple agents detect bugs, verify the findings to filter out false positives, and rank issues by severity before presenting a consolidated summary. [4] Anthropic claims its internal tests showed the feature tripled the amount of meaningful code-review feedback. [4] The service is billed by token usage, with a typical review costing between $15 and $25. [4]
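The detect/verify/rank/summarize pipeline described above can be sketched in a few lines of Python. Everything here is illustrative: the agent functions, the `Issue` fields, and the verification step are hypothetical stand-ins, not Anthropic's actual implementation or API.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    description: str
    severity: int      # higher = more severe
    confirmed: bool    # set by the verification pass

# Hypothetical detector agents; in the real system these would be
# parallel LLM calls, each scanning the diff for one class of bug.
def null_deref_agent(diff: str) -> list[Issue]:
    return [Issue("possible null dereference", 3, False)] if "ptr" in diff else []

def unchecked_input_agent(diff: str) -> list[Issue]:
    return [Issue("unvalidated user input", 2, False)] if "input(" in diff else []

def verify(issue: Issue) -> Issue:
    # Stand-in for the second-pass agent that filters false positives;
    # here it simply confirms every candidate for illustration.
    issue.confirmed = True
    return issue

def review(diff: str, agents) -> list[Issue]:
    # 1. Detect: fan out to all agents and collect candidate issues.
    candidates = [issue for agent in agents for issue in agent(diff)]
    # 2. Verify: filter out false positives.
    confirmed = [verify(i) for i in candidates if verify(i).confirmed]
    # 3. Rank: most severe issues first, for the consolidated summary.
    return sorted(confirmed, key=lambda i: i.severity, reverse=True)

if __name__ == "__main__":
    diff = "x = input(); ptr = lookup(x)"
    for issue in review(diff, [null_deref_agent, unchecked_input_agent]):
        print(f"[severity {issue.severity}] {issue.description}")
```

The key design point the article highlights is the separation of concerns: detection agents are allowed to over-report, because a dedicated verification pass cleans up false positives before the ranked summary reaches the developer.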