Introduction
A dispute between AI giants erupted as Anthropic revoked OpenAI’s access to its Claude API. Anthropic found that OpenAI’s engineers were using Claude to evaluate their own models ahead of the launch of GPT‑5, violating terms that prohibit using Claude to develop competing models. Anthropic maintains that customers may benchmark and analyse Claude for safety purposes, whereas commercial competitors are barred from using it to train or improve their own systems. OpenAI defended the practice as standard benchmarking but said it respects Anthropic’s decision.
What Happened?
- Discovery and ban: According to WIRED, Anthropic discovered that OpenAI’s technical staff were using the Claude Code API to evaluate tasks ahead of releasing GPT‑5, breaching terms that prohibit using Claude to train or develop competing models. Anthropic promptly revoked OpenAI’s access and said that customers may not use the platform to build competing models or reverse‑engineer the service.
- OpenAI’s response: OpenAI spokesperson Hannah Wong told WIRED that benchmarking other AI systems is standard industry practice, but said the company respects Anthropic’s decision and will look at other ways to improve its models.
- Other details: Mitrade reported that OpenAI had integrated Claude via custom APIs to assess coding and safety tasks, and that Anthropic’s terms explicitly forbid using the API to train competing models; a minimal sketch of this kind of cross‑model evaluation appears after this list. The article also mentioned that Anthropic plans weekly usage caps on Claude Code for its Pro and Max tiers, reflecting the company’s cautious approach to resource allocation.
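To make the dispute concrete, here is a minimal sketch of what cross‑model benchmarking through public APIs typically looks like: the same task is sent to Claude and to another lab’s model, and the outputs are compared side by side. This is illustrative only; it assumes the official anthropic and openai Python SDKs, and the model names, task list, and output truncation are placeholders, not a reconstruction of OpenAI’s actual harness.

```python
# Illustrative cross-model benchmarking harness (NOT OpenAI's actual setup).
# Assumes the official `anthropic` and `openai` Python SDKs with API keys in
# ANTHROPIC_API_KEY / OPENAI_API_KEY; model names are placeholders.
import anthropic
from openai import OpenAI

claude = anthropic.Anthropic()
rival = OpenAI()

TASKS = [
    "Write a Python function that reverses a singly linked list.",
    "Explain the bug in: for i in range(len(xs) + 1): print(xs[i])",
]

def ask_claude(prompt: str) -> str:
    # Send one task to Claude and return the text of its reply.
    msg = claude.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def ask_rival(prompt: str) -> str:
    # Send the same task to the other model for side-by-side comparison.
    resp = rival.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

for task in TASKS:
    print(f"=== {task[:60]}")
    print("Claude:", ask_claude(task)[:200])
    print("Rival: ", ask_rival(task)[:200])
```

The point of the sketch is that benchmarking of this kind is trivially easy to do through a paid API, which is exactly why providers rely on terms of service, rather than technical barriers, to restrict it.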
Why It Matters
- API policy enforcement: This case highlights how generative‑AI providers are tightening their terms of service to protect intellectual property and prevent competitors from exploiting their APIs. Anthropic’s terms allow evaluation for safety and benchmarking but bar use to train competing models; the enforcement of these rules signals a maturation of the AI‑as‑a‑service industry.
- Competitive dynamics: The conflict underscores the competitive arms race among major AI labs. With GPT‑5 rumours swirling and new models like Deep Think emerging, companies are protective of their capabilities. Cutting off access can slow a competitor’s benchmarking and may influence how quickly models can be improved.
- Impact on researchers: If providers start restricting cross‑model benchmarking, independent researchers may find it harder to compare systems. This could reduce transparency and hinder the development of safety standards.
Web Reactions
- Industry commentary: Many AI practitioners noted the tension between openness and commercial interests. Some argued that benchmarking is necessary for progress and that the restrictions might harm the development of safer models. Others supported Anthropic’s right to protect its technology under its terms of service.
- Speculation around GPT‑5: The incident fuelled speculation that GPT‑5 is nearing completion. Observers pointed out that OpenAI might have been using Claude to test performance gaps before launching its next model. Anthropic’s move could therefore delay or complicate these plans.
- Historical analogues: WIRED noted that API restrictions have precedent, referencing past cases where tech firms restricted service access to protect intellectual property.
Expert Breakdown
From a legal and operational perspective, Anthropic’s action is straightforward: the company enforced its contractual terms. Most cloud services prohibit reverse engineering or using APIs to train competing services. The unusual aspect is that the competitor is one of the biggest players in AI and had been quietly using Claude Code.
The event underlines the growing tension between cooperation and competition in AI. On one hand, cross‑model benchmarking is vital for safety research; on the other, companies invest billions in training large models and therefore seek to protect that investment. Analysts predict that API providers will introduce more granular usage caps (as Anthropic plans for Claude Code; a simplified sketch of such a cap follows below) and stricter compliance checks. There is also talk of creating third‑party audits to allow benchmarking under controlled conditions.
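As an illustration of what “more granular usage caps” could mean in practice, the sketch below tracks a weekly token budget per user and billing tier. The tier names, limits, and in‑memory bookkeeping are invented for illustration and do not reflect Anthropic’s actual quota system.

```python
# Toy sketch of per-tier weekly usage caps; limits and tiers are hypothetical
# and do NOT reflect Anthropic's real quotas.
import time
from collections import defaultdict

WEEK_SECONDS = 7 * 24 * 3600
WEEKLY_TOKEN_CAPS = {"pro": 1_000_000, "max": 5_000_000}  # invented limits

class UsageTracker:
    def __init__(self):
        self.windows = {}             # user_id -> start of current weekly window
        self.used = defaultdict(int)  # user_id -> tokens consumed this window

    def check_and_record(self, user_id: str, tier: str, tokens: int) -> bool:
        """Return True if the request fits under the weekly cap, else False."""
        now = time.time()
        start = self.windows.get(user_id, now)
        if now - start >= WEEK_SECONDS:   # weekly window expired: reset counter
            start, self.used[user_id] = now, 0
        self.windows[user_id] = start
        if self.used[user_id] + tokens > WEEKLY_TOKEN_CAPS[tier]:
            return False                  # over cap: reject the request
        self.used[user_id] += tokens
        return True

tracker = UsageTracker()
print(tracker.check_and_record("user-123", "pro", 50_000))  # True: under cap
```

A production system would persist counters and handle concurrency, but even this toy version shows the essential policy choice: caps are enforced per account and per tier, not per individual request.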
Final Thoughts
Anthropic’s revocation of OpenAI’s access to Claude signals a new chapter in AI competition. As generative models become central to businesses, the boundaries around who can access a model and for what purpose will tighten. While providers are within their rights to enforce these boundaries, the incident raises concerns about transparency and the ability to independently evaluate AI systems. The AI community may need to develop frameworks that allow fair benchmarking while respecting commercial constraints.