Anthropic has revoked OpenAI’s access to its Claude API, citing a terms-of-service violation. The decision comes as OpenAI is reportedly preparing to release its next major AI model, GPT-5, which is expected to bring enhanced capabilities in coding and other domains.

According to sources familiar with the matter, OpenAI had been using Claude’s API internally to test its own models against competing AI systems. This included evaluating Claude’s performance in areas such as coding and creative writing, as well as its responses to safety-related prompts covering topics like CSAM, self-harm, and defamation. The tests were intended to help OpenAI benchmark its models’ behavior under comparable conditions and make adjustments for safety and performance.

Anthropic spokesperson Christopher Nulty said that Claude Code has become the go-to tool for coders, making it unsurprising that OpenAI’s technical staff were using the service. He emphasized, however, that this use directly violated Anthropic’s commercial terms, which prohibit customers from using the service to build competing products, train competing AI models, or reverse-engineer the service.

OpenAI’s chief communications officer, Hannah Wong, expressed disappointment with the decision, noting that while OpenAI respects Anthropic’s stance, its own API remains available to Anthropic for benchmarking and safety evaluations, in line with industry standards.

The incident underscores the competitive dynamics of the AI industry, where companies routinely evaluate one another’s technologies to refine their models and strengthen safety protocols. The revocation may carry broader implications as well, highlighting the value of proprietary technology and the difficulty of maintaining a competitive edge in a rapidly evolving market.