# OpenAI Claims DeepSeek Distilled US Models To Gain an Edge

An anonymous reader shares a report: OpenAI has warned US lawmakers that its Chinese rival DeepSeek is using unfair and increasingly sophisticated methods to extract results from leading US AI models to train the next generation of its breakthrough R1 chatbot, according to a memo reviewed by Bloomberg News.

In the memo, sent Thursday to the House Select Committee on China, OpenAI said that DeepSeek had used so-called distillation techniques as part of "ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs." The company said it had detected "new, obfuscated methods" designed to evade OpenAI's defenses against misuse of its models' output.

OpenAI began privately raising concerns about the practice shortly after the R1 model's release last year, when it opened a probe with partner Microsoft Corp. into whether DeepSeek had obtained its data in an unauthorized manner, Bloomberg previously reported. In distillation, one AI model is trained on the outputs of another in order to develop similar capabilities.
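
To make the concept concrete, here is a minimal, purely illustrative sketch of knowledge distillation in PyTorch. It is not DeepSeek's or OpenAI's actual pipeline; the `teacher` and `student` models, sizes, and hyperparameters are hypothetical stand-ins chosen only to show the core idea of training one model to match another's output distribution.

```python
# Illustrative sketch of knowledge distillation (hypothetical toy models).
# A small "student" is trained to match the softened output distribution
# of a larger "teacher" whose outputs it can query but whose weights it
# cannot see.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins: the teacher plays the role of a large frontier model,
# the student the model being distilled from its outputs.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution to expose more signal

for step in range(100):
    x = torch.randn(32, 16)             # stand-in for prompts / inputs
    with torch.no_grad():
        teacher_logits = teacher(x)     # queried teacher outputs, no gradients
    student_logits = student(x)

    # KL divergence between temperature-softened student and teacher
    # distributions, scaled by T^2 as is conventional.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, an outside party querying a model through an API would see only generated text rather than internal logits, and would fine-tune on that text instead; the sketch uses logits purely to illustrate the underlying technique.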

Distillation, which OpenAI ties largely to China and occasionally to Russia, has persisted and grown more sophisticated despite attempts to crack down on users who violate the company's terms of service, OpenAI said in the memo, citing activity observed on its platform.

[ Read more of this story ]( https://slashdot.org/story/26/02/13/1630235/openai-claims-deepseek-distilled-us-models-to-gain-an-edge?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.