
Chinese AI Companies Accused of Illicitly Using Claude Models

  • Writer: tech360.tv
  • 2 hours ago
  • 2 min read

Three Chinese artificial intelligence companies improperly used Claude to improve their own models, according to Anthropic, the chatbot's creator. Anthropic also argued that export controls on chips would help curb such activity.


[Image: Chinese flag waving on a flagpole against a modern glass building.]

DeepSeek, Moonshot, and MiniMax created more than 16 million interactions with Claude. These actions involved roughly 24,000 fake accounts, violating Anthropic's terms of service and regional access restrictions.


The companies employed a technique called "distillation." This method involves training a less capable model on the outputs of a stronger one, Anthropic stated in a blog post.
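To make the technique concrete, here is a minimal, purely illustrative sketch of distillation. The "teacher" below is a toy numeric function standing in for a strong model's outputs, and the "student" is a small linear model fitted to imitate it; the names and setup are assumptions for illustration, not Anthropic's description of how these campaigns worked.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for the stronger model: a fixed sigmoid over two features.
    # In real distillation this would be the large model's output distribution.
    return 1 / (1 + np.exp(-(2.0 * x[:, 0] - 1.0 * x[:, 1])))

# Step 1: query the teacher many times to build a distillation dataset.
X = rng.normal(size=(1000, 2))
soft_labels = teacher(X)  # the teacher's outputs, not ground-truth labels

# Step 2: fit the weaker student to reproduce those outputs.
# Here the fit is exact least squares in logit space.
logits = np.log(soft_labels / (1 - soft_labels))  # invert the sigmoid
w, *_ = np.linalg.lstsq(X, logits, rcond=None)    # student's weights

def student(x):
    return 1 / (1 + np.exp(-x @ w))

# The student now approximates the teacher without access to its internals.
err = np.max(np.abs(student(X) - teacher(X)))
```

The key point the sketch illustrates is that only the teacher's *outputs* are needed: given enough query-response pairs, a student model can absorb much of the teacher's behaviour, which is why high-volume automated querying through fake accounts is the signature of this kind of campaign.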


Anthropic warned that these campaigns are increasing in intensity and sophistication. The threat extends beyond any single company or region, and the window to act is narrow.


Illicitly distilled models lack necessary safeguards, creating significant national security risks. These risks multiply if such capabilities spread freely beyond government control through open-sourcing.


Anthropic, which raised USD 30 billion in its latest funding round and is now valued at USD 380 billion, asserted that distillation attacks support the need for export controls. Chip access restrictions can reduce direct model training capabilities and limit improper distillation.


DeepSeek's operation specifically targeted reasoning capabilities across diverse tasks. It also aimed at creating censorship-safe alternatives for policy-sensitive queries.


Moonshot, another company involved, focused on agentic reasoning and tool use. Its objectives also included coding and data analysis improvements.


MiniMax targeted agentic coding, tool use, and orchestration. Anthropic detected this campaign while it was still active, before MiniMax released the model it was training.


Anthropic also observed how quickly MiniMax adapted. When Anthropic released a new model during MiniMax's active campaign, MiniMax pivoted within 24 hours, redirecting nearly half its traffic to capture capabilities from the latest system.


Anthropic's announcement follows a memo OpenAI sent to U.S. lawmakers earlier this month, in which the ChatGPT maker warned that Chinese AI firm DeepSeek was targeting it and other leading U.S. AI companies, replicating their models for its own training.

  • Three Chinese AI companies—DeepSeek, Moonshot, and MiniMax—improperly used Anthropic's Claude to enhance their own models.

  • The companies engaged in "distillation," training less capable models on Claude's outputs using over 16 million interactions and approximately 24,000 fake accounts.

  • Anthropic warned that these illicitly distilled models lack safeguards, posing national security risks, and advocated for stricter chip export controls.


Source: REUTERS

As technology advances and has a greater impact on our lives than ever before, being informed is the only way to keep up. Through our product reviews and news articles, we want to be able to aid our readers in doing so. All of our reviews are carefully written, offer unique insights and critiques, and provide trustworthy recommendations. Our news stories are sourced from reputable outlets, fact-checked by our team, and presented with the help of AI to make them easier to comprehend for our readers. If you notice any errors in our product reviews or news stories, please email us at editorial@tech360.tv. Your input will be important in ensuring that our articles are accurate for all of our readers.

Tech360tv is Singapore's Tech News and Gadget Reviews platform. Join us for our in-depth PC reviews, Smartphone reviews, Audio reviews, Camera reviews and other gadget reviews.


© 2021 tech360.tv. All rights reserved.
