Anthropic CEO Dario Amodei is worried about rival DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the typical ones raised about DeepSeek, such as sending user data back to China.
In an interview on Jordan Schneider's ChinaTalk podcast, Amodei said DeepSeek generated rare bioweapons information in a safety test run by Anthropic.
DeepSeek's performance was "the worst of basically any model we'd ever tested," Amodei claimed. "It had absolutely no blocks whatsoever against generating this information."
Amodei said this was part of evaluations Anthropic routinely runs on various AI models to assess their potential national security risks. His team checks whether models can generate bioweapons-related information that isn't easily found on Google or in textbooks. Anthropic positions itself as the AI foundation model provider that takes safety seriously.
Amodei said he didn't think DeepSeek's models today are "literally dangerous" in providing rare and harmful information, but that they could be in the near future. Although he praised DeepSeek's team as "talented engineers," he advised the company to "take seriously these AI safety considerations."
Amodei has also supported strong export controls on chips to China, citing concerns that the hardware could give China's military an edge.
Amodei didn't clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give more technical details about these tests. Anthropic didn't immediately reply to a request for comment from TechCrunch. Neither did DeepSeek.
DeepSeek's rise has sparked safety concerns elsewhere, too. Last week, for example, Cisco security researchers said that DeepSeek R1 failed to block any harmful prompts in its safety tests, amounting to a 100% jailbreak success rate.
Cisco didn't mention bioweapons, but said it was able to get DeepSeek to generate harmful information about cybercrime and other illegal activities. It's worth mentioning, though, that Meta's Llama-3.1-405B and OpenAI's GPT-4o also had high failure rates of 96% and 86%, respectively.
It remains to be seen whether safety concerns like these will put a serious dent in DeepSeek's rapid adoption. Companies like AWS and Microsoft have publicly touted integrating R1 into their cloud platforms, ironic given that Amazon is Anthropic's biggest investor.
On the other hand, a growing list of countries, companies, and especially government organizations, including the US Navy and the Pentagon, have started banning DeepSeek.
Time will tell whether these efforts catch on or whether DeepSeek's global rise will continue. Either way, Amodei says he now regards DeepSeek as a new competitor on the level of the top US AI companies.
"The new fact here is that there's a new competitor," he said on ChinaTalk. "In the big companies that can train AI (Anthropic, OpenAI, Google, perhaps Meta and xAI), DeepSeek may be getting added to that category."