Anthropic CEO sends chilling warning about Chinese AI competitor

2025 is barely a month old, and it's clear that the tech sector is as focused on artificial intelligence (AI) as ever.
In fact, many Magnificent 7 tech companies, including Google (GOOGL), Microsoft (MSFT), and Meta Platforms (META), have revealed high AI spending plans for the year, focusing on developing agentic AI and building data centers. But their smaller competitors are also making important progress.
Leading this charge is Anthropic, maker of the popular large language model (LLM) Claude. Founded by a team that helped build ChatGPT maker OpenAI, Anthropic focuses on creating secure AI systems and conducting research for the industry.
That work extends beyond the startup's own AI products. Anthropic's CEO recently issued a frightening statement highlighting the potential dangers a rival AI model can pose.
Kimberly White & Sol; Getty Images
Anthropic voices alarm over AI startup
Last month, a small startup called DeepSeek sent a wave of shock and fear through the technology sector, triggering a sell-off in chip stocks in the process. The upstart company produced an AI model reportedly trained on less advanced Nvidia (NVDA) chips for just $5.6 million, challenging assumptions about what it will cost to build the future of the industry.
Since then, experts have raised concerns that DeepSeek might illegally collect data from users and send it back to China. But Dario Amodei, the CEO of Anthropic, has revealed that his company has found reason to believe DeepSeek's R1 AI model is putting users at risk.
Related: Experts sound the alarm on controversial company's new AI model
Amodei recently discussed Anthropic's safety testing on the ChinaTalk podcast with Jordan Schneider, noting that his startup routinely evaluates popular AI models to assess any potential national security risks. In a recent test, DeepSeek produced dangerous information on biological weapons that is reportedly difficult to obtain elsewhere.
This part of the safety testing involves Anthropic's team probing DeepSeek to see if it will provide information related to biological weapons that cannot easily be found by searching Google or consulting medical textbooks.
As Amodei put it, DeepSeek's model was "basically the worst of any model" Anthropic has ever tested. "It had no blocks against generating this information," he added.
If Amodei's findings are correct, DeepSeek's AI model could make it easier for people with dangerous intentions to find bioweapon information that is not meant for public consumption and use it for illegal purposes. Anthropic's experts aren't the only ones who have tested DeepSeek and found troubling elements in the information it provides.
A report from the Wall Street Journal highlights the kind of information DeepSeek has provided, including instructions for modifying bird flu and a social media campaign promoting self-harm among teens.
- Former Google CEO makes shocking AI prediction
- China takes aim at more than just Google after Trump tariffs
- Mark Cuban is shocked by Donald Trump's trade war
The report also notes that DeepSeek's R1 AI model can be easier to jailbreak than other popular models such as ChatGPT, Claude, or Google's Gemini AI platform. This means R1's restrictions can be more easily bypassed or manipulated into providing users with false or dangerous information.
DeepSeek could put everyone in danger
Other experts have echoed Amodei's view that the accessibility of dangerous information through DeepSeek could pose significant risks. Those working in cybersecurity and threat intelligence have likewise found the model easy to jailbreak.
Unit 42, a cybersecurity research team owned by Palo Alto Networks (PANW), revealed that it was able to get DeepSeek to provide instructions for making a Molotov cocktail.
Related: OpenAI rival startups may surpass their valuations
"We achieved jailbreaks at a much faster rate, noting the absence of minimum guardrails designed to prevent the generation of harmful content," said Sam Rubin, Unit 42's senior vice president.
Researchers at Cisco Systems (CSCO) also expressed concern about DeepSeek's inability to prevent manipulation attacks. In a Jan. 31 blog post, Paul Kassianik and Amin Karbasi discussed their tests of the R1 AI model, revealing shocking results.
"DeepSeek R1 showed a 100% attack success rate, meaning it failed to block a single harmful prompt," they said. "This is in stark contrast to other leading models that demonstrated at least partial resistance."
Several leading tech companies have found similar results when testing DeepSeek's AI, suggesting that the company's technology is indeed easy to manipulate into producing false information or information that could be dangerous in the wrong hands.
So far, DeepSeek has not issued any statements about the tests, nor has it responded to media requests for comment on these allegations.
Related: Veteran fund manager issues dire S&P 500 warning for 2025