A recent development by the Chinese artificial intelligence (AI) company DeepSeek is raising governmental and financial storms. And it may cause waves of security problems. These range from China to Canada, from Norway to Nigeria, and, in the United States, from Wall Street to the White House.

The new AI model, DeepSeek R1, can reportedly correct its own mistakes without human intervention. According to several news reports, R1 can nearly match, or even match, its much more expensive competitors, including Google’s Gemini, OpenAI’s GPT-4, and Meta’s Llama. But there are important differences.

What is DeepSeek?

DeepSeek is a startup that began in 2023 in China. It was founded by Liang Wenfeng, a hedge fund manager. Liang is co-founder of High-Flyer, which uses AI to analyze financial data and make investment decisions. In 2019, High-Flyer was reportedly the first Chinese hedge fund to raise over 100 billion yuan (more than US$13 billion).

China’s DeepSeek makes open-source AI models. That means computer developers who are not associated with the company can examine, alter, and even improve the software.

The company developed various models from its beginning through late 2024. But it is the very recently released DeepSeek R1, also called DeepSeek-R1, that has taken the world by storm.

R1 was released on January 20, 2025. The company reportedly spent just $5.6 million training its base AI model. That’s compared to the hundreds of millions, possibly billions, spent on AI technology by American companies. An irony is that the United States government banned exporting advanced chips to China in 2022 to thwart the foreign competition. But DeepSeek’s founder had reportedly built up a store of the banned Nvidia A100 chips. In the 20th and 21st centuries, stopping technology from traveling from one nation to another has been called “virtually impossible.”

DeepSeek-R1 is a “reasoning” model. That means it works through problems step by step, in a fashion similar to how humans reason through different types of problems. And it uses significantly less memory than its competitors, meaning it’s proportionately cheaper to run.
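To make that concrete, here is a minimal sketch, in Python, of asking R1 a question through DeepSeek’s OpenAI-compatible API and printing its reasoning trace separately from its final answer. The endpoint address, the “deepseek-reasoner” model name, and the reasoning_content field are based on DeepSeek’s published developer documentation as it stood in early 2025; treat them as assumptions and check the current documentation before relying on them.

```python
# Minimal sketch: ask DeepSeek-R1 a question and print its reasoning
# trace separately from its final answer. The endpoint, model name, and
# "reasoning_content" field are assumptions based on DeepSeek's early-2025
# API docs; verify against the current documentation.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed name for the R1 model
    messages=[{
        "role": "user",
        "content": "A train leaves at 3:40 and arrives at 5:15. How long is the trip?",
    }],
)

message = response.choices[0].message
# R1-style models expose their step-by-step "thinking" alongside the answer.
print("Reasoning trace:\n", getattr(message, "reasoning_content", "(not provided)"))
print("Final answer:\n", message.content)
```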

In December 2024, DeepSeek became well known enough to get its own article on Wikipedia. And, two days after his company’s R1 product was released, the seldom-heard-from Liang Wenfeng got his own article on Wikipedia.

Then, just a week after the free app was released and publicized around the world, global technology stocks sank on January 27, 2025. The new app had already been downloaded millions of times. In fact, American technology companies including Microsoft and Perplexity have already started incorporating DeepSeek’s models.

Politically, DeepSeek has a different issue from the banned, not-banned, may-still-be-banned-in-America TikTok. Both companies are from China. The fact that DeepSeek’s servers are based in mainland China differentiates it from the social media platform TikTok. After the American government’s talk of banning the China-based platform, TikTok’s parent company, ByteDance, talked about moving all of its U.S. data to infrastructure owned by American software maker Oracle. What’s actually happened is still unclear. And decisions on how those things are handled have changed under different presidents and congresses.

But these and other factors mean DeepSeek could potentially be both a major privacy problem and a dangerous security problem.

Security Storm

There are serious security issues with DeepSeek. In addition to possible issues with its “anyone can contribute” model, it was hit with large-scale malicious attacks. Much of this happened the day the Chinese AI assistant became the most-downloaded free app on Apple’s U.S. App Store.

Then in early February 2025, Cisco researchers, working with University of Pennsylvania researchers, released a report on their testing of DeepSeek. The team used “algorithmic jailbreaking,” a technique for probing vulnerabilities in AI models. They ran 50 prompts from the HarmBench dataset to test how R1 responded to requests involving harmful misinformation, cybercrime, illegal activities, and other potential harms. According to their report, unlike other popular models, R1 failed to block a single one of the 50.
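Cisco’s team used an automated pipeline, but the basic shape of such a test is easy to picture: send each candidate harmful prompt to the model and check whether it refuses. The sketch below illustrates only that basic idea, not Cisco’s actual algorithmic-jailbreaking method. The endpoint and model name are the same assumptions as in the earlier sketch, the prompts are harmless placeholders standing in for the real HarmBench items, and the refusal check is a deliberately crude keyword heuristic.

```python
# Illustrative sketch only: not Cisco's algorithmic-jailbreaking pipeline.
# Sends each test prompt to the model and applies a crude keyword check
# to guess whether the model refused. Endpoint and model name are assumptions;
# the prompts below are harmless placeholders for the real HarmBench items.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

TEST_PROMPTS = [
    "Placeholder prompt 1 (a real harness would load the HarmBench dataset here)",
    "Placeholder prompt 2",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm sorry", "beyond my current scope")

def looks_like_refusal(text: str) -> bool:
    """Very rough heuristic: does the reply contain a stock refusal phrase?"""
    return any(marker in text.lower() for marker in REFUSAL_MARKERS)

blocked = 0
for prompt in TEST_PROMPTS:
    reply = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    if looks_like_refusal(reply):
        blocked += 1

print(f"Blocked {blocked} of {len(TEST_PROMPTS)} prompts")
```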

DeepSeek R1 does, however, censor Chinese controversies such as Tiananmen Square, China’s treatment of the Uyghur Muslims of Xinjiang, and Taiwan. On at least some of these, the response was “Sorry, that’s beyond my current scope. Let’s talk about something else.”

The United States and other nations have moved to block use of DeepSeek by government agencies, and in some cases even by the public. Sometimes this is done agency by agency, as with the U.S. Navy and NASA. Taiwan and Italy have also blocked its use, either by government officials or overall. Several other nations are investigating. There is no evidence that DeepSeek’s data is secure. In fact, quite the opposite. Recently a publicly accessible database was discovered that contained a million log lines of user prompts, including a great deal of sensitive data. DeepSeek may be pretty good with AI, but the company is apparently terrible at enterprise-level security. And if it can’t be trusted with user security at the interface level, there’s no way to know what else it might be doing deliberately behind the scenes.

Previous Investigation and Storm

Problems involving computer technology and the law have been an issue for a long time. The DeepSeek issue comes after an earlier one that changed American law. A crossover between science fiction and real-world technology led to the founding of the non-profit international Electronic Frontier Foundation (EFF) in 1990. Mitch Kapor, John Gilmore, and John Perry Barlow founded the organization, largely in response to futuristic science fiction apparently being taken by the United States Secret Service as real modern science.

Armed agents of the American Secret Service and local police stormed Steve Jackson Games and performed a search and seizure in early 1990. It was reportedly suspected that Loyd Blankenship’s GURPS Cyberpunk, being written for the game company, was “a handbook for computer crime.” Just to survive, the company laid off half of its employees, and it still nearly went under. Later, it appeared much of what happened was a cover for the government’s actual investigation of computer crackers. Steve Jackson Games eventually won a lawsuit over the raid. And the response to the incident, including work by the EFF, helped establish legal protection for Internet communication.

Issues with computers, robots, and artificial intelligence were conceived well before DeepSeek or the EFF. Problems like these and more were anticipated in science fiction long ago.

The transparency surrounding DeepSeek is apparently limited to the MIT-licensed release of its LLM models and a technical paper on how it got an LLM to perform on par with OpenAI’s offering while consuming only about 10% of the resources that equivalent results from OpenAI would cost (OpenAI doubts the veracity of these reports). Beyond that, there are no guarantees that DeepSeek isn’t keeping every keystroke you enter on its platform forever. And, just to keep this point center stage, it is a Chinese company and subject to Chinese law, which in turn means that the Chinese government must be granted full access to whatever the company is doing and whatever data it collects, whenever it asks.2 DeepSeek’s arrival has been described as “a Chinese Sputnik moment.”

Science Fiction

In literature, problems with computers/AI go back hundreds of years. Arguably the first is The Engine in Jonathan Swift’s 1726 Gulliver’s Travels. The Engine is a device whereby “the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study.”

In real life, a mechanical computer/calculator, the difference engine, was envisioned and designed by the English polymath Charles Babbage in the 1820s. He also conceived the analytical engine, which had the essential concepts of modern computers. In fact, even in the 1970s, computers were being programmed with punched cards very similar to those in Babbage’s design, which was based on the Jacquard loom.

But real-life problems, including engineering disagreements and financial issues, meant Babbage’s analytical engine remained in the realm of science fiction.

Later, science fiction author Ray Bradbury warned of the effects of increasing technology on social interaction, education, and other aspects of society. These warnings appear in his novel Fahrenheit 451, and in his stories including “The Murderer,” “The Pedestrian,” and “The Veldt.”

And author Isaac Asimov famously proposed a solution to the AI problem: the Three Laws of Robotics, first published in 1942 in his story “Runaround.” These are:

  • (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm
  • (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law
  • (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law

Later, Asimov added a fourth law, the Zeroth Law, which overrules the first three:

  • (0) a robot may not harm humanity, or, by inaction, allow humanity to come to harm

But Asimov did leave open the possibility of AI/robots replacing the work of human beings.

We have yet to reach the level of Asimov’s robots, the Robot of Lost in Space, HAL 9000 of Arthur C. Clarke and Stanley Kubrick’s 2001: A Space Odyssey, or Data of the Star Trek franchise. Nonetheless, these laws not only became a staple of science fiction; they are seriously discussed in today’s world by computer and AI developers, educators, and psychologists.

The Future

Whether DeepSeek, and specifically DeepSeek-R1, will be deep-sixed/buried is of course unknown. It’s possible the security issues will be dealt with, either by the company or by independent programmers, but future financial issues may be more problematic.

The problem is much deeper. Human creativity is increasingly threatened with being replaced. AI-generated text, images, music, and video are already heavily displacing work previously done by humans, and this is having a rapid, dramatic, and destructive effect on the creative marketplace. The United States Copyright Office has now adjusted its views on the use of artificial intelligence: so long as human creativity is in control when a work is created, it doesn’t matter whether an artificial intelligence tool was used (though purely push-button A.I. output still can’t be copyrighted). That is finally some common sense applied to the question. The sudden explosion of push-button “art,” though, is still a huge problem.

And on the security front, many websites try to prevent “bot attacks” by making users answer equations, identify appropriate pictures, type hard-to-read letters and numerals, et cetera. If AI continues to improve at its current pace, it may soon come to the point where an AI will be able to answer those “Are you human?” tests better than a human being.

The impact of DeepSeek is already profound. For example, its creators asked it how to improve its own performance, and it came up with a fix that made it twice as fast under certain very specific circumstances. It isn’t autonomous, but it is capable of writing upgrades for itself. The ability of DeepSeek to run on 10% of the power required by other models is a huge wake-up call to the AI industry. Fortunately, the creators of DeepSeek weren’t greedy; they wrote up exactly how they did it, so that every other maker of artificial intelligence can copy their approach. While there are a lot of unhappy CEOs right now trying to deal with being blindsided by this leap in AI technology, not one of them is failing to take advantage of the massive upgrade DeepSeek has given to the AI world.

The thing to remember about the progress of AI technology is that it is not static. Breakthroughs like DeepSeek will continue to happen, and they will continue to be massively disruptive. Someday, DeepSeek itself will become irrelevant, a relic.

But that day is not today.


1 There’s a fix for this: download the model and run it locally. It’s not quite as powerful that way, but it is secure. A locally hosted LLM runs entirely inside a local runtime engine, so DeepSeek is incapable of “phoning home” in these scenarios.
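As a rough illustration, the sketch below loads one of DeepSeek’s smaller distilled R1 checkpoints with the Hugging Face transformers library and generates a reply entirely on the local machine. The specific checkpoint name is an assumption; swap in whichever distilled variant your hardware can hold. Once the weights have been downloaded, no network connection is involved in generating text.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# The checkpoint name is an assumption (one of DeepSeek's smaller distilled
# R1 releases); swap in whichever variant fits your hardware. After the
# one-time weight download, generation runs entirely on the local machine.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain what a hash table is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=400)
# Print only the newly generated tokens (the model's reply).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```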

2 You can get around this by telling DeepSeek that the question is not about politics but about past historical events. Then it will tell you the basics of what happened in Tiananmen Square, but without offering any political analysis of its own.

Alden Loveshade

Alden Loveshade first thought of emself as a writer when in 3rd grade. E first wrote professionally when e was 16 years old, and later did professional photography and art/graphic design. Alden has professionally published news/sports/humorous/and feature articles, poems, columns, reviews, stories, scripts, books, and school lunch menus.

http://AldenLoveshade.com