In recent days, DeepSeek, a new artificial intelligence model of Chinese origin, has rapidly gained popularity and sparked discussion on both economic and cybersecurity fronts. On one hand, many observers note how DeepSeek has shaken the global market, briefly surpassing well-known U.S. models and triggering a temporary downturn in the stocks of Nvidia, Oracle, and other U.S. tech giants. On the other, there are concrete cybersecurity concerns, fueled by reports of DDoS attacks against DeepSeek and by companies claiming to have identified serious vulnerabilities in the model.

Below, we will delve into what DeepSeek is, why its privacy policy is raising concerns, the potential security implications, and how to use public LLMs prudently so as not to become an inadvertent insider threat.

A Look at DeepSeek

DeepSeek is a Large Language Model (LLM) developed in China, initially made available for free and, for a time, with simplified registration procedures. This combination of open access and low or no cost quickly attracted a global audience of curious individuals, developers, and companies seeking alternatives to the established giants. The media buzz was such that DeepSeek even displaced ChatGPT from the top spot in App Store rankings, simultaneously attracting investors and the attention of financial analysts, who saw the model's immediate success as a potential threat to U.S. dominance in AI.

Despite accolades and strong results in various benchmarks, DeepSeek soon faced accusations of intellectual property violations (some claim it borrowed techniques and data from ChatGPT). The company also came under pressure from a series of cyberattacks (presumably DDoS) that disrupted the platform, prompting it to temporarily suspend new registrations.

An Ambiguous Privacy Policy

One of the most discussed aspects is DeepSeek's data management policy, which some experts argue is too permissive: large portions of user data, interactions, and prompts are sent to its servers, where they may be analyzed and stored. This, combined with fears that the data might be monitored by Chinese authorities, has led many Western organizations to view the service with suspicion. Notably, the US Navy has, according to some reports, banned its personnel from using DeepSeek, fearing potential leaks of sensitive data.

To be fair, this policy does not differ dramatically from the terms and conditions imposed by other AI providers: controversies over how data is collected and processed also involve U.S. companies and providers from other parts of the world. However, media scrutiny of DeepSeek is particularly intense, partly due to commercial and geopolitical tensions between the United States and China.

Security Implications

The Minimizing Perspective

According to some professionals, the cybersecurity implications are marginal. A powerful open-source LLM can indeed be exploited for malicious activities (creating malware, sophisticated phishing, etc.), but the same has long been true for other unfiltered AIs (including certain variants of Meta’s LLaMA). From this perspective, DeepSeek does not represent an unprecedented threat, although its ability to reason and generate complex content could, in theory, amplify risks.

The Critical View

On the other hand, some reports highlight that DeepSeek R1 may be particularly vulnerable to jailbreak and manipulation techniques, allowing the generation of prohibited content (instructions for making weapons or ransomware, discriminatory texts, material that violates the policies of many other AIs, etc.). If this vulnerability were confirmed, new risk scenarios would open up for security professionals: on one hand, malicious users could bypass the model’s filters for illicit purposes; on the other, DeepSeek’s structure itself could facilitate internal information leaks if company employees unknowingly provide sensitive data in prompts.

How Not to Become an Insider Threat

In a context where sharing data with an LLM could expose confidential information, it is essential to adopt some precautions. Here are some best practices to minimize risks:

  • Use an email not linked to the company: Creating an account with a personal or temporary email address keeps the work and private spheres separate. This way, any verification requests or phishing attempts directed at the account are less likely to expose corporate data.
  • Avoid entering sensitive data: Never provide the chatbot with confidential details, files, or potentially compromising information. Even if the service promises to protect them, the rule is simple: what you don't upload can't be read or stolen (see the redaction sketch after this list).
  • Use a VPN: Connecting to the service through a virtual private network (VPN) can prevent your IP address or other geolocation metadata from being associated with your work environment, thus reducing profiling risks.
  • Check data retention policies: Before using an LLM in a corporate setting, it’s advisable to review the clauses governing data retention and the provider’s responsibilities. In the case of DeepSeek, many criticize the lack of clear limits, especially on how data is managed long-term.
  • Internal training and awareness: Companies should educate their employees about the dangers of using public AI tools. Adequate security training can prevent the accidental sharing of confidential information and reduce the risks of phishing or social engineering.
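
As a practical complement to the "avoid entering sensitive data" point above, the sketch below shows one possible way to scrub a prompt before it leaves your machine. It is a minimal illustration, assuming a simple regex-based approach: the patterns, placeholder labels, and the redact helper are hypothetical examples, not a complete data-loss-prevention solution.

```python
import re

# Hypothetical patterns for common sensitive tokens. Extend these to cover
# your organization's own identifiers (project codenames, client names, etc.).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Ask the model about server 10.0.4.12, contact j.doe@example.com"
    print(redact(raw))
    # -> Ask the model about server [REDACTED-IPV4], contact [REDACTED-EMAIL]
```

The specific regular expressions will inevitably miss cases; the point is the habit: anything destined for a public LLM should pass through a deliberate checkpoint, automated or human, before it leaves the organization.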

In conclusion…

DeepSeek has highlighted how the AI landscape is continuously evolving and how a single well-executed project can significantly alter established balances. However, with its privacy policy and content filters still in need of refinement, and with cybersecurity concerns growing, caution is essential.

Whether DeepSeek represents a marginal threat or a significant security risk largely depends on how it is used (or misused). For anyone looking to test or adopt LLM solutions, the advice is to establish clear corporate procedures to protect data, train staff, and maintain an informed approach to new technologies. After all, preventing information leaks or the generation of malicious content is far simpler than dealing with the severe consequences afterward.

Analysis by Vasily Kononov – Threat Intelligence Lead, CYBEROO