Microsoft has formally restricted its employees from using the DeepSeek AI app, citing serious concerns about data security and potential exposure to politically influenced content. The move was revealed by Microsoft Vice Chairman and President Brad Smith during testimony before the U.S. Senate.
"We don't allow our employees to use the DeepSeek app," Smith stated, referencing both the desktop and mobile versions of the chatbot. He noted that the app is not offered through Microsoft's app store, further underscoring the company's cautious stance.
The decision stems primarily from apprehensions around user data being stored on servers located in China. DeepSeek's privacy policy confirms that user interactions are processed and stored under Chinese jurisdiction, which includes legal mandates requiring data sharing with state intelligence agencies. Microsoft is also concerned that the AI's responses could be shaped by Chinese state narratives, including the censorship of politically sensitive topics.
Although other organizations and even national governments have already imposed restrictions on DeepSeek, this marks the first time Microsoft has publicly addressed its internal ban on the platform.
Interestingly, Microsoft does host DeepSeek's open-source R1 model on its Azure cloud infrastructure—a move that raised questions, given the company's security concerns. However, Smith clarified the distinction: while the model itself is available for developers to deploy and fine-tune on private servers, the consumer-facing app, which routes user data through Chinese infrastructure, is not.
Open-source access means that developers can use the model without relying on DeepSeek's official servers, thereby avoiding direct data transfers to China. Nevertheless, risks remain. Microsoft flagged issues such as the model's potential to spread misinformation or generate unsafe code if left unchecked.
To mitigate these concerns, Smith stated that Microsoft had made internal modifications to the model before deploying it on Azure. He described the process as one of identifying and removing harmful behaviors from the original system, although he declined to go into detail, referring media outlets to his Senate testimony.
When DeepSeek was first introduced on Azure, Microsoft emphasized that it had undergone comprehensive safety evaluations, including rigorous "red teaming"—a process where AI systems are tested for vulnerabilities and potentially dangerous outputs.
While DeepSeek may compete with Microsoft's own AI products like Copilot, the company has not banned all competing chat applications from its platform. For example, Perplexity AI remains available in the Windows app store. However, apps by major rival Google—including Chrome and its AI chatbot Gemini—were not found during a recent search.
Microsoft's actions reflect a broader concern in the tech industry over the security and influence risks tied to foreign-developed AI systems. As AI adoption accelerates, how and where data is handled is becoming just as critical as what the models can do.