SECURITY AND DATA PROTECTION – DEEPSEEK AND THE IFA

The Privacy Policy is always the most revealing page on a website. Sometimes it is not quite clear which company a site actually represents – a problem easily cleared up by a visit to the privacy policy. The privacy policies of all the AI tools are worth visiting, to see where your information is really going. Paragraph two of the DeepSeek policy is as simple as Snow White: your information is controlled by two firms, both registered in China. For BAT, that ends any consideration of allowing these tools access to our databases for AI development. We are unable to allow your data to be controlled or processed outside of the UK.

DeepSeek has also suffered a data leak. The exposed database contained a significant volume of chat history, backend data and sensitive information, including log streams, API secrets and operational details. The rapid adoption of AI services without corresponding security is inherently risky, and this exposure underscores the fact that the immediate security risks for AI applications stem from the infrastructure and tools supporting them.

The leak was revealed on January 29th 2025 by Wiz, an Israeli cyber-security start-up.
BAT has similar privacy issues with ChatGPT. There, the data is controlled by a strangely named firm called OpenAI OpCo in San Francisco – again outside the UK, so no good at all for BAT.

For IFAs using AI, you cannot upload customer data in any meaningful way to create outputs. Every enquiry must be anonymised before it is submitted, or you risk the acute discomfort of a client's details being displayed to other parties.

As AI becomes integrated into compliance businesses worldwide, the industry must recognise the risks of handling sensitive data and enforce security practices on par with those required of public cloud providers and major infrastructure providers.
