Study: GenAI tools raise risk of sensitive data exposure

A recent study by Harmonic Security, reported by Cybernews, highlights significant risks of sensitive data exposure through generative AI tools such as OpenAI's ChatGPT, Microsoft's Copilot, and Google's Gemini. The study, which analyzed tens of thousands of prompts, found that nearly 8.5% of business users may have disclosed sensitive information. Of those incidents, 46% involved customer data such as billing and authentication details; employee-related data, including payroll and performance reviews, accounted for more than a quarter; and legal, financial, and proprietary security details sought after by threat actors were also frequently exposed. Sensitive code, including access keys and proprietary source code, made up the remainder.

The study also found that many employees use free versions of these tools, which often lack adequate security controls. Despite the risks, the majority of generative AI use was deemed safe, focused on tasks such as text summarization, editing, and coding documentation. Experts stress that proper training and safeguards are essential to minimize exposure and ensure secure AI usage.
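One form such a safeguard can take is screening prompts for sensitive patterns before they leave the organization. The sketch below is a minimal, hypothetical illustration of that idea, assuming a simple regex-based pre-filter; it is not Harmonic Security's methodology, and production data-loss-prevention tools use far more robust detection (validation, context scoring, ML classifiers).

```python
import re

# Illustrative patterns for a few categories of sensitive data the
# study flags (customer billing details, access keys, contact info).
# These regexes are assumptions chosen for this sketch only.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = ("Summarize this ticket: customer card 4111 1111 1111 1111, "
              "contact jane@example.com")
    findings = screen_prompt(prompt)
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")
    else:
        print("Prompt passed screening")
```

A filter like this would sit in front of the GenAI tool and block or redact flagged prompts, which is one way an organization could allow the safe uses the study describes (summarization, editing, documentation) while reducing accidental disclosure.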
