More Than a Third of Sensitive Business Information Entered into Generative AI Apps is Regulated Personal Data: Netskope Threat Labs

Generative AI usage has tripled in 12 months, but organizations are still struggling to balance safe enablement with risk management.

Bangalore, India, July 17, 2024: Netskope, a leader in Secure Access Service Edge (SASE), today published new research showing that regulated data (data that organizations have a legal duty to protect) makes up more than a third of the sensitive data being shared with generative AI (genAI) applications, presenting a potential risk to businesses of costly data breaches.

The new Netskope Threat Labs research reveals that three-quarters of businesses surveyed now completely block at least one genAI app, reflecting enterprise technology leaders' desire to limit the risk of sensitive data exfiltration. However, with fewer than half of organizations applying data-centric controls to prevent sensitive information from being shared in input queries, most are behind in adopting the advanced data loss prevention (DLP) solutions needed to safely enable genAI.

Using global data sets, the researchers found that 96% of businesses are now using genAI—a number that has tripled over the past 12 months. On average, enterprises now use nearly 10 genAI apps, up from three last year, with the top 1% of adopters now using an average of 80 apps, up significantly from 14. With the increased use, enterprises have experienced a surge in proprietary source code sharing within genAI apps, accounting for 46% of all documented data policy violations. These shifting dynamics complicate how enterprises control risk, prompting the need for a more robust DLP effort.

There are positive signs of proactive risk management in the nuanced security and data loss controls organizations are applying; for example, 65% of enterprises now implement real-time user coaching to help guide user interactions with genAI apps. According to the research, effective user coaching has played a crucial role in mitigating data risks, prompting 57% of users to alter their actions after receiving coaching alerts.
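
The coaching mechanism described above can be sketched as a simple prompt check: scan outgoing text for regulated-data patterns and, on a match, warn the user before the prompt leaves the organization. This is a minimal illustration, not Netskope's implementation; the detection patterns and the `confirm` callback are hypothetical, and real DLP engines use far richer detection than regular expressions.

```python
import re

# Hypothetical patterns for regulated data; production DLP uses far richer detection.
REGULATED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def coach_user(prompt: str, confirm) -> bool:
    """Scan a genAI prompt; if regulated data is found, ask the user to confirm.

    `confirm` is a callback (e.g. a UI dialog) returning True to proceed anyway.
    Returns True if the prompt may be sent to the genAI app.
    """
    findings = [name for name, pat in REGULATED_PATTERNS.items() if pat.search(prompt)]
    if not findings:
        return True  # nothing sensitive detected; send as-is
    warning = f"Prompt appears to contain: {', '.join(findings)}. Send anyway?"
    return confirm(warning)
```

A clean prompt passes straight through; a flagged prompt is only sent if the user explicitly confirms, which is the behavior the 57% figure above measures: users changing course after seeing the alert.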

“Securing genAI needs further investment and greater attention as its use permeates through enterprises with no signs that it will slow down soon,” said James Robinson, Chief Information Security Officer, Netskope. “Enterprises must recognize that GenAI outputs can inadvertently expose sensitive information, propagate misinformation, or even introduce malicious content. It demands a robust risk management approach to safeguard data, reputation, and business continuity.”

Netskope’s Cloud and Threat Report: AI Apps in the Enterprise also finds that:

  • ChatGPT remains the most popular app, with more than 80% of enterprises using it.
  • Microsoft Copilot showed the most dramatic growth in use since its launch in January 2024, at 57%.
  • 19% of organizations have imposed a blanket ban on GitHub Copilot.

Key Takeaways for Enterprises

Netskope recommends enterprises review, adapt, and tailor their risk frameworks specifically to AI or genAI, drawing on resources such as the NIST AI Risk Management Framework. Specific tactical steps to address genAI risk include:

  • Know Your Current State: Begin by assessing your existing uses of AI and machine learning, data pipelines, and GenAI applications. Identify vulnerabilities and gaps in security controls.
  • Implement Core Controls: Establish fundamental security measures, such as access controls, authentication mechanisms, and encryption.
  • Plan for Advanced Controls: Beyond the basics, develop a roadmap for advanced security controls. Consider threat modeling, anomaly detection, continuous monitoring, and behavioral detection to identify suspicious data movements to genAI apps across cloud environments that deviate from normal user patterns.
  • Measure, Start, Revise, and Iterate: Regularly evaluate the effectiveness of your security measures. Adapt and refine them based on real-world experiences and emerging threats.
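
As a toy illustration of the anomaly-detection idea in the "Plan for Advanced Controls" step above, a per-user baseline of data volume sent to genAI apps could be checked with a simple z-score. The threshold and the byte-count signal are assumptions for illustration; real behavioral detection models many more signals than upload volume.

```python
from statistics import mean, stdev

def flag_anomaly(daily_bytes: list[int], today_bytes: int, z_threshold: float = 3.0) -> bool:
    """Flag today's upload volume to genAI apps if it deviates from the user's baseline.

    `daily_bytes` is the user's historical daily upload volume; a toy z-score check.
    """
    if len(daily_bytes) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(daily_bytes), stdev(daily_bytes)
    if sigma == 0:
        return today_bytes != mu  # flat baseline: any change is a deviation
    return (today_bytes - mu) / sigma > z_threshold
```

A user who normally uploads about a megabyte a day would not be flagged for ordinary variation, but a sudden bulk transfer of source code to a genAI app would stand out sharply against the baseline.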

Download the full Cloud and Threat Report: AI Apps in the Enterprise here. For more information on cloud-enabled threats and the latest findings from Netskope Threat Labs, visit Netskope’s Threat Research Hub.

About us:

CIO News is the premier platform dedicated to delivering the latest news, updates, and insights from the CIO industry. As a trusted source in the technology and IT sector, we provide a comprehensive resource for executives and professionals seeking to stay informed and ahead of the curve. With a focus on cutting-edge developments and trends, CIO News serves as your go-to destination for staying abreast of the rapidly evolving technology and IT landscape. Founded in June 2020, CIO News has grown quickly and has ambitious plans to expand globally, targeting markets in the Middle East & Africa, ASEAN, the USA, and the UK.

CIO News is a property of Mercadeo Multiventures Pvt Ltd.