Netskope has published new research showing that regulated data (data that organizations have a legal duty to protect) makes up more than a third of the sensitive data being shared with generative AI (genAI) applications, exposing businesses to the potential risk of costly data breaches.
The new Netskope Threat Labs research reveals that three-quarters of businesses surveyed now completely block at least one genAI app, reflecting enterprise technology leaders' desire to limit the risk of sensitive data exfiltration. However, with fewer than half of organizations applying data-centric controls to prevent sensitive information from being shared in input prompts, most are behind in adopting the advanced data loss prevention (DLP) solutions needed to safely enable genAI.
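To make "data-centric controls" concrete, here is a minimal sketch of the kind of prompt-scanning check a DLP layer might apply before a prompt reaches a genAI app. The pattern names and regexes are illustrative assumptions, not Netskope's detection logic; production DLP engines use far richer techniques such as exact-data matching, file fingerprinting, and ML classifiers.

```python
import re

# Illustrative patterns only; a production DLP engine would use richer
# detection (exact-data matching, file fingerprinting, ML classifiers).
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def allow_submission(prompt: str) -> bool:
    """Data-centric gate: block the prompt if regulated data is detected."""
    findings = scan_prompt(prompt)
    if findings:
        print("Blocked: prompt matched " + ", ".join(findings))
        return False
    return True

# Example: a Social Security number is caught before it leaves the organization.
print(allow_submission("Summarize the claim for SSN 123-45-6789"))  # False
```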
Using global data sets, the researchers found that 96% of businesses are now using genAI, a number that has tripled over the past 12 months. On average, enterprises now use nearly 10 genAI apps, up from three last year, and the top 1% of adopters now use an average of 80 apps, up significantly from 14. With the increased use, enterprises have experienced a surge in proprietary source code being shared within genAI apps, accounting for 46% of all documented data policy violations. These shifting dynamics complicate how enterprises control risk, prompting the need for a more robust DLP effort.
There are positive signs of proactive risk management in the nuance of the security and data loss controls organizations are applying: for example, 65% of enterprises now implement real-time user coaching to help guide user interactions with genAI apps. According to the research, effective user coaching has played a crucial role in mitigating data risks, prompting 57% of users to alter their actions after receiving coaching alerts.
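As an illustration of that coaching pattern (not Netskope's implementation), the sketch below warns a user when a prompt appears to contain personal data and asks them to reconsider rather than silently blocking; the email detector and console dialog are hypothetical stand-ins for a real browser or endpoint agent.

```python
import re

# Toy detector standing in for a full DLP classification engine.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def coach_user(prompt: str) -> bool:
    """Real-time coaching: warn the user instead of silently blocking.

    Returns True if the prompt may be submitted to the genAI app. In a
    real deployment the dialog would be an inline browser or endpoint
    alert rather than a console prompt.
    """
    if EMAIL.search(prompt):
        print("Warning: this prompt appears to contain an email address.")
        print("Company policy restricts sharing personal data with genAI apps.")
        return input("Submit anyway? [y/N] ").strip().lower() == "y"
    return True
```

The design choice mirrors the statistic above: coaching leaves the decision with the user, which the research found prompts a majority to change course on their own.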
"Securing
genAI needs further investment and greater attention as its use permeates
through enterprises with no signs that it will slow down soon," said James
Robinson, Chief Information Security Officer, Netskope. "Enterprises must
recognize that genAI outputs can inadvertently expose sensitive information,
propagate misinformation or even introduce malicious content. It demands a
robust risk management approach to safeguard data, reputation, and business
continuity."
Netskope's Cloud and Threat Report: AI Apps in the Enterprise also finds that:
- ChatGPT remains the most popular genAI app, with more than 80% of enterprises using it
- Microsoft Copilot has shown the most dramatic growth since its launch in January 2024 and is now used by 57% of organizations
- 19% of organizations have imposed a blanket ban on GitHub Copilot
Key Takeaways for Enterprises
Netskope recommends that enterprises review, adapt, and tailor their risk frameworks specifically to AI or genAI, using efforts such as the NIST AI Risk Management Framework. Specific tactical steps to address risk from genAI include:
- Know Your Current State: Begin by assessing your existing uses of AI and machine learning, data pipelines, and genAI applications. Identify vulnerabilities and gaps in security controls.
- Implement Core Controls: Establish fundamental security measures, such as access controls, authentication mechanisms, and encryption.
- Plan for Advanced Controls: Beyond the basics, develop a roadmap for advanced security controls. Consider threat modeling, anomaly detection, continuous monitoring, and behavioral detection to identify suspicious data movements across cloud environments to genAI apps that deviate from normal user patterns (a minimal sketch of the anomaly detection idea follows this list).
- Measure, Start, Revise, Iterate: Regularly evaluate the effectiveness of your security measures. Adapt and refine them based on real-world experiences and emerging threats.
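To illustrate the anomaly detection step above in its simplest form, the following sketch flags a user's daily genAI upload volume when it deviates sharply from that user's own baseline. The z-score model, threshold, and sample figures are assumptions chosen for illustration; real behavioral detection would draw on far richer features and models.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's genAI upload volume (in MB) if it deviates sharply
    from the user's own baseline, using a simple z-score test."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # any change from a flat baseline is notable
    return abs(today - mu) / sigma > threshold

# Example: a user who normally uploads ~5 MB/day to genAI apps suddenly
# sends 400 MB, a sharp deviation from their normal pattern.
baseline = [4, 6, 5, 7, 5, 6, 4]
print(is_anomalous(baseline, 400))  # True
```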
Download the full Cloud and Threat Report: AI Apps in the Enterprise here.