Shadow AI: What It Is and How to Prevent It

Technological innovation has a habit of curling the monkey’s paw. 

Look no further than artificial intelligence for the latest example. The rise of popular chatbots and AI-powered features in seemingly every product has given rise to shadow AI: the unapproved corporate use of generative AI and other AI software.

Shadow AI, like shadow IT, usually takes the form of an innocuous activity such as asking ChatGPT to summarize an email. But using unapproved software to interact with sensitive information can have far-reaching consequences for data security and compliance.

Savvy IT teams are getting ahead of these risks through better visibility and control of shadow AI—here’s how. 

The Four Main Risks of Shadow AI

Understanding where risk resides in shadow AI is the first step in preventing it. While shadow AI presents unique challenges to data security, it is fundamentally no different from the unapproved cloud app usage that every IT admin knows all too well.

Unsanctioned software usage inherently creates a risk of data loss because the security team hasn't had a chance to evaluate the controls the vendor has in place to protect data. While mature vendors like Microsoft, Salesforce or ServiceNow publish extensive documentation on how they protect sensitive information used on their platforms, thousands of others, especially AI startups, do not.

Similarly, if the security team does not have a tight handle on how and where AI interacts with data, the company risks non-compliance with local and international regulations. Legislators are increasingly seeking to put guardrails on how AI stores and uses data, and organizations bear the burden of ensuring the data they are responsible for is handled correctly.

Of course, there are also risks unique to AI. For instance, some artificial intelligence models are trained on user input. When employees share financial records or customer information to get help with analysis or responses, they may inadvertently cause a data leak: unless the company can ensure the AI doesn't train on that data, a tough task with shadow AI, the model may surface the information in an answer to another user.

The challenges businesses face aren't limited to what goes into AI; the output carries risk as well. Written content and software code generated by GenAI invite copyright and trademark risk, and prompt injection can make shipping that code in a flagship product dangerous in itself, as it could introduce a backdoor into the software or otherwise create a security threat.

How to Get Visibility and Control of Shadow AI

Preventing shadow AI requires a multi-layered approach. Of course, organizations must take the time to develop an acceptable use policy that clearly outlines which AI applications can be used and the process for evaluating new software proposals.

Securing sensitive information from data loss or leakage must be top of mind with shadow AI, which is a fundamental reason why data visibility is so critical. Forcepoint Data Security Posture Management (DSPM) enables data discovery and classification across the enterprise, allowing data security teams to understand what data exists and where it's being used. When it comes to securing data in ChatGPT, Forcepoint DSPM can see data being shared with the chatbot in real time and can revoke that data from the application.
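To make the idea of discovery and classification concrete, here is a minimal, vendor-neutral sketch in Python. It is not Forcepoint DSPM's API; the directory path, regex patterns and labels are illustrative assumptions. It simply walks a file share, flags files that appear to contain common PII patterns and builds a rough inventory of where sensitive data lives.

```python
# Minimal, vendor-neutral sketch of data discovery and classification.
# This is NOT Forcepoint DSPM's API; the path, patterns and labels are illustrative.
import os
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # loose card-number match
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_file(path):
    """Return the set of sensitive-data labels found in a text file."""
    try:
        with open(path, "r", encoding="utf-8", errors="ignore") as f:
            text = f.read()
    except OSError:
        return set()
    return {label for label, pattern in PII_PATTERNS.items() if pattern.search(text)}

def discover(root):
    """Walk a directory tree and report which files contain which labels."""
    inventory = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            labels = classify_file(path)
            if labels:
                inventory[path] = labels
    return inventory

if __name__ == "__main__":
    # "./shared_drive" is an assumed location for the example.
    for path, labels in discover("./shared_drive").items():
        print(f"{path}: {sorted(labels)}")
```

A real DSPM deployment adds far more context, such as ownership, lineage and real-time usage, but the core loop of finding data and labeling it follows the same idea.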

Forcepoint Data Loss Prevention helps organizations create, manage and enforce policies designed to prevent sensitive information such as PII, PHI and other types of regulated or proprietary data from being shared with AI. These policies deliver unparalleled control by blocking the copying and pasting of data into GenAI to stop risky activity with shadow AI in real time.
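As a rough illustration of that kind of policy enforcement, the sketch below (again vendor-neutral Python, not Forcepoint DLP) inspects a prompt before it is passed along and refuses to forward it if it appears to contain regulated data. The patterns and the send_to_genai function are assumptions for the example only.

```python
# Minimal sketch of a DLP-style check applied to a prompt before it reaches a GenAI tool.
# This is NOT Forcepoint DLP; the patterns and block/allow logic are illustrative only.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                # payment card number (loose)
    re.compile(r"(?i)\bpatient\b.{0,40}\bdiagnosis\b"),   # crude PHI heuristic
]

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt appears to contain regulated data."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

def send_to_genai(prompt: str) -> str:
    if not is_allowed(prompt):
        return "Blocked: prompt appears to contain sensitive data."
    # In a real deployment the approved AI service would be called here.
    return "Prompt forwarded to the approved AI service."

if __name__ == "__main__":
    print(send_to_genai("Summarize this email for me."))
    print(send_to_genai("Analyze this record: SSN 123-45-6789"))
```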

Lastly, Forcepoint Cloud Access Security Broker (CASB) and Forcepoint Secure Web Gateway (SWG) can be used in tandem to control access to AI applications in the cloud and on the web. Together they can limit access based on factors such as the user's role or team, and redirect employees to approved software where functionality overlaps, steering them from Gemini to ChatGPT, for example.
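A simplified sketch of that kind of access decision is shown below, in vendor-neutral Python rather than the actual CASB or SWG policy engine; the domains, teams and the rule that redirects Gemini traffic to an approved chatbot are illustrative assumptions.

```python
# Minimal sketch of CASB/SWG-style access decisions for AI applications.
# This is NOT Forcepoint CASB or SWG; domains, teams and rules are illustrative assumptions.
APPROVED_APP = "chat.openai.com"   # assumed sanctioned chatbot for the example

RULES = {
    "gemini.google.com": {"action": "redirect", "to": APPROVED_APP},
    "chat.openai.com":   {"action": "allow", "teams": {"engineering", "marketing"}},
}

def decide(domain: str, team: str) -> str:
    rule = RULES.get(domain)
    if rule is None:
        return "block"                      # unknown AI apps are blocked by default
    if rule["action"] == "redirect":
        return f"redirect -> {rule['to']}"  # steer users to the approved equivalent
    if team in rule.get("teams", set()):
        return "allow"
    return "block"                          # approved app, but not for this team

if __name__ == "__main__":
    print(decide("gemini.google.com", "marketing"))   # redirect -> chat.openai.com
    print(decide("chat.openai.com", "engineering"))   # allow
    print(decide("some-new-ai.app", "finance"))       # block
```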

Talk to an expert today to explore all the ways Forcepoint can deliver visibility and control over shadow AI to keep your organization’s data safe.
