Alleged ‘OmniGPT’ Data Breach is a Crash Course in GenAI Risk

There are enough generative AI applications on the web to fill the Fortune 500 list four times over, and that number continues to grow.

With technology enthusiasts commonplace in the modern workforce, new GenAI tools are brought into the corporate fold on a weekly basis – whether IT knows about them or not.

But without true visibility into how users interact with GenAI and what data they share, organizations may unintentionally invite in more risk than they anticipate. The recent alleged OmniGPT data breach illustrates the risks organizations face with GenAI and their potential implications.

What is OmniGPT?

OmniGPT is an AI aggregation platform that allows users to interact with multiple AI models from different providers, such as OpenAI's ChatGPT and Google's Gemini. It simplifies access to AI at scale and eliminates the need for multiple subscriptions, making it popular among employees experimenting with AI technologies.

Its uses vary and span multiple departments: software engineers might use it to review code, while accounting teams lean on it for financial analysis. Because of this broad functionality, a wide range of potentially sensitive information can be unintentionally shared with it.

The OmniGPT Data Breach: What Happened?

In early February 2025, a threat actor known as "Gloomer" claimed responsibility for an alleged OmniGPT data breach that was initially reported by KrakenLabs in late January 2025.  

Sale of information such as account names and phone numbers is common in these situations. What made the OmniGPT breach unique, however, was the means through which this data was collected: more than 34 million messages exchanged between users and the AI models.

According to Gloomer, these messages contained API keys, account credentials and files that were uploaded to OmniGPT. Without speculating on what those shared files contained, we can still consider the many ways employees use generative AI applications to better understand the risk posed to organizations whose users were on OmniGPT – knowingly or unknowingly.

The implications of the OmniGPT data breach are far-reaching. Exposure of contact information such as email addresses and phone numbers puts enterprises at heightened risk of phishing attacks, and the leaked API keys and login credentials could fuel account compromise. However, perhaps most importantly, the compromised chatbot interactions may contain personal, financial or proprietary business information, raising serious privacy concerns.
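To appreciate why exposed chat histories are so valuable to attackers, consider how little effort it takes to mine them for secrets. The sketch below is a hypothetical illustration, not anything tied to the actual breach: it assumes a plain-text export of messages (the filename chat_export.txt is made up), and its handful of regular expressions is purely illustrative, whereas real secret scanners such as truffleHog or gitleaks ship far larger rule sets plus entropy checks.

```python
import re

# Illustrative patterns for a few common secret formats. Real secret
# scanners (e.g., truffleHog, gitleaks) use far larger rule sets plus
# entropy checks to catch what simple regexes miss.
SECRET_PATTERNS = {
    "openai_api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def scan_messages(path: str) -> list[tuple[int, str, str]]:
    """Return (line number, pattern name, matched text) for every hit."""
    hits = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            for name, pattern in SECRET_PATTERNS.items():
                for match in pattern.findall(line):
                    hits.append((lineno, name, match))
    return hits

if __name__ == "__main__":
    # chat_export.txt is a made-up filename standing in for a leaked
    # message dump, one message per line.
    for lineno, name, match in scan_messages("chat_export.txt"):
        print(f"line {lineno}: {name} -> {match}")
```

Against more than 34 million messages, even patterns this crude would likely surface a large number of usable credentials, which is why exposed transcripts pose such an acute account-compromise risk.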

How to Prevent Data Breaches from Generative AI

Regardless of whether the alleged OmniGPT breach occurred, the narrative behind it is still worth learning from. While GenAI is a powerful driver of productivity, the security and privacy challenges that surround it are significant and can’t be overlooked. Organizations must be aware of these risks and actively work to prevent them in order to safeguard corporate use of GenAI applications.

Preventing data breaches from generative AI tools demands complete visibility into your data – including where it lives and how it’s used – as well as precise control over it. Being able to enforce data security and privacy policies on the web, in the cloud and at the endpoint is paramount to protecting sensitive information within these applications.
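As a rough illustration of what enforcing policy at the endpoint can mean in practice, here is a minimal, hypothetical sketch – not Forcepoint’s implementation, and far simpler than a commercial DLP engine – that checks an outbound prompt against a few illustrative patterns and redacts (or blocks) it before it ever reaches a GenAI service:

```python
import re

# Illustrative patterns only; a production DLP policy would cover many
# more data types (PII, PHI, source code, financial records, etc.).
BLOCKED_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),    # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key IDs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security numbers
]

def enforce_policy(prompt: str, redact: bool = True) -> str:
    """Redact sensitive matches, or raise to block the prompt entirely."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            if not redact:
                raise ValueError("Prompt blocked: sensitive data detected")
            prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

# The API key is stripped before the prompt leaves the endpoint.
safe = enforce_policy("Debug this: client = Client(key='sk-abcdefghij1234567890')")
print(safe)  # Debug this: client = Client(key='[REDACTED]')
```

In a real deployment, this kind of check would sit in a browser extension, endpoint agent or forward proxy so that inspection happens before data leaves the organization’s control.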

Forcepoint offers comprehensive data protection solutions that can help companies safeguard their users and data within GenAI. 

  • Forcepoint Data Security Posture Management (DSPM) provides better visibility into data through AI-powered discovery and classification. This ensures companies have a handle on where sensitive data is, how it’s used and which applications it is shared with.
  • Forcepoint Data Loss Prevention (DLP) enables organizations to configure and enforce policies governing data usage in cloud and web applications, effectively limiting which files can be shared via upload or copy/paste.
  • Forcepoint Web Security (SWG) and Cloud Access Security Broker (CASB) help organizations control access to and permissions for GenAI applications, directing users to approved tools and limiting shadow AI use.

Talk to a Forcepoint expert today to learn more about how your organization can limit data risk from GenAI applications.
