
Learn to Live Without It: The Impact of Legislation on AI

Future Insights 2025 blog series, post #2


Note from Lionel: Director of Global Compliance Michael Leach examines the impact pending AI regulations will have on the industry in this second post of our Future Insights 2025 series. If you missed it, check out the first blog post about AISPM.


Companies are sprinting toward a finish line where their technology stacks and the services they deliver are unmistakably intertwined with artificial intelligence.

While the jury is still out on products such as AI-powered golf clubs, AI has swiftly blossomed from a basic chatbot to the underlying technology behind some of the most impactful software services on the market today. 

At the same time, legislators and regulators across the world are ironing out their guidelines for a more thoughtful and careful approach to incorporating AI. For companies to remain ahead of the curve, they’ll need to consider how their AI-powered software can function without AI.

Legislation Follows Innovation

ChatGPT opened Pandora's box. Soon after its introduction in late 2022, tens of thousands of AI companies flooded into the open market, promising faster and better everything.

With the flurry of IPOs and product launches, you may have missed what was going on behind the scenes. Legislators quickly got to work in setting guidelines and guardrails for the most transformative technology since the internet.

Several countries introduced their versions of global AI frameworks. These provide practical guidance for private sector companies in an attempt to address key ethical and governance issues when designing and deploying AI solutions. These frameworks emphasize transparency, fairness and accountability. Example AI frameworks include:

  • Singapore's Model AI Governance Framework
  • The NIST AI Risk Management Framework in the US

Some of these frameworks arrived alongside global AI laws designed to push things forward at a national level:

  • EU AI Act: Focuses on risk-based regulation of AI systems, categorizing them into unacceptable, high and low or minimal risk.
  • US Executive Order on AI: Emphasizes transparency, safety and privacy in AI development and deployment, and calls for new standards and best practices across various sectors.
    • Numerous U.S. states have pending legislation or enacted their own state-level AI laws.
    • Canada’s Directive on Automated Decision-Making:Ensures that automated decision systems are used in a manner that is transparent, accountable and consistent with human rights. Requires impact assessments and ongoing monitoring.

Many of these policies build on data protection and privacy laws already in place—some of which already account for how AI interacts with data. These include GDPR, CCPA, LGPD, PIPA, APPI and Australia’s Privacy Act 1988. Understanding what these regulations are for, how far they go (or do not go) and who they apply to is an often overlooked first step in true AI adoption.

Much Ado About Something

Legislation is spurring companies to be more thoughtful about AI and how it impacts end users. Companies bear the responsibility of ensuring data is in safe hands, as with any other application, and users must retain control over what data is fed to AI in case they later want to delete it.

Many companies put a user interface on AI and expect that voila – Series A funding is right around the corner. But for many of these startups, the entire back end of the product is open source, and the AI is training on the data users feed it.

With AI legislation in mind, organizations must retain complete control over data. In some cases, that means making sure that AI is plug-and-play instead of one-size-fits-all. Consider the following (a short configuration sketch follows the list):

  • Data is a one-way street. AI must function as a closed, enterprise model that never trains on or exposes the data it receives.
  • AI varies by function. Evaluate AI investments based on who you are—customer support, software development or marketing, for example. The criticality or sensitivity of the data you share with that AI should reflect the security measures it has in place.
  • AI must have transparency. Document everything and ensure people understand how AI is used within a product. Don't give out intellectual property, but explain how data moves from input to output. Here's an example of how Forcepoint does this with AI Mesh.
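
To make the first two points concrete, here is a minimal sketch of treating AI as a plug-and-play component behind a per-tenant switch, pointed at a closed, enterprise endpoint. All names here (AIConfig, ai_enabled, ai_endpoint) are hypothetical, not drawn from any specific product:

```python
# A minimal sketch, assuming a per-tenant settings dictionary.
# All field and key names are illustrative, not a real product's API.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AIConfig:
    endpoint: str           # closed, enterprise endpoint, not a public API
    train_on_inputs: bool   # data is a one-way street: should always be False

def build_ai_config(tenant_settings: dict) -> Optional[AIConfig]:
    """Return an AI configuration only if the tenant has switched AI on."""
    if not tenant_settings.get("ai_enabled", False):
        return None  # the product must keep working without AI
    return AIConfig(
        endpoint=tenant_settings["ai_endpoint"],
        train_on_inputs=False,  # inputs never feed model training
    )
```

Returning None, rather than a default public endpoint, forces the rest of the product to handle the no-AI path explicitly.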

Keep The ‘Off Switch’ in Sight

Most importantly, companies must become increasingly thoughtful about how AI is integrated into the service stack, because organizations must give individual users the ability to request that AI be turned off.

Consider this: if you've spent the last year building a brand-new AI chatbot and a piece of legislation lets users opt out of it, how will you handle that request? You must consider the workflows in place and how regulations can impact them.
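
As a sketch of what honoring such a request might look like, the snippet below gates the AI path behind a per-user opt-out flag and falls back to a non-AI workflow. The User fields and both helper functions are hypothetical placeholders:

```python
# A minimal sketch of a per-user AI opt-out with a non-AI fallback.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    ai_opted_out: bool  # persisted preference, changeable at any time

def search_knowledge_base(question: str) -> str:
    # Non-AI fallback path (placeholder), e.g. documented answers or a human agent.
    return f"Top documented answers for: {question}"

def ai_chatbot_reply(question: str) -> str:
    # AI path (placeholder for a call to a closed, enterprise model).
    return f"[AI] Generated answer for: {question}"

def answer_question(user: User, question: str) -> str:
    """Route around the AI entirely when the user has opted out."""
    if user.ai_opted_out:
        return search_knowledge_base(question)
    return ai_chatbot_reply(question)
```

The point is that the workflow still completes when AI is off; the opt-out removes a component, not the service.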

As an example, Forcepoint informs users that they are engaging with AI, both at the prompt window and alongside the generated output, before they use the AI chatbot capabilities Forcepoint provides.

Forcepoint also provides a listing of the data sub-processors we use in support of our products and operations. AI tools and services are denoted with an asterisk (*) on the Forcepoint Sub-processor List on our website.

 

By chatting, you consent to the processing of your data and the chat being stored according to Forcepoint's Privacy Policy. Please be aware this chatbot uses Generative Artificial Intelligence to process and respond to questions. Chatbot responses may contain errors, please verify accuracy of the output.
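
That same pattern, disclosing AI use before the prompt and again alongside each generated answer, can be sketched in a few lines. This is an illustration only, not Forcepoint's implementation:

```python
# A minimal sketch of surfacing an AI notice at both ends of a chat turn.
# The notice text and function names are illustrative only.
AI_NOTICE = ("Please be aware this chatbot uses Generative Artificial "
             "Intelligence. Responses may contain errors; verify the output.")

def start_chat_session() -> None:
    # Disclosure at the prompt window, before first use.
    print(AI_NOTICE)

def render_response(generated_text: str) -> str:
    # Disclosure travels with the generated output as well.
    return f"{generated_text}\n\n[AI-generated] {AI_NOTICE}"
```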

This binary, on-or-off usage of AI will become the standard in 2025 and beyond as enterprises continue to grapple with how to provide AI-powered software services while still giving users the option to opt out of the AI portion of those services.

Besides opt-out, organizations must also have the ability to identify the specific individuals interacting with AI and the personal data they share within it. Data Security Posture Management (DSPM) helps here by identifying and classifying the data you have so you can monitor interactions with platforms like ChatGPT Enterprise to ensure end-user activity doesn’t result in non-compliance.
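
As a rough illustration of that kind of pre-prompt screening (not how any particular DSPM product works), the sketch below classifies an outbound prompt against simple patterns and attributes blocked attempts to a specific user for compliance review. The patterns are deliberately simplistic:

```python
# A minimal sketch in the spirit of DSPM: classify outbound prompts and
# attribute them to a user before they reach an external AI platform.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(user_id: str, prompt: str) -> bool:
    """Return True if the prompt may be sent; log and block otherwise."""
    hits = [label for label, pat in PII_PATTERNS.items() if pat.search(prompt)]
    if hits:
        # Record who tried to share which category of data, for compliance review.
        print(f"Blocked prompt from {user_id}: contains {', '.join(hits)}")
        return False
    return True
```

A production system would use far richer classification, but the shape is the same: know the data, know the user, and intervene before non-compliance happens.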

Users are eager to take advantage of AI and companies are right to be skating toward the puck of innovation. However, software vendors must account for legislation in the design of their services sooner rather than later.


    Michael Leach

    As the Director of Global Compliance, Michael provides the necessary legal guidance and requirements for Forcepoint's operations to ensure we are compliant with applicable government laws and regulations, including cybersecurity, data protection & privacy, information security, trade compliance (import & export) and anti-corruption laws.

    Read more articles by Michael Leach
