Throughout my career, I have engaged with numerous clients, and discussions about risk, prevention, and policy have often taken a back seat to technology solutions and how they can achieve the business goal. In my current conversations about AI, what it can be used for is still the number one topic. Refreshingly, however, Governance, Risk, & Compliance (GRC) seems to be top of mind for many enterprises when it comes to AI; in some cases, users themselves are requesting formal policies around AI usage. One possible, and in my opinion likely, reason is that there have been a number of public data loss incidents involving AI chat assistants, perhaps the best known being Samsung, which suffered multiple incidents in the same year. Those incidents resulted in a policy banning the use of AI assistants. That was probably the right decision to prevent additional data loss, but there is a better way.
The impact that advancements in AI assistants and agentic AI can have on productivity and business outcomes should not be understated; conversely, the risks should not be overstated. When addressed early and deliberately, a comprehensive GRC strategy can not only prevent data loss but accelerate innovation. The blessing and curse of technology is that it can be designed a hundred different ways. With an enterprise-wide strategy, the architecture requirements become clearer from the beginning, and that uniformity helps avoid incompatibility when integrating new AI solutions. In effect, the GRC strategy becomes part of the foundation on which the AI ecosystem is built.
While models don’t ‘recall’ data verbatim, input is broken into tokens (words or parts of words) that are used when generating responses. Public models like ChatGPT and Gemini are trained on user input by default, so for any data entered into a prompt there is a risk that a similar chunk of data will surface in a response. This is only the default behavior, though: a company purchasing enterprise licenses can define what those security controls look like, including restricting the model from training on company input data.
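To make the tokenization point concrete, here is a minimal sketch using the open-source tiktoken tokenizer (the prompt and project name are hypothetical examples, not from any real incident). It shows how a prompt is reduced to token IDs; if a provider trains on that input, these are the fragments that could resurface in later responses.

```python
# Minimal sketch: how prompt text becomes tokens, using the open-source
# tiktoken library. The sample prompt below is a hypothetical example.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Q3 revenue for Project Falcon was $4.2M"  # hypothetical sensitive input
token_ids = encoding.encode(prompt)

print(token_ids)                                   # integer token IDs
print([encoding.decode([t]) for t in token_ids])   # the word fragments they map to
```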
At this point, banning AI isn’t realistic or enforceable. If AI assistants are banned, employees will use their personal accounts; not maliciously, and many will even be careful to sanitize their prompts of company data. However, not all employees will, and even the best intentioned may inadvertently leak some tidbit of company data. So the question becomes when, not if, AI assistants are used. Taking this inevitability into consideration, proactive policies and frameworks become the strategic vector for preventing data loss, and implementing a GRC strategy early is crucial to a successful outcome. With a clear strategy in place, solutions are designed and deployed on day zero to meet company policy, giving the enterprise – not the employee – control over critical data confidentiality decisions, as the sketch below illustrates.
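As one illustration of what a day-zero control might look like, below is a minimal sketch of an outbound prompt check that an enterprise gateway could run before a prompt ever reaches a public assistant. The patterns, names, and blocking behavior are hypothetical placeholders chosen for illustration, not a complete DLP implementation.

```python
# Minimal sketch of an enterprise-side prompt check: scan outbound prompts
# for company data before they reach an AI assistant. The patterns and the
# blocking policy below are illustrative assumptions, not a real DLP system.
import re

SENSITIVE_PATTERNS = {
    "internal project codename": re.compile(r"\bProject\s+Falcon\b", re.IGNORECASE),
    "API key":                   re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "employee ID":               re.compile(r"\bEMP-\d{6}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize the Q3 numbers for Project Falcon."
violations = check_prompt(prompt)
if violations:
    # Company policy decides the action: block, redact, or reroute.
    print(f"Blocked: prompt contains {', '.join(violations)}")
```

In a real deployment, a check like this would sit in a proxy or API gateway, and the response to a match – block, redact, or reroute to an approved internal model – would be dictated by the GRC policy rather than left to individual judgment.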