Shadow AI isn’t the threat… it’s the wake up call


At least 38% of employees admit to using unsanctioned AI in their job. The real number is probably much higher…

Jake Heimark


When you give your employees better AI, they will actually follow your security policies

Context-aware AI is always better

There’s a problem right now in AI: employees are leaking companies’ most sensitive internal data. It’s not hard to understand why. AI is amazing, but it needs context.

Try this experiment:

  1. Choose a sales account at your business and ask the latest, most powerful ChatGPT model to write a follow-up email without exposing it to information that would violate your security policies. Take the time to describe what you are looking for in great detail. Give the model an explanation of the history of the account. Go ahead and give it multiple paragraphs of specific instructions, but exclude any sensitive information.
  2. Copy-paste the last three emails with that account and the last three Salesforce notes, and put that into a prompt for Llama 3 (or any model that’s a year or more old). Add a not very thoughtful instruction like: “write me a follow up email that makes me some $$$”

#2 will give you a much better response. Astronomically better. And that took way less thinking for you, the prompt engineer.

Which do you think your employee is going to do?

Shadow AI isn’t going away unless you give your employees something better

People are lazy. It may be why we are so good at inventing things… we prefer to not have to do that annoying thing anymore. It is an evolutionary advantage. But it is also a security nightmare.

One of the not very secret “secrets” of AI is that model trainers are running out of data to train on. There is only so much information available on the public internet… and the data in your business is the next frontier. So you need to protect it.


But your employees are little leak machines. They will be happy to leak it if it helps them do their job better.

And you should not blame them. I work in AI security and less than one week into our first proof of concept I was sending prompts to the wrong model because I forgot to check which window I had open. Or I was just being lazy. Point is… it happens. And it’s going to keep happening.

Most of the advice on shadow AI is to implement policies to protect your business. Those policies are dinosaurs. You aren’t going to be able to fight shadow AI with policies — your employees already know how much more productive they are when they use it.

Go ahead and try telling the 22-year-old who went ape-shit the week TikTok was unavailable in the App Store that they can’t use your company’s data with ChatGPT. See how far you get.

There’s a reason shadow AI is so popular. Shadow AI gives better responses, and ultimately employees are paid to get the job done. Your best and brightest employees are going to be the ones who leak your information and they won’t even do it on purpose.

The solution: let models fetch their own context

There’s a fix to all of this:

Step one: grab a model you can run privately in your business. There are more than 400,000 open source models on Hugging Face, and I can vouch that a few of them are pretty great.

Step two: add tools so the model can grab its own context. Why should you have to feed it Salesforce data before your question? It should be able to grab that data on its own. That’s what the buzz around Model Context Protocol is all about. It makes it easier for employees to ask questions and get better responses.
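To make step two concrete, here is a minimal sketch of the request/tool-call loop that sits behind protocols like MCP. The tool name (`get_account_context`), the in-memory CRM data, and the stub model are all hypothetical stand-ins, not a real MCP or Salesforce integration; in production, the model would be your privately hosted one and the tool would hit your CRM’s API behind your firewall.

```python
# Sketch of a tool-calling loop: the model asks for context, a dispatcher
# fetches it from inside your perimeter, and the model answers with it.
# All names and data here are illustrative, not a real API.
import json
from typing import Callable

# Hypothetical in-house data store standing in for Salesforce.
FAKE_CRM = {
    "ABC Corp": {
        "last_emails": ["Thanks for the demo!", "Can you send pricing?"],
        "notes": ["Champion: VP of Ops", "Renewal due in Q3"],
    }
}

def get_account_context(account: str) -> str:
    """Hypothetical tool: return recent emails and notes for an account."""
    return json.dumps(FAKE_CRM.get(account, {}))

# Registry of tools the model is allowed to call.
TOOLS: dict[str, Callable[[str], str]] = {
    "get_account_context": get_account_context,
}

def run_with_tools(model_step: Callable[[list], dict], question: str) -> str:
    """Drive the loop: call the model, execute any tool it requests,
    feed the result back, and repeat until it produces a final answer."""
    messages = [{"role": "user", "content": question}]
    while True:
        reply = model_step(messages)
        if reply.get("tool_call"):
            name, arg = reply["tool_call"]
            result = TOOLS[name](arg)  # the model fetches its own context
            messages.append({"role": "tool", "name": name, "content": result})
        else:
            return reply["content"]

# Stub standing in for the private model: it first requests context,
# then drafts a reply grounded in what the tool returned.
def stub_model(messages: list) -> dict:
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": ("get_account_context", "ABC Corp")}
    context = json.loads(messages[-1]["content"])
    return {"content": f"Follow-up for ABC Corp, re: {context['notes'][-1]}"}

print(run_with_tools(stub_model, "write me the next email to ABC account"))
```

The point of the loop is that the employee never copy-pastes anything: the context moves from your systems to your model without leaving your control.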

Then give that combination to your employees. When your employees can just say “write me the next email to ABC account” — they will stop using shadow AI. You won’t have to implement a single policy or have a single meeting to convince them to use it. They will just use the better AI.

You will immediately have cheaper, faster and more secure AI. It can really be that simple…

Interested in learning how to do this?

You do not need to build the whole system from scratch. There are a lot of companies with building blocks that will get you there quickly.

Join me on Discord, where a few of us are happy to share the latest practices in secure, practical AI. Talk soon.