Why every small business needs an AI policy (and what to put in it)

Image: two dice, with "AI" written on the face of one of the dice — symbolising that you can't leave it to fate when AI is used in your small business. An AI policy is a must.

Artificial intelligence has quietly become part of how most businesses operate. Your team is probably already using it. ChatGPT to draft emails. Claude to summarise documents. Canva's AI tools to generate images. Grammarly to tighten up copy. AI-powered transcription in meetings. The list goes on, and it's getting longer every week.

Which raises a question that most small businesses haven't got around to answering yet: does anyone actually know how your team is using these tools? And more importantly, do they know how they should be using them?

That's where an AI policy comes in. And before you click away thinking this sounds like something only large corporations need to worry about, stay with me. I promise this is more relevant, and more straightforward, than it sounds.

What is an AI policy and why does it matter?

An AI policy is simply a document that sets out how your organisation uses artificial intelligence tools. What's permitted, what isn't, where to be careful, and what to do if something goes wrong.

It doesn't need to be long. It doesn't need to be written by a lawyer. It does need to exist.

Here's why. When your team uses AI tools without any guidance, a few things tend to happen. Sensitive client information gets pasted into public AI tools without anyone realising that data might be used to train the model. Confidential business information gets shared with third-party platforms without a data protection assessment being done. AI-generated content gets published without anyone checking whether it's accurate. Decisions get made based on AI outputs that nobody has verified.

None of this is done maliciously. People are just trying to get things done faster. But without a policy, there's no shared understanding of where the boundaries are, and that creates real risk — particularly around data protection and GDPR.

But I'm a small business. Does this really apply to me?

Yes. Genuinely.

The ICO, the UK's data protection regulator, has made it very clear that GDPR applies regardless of the size of your organisation. If you're processing personal data, which almost every business does, you need to think about how your use of AI tools interacts with your data protection obligations.

Beyond the regulatory angle, there's a practical one. Your clients trust you with their information. If someone on your team pastes a client's personal details into a public AI tool and that information ends up somewhere it shouldn't, that's a problem for your reputation as much as your compliance.

And if you work with other businesses, particularly larger ones, you may find they start asking about your AI policy as part of their supplier due diligence. Having one ready is increasingly becoming a basic expectation rather than a nice-to-have.

What should an AI policy actually cover?

Here's what to actually put in it:

  • Which tools are approved for use. Not a blanket ban on AI, but a clear list of which tools your team can use and for what purposes. This might include tools like Microsoft Copilot, Claude, ChatGPT, Canva AI and so on. It should also flag which tools are not approved, particularly any that don't have clear data processing terms.

  • What information can and cannot be shared with AI tools. This is the most important section. Personal data, confidential client information, commercially sensitive details and anything covered by NDAs should never be entered into a public AI tool. Your policy should make this explicit and give practical examples so people understand what it actually means in practice.

  • How AI-generated content should be handled. AI makes things up. It's not malicious, it's just how the technology works. Your policy should set out that any AI-generated content must be reviewed and verified by a human before it's used, published or shared. This applies to facts, figures, quotes and any claims made in documents or marketing.

  • Transparency and disclosure. Does your business need to disclose when AI has been used to create content? In some contexts, yes. Your policy should set out your approach to this, particularly if you're creating content for clients.

  • Data protection and GDPR. A clear statement that use of AI tools must comply with your existing data protection policy, and that any new AI tools should be assessed before being adopted.

  • What to do if something goes wrong. If someone realises they've accidentally shared something they shouldn't have, who do they tell and what happens next? A simple escalation process gives people confidence to raise issues rather than hoping nobody notices.

A few things worth knowing before you write yours

  1. Keep it simple. A policy that's too long or too technical won't get read. Aim for something your whole team can read in ten minutes and actually understand.

  2. Make it specific to your business. A generic template is a starting point, not a finished product. The most useful policies are the ones that use real examples from your actual work and reference the actual tools your team uses.

  3. Review it regularly. AI is moving fast. What's true today may not be true in six months. Build in a review date, ideally every six to twelve months, so your policy keeps pace with how the tools are developing.

  4. Get your team involved. A policy that's handed down from above without any conversation tends to get ignored. A brief team discussion about how people are currently using AI and what questions they have will give you much better insight into what the policy actually needs to cover.

If you'd like help writing or reviewing an AI policy for your organisation, that's exactly the kind of work I do. I've written operational policies including AI usage guidelines for small businesses and purpose-led organisations, and I know how to make them practical rather than shelf-worthy.

Get in touch here.


Victoria Lincoln is a fractional operations partner helping small businesses, start-ups and purpose-led organisations get their systems, processes and day-to-day running properly sorted. Hands-on delivery, without the overhead of a full-time hire. Working remotely from Devon across the UK and Ireland. Find out more at The Efficiency Partner.
