Building an AI Acceptable Use Policy That People Actually Follow

Most AI AUPs are copy-paste PDFs no one reads. Here's how to write one that's enforceable, practical, and gets employee buy-in.

Every week, another organisation publishes an AI Acceptable Use Policy. Most of them look the same: a list of prohibitions, a few definitions lifted from a vendor's terms of service, a signature line, and a filing cabinet where it will live indefinitely. Employees sign it. Nobody reads it. Nothing changes.

If your AI AUP isn't shaping behaviour, it isn't working — and in a regulatory environment where AI governance is increasingly scrutinised, "we have a policy" is no longer enough. You need a policy people actually understand and follow.

Here's how to build one.

Why Most AI AUPs Fail

The root problem is that AI AUPs are usually written by legal or compliance teams for legal or compliance purposes — to create a paper trail, not to guide decision-making. As a result, they tend to be:

  • Too abstract. "Do not use AI tools to process confidential data" sounds clear until an employee wonders whether their project notes count as confidential, or whether summarising a client email in ChatGPT is covered.

  • Too broad. Policies that prohibit entire categories of AI use without nuance push employees to ignore them entirely or work around them quietly.

  • Not maintained. AI tools are evolving at a pace that makes a policy written in January obsolete by June. A static document can't keep up.

  • Not connected to consequences. If there's no clear answer to "what happens if I break this rule?", the rule loses its teeth.

Step 1: Start With Use Cases, Not Prohibitions

Effective AI AUPs are built around what people are actually doing — or want to do — with AI tools. Before writing a single policy line, audit your organisation's AI usage (see our earlier post on Shadow AI) and map out the most common use cases:

  • Drafting emails and documents

  • Summarising meetings or calls

  • Writing or reviewing code

  • Analysing data or generating reports

  • Customer-facing communications

  • Research and competitive intelligence

For each use case, determine: is this permitted, permitted with conditions, or prohibited? Build your policy around this taxonomy, not around a blanket list of don'ts.

This approach has a practical benefit: employees can look up their specific situation and get a clear answer, rather than trying to interpret vague language themselves.

Step 2: Classify Your Data, Then Map It to AI Tools

The biggest risk from AI tool usage isn't the tools themselves — it's what data gets fed into them. Your AUP needs to connect your existing data classification framework to specific AI tool permissions.

A simple mapping might look like this:

Data Classification        | Approved AI Tools                                                                | Conditions
Public / Internal          | Any approved tool                                                                | None
Confidential               | Enterprise-tier tools only (e.g., Claude for Enterprise, Microsoft 365 Copilot) | Must be covered by DPA / BAA
Restricted / Personal Data | Not permitted in external AI tools                                               | Internal/self-hosted models only, if available

The distinction between a consumer-tier AI tool (where data may be used for model training) and an enterprise-tier tool (with a data processing agreement in place) is crucial and should be explicitly called out in your policy.
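
The mapping can also be made machine-readable, so internal tooling (an intranet lookup page, a self-service request form, or a browser allowlist) can answer "can I use this tool with this data?" directly. Below is a minimal sketch in Python; the tool identifiers and classification labels are placeholders to adapt to your own scheme, not part of any standard.

```python
# Illustrative sketch only: encode the classification-to-tool mapping above so
# it can be queried by internal tooling. Names and labels are placeholders.

APPROVED_BY_CLASSIFICATION = {
    "public":       {"*"},                                  # any approved tool
    "internal":     {"*"},
    "confidential": {"claude-enterprise", "m365-copilot"},  # enterprise tier, DPA/BAA in place
    "restricted":   set(),                                  # no external AI tools permitted
}

def is_permitted(tool: str, classification: str) -> bool:
    """Return True if `tool` may process data at the given classification level."""
    allowed = APPROVED_BY_CLASSIFICATION.get(classification.lower(), set())
    return "*" in allowed or tool.lower() in allowed

print(is_permitted("claude-enterprise", "confidential"))  # True
print(is_permitted("claude-enterprise", "restricted"))    # False
```

Even if nothing is automated, writing the mapping down in this unambiguous form is a useful test: if you can't encode it, employees probably can't apply it either.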

Step 3: Define the Approved Tool List

Your AUP should reference a living approved tool list — a document or intranet page that is updated as tools are reviewed and approved, rather than baking tool names into the policy itself (which will go out of date).

The approved list should include, for each tool:

  • Tool name and vendor

  • Approved use cases

  • Maximum data classification permitted

  • Whether a data processing agreement is in place

  • Any conditions (e.g., "enterprise licence only", "not for customer data")

This decouples the policy (which changes infrequently) from the tool list (which changes often), making both easier to maintain.
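
One way to achieve that decoupling in practice is to keep each tool entry as structured data and generate the intranet page from it, so the list stays consistent and easy to update. A minimal sketch follows; the field names are illustrative assumptions rather than any standard schema.

```python
from dataclasses import dataclass, field

# Sketch of one entry in the approved tool list. Field names and values are
# illustrative; adapt them to your own classification scheme and review process.
@dataclass
class ApprovedTool:
    name: str
    vendor: str
    approved_use_cases: list[str]
    max_data_classification: str            # e.g. "internal", "confidential"
    dpa_in_place: bool
    conditions: list[str] = field(default_factory=list)

example = ApprovedTool(
    name="Claude for Enterprise",
    vendor="Anthropic",
    approved_use_cases=["drafting documents", "summarising meetings", "code review"],
    max_data_classification="confidential",
    dpa_in_place=True,
    conditions=["enterprise licence only"],
)
```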

Step 4: Write It in Plain Language

This cannot be overstated. Your AI AUP will be read — if it's read at all — by people who are not lawyers. Write accordingly.

Avoid: "The use of generative artificial intelligence systems to process, transmit, or store information classified as Confidential or above is prohibited except where the relevant data controller has obtained appropriate contractual assurances from the processor..."

Prefer: "Don't paste confidential documents or customer data into AI tools unless the tool is on our approved list for that data type. If you're unsure, ask IT before you share."

Test your draft with a sample of employees from different departments. If they can't tell you in their own words what the policy requires of them, rewrite it.

Step 5: Make It Actionable with Worked Examples

Abstract rules are hard to apply to concrete situations. Include a section of worked examples — brief scenarios with a clear verdict:

  • "Using Claude to draft a first version of an internal report from your own notes: permitted."

  • "Using Microsoft 365 Copilot to summarise a Teams meeting with a client: permitted, as it processes data within our Microsoft 365 tenant."

  • "Uploading a client contract to a public AI chatbot to extract key terms: not permitted. Use an approved enterprise tool or ask the legal team."

  • ⚠️ "Using an AI coding assistant to review code that includes API keys or connection strings: permitted only if the tool is on the approved list and secrets have been removed first."

These examples do more work than three pages of policy prose.
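
On the coding-assistant example, the "secrets have been removed first" condition is easy to state and easy to forget. A lightweight pre-share check can catch the most obvious cases; the sketch below uses a couple of rough patterns and is illustrative only, not a substitute for a proper secret scanner.

```python
import re

# Rough, illustrative patterns for obvious credentials. A real deployment would
# use a dedicated secret scanner; this only demonstrates the policy condition.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*['\"][^'\"]+['\"]"),
    re.compile(r"(?i)(password|pwd)\s*=\s*[^;\s]+"),  # e.g. connection strings
]

def contains_obvious_secret(snippet: str) -> bool:
    """Return True if the snippet appears to contain a credential."""
    return any(p.search(snippet) for p in SECRET_PATTERNS)

print(contains_obvious_secret('API_KEY = "sk-live-123456"'))        # True
print(contains_obvious_secret("def add(a, b):\n    return a + b"))  # False
```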

Step 6: Define Accountability and Consequences

A policy without consequences is a suggestion. Your AUP should clearly state:

  • Who is responsible for compliance (line managers, individual employees, IT, procurement?)

  • What to do if you're unsure (a named team or email address, not just "contact IT")

  • How to report a suspected breach

  • What the consequences of a breach are — proportionate and clearly linked to your existing disciplinary framework

Accountability also means leadership setting an example. If senior leaders are visibly using unapproved AI tools, no amount of policy will change behaviour at the team level.

Step 7: Build in a Review Cadence

Given how rapidly the AI landscape is changing, a policy with no review date is already out of date. Review the policy at least every six months, with a named owner responsible for keeping it current.

Reviews should consider:

  • New AI tools that have become available or widely used

  • Changes to relevant regulations (GDPR, AI Act, sector-specific requirements)

  • Incidents or near-misses that revealed gaps in the policy

  • Feedback from employees on what's unclear or unworkable

Getting Buy-In: The Human Side

The best-written policy will still fail without buy-in. A few practices that help:

Involve employees in the process. Run a consultation with representatives from key teams before finalising. People are more likely to follow rules they helped shape.

Frame it as enabling, not restricting. The goal of the policy is to help people use AI confidently and safely — not to stop them from using it. Lead with the approved tools and permitted use cases; don't open with the prohibitions list.

Pair the policy with training. A 20-minute awareness session covering the key points, with real examples relevant to each team's work, will do more than an email attachment.

Make it easy to find. The policy and the approved tool list should be on your intranet, accessible from search, not buried in a SharePoint folder nobody knows about.

The Bottom Line

An AI Acceptable Use Policy is only as valuable as the behaviour it produces. If your current policy is a legal artefact rather than a practical guide, it's time to rebuild it from first principles: start with real use cases, use plain language, make it actionable, and maintain it as the landscape evolves.

The organisations getting this right aren't writing longer policies. They're writing clearer ones — and backing them up with training, tooling, and a culture where employees feel comfortable asking questions rather than quietly taking risks.