How to Run an AI Readiness Audit Across Your Organisation

A step-by-step guide for IT managers to assess AI adoption maturity, identify gaps, and build a governance roadmap — with a practical scoring framework.

Most organisations are somewhere in the middle of an AI adoption curve they did not plan for. Tools arrived faster than policies. Employees started using AI before procurement had reviewed it. Integrations went live before security had signed off. If that sounds familiar, you are not alone, and the answer is not to slow down. It is to get a clear picture of where you actually are.

An AI readiness audit does exactly that. It gives IT and security leaders a structured assessment of their organisation's current AI posture: what tools are in use, what controls exist, where the gaps are, and what to prioritise next. This post walks through how to run one.

Why Audit Before You Govern?

Governance frameworks built on assumptions fail. Before you can decide what policies to put in place, which tools to approve, or what training to run, you need to know what you are actually dealing with. An audit surfaces the real picture, not the one that existed on paper when AI adoption was still theoretical.

An AI readiness audit typically takes two to four weeks, depending on organisation size and complexity. It produces a clear baseline you can report to leadership, prioritise against, and measure future progress from.

The Five Domains of AI Readiness

Structure your audit around five domains. Each one maps to a set of questions and a scoring rubric that lets you benchmark your position against a maturity model.

Domain 1: Visibility and Inventory. Do you know what AI tools are in use across your organisation? This domain assesses your ability to detect, catalogue, and track AI tool adoption. Score yourself on whether you have a formal approved tool list, whether you have detection mechanisms in place such as Intune Discovered Apps, DNS logs, and CASB, and whether you have a process for evaluating and onboarding new tools. A score of zero here means you are operating blind. A score of three means you have full visibility and a managed registry.

Domain 2: Data Governance. Do your data policies cover AI tool usage? This domain assesses whether your data classification framework has been extended to cover AI, whether employees know which data can go into which tools, and whether there are technical controls preventing the most sensitive data from reaching unapproved AI services. A score of zero means no AI-specific data policies exist. A score of three means data classification is mapped to AI tool permissions and enforced technically.

Domain 3: Access and Identity Controls. Are your identity and access management controls applied to AI tool usage? This domain looks at whether OAuth app consent is centrally managed, whether MCP integrations are configured with least-privilege permissions, and whether AI tool access is tied to your identity governance processes, including joiners, movers, and leavers. A score of zero means AI tools are self-provisioned with no IT involvement. A score of three means AI tool access is fully integrated into your IAM processes.

Domain 4: Monitoring and Incident Response. Can you detect and respond to AI-related incidents? This domain assesses whether AI activity generates logs, whether those logs feed into your SIEM, whether detection rules exist for AI-specific threats such as prompt injection and anomalous data transfers, and whether you have an AI incident response playbook. A score of zero means no AI-specific monitoring exists. A score of three means full detection coverage and a tested IR playbook.

Domain 5: Policy and Culture. Do your people know what is expected of them? This domain looks at whether an AI Acceptable Use Policy (AUP) exists and is understood, whether training has been delivered on AI risks and responsibilities, and whether there is a mechanism for employees to ask questions and report concerns. A score of zero means no policies or training exist. A score of three means the policy is current, widely understood, and regularly reinforced.
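The five domains and their zero-to-three anchors can be captured in a small data structure, which makes the rubric easy to version, share, and score against. This is a minimal sketch: the keys follow the domain names above, and the level descriptions are condensed from the zero and three anchors given in each paragraph.

```python
# Minimal sketch of the five-domain rubric as a data structure.
# Keys follow the article's domain names; only the zero and three
# anchors are shown, condensed from the descriptions above.
RUBRIC = {
    "visibility_and_inventory": {
        0: "Operating blind: no inventory or detection",
        3: "Full visibility and a managed tool registry",
    },
    "data_governance": {
        0: "No AI-specific data policies",
        3: "Classification mapped to AI tool permissions, enforced technically",
    },
    "access_and_identity": {
        0: "AI tools self-provisioned with no IT involvement",
        3: "AI tool access fully integrated into IAM processes",
    },
    "monitoring_and_ir": {
        0: "No AI-specific monitoring",
        3: "Full detection coverage and a tested IR playbook",
    },
    "policy_and_culture": {
        0: "No policies or training",
        3: "Policy current, widely understood, regularly reinforced",
    },
}

MAX_SCORE = 3 * len(RUBRIC)  # five domains, three points each: 15
```

Intermediate levels one and two can be added per domain as your rubric matures; the structure stays the same.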

Running the Audit: A Practical Approach

The audit itself has three components: document review, technical assessment, and stakeholder interviews.

For the document review, gather: your current IT policies and check for any AI-specific provisions, your approved software catalogue, any existing data processing agreements with AI vendors, recent IAM or access reviews that included AI tools, and any previous incidents or near-misses involving AI tools.

For the technical assessment, run the detection methods described in our earlier post on Shadow AI. Export discovered apps from Intune, review DNS and proxy logs for AI service domains, check Entra ID enterprise applications for OAuth consents, and audit browser extensions on managed devices. This gives you the ground truth of what is actually in use, as opposed to what is supposed to be in use.
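One part of that ground truth, reviewing DNS or proxy logs for AI service domains, is easy to script. The sketch below is illustrative only: the domain list is a small sample you would extend for your environment, and it assumes your log export yields rows whose first field is a hostname.

```python
from collections import Counter

# Illustrative sample only -- extend with the AI service domains
# relevant to your environment.
AI_DOMAINS = {
    "openai.com", "chatgpt.com", "anthropic.com", "claude.ai",
    "gemini.google.com", "perplexity.ai",
}

def matches_ai_domain(hostname: str) -> bool:
    """True if hostname is, or is a subdomain of, a known AI service."""
    hostname = hostname.lower().rstrip(".")
    return any(hostname == d or hostname.endswith("." + d) for d in AI_DOMAINS)

def tally_ai_queries(rows):
    """Count lookups per hostname, given rows whose first field is a hostname."""
    hits = Counter()
    for hostname, *_ in rows:
        if matches_ai_domain(hostname):
            hits[hostname.lower().rstrip(".")] += 1
    return hits
```

The suffix check matters: a naive substring match would flag unrelated domains, while matching the exact domain or a dot-prefixed subdomain keeps the tally honest.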

For stakeholder interviews, speak with representatives from IT, security, legal or compliance, HR, and two or three business teams that are heavy AI users. Ask each group: what AI tools are you using, what are you using them for, what guidance have you received, and what questions do you still have? The gaps between what IT thinks is happening and what business teams are actually doing are often illuminating.

Scoring and Prioritisation

Once you have gathered your evidence, score each domain from zero to three, giving a combined score out of fifteen. A combined score of zero to five indicates the early stage, where foundational controls are largely absent and the priority is establishing basic visibility and policy. Six to ten indicates the developing stage, where some controls exist but coverage is incomplete and inconsistent. Eleven to thirteen indicates the managed stage, where controls are comprehensive and consistently applied. Fourteen to fifteen indicates the optimised stage, where AI governance is embedded in standard IT and risk processes and continuously improved.
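The banding above reduces to a small helper. This is a minimal sketch, assuming the five domain scores are supplied as a list of integers between zero and three:

```python
def maturity_stage(domain_scores):
    """Sum five domain scores (0-3 each) and map the total to a maturity band."""
    if len(domain_scores) != 5 or not all(0 <= s <= 3 for s in domain_scores):
        raise ValueError("expected five scores between 0 and 3")
    total = sum(domain_scores)
    if total <= 5:
        stage = "early"
    elif total <= 10:
        stage = "developing"
    elif total <= 13:
        stage = "managed"
    else:
        stage = "optimised"
    return total, stage
```

For example, a typical first-audit result such as scores of 2, 1, 2, 1, and 1 totals seven and lands in the developing band.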

Most organisations running their first AI readiness audit score between four and nine. That is not a failure; it is an honest starting point.

Building the Roadmap

The audit output should drive a prioritised roadmap, not a wish list. Use the following approach to prioritise remediation actions.

First, address any critical gaps: areas where the absence of a control creates immediate risk. These are typically in visibility, where you cannot see what tools are in use, and in data governance, where there is nothing preventing sensitive data from reaching unapproved AI services.

Second, close the foundational gaps: areas where basic controls are missing or inconsistent. An AI AUP that does not exist or has not been communicated falls here, as does MCP integration without least-privilege configuration.

Third, build toward optimisation: areas where controls exist but are not yet systematic or measured. This includes integrating AI monitoring into your SIEM, developing an AI IR playbook, and establishing a formal review cadence for your approved tool list.

Each roadmap item should have a named owner, a target completion date, and a clear success metric. Without these, roadmaps become good intentions.
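Those three requirements can be made structural rather than aspirational by modelling each roadmap item as a record that cannot be created without them. A minimal sketch; the field names and priority labels are illustrative, chosen to mirror the three remediation tiers above, and should be adapted to whatever tracking tooling you use.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RoadmapItem:
    """One remediation action from the audit. Field names are illustrative."""
    action: str
    domain: str          # one of the five audit domains
    priority: str        # "critical", "foundational", or "optimisation"
    owner: str           # a named individual, not a team
    target_date: date
    success_metric: str

    def is_actionable(self) -> bool:
        """An item without an owner, a date, and a metric is a good intention."""
        return bool(self.action and self.owner and self.success_metric)
```

Because every field is required at construction time, a roadmap built from these records cannot silently omit an owner or a success metric.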

Reporting to Leadership

Present your audit findings in two formats: a technical report for IT and security teams, and a one-page executive summary for leadership. The executive summary should cover: your overall maturity score and what it means, the two or three most significant risks identified, the proposed roadmap with timelines and resource requirements, and any regulatory or compliance exposure that requires board awareness.

Frame the findings constructively. The goal is not to demonstrate how much is wrong; it is to show leadership a clear path from the current state to a managed, confident AI programme.

Making the Audit a Habit

An AI readiness audit is not a one-time exercise. The AI landscape is changing too quickly for an annual review to be sufficient. Build in a lightweight quarterly check against your five domains, with a full audit annually or after any significant change to your AI tool landscape, such as a major new integration, a significant incident, or a change in regulatory requirements.

The organisations that get AI governance right are not the ones that ran the best initial audit. They are the ones that made governance a continuous practice rather than a project.

Getting Started

If you have not run an AI readiness audit before, start with domain one: visibility. You cannot govern what you cannot see. Run the technical detection methods, build your inventory, and you will immediately have something concrete to work from. Everything else follows from there.