Shadow AI Is Already in Your Organisation
The tools your employees are using might be invisible to you. Not for long.

Artificial Intelligence is no longer just a boardroom conversation. It's in your employees' browsers, embedded in productivity tools, quietly running in the background of daily workflows.
The challenge for IT and security teams isn't whether AI tools are being used — it's which ones, by whom, and whether they meet your organisation's security and compliance standards.
This phenomenon — employees adopting AI tools without IT approval — is the new shadow IT. And it's growing fast.
Here's how ICT administrators can get ahead of it.
1. Use Microsoft Intune's Discovered Apps
If your organisation manages devices through Microsoft Intune, you already have a powerful detection capability built in. The Discovered Apps feature inventories applications found on managed devices across your fleet — including AI tools that may have been installed without going through official procurement.
To access it: navigate to Intune admin center → Apps → Monitor → Discovered apps. From here, you can filter by platform (Windows, macOS, iOS, Android), export the full list, and identify applications like ChatGPT desktop clients, AI coding assistants, or browser-based extensions that have been sideloaded.
This is especially useful because it gives you a device-level inventory — not just network traffic. If an employee has installed a local AI tool, Intune will surface it (note that discovered-app data refreshes periodically, not in real time).
What to look for: any AI-branded application (e.g., Copilot, Gemini, Claude, Perplexity, Cursor, GitHub Copilot) appearing on devices that haven't been formally approved or enrolled in your software catalogue.
📖 Full documentation: Microsoft Intune – Discovered Apps
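Once you export the Discovered Apps list, a quick script can flag AI-related entries. The sketch below is a minimal example, assuming a CSV export with an `ApplicationName` column (your export's column names may differ) and a keyword watchlist you would maintain yourself:

```python
import csv
import io

# Hypothetical watchlist of AI-related name fragments; extend from your own review process.
AI_KEYWORDS = ["chatgpt", "copilot", "claude", "gemini", "perplexity", "cursor", "ollama"]

def flag_ai_apps(csv_text):
    """Return rows from a Discovered Apps CSV export whose name matches an AI keyword."""
    reader = csv.DictReader(io.StringIO(csv_text))
    flagged = []
    for row in reader:
        name = row.get("ApplicationName", "").lower()
        if any(kw in name for kw in AI_KEYWORDS):
            flagged.append(row)
    return flagged
```

Running this against the weekly export gives you a shortlist of devices to investigate, rather than a raw inventory of thousands of apps.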
2. DNS and Web Proxy Filtering Logs
Your DNS resolver or web proxy (such as Zscaler, Cisco Umbrella, or Squid) logs every domain that devices on your network attempt to reach. AI tools leave a clear trail.
By querying your logs for known AI service domains — api.openai.com, claude.ai, gemini.google.com, perplexity.ai, huggingface.co, and dozens of others — you can identify which users or devices are actively communicating with external AI services, even if no local app is installed (i.e., purely browser-based usage).
This is often the most comprehensive detection method because it catches web app usage that endpoint tools might miss.
Tip: Build a regularly updated blocklist/allowlist categorised by "approved AI", "under review", and "blocked". Tools like Umbrella already categorise many AI platforms automatically.
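That categorisation can be automated against raw resolver or proxy logs. The sketch below assumes a simple `timestamp user domain` log line format (real Umbrella or Zscaler exports will differ) and hypothetical domain lists you would maintain:

```python
from collections import Counter, defaultdict

# Hypothetical category lists; populate these from your own review process.
APPROVED = {"copilot.microsoft.com"}
UNDER_REVIEW = {"claude.ai", "perplexity.ai"}
BLOCKED = {"api.openai.com", "huggingface.co"}

def categorise(domain):
    """Label a domain as approved / under review / blocked, matching subdomains too."""
    d = domain.lower().rstrip(".")
    for known, label in ((APPROVED, "approved"), (UNDER_REVIEW, "under review"), (BLOCKED, "blocked")):
        if d in known or any(d.endswith("." + k) for k in known):
            return label
    return "unknown"

def scan_log(lines):
    """Tally AI-domain hits per user from 'timestamp user domain' log lines."""
    hits = defaultdict(Counter)
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        _, user, domain = parts[:3]
        label = categorise(domain)
        if label != "unknown":
            hits[user][label] += 1
    return hits
```

The per-user tallies make it easy to separate a one-off visit from sustained daily use, which matters when you decide whether to block or to onboard a tool.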
3. Cloud Access Security Broker (CASB)
A CASB solution (such as Microsoft Defender for Cloud Apps, Netskope, or McAfee MVISION) sits between your users and cloud services, giving you deep visibility into SaaS application usage — including AI tools.
CASBs go beyond simple traffic detection. They can:
Identify which AI platforms are in use and by how many users
Assess the risk score of each app (data residency, encryption standards, compliance certifications)
Detect data being uploaded to AI tools (e.g., an employee pasting a confidential document into ChatGPT)
Enforce policy controls in real time (block, warn, or require justification)
Microsoft Defender for Cloud Apps, in particular, integrates natively with Intune and Entra ID, giving you a unified view across identity, device, and application layers.
4. Browser Extension Auditing
Many AI tools enter the organisation not as standalone apps but as browser extensions — Grammarly with AI, Copilot for Edge, Merlin, Monica, and many others. These are often overlooked in traditional app inventories.
With Google Chrome Browser Cloud Management or Microsoft Edge management policies (deployable via Group Policy or Intune), administrators can:
Enumerate all installed extensions across managed browsers
Block installation of unapproved extensions by policy
Receive alerts when new extensions are added
This vector is particularly important in BYOD environments where employees use managed browsers on personal devices.
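Even without enterprise browser management, you can audit a device directly: Chromium-based browsers store each extension under `<profile>/Extensions/<extension id>/<version>/manifest.json`. A minimal sketch for enumerating them from a profile directory (paths and error handling kept simple for illustration):

```python
import json
from pathlib import Path

def list_extensions(profile_dir):
    """Enumerate extensions in a Chrome/Edge profile by reading each manifest.json."""
    found = []
    ext_root = Path(profile_dir) / "Extensions"
    if not ext_root.is_dir():
        return found
    for manifest in ext_root.glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable manifest: skip rather than fail the audit
        found.append({
            "id": manifest.parent.parent.name,   # extension ID is the grandparent directory
            "name": data.get("name", "(unknown)"),
            "version": data.get("version", ""),
        })
    return found
```

Note that some manifests report localised placeholder names (strings beginning with `__MSG_`); for those, the extension ID is the reliable identifier to check against a blocklist.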
5. Microsoft Entra ID Sign-In Logs (OAuth App Consent)
When employees sign into a third-party AI tool using their Microsoft or Google work account ("Sign in with Microsoft"), an OAuth consent grant is created. These are logged and auditable.
In Microsoft Entra ID (formerly Azure AD), navigate to: Enterprise Applications → All Applications and filter for recently added apps. You'll see every external service that has been granted access to your tenant, including AI tools that requested permissions to read email, files, or calendar data.
This is a critical control point. An AI tool with read access to your Microsoft 365 tenant is not just a productivity concern — it's a potential data governance risk.
Action: Disable user consent for third-party apps and require admin approval. This prevents AI tools from silently gaining access to organisational data.
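Reviewing consent grants can also be scripted. The sketch below filters a list of service-principal records (for example, pulled from the Microsoft Graph `/servicePrincipals` endpoint) for recently added apps with AI-sounding names; the `AI_NAME_HINTS` list and the record fields shown are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical name fragments suggesting an AI tool; tune to your environment.
AI_NAME_HINTS = ["gpt", "claude", "gemini", "copilot", "perplexity", "openai"]

def recent_ai_consents(service_principals, days=30):
    """Flag service principals created within `days` whose display name looks AI-related.

    Expects records with 'displayName' and an ISO-8601 'createdDateTime',
    as returned by Microsoft Graph.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    flagged = []
    for sp in service_principals:
        created = datetime.fromisoformat(sp["createdDateTime"].replace("Z", "+00:00"))
        name = sp["displayName"].lower()
        if created >= cutoff and any(hint in name for hint in AI_NAME_HINTS):
            flagged.append(sp)
    return flagged
```

Run this on a schedule and each newly consented AI app becomes a review ticket rather than a silent standing grant.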
6. Network Traffic Analysis and SIEM
For more advanced environments, feeding network flow data or proxy logs into a SIEM (such as Microsoft Sentinel, Splunk, or IBM QRadar) allows you to create detection rules and dashboards specifically targeting AI tool usage.
You can build alerts for:
First-time connections to known AI API endpoints
Unusually large data transfers to AI platforms (potential data exfiltration risk)
Usage outside business hours
Access from unmanaged or non-compliant devices
Pairing this with user behaviour analytics (UBA) can help distinguish legitimate, approved AI usage from risky or policy-violating behaviour.
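The first two alert types above reduce to simple stateful logic over flow records. A minimal sketch, with a hypothetical AI-domain set and upload threshold standing in for your SIEM's detection rules:

```python
class AIFlowDetector:
    """Flag first-time connections and unusually large uploads to AI endpoints."""

    def __init__(self, ai_domains, upload_threshold_bytes=50_000_000):
        self.ai_domains = set(ai_domains)
        self.seen = set()  # (device, domain) pairs already observed
        self.threshold = upload_threshold_bytes

    def process(self, device, domain, bytes_out):
        """Return a list of (alert_type, device, domain) tuples for one flow record."""
        alerts = []
        if domain not in self.ai_domains:
            return alerts
        key = (device, domain)
        if key not in self.seen:
            self.seen.add(key)
            alerts.append(("first-contact", device, domain))
        if bytes_out > self.threshold:
            alerts.append(("large-upload", device, domain))
        return alerts
```

In practice you would express the same logic as a scheduled analytics rule in Sentinel or a saved search in Splunk, with the seen-pairs state kept in a watchlist or lookup table rather than in memory.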
Bringing It All Together
No single method gives you the complete picture. The most effective approach layers these tools:
| Layer | What it catches |
|---|---|
| Intune Discovered Apps | Locally installed AI software on managed devices |
| DNS / Web Proxy Logs | Browser-based AI tool usage on the network |
| CASB | Cloud app risk scoring and data upload monitoring |
| Browser Management | AI browser extensions |
| Entra ID OAuth Logs | AI tools connected via work identity |
| SIEM / Network Analysis | Anomalous or high-volume AI traffic |
The Goal Isn't to Block Everything
Detection isn't the end goal — informed governance is.
Once you know what AI tools are in use, you can make deliberate decisions: approve, restrict, replace, or formally onboard them into your software catalogue with proper security assessments.
Employees are turning to AI tools because they genuinely improve productivity. The role of IT isn't to be the department that says no — it's to make sure the organisation benefits from AI safely.
Start with visibility. Everything else follows.