
Agent Governance Planner

Track every AI agent control across Entra, Defender, and Purview.

Agent Registry reviewed | Identity & Access | Review monthly
Why this matters
The Agent Registry is already in your M365 admin centre under Agents - it does not need to be enabled. But most organisations have never actually looked at it. Your tenant likely already has agents you did not know about. You cannot govern what you have not reviewed.
How to do it
In the Microsoft 365 admin centre, go to Agents and select All Agents. Review the full inventory. Pay attention to Agent publishers (created by your org vs external partners), Host products, and any agents listed as ownerless. Export the inventory to Excel for documentation. This is your baseline.
Microsoft docs
Agent ID Administrator role assigned | Identity & Access | One-time setup
Why this matters
Agent identity management requires dedicated permissions. Without the Agent ID Administrator role assigned, nobody in your organisation has the authority to manage agent identities, review their access, or respond to agent-related incidents in Entra.
How to do it
In Entra ID, go to Roles and administrators. Search for Agent ID Administrator. Assign it to your security lead or designated agent governance owner. Consider assigning it to at least two people for coverage.
Microsoft docs
Human sponsors assigned to all agents | Identity & Access | Review monthly
Why this matters
Every agent should have a human or group who is accountable for it. Sponsors are not technically required when creating agent identities, but Microsoft strongly recommends them. Without a sponsor, there is no one to review its access, approve changes, or decommission it when it is no longer needed. Sponsors can enable or disable agents but cannot modify application settings - a deliberate least-privilege design. If a sponsor leaves, sponsorship automatically transfers to their manager, but if no manager is set the agent becomes ownerless - a governance risk that grows over time.
How to do it
In Agent Registry, review each agent and check the sponsor field. Assign a sponsor to any agent without one. Set a calendar reminder to check for ownerless agents monthly. When someone leaves, verify their agent sponsorships transferred correctly.
Microsoft docs
Shadow agent discovery completed | Identity & Access | Review quarterly
Why this matters
If anyone in your organisation has access to Copilot Studio, they can build agents that connect to your M365 data without IT approval. These shadow agents bypass your governance policies and may have broad access to sensitive content. You need to find them before they cause an incident. Note: the AI agent inventory is currently limited to Copilot Studio agents only.
How to do it
First, opt in to preview features for three separate services: Defender for Cloud Apps, Defender for Cloud, and Defender XDR. Then enable the AI agent inventory: in the Defender portal, go to System, then Settings, then Cloud Apps, then Copilot Studio AI Agents and turn it on. Work with your Power Platform administrator to enable it in the Power Platform Admin Centre under Security, then Threat Protection, then select Microsoft Defender - Copilot Studio AI Agents. Allow up to 30 minutes for the connection. Once connected, query the AIAgentsInfo table in Advanced Hunting to identify agents by risk level, platform, and management status. Also check the community queries in the AI Agents folder for pre-built misconfiguration checks. Cross-reference with the Agent Registry. For each shadow agent found, identify the creator, review what data it accesses, assign a sponsor, and decide whether to formalise it or decommission it.
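Once the inventory is connected, a starting query against AIAgentsInfo might look like the sketch below, which surfaces the latest record per agent and filters to high-risk or unmanaged ones. The column names used here (AgentName, Platform, RiskLevel, ManagementStatus) are assumptions based on the description above - verify them against the table schema shown in Advanced Hunting before relying on the query.

```kql
// Sketch only: surface high-risk or unmanaged Copilot Studio agents.
// Column names are assumptions - check the AIAgentsInfo schema in your tenant.
AIAgentsInfo
| summarize arg_max(Timestamp, *) by AgentId        // keep the latest record per agent
| where RiskLevel == "High" or ManagementStatus == "Unmanaged"
| project AgentId, AgentName, Platform, RiskLevel, ManagementStatus
```

Results from a query like this give you the worklist for the cross-reference step: each returned agent should appear in the Agent Registry with a sponsor, or be decommissioned.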
Microsoft docs
Agent permissions reviewed and least-privilege applied | Identity & Access | Review quarterly
Why this matters
Every agent requests permissions to access your data. Application permissions like Group.Read.All or User.Read.All give org-wide access without a user being signed in. Delegated permissions like Mail.ReadWrite let agents read and modify emails on behalf of users. If you auto-approve consent during publishing, agents end up with far more access than they need.
How to do it
In the M365 admin centre, go to Agents, then All Agents. Select each agent and open the Permissions tab. Review every application and delegated permission listed. For each one, ask: does this agent actually need this access? Revoke any permissions that are not required. During future publishing or deployment, read the consent screen carefully before approving.
Microsoft docs
Conditional Access policies targeting agents | Identity & Access | Review quarterly
Why this matters
Your existing Conditional Access policies for users do not automatically apply to agents. Agent identities are a separate identity type in Entra. Without dedicated CA policies, agents can authenticate from anywhere, at any time, without any risk-based controls - even if your user policies are locked down tight.
How to do it
In Entra ID, go to Conditional Access and create a new policy. Under Assignments, target agent identities - you can target all agents, specific agents by object ID, or agents grouped by blueprint or custom security attributes. The only condition currently available is agent risk level (from ID Protection). The only grant control is Block. Start with a policy that blocks high-risk agents from accessing sensitive workloads. Test in report-only mode first, then enforce.
Microsoft docs
Entra ID Protection for agents enabled | Identity & Access | Review monthly
Why this matters
Agents can behave anomalously just like users can. If an agent suddenly starts accessing resources it has never touched before, or shows unusual sign-in patterns, that could indicate compromise or misconfiguration. Without ID Protection, these signals go undetected. Important: all agent detections are currently offline (not real-time), so there will be a delay before risks surface. Requires Entra ID P2.
How to do it
In Entra ID Protection, check the Agent detections tab in the Risk detections report. ID Protection detects agent-specific risks including unfamiliar resource access, sign-in spikes, failed access attempts, sign-ins by risky users, confirmed compromised (admin-triggered), and Microsoft Entra threat intelligence matches. Use the Risky Agents report to review flagged agents and take action - confirm compromise, confirm safe, dismiss, or disable. Pair this with a Conditional Access policy that blocks high-risk agents.
Microsoft docs
Agent lifecycle governance configured | Identity & Access | Review quarterly
Why this matters
Without lifecycle governance, agent access persists indefinitely. Decommissioned agents keep their permissions. Sponsors leave without handover. Over time, you accumulate orphaned agents with stale access - each one a potential exposure point.
How to do it
Agent lifecycle governance uses two Entra ID Governance features together. First, use Entitlement Management to create access packages for agents - these control what resources agents can access, with built-in expiry dates and sponsor approval. Access can be requested by agents programmatically via Graph API, by sponsors on behalf of agents, or by admin direct assignment. Second, enable Lifecycle Workflows sponsor tasks (preview) to automatically notify managers and co-sponsors when a sponsor moves or leaves. Review ownerless agents in the Agent Registry regularly.
Microsoft docs
Global Secure Access for Copilot Studio agents (preview) | Identity & Access | Review quarterly
Why this matters
Agents built in Copilot Studio make network calls to external services, APIs, and websites. Without network-level controls, an agent could connect to malicious endpoints, exfiltrate data through unapproved channels, or access content that violates your policies. This covers HTTP Node traffic, custom connectors, and MCP Server Connector traffic.
How to do it
In Global Secure Access, configure web content filtering policies in the baseline profile - these apply tenant-wide to all Copilot Studio agent traffic. Enable threat intelligence filtering to block known malicious destinations. Set up network file filtering to prevent agents from moving sensitive files to unapproved locations. Traffic forwarding for agents is enabled per environment or environment group in the Power Platform Admin Centre.
Microsoft docs
Audit logging enabled in Purview | Threat Detection | One-time setup
Why this matters
Audit logging is the foundation everything else depends on. Without it, agent activity is invisible - Defender cannot detect threats, Purview cannot enforce policies, and you have no evidence trail for investigations. This is the single most important prerequisite.
How to do it
In the Purview portal, go to Audit and verify it is turned on. It is enabled by default for M365 and Office 365 enterprise organisations, but not for SMB licences (Business Basic, Business Standard, Business Premium) - check yours. Review your retention settings - default is 180 days, but you may need longer for compliance. If audit was recently enabled, allow up to 60 minutes for the change to take effect and several hours for data to start appearing.
Microsoft docs
M365 connector in Defender for Cloud Apps | Threat Detection | One-time setup
Why this matters
The CloudAppEvents table in Defender XDR is where all agent activity data lands. But it only gets populated if the Microsoft 365 connector is configured in Defender for Cloud Apps. Without this connector, your KQL queries, detection rules, and real-time agent protection all return nothing.
How to do it
In the Defender portal, go to Settings, then Cloud Apps, then App connectors. Check for Microsoft 365. If it is not there, click Connect an app and follow the setup wizard. After connecting, verify data is flowing by running a simple CloudAppEvents query in Advanced Hunting.
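The verification step at the end can be as simple as the following smoke-test query in Advanced Hunting - any rows returned from the last day confirm the connector is populating the table:

```kql
// Minimal check that the M365 connector is feeding CloudAppEvents.
CloudAppEvents
| where Timestamp > ago(1d)
| take 10
```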
Microsoft docs
Admin RBAC configured in Defender | Threat Detection | One-time setup
Why this matters
When an agent-related security incident occurs, your team needs to investigate immediately. If nobody has the right permissions in Defender XDR, the investigation stalls while you sort out access. This is a problem you want solved before an incident, not during one.
How to do it
In the Defender portal, go to Settings, then Permissions, then Roles. Assign at least two team members with Security Administrator or Security Operator roles via the Unified RBAC model. They will need manage permissions for the workloads their custom detections target (for example, Defender for Cloud Apps for CloudAppEvents queries). Test by having them run a basic CloudAppEvents query in Advanced Hunting.
Microsoft docs
KQL hunting queries for CloudAppEvents | Threat Detection | Review quarterly
Why this matters
The CloudAppEvents table now includes agent-specific action types: InvokeAgent, InferenceCall, ExecuteToolBySDK, ExecuteToolByGateway, and ExecuteToolByMCPServer. These are your window into exactly what agents are doing - which tools they call, what data they access, and how they interact with other agents. Without saved queries, you are flying blind.
How to do it
In Defender XDR Advanced Hunting, create and save queries filtering CloudAppEvents on agent action types. Start simple: CloudAppEvents | where ActionType in ("InvokeAgent", "InferenceCall", "ExecuteToolBySDK", "ExecuteToolByGateway", "ExecuteToolByMCPServer"). Then build queries for specific scenarios - agents accessing sensitive sites, agents active outside business hours, agents with unusually high activity volumes.
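Building on that simple filter, one of the suggested scenarios - agents active outside business hours - could be sketched as follows. The 07:00-19:00 UTC working window is an assumption; adjust it for your locale and consider which identity column best represents the acting agent in your data.

```kql
// Sketch: agent-related activity outside an assumed 07:00-19:00 UTC window.
CloudAppEvents
| where ActionType in ("InvokeAgent", "InferenceCall", "ExecuteToolBySDK",
    "ExecuteToolByGateway", "ExecuteToolByMCPServer")
| extend HourOfDay = datetime_part("hour", Timestamp)
| where HourOfDay < 7 or HourOfDay >= 19
| summarize Events = count() by ActionType, bin(Timestamp, 1d)
| order by Events desc
```

Save each scenario query with a descriptive name so they can later be promoted to custom detection rules.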
Microsoft docs
Custom detection rules for agent activity | Threat Detection | Review quarterly
Why this matters
Defender provides out-of-the-box threat detections for risky agent activities. But your agents have unique workflows. Custom detection rules that flag unusual InvokeAgent patterns or unexpected ExecuteToolBySDK, ExecuteToolByGateway, or ExecuteToolByMCPServer calls specific to your environment will catch things the built-in detections miss.
How to do it
In Defender XDR, go to Advanced Hunting. Start with a query that shows all agent-related CloudAppEvents for the last 7 days. Identify your baseline of normal agent activity. Then go to Hunting, then Custom detection rules to create rules that trigger when activity deviates from that baseline - for example, an agent invoking tools it has never used before. CloudAppEvents supports Continuous (NRT) frequency for near-real-time alerting.
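As a sketch of the baseline-deviation idea, the query below compares the last 7 days of tool-execution activity against the preceding 30 days and returns actor/action combinations not seen in the baseline. AccountObjectId is used as the actor here as an assumption - for agent identities a different column may be more appropriate. Note also that a query promoted to a custom detection rule must return Timestamp and ReportId columns; this version is written for interactive hunting.

```kql
// Sketch: tool-execution action types not seen in the prior 30-day baseline.
let baseline = CloudAppEvents
    | where Timestamp between (ago(37d) .. ago(7d))
    | where ActionType startswith "ExecuteTool"
    | distinct AccountObjectId, ActionType;
CloudAppEvents
| where Timestamp > ago(7d)
| where ActionType startswith "ExecuteTool"
| join kind=leftanti baseline on AccountObjectId, ActionType   // keep only new combinations
| project Timestamp, AccountObjectId, ActionType
```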
Microsoft docs
Real-time agent protection during runtime (preview) | Threat Detection | One-time setup
Why this matters
Out-of-the-box detections and custom rules catch threats after they happen. Real-time runtime protection goes further - it inspects tool invocations before Copilot Studio agents execute them and blocks suspicious activity in-line. If Defender determines a prompt is an injection attempt or a tool call is malicious, the action is blocked before it runs, the user is notified, and an alert is created in the Defender portal. Note: this currently applies to Copilot Studio agents only.
How to do it
In the Defender portal, go to System, then Settings, then Cloud Apps, then Copilot Studio AI Agents. Ensure the M365 connector is already connected (see previous item). Work with your Power Platform administrator to complete the onboarding: they need to enable external threat detection in the Power Platform Admin Centre and share the App ID with you. Enter the App ID in the Defender portal and save. Once connected, a green status appears in the Real time protection during agent runtime section. Note: if the M365 connector is not connected, blocking still works but alerts and incidents will not appear in Defender.
Microsoft docs
Sensitivity labels deployed with encryption | Data Security & Compliance | One-time setup
Why this matters
Sensitivity labels with encryption are your primary access control for agent interactions. Agents use the invoking user's permissions - if the user does not have the right usage rights on an encrypted file, the agent cannot return its content. Without labels, every piece of content is open to any agent operating on behalf of a user who has access to the location. Also note: encryption without sensitivity labels is not supported for AI agents - you must use labels. Double Key Encryption is entirely unsupported for agents.
How to do it
In the Purview portal, go to Information Protection, then Labels. Create labels with encryption that match your data classification needs. Publish them via a label policy. Use the Label Taxonomy Builder to plan your structure before deploying.
Microsoft docs
Encryption permission levels include EXTRACT for agent access | Data Security & Compliance | Review quarterly
Why this matters
Agents do not have their own identity for encryption checks. They use the invoking user's permissions. If your encrypted labels assign a permission level that does not include the EXTRACT usage right, agents cannot return the content - even though the user can view the file themselves. Permission levels like Reviewer and Do Not Forward do not include EXTRACT. Co-Author, Co-Owner, and Encrypt-Only do. Broad scopes like "Add all users in your organisation" work fine as long as the assigned permission level includes EXTRACT. Also important: sensitivity labels must be enabled for SharePoint and OneDrive, otherwise agents can only access encrypted files that are actively open in Office desktop apps on Windows. Agent output does not automatically inherit sensitivity labels except in Word and PowerPoint, where the highest-priority label from referenced files is applied.
How to do it
For each encrypted sensitivity label, go to the encryption settings and check which permission level is assigned. Ensure it includes the EXTRACT usage right - Co-Author and Co-Owner both include it. If you are using custom permissions, verify EXTRACT is explicitly granted. Avoid Reviewer or Do Not Forward for content that agents need to access. Enable sensitivity labels for SharePoint and OneDrive in the Purview portal (Information Protection, Sensitivity labels, Turn on now) so agents can access encrypted files at rest, not just files open in desktop apps.
Microsoft docs
DLP policies covering agent interaction locations | Data Security & Compliance | Review quarterly
Why this matters
Agents interact with data across Teams, SharePoint, OneDrive, and Exchange. If your DLP policies do not cover all these locations, agents can share sensitive data through uncovered channels without triggering any policy. You can scope agent instances into DLP policies the same way you add a user, or by adding them to a security group. There is also a dedicated Microsoft 365 Copilot and Copilot Chat location - but note it is mutually exclusive with other locations (requires its own separate DLP policy) and does not cover Copilot Studio custom agents. Within that location, sensitivity label conditions are GA while SIT conditions are in preview, and the two condition types cannot be combined in the same rule.
How to do it
In the Purview portal, go to Data Loss Prevention, then Policies. Review each policy and check which locations are enabled. Ensure you have coverage across Exchange, SharePoint, OneDrive, and Teams. For per-agent scoping, add agent instances or security groups containing agents to your policies in these locations. For the dedicated Copilot location, create a separate DLP policy (it cannot be combined with other locations in the same policy). Note that DLP cannot scan files uploaded directly into prompts, and policy updates can take up to 4 hours to take effect. Use the DLP Policy Simulator to plan your coverage before deploying.
Microsoft docs
DLP alert monitoring for blocked agent actions | Data Security & Compliance | Review monthly
Why this matters
When DLP blocks an agent action, the agent itself is unaware it was blocked - there is no retry logic and no notification to the agent owner. In some scenarios (like prompt-level SIT blocking in Copilot) users may see a message, but in others the workflow fails without explanation. DLP alerts are generated for admins, but if nobody is monitoring them, disrupted agent workflows go unnoticed until someone reports a problem.
How to do it
In the Purview portal, go to Data Loss Prevention, then Alerts. Set up a filter or dedicated view for agent-related DLP incidents. Create an alert policy that notifies the agent owner when their agent is blocked. Review agent DLP alerts weekly to identify patterns that suggest policy tuning is needed.
Microsoft docs
Insider Risk - risky AI usage and risky agents templates enabled | Data Security & Compliance | Review monthly
Why this matters
There are two IRM templates relevant to agents. The Risky AI usage template detects users sending sensitive data to AI tools and risky prompts. The separate Risky Agents template (preview) specifically targets agent behaviour - agents generating sensitive responses, accessing sensitive SharePoint files, accessing risky websites, and sharing files externally. Without both, you have blind spots.
How to do it
In the Purview portal, go to Insider Risk Management, then Policies. Create policies using both the Risky AI usage and Risky Agents templates. The Risky Agents template is applied by default for all organisations, but review its scope and alert thresholds. Configure your triage workflow so alerts are reviewed by someone who understands your agent deployments.
Microsoft docs
DSPM preview enabled (not classic) | Data Security & Compliance | One-time setup
Why this matters
Both DSPM versions show general AI activity data, but the AI Observability page - which gives you agent-specific visibility, risk assessment, and remediation recommendations - is only available in the DSPM preview. Classic DSPM does not provide agent-specific views. If you are still on classic, you are missing the agent risk dashboard.
How to do it
In the Purview portal, check which version of DSPM you are running. If you do not see an AI Observability page showing agents with activity in the last 30 days prioritised by risk level, you are on classic. To move to the preview, follow the migration guidance in the Purview portal. The preview is currently rolling out with no confirmed GA date. Start with the AI Observability page as your daily dashboard for agent risk.
Microsoft docs
Retention policies covering agent interactions | Data Security & Compliance | Review quarterly
Why this matters
Agent prompts and responses are content that may need to be retained for compliance, legal holds, or investigations. This data is stored in hidden folders within user mailboxes. If your retention policies do not cover agent interaction data, you may lose evidence you need later - or retain it for too long. Note: tool calls and data access events are captured by audit logging, not retention policies.
How to do it
In the Purview portal, go to Data Lifecycle Management, then Retention policies. Check that you have policies covering the relevant AI app locations - these include Microsoft Copilot experiences (which covers Microsoft 365 Copilot, Security Copilot, Copilot in Fabric, and Copilot Studio), Enterprise AI apps, and Other AI apps. The underlying data is stored in user mailboxes (hidden folders). Verify the retention periods meet your regulatory requirements.
Microsoft docs
Communication compliance for agent interactions | Data Security & Compliance | Review quarterly
Why this matters
Agents generate content that users see and act on. If an agent produces unethical, offensive, or policy-violating content, it could create legal or reputational risk. Communication compliance scans agent interactions and flags violations the same way it does for user communications.
How to do it
In the Purview portal, go to Communication Compliance, then Policies. Create a policy using the "Detect Microsoft Copilot interactions" template as a starting point. Under locations, enable "Microsoft Copilot experiences" and "Enterprise AI apps" to cover agent interactions. Note: detecting non-Microsoft 365 AI data requires pay-as-you-go billing. Review flagged interactions and tune the policy scope based on false positive rates.
Microsoft docs
Information Barriers gap assessed for embedded agent content | Data Security & Compliance | Review quarterly
Why this matters
When agents use uploaded files as a knowledge source, those files are stored in SharePoint Embedded containers. Information Barriers are not supported on these containers. Any user who can access the agent can see responses based on the embedded file content, regardless of IB policies. If your organisation relies on Information Barriers, this is a significant gap.
How to do it
Identify which agents in your tenant use embedded files as knowledge sources (filter in Agent Registry). For each, check whether the uploaded files contain content that should be restricted by Information Barriers. If so, consider whether the agent should be scoped to a limited audience, or whether the embedded files should be replaced with a reference to a properly IB-protected SharePoint site.
Microsoft docs
eDiscovery procedures updated for agent data | Data Security & Compliance | Review quarterly
Why this matters
If your organisation faces a legal investigation or regulatory inquiry that involves agent activity, your legal team needs to be able to find, hold, and produce agent interaction data. Agent interactions are stored in user mailboxes and are searchable through eDiscovery, but only if your search procedures account for them.
How to do it
Update your eDiscovery search templates to include agent interaction data. In the Purview portal, when creating an eDiscovery search, use the condition Type, then Contains any of, then Copilot activity to capture all AI app interactions including agents. Brief your legal team on what agent interaction data looks like and where it is stored.
Microsoft docs