One-click policies sound great - here is why you should not use them as-is
Microsoft Purview now offers one-click policies through DSPM that can secure your tenant in minutes. The catch is that several jump straight to block mode. In production, that breaks things. A crawl-walk-run approach gives you the same protection without the business disruption.
What one-click policies actually create
Microsoft's Data Security Posture Management (DSPM) includes a quick setup that generates around 13 default policies across DLP, Insider Risk Management, Communication Compliance, and Information Protection. The idea is sound - get a baseline level of protection across your tenant with minimal effort.
The discovery policies are generally fine. They run in audit mode and monitor things like users visiting AI sites, sensitive data being pasted into browser-based AI tools, and risky prompts in Copilot. These give you visibility without affecting users.
The problem is with the protection policies. Several of them deploy with block actions from the start:
- Block sensitive info from AI sites - blocks users from pasting sensitive data into AI tools in Edge, Chrome, and Firefox (with override for elevated-risk users)
- Block elevated risk users from submitting prompts - outright blocks users from submitting any prompts to AI apps in Edge
- Block sensitive info from AI apps in Edge - blocks prompts containing common Sensitive Information Types
- Protect sensitive data from Copilot processing - blocks Copilot from processing items with certain sensitivity labels
If your tenant has never had DLP policies before, turning these on means users will suddenly be blocked from things they were doing yesterday.
Why blocking on day one causes problems
Most organisations do not have a clean picture of what sensitive data exists, where it lives, or how it flows through the business. Turning on block policies before you have that picture is like installing speed cameras before you know where the roads are.
Here is what typically happens:
Users get blocked from legitimate work. A finance analyst pastes quarterly figures into Copilot to help draft a board summary. Blocked. A lawyer copies contract clauses into an AI tool to compare language. Blocked. These are not risky activities - they are normal business use of AI tools.
IT gets flooded with exception requests. Every blocked action generates a support ticket or a complaint. If you cannot explain why something was blocked and how to work around it, trust in the tooling erodes fast.
People find workarounds. If the policy blocks pasting into Edge, users switch to a personal device or an unmanaged browser. You have not reduced risk - you have moved it somewhere you cannot see it.
Leadership loses confidence. If the first experience of Purview is "it stopped me doing my job", getting buy-in for the next phase of rollout becomes much harder.
The crawl-walk-run approach
Take the same policy concepts from DSPM but deploy them in stages. The goal is the same destination - you just get there without breaking things along the way.
Crawl - Audit and discover (weeks 1 to 4)
Deploy all the discovery policies as-is. They run in audit mode and give you data without affecting users. For the protection policies, create them but set the action to audit only instead of block. This shows you exactly what would have been blocked, how often, and by whom.
Use this phase to answer: What sensitive data are people actually sharing with AI tools? How often? Is it legitimate business use or genuine risk?
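The audit-phase review can be sketched as a small script. This assumes a simplified CSV export with hypothetical column names (User, Rule, SensitiveInfoType) - the real Activity Explorer export has different columns, so treat this as a pattern, not a working integration:

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical audit-mode export; real Activity Explorer column names differ.
SAMPLE = """User,Rule,SensitiveInfoType
alice@contoso.com,Block sensitive info from AI sites,Credit Card Number
bob@contoso.com,Block sensitive info from AI sites,Credit Card Number
alice@contoso.com,Protect sensitive data from Copilot processing,EU Passport Number
"""

def summarise_would_be_blocks(csv_text):
    """Count audit-mode matches per rule and per user, i.e. what a
    block-mode policy would actually have stopped."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    by_rule = Counter(r["Rule"] for r in rows)
    by_user = Counter(r["User"] for r in rows)
    return by_rule, by_user

by_rule, by_user = summarise_would_be_blocks(SAMPLE)
print(by_rule.most_common())
```

Rules with high match counts spread across many users usually indicate normal business activity rather than risk - those are the policies to tune before they ever reach block mode.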
Walk - Warn and educate (weeks 4 to 8)
Switch the protection policies from audit to warn with override. Users see a policy tip explaining that what they are about to do involves sensitive data, but they can proceed if they have a business justification. This educates users, creates a paper trail of overrides, and catches genuine mistakes without blocking work.
Review the override reasons. If 90% of overrides are for legitimate use, your policy is too broad - tune the conditions. If overrides are rare, users are learning.
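The override review above reduces to a simple calculation. A minimal sketch, assuming you have override events as a list of records with an `overridden` flag (the 90% threshold mirrors the rule of thumb above and is illustrative, not Microsoft guidance):

```python
def override_rate(events):
    """Share of policy hits that users overrode with a justification.
    events: list of dicts, each with a boolean 'overridden' key."""
    hits = len(events)
    overrides = sum(1 for e in events if e["overridden"])
    return overrides / hits if hits else 0.0

def tuning_advice(rate, threshold=0.9):
    # If almost every hit is overridden with a justification, the rule is
    # matching normal business activity and needs narrower conditions.
    return "tune conditions - policy too broad" if rate >= threshold else "keep monitoring"

sample = [{"overridden": True}] * 9 + [{"overridden": False}]
rate = override_rate(sample)
print(f"{rate:.0%} -> {tuning_advice(rate)}")
```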
Run - Enforce where it matters (weeks 8 and beyond)
Now you have data. You know which actions are genuinely risky and which are normal business. Switch specific high-risk policies to block - but only the ones where the data supports it.
For example, blocking the upload of financial data to unmanaged AI tools might be justified. Blocking all prompts to AI apps in Edge probably is not. Be selective.
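The "only where the data supports it" decision can be made explicit. A sketch under assumed thresholds (a low override rate and a minimum volume of hits - both numbers are placeholders you would set from your own walk-phase data):

```python
def promote_to_block(rule_stats, max_override_rate=0.2, min_hits=25):
    """Return the warn-mode rules with enough evidence to move to block.
    rule_stats: {rule_name: {"hits": int, "overrides": int}}.
    Thresholds are illustrative, not Microsoft guidance."""
    promote = []
    for name, stats in rule_stats.items():
        if stats["hits"] < min_hits:
            continue  # not enough data yet - keep this rule in warn mode
        if stats["overrides"] / stats["hits"] <= max_override_rate:
            # Rarely overridden: hits are likely genuine risk, not normal work.
            promote.append(name)
    return promote

walk_phase = {
    "Block sensitive info from AI sites": {"hits": 100, "overrides": 5},
    "Block prompts to AI apps in Edge": {"hits": 100, "overrides": 60},
}
print(promote_to_block(walk_phase))
```

On this sample data only the first rule qualifies - the second is mostly overridden, which is exactly the "probably is not justified" case described above.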
Which DSPM policies to keep, modify, or skip
Keep as-is (discovery):
- Detect sensitive info added to AI sites (audit mode - good baseline)
- Detect when users visit AI sites (Insider Risk - visibility only)
- Detect risky AI usage (Insider Risk - visibility only)
- Capture interaction policies (collection only - feeds reporting)
Modify before enabling (protection):
- Block sensitive info from AI sites - change to warn with override first
- Block elevated risk users from prompts - change to audit first; this one is aggressive
- Block sensitive info from AI apps in Edge - change to warn first; tune SIT thresholds
- Protect sensitive data from Copilot processing - review which labels are scoped; start with only your highest sensitivity labels
Review carefully:
- Unethical behaviour in AI apps (Communication Compliance) - make sure Legal and HR are aware before enabling content scanning of employee interactions
- Sensitivity labels and policies - if you already have labels, the default ones may conflict with your existing taxonomy
The conversation to have first
Before enabling any of this, align with three groups:
Business stakeholders - explain that you are adding guardrails around AI usage, not blocking it. Frame it as enabling safe AI adoption, not restricting it.
IT support - brief them on what policy tips look like and what to tell users who see them. The first few weeks will generate questions.
Legal and compliance - confirm they are comfortable with the level of monitoring being enabled, especially Communication Compliance policies that scan prompt content.
The one-click setup is a great reference architecture. It tells you what Microsoft thinks a well-protected tenant looks like. Use it as a blueprint, not a deployment plan.