1. Access Control
Which AI apps people are meant to use, which are off-limits, and what
happens with everything else.
2. Data Protection
Stop sensitive data and risky file uploads from leaving the browser.
3. Prompt Collection
Keep the right evidence so you can prove the policy is doing its job.
4. Dialog Templates
The in-browser experience users see, in the customer’s voice.
How to use this with the customer
Walk the four pages above in order. Each one is short and decision-led, so it maps cleanly to a working session:
- Agree the approved AI app they’ve licensed (or pick one to license).
- Agree the hard blocks and what happens for everything else.
- Agree what counts as sensitive data and what to do about it.
- Agree what gets retained for audit.
- Brand the end-user dialogs so they read as the customer’s policy, not a vendor message.
Set this up on the pilot group first, confirm the experience matches what
you agreed, then roll out tenant-wide.
What the customer gets back
Once the policy is enforcing, the value shows up in two places:
- In-browser: users see the customer’s AUP at the moment it’s relevant, with a clear path to the approved tool.
- In review: the Audit Log and Reports → Overview become the artifact you walk through together. See Client reporting.