A customer’s AI Acceptable Use Policy is usually a Word doc no one rereads. Longwave makes it real. The Policy page inside any tenant has four tabs; each is a single conversation you have with the customer.

1. Access Control

Which AI app people are meant to use, which apps are off-limits, and what happens with everything else.

2. Data Protection

Stop sensitive data and risky file uploads from leaving the browser.

3. Prompt Collection

Keep the right evidence so you can prove the policy is doing its job.

4. Dialog Templates

The in-browser experience users see, in the customer’s voice.

How to use this with the customer

Walk the four tabs above in order. Each is short and decision-led, so it maps cleanly to a working session:
  1. Agree the approved AI app they’ve licensed (or pick one to license).
  2. Agree the hard blocks and what happens for everything else.
  3. Agree what counts as sensitive data and what to do about it.
  4. Agree what gets retained for audit.
  5. Brand the end-user dialogs so they read as the customer’s policy, not a vendor message.
When you’re done, the customer’s policy isn’t just written down. It’s running.
Set this up on the pilot group first, confirm the experience matches what you agreed, then roll out tenant-wide.

What the customer gets back

Once the policy is enforcing, the value shows up in two places:
  • In-browser: users see the customer’s AUP at the moment it’s relevant, with a clear path to the approved tool.
  • In review: the Audit Log and Reports → Overview become the artifacts you walk through together. See Client reporting.