
Status: Future consideration
Categories: AI assistant
Created by: Nathaniel Collum
Created on: Oct 16, 2025

AI governance policies, approvals, and audit trails

What is the challenge?

Organizations need precise control over how AI is used in Aha! to stay compliant and maintain transparency. Admins should be able to define clear policies, approve sensitive actions, and track all activity with confidence.

Admins can currently...

  • Turn AI features on or off for different areas (assistant, search results, release notes, ideas analysis, transcripts, knowledge base)

  • Choose which workspaces and record types can use the AI assistant

  • Control what AI can do: internet search, image creation, and scripts

  • Create a list of trusted websites for AI-generated links

  • Set default instructions that show up when chats start

Organizations need finer-grained controls to prevent unauthorized use.

What is the impact?

Teams can work with more confidence and less risk. Simple tasks move faster because they have clear rules and quick approvals, while riskier changes receive more careful review. Leaders can see and export a record of everything AI does, which supports internal reviews and external audits.

Describe your idea

A built-in AI governance layer that extends the controls above with three enterprise-ready capabilities:

  1. Policies that map to risk

    1. Define named policies that scope the assistant by workspace and record type, leveraging existing controls

    2. Classify assistant actions by risk tier, for example low: summarize or generate a comment; medium: update a description or create a feature; high: bulk edit or publish a script-capable artifact (see the policy sketch after this list)

  2. Lightweight approvals where needed

    1. Configure which risk tiers require approval and who approves

    2. Always provide a preview or diff of proposed changes

    3. Flag contextual risks on requests with badges, for example "External search used" or "Script-capable artifact"

  3. Enterprise audit trail

    1. Capture per-action history including user, time, action taken, records affected, preview or diff, and any citations (see the record sketch after this list)

    2. Log whether internet search was used and whether links target a trusted domain

    3. Provide a searchable UI and export options (CSV or JSON) for audits
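
To make the policy and approval pieces concrete, here is a rough sketch of what a named policy with risk tiers and approval rules could look like. It is illustrative only: the TypeScript shape and every name in it (AiPolicy, RiskTier, ApprovalRule, the action keys) are assumptions, not an existing or proposed Aha! API.

  // Illustrative only; none of these types or names exist in Aha! today.
  type RiskTier = "low" | "medium" | "high";

  interface ApprovalRule {
    tier: RiskTier;
    requiresApproval: boolean;
    approverRole?: string; // e.g. "Compliance" or "Data Owner"
  }

  interface AiPolicy {
    name: string;
    workspaces: string[];                  // reuses existing workspace scoping
    recordTypes: string[];                 // e.g. ["feature", "release"]
    actionTiers: Record<string, RiskTier>; // classify assistant actions by risk
    approvals: ApprovalRule[];             // which tiers need sign-off, and by whom
  }

  // Example: one policy scoped to a single workspace
  const standardPolicy: AiPolicy = {
    name: "Product workspace - standard",
    workspaces: ["Product"],
    recordTypes: ["feature", "release"],
    actionTiers: {
      summarize: "low",
      generate_comment: "low",
      update_description: "medium",
      create_feature: "medium",
      bulk_edit: "high",
      publish_script_artifact: "high",
    },
    approvals: [
      { tier: "low", requiresApproval: false },
      { tier: "medium", requiresApproval: false },
      { tier: "high", requiresApproval: true, approverRole: "Compliance" },
    ],
  };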
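
Along the same lines, here is a minimal sketch of what a single audit record could capture, again with assumed field names, shown as TypeScript with a trivial JSON export (a CSV export would carry the same fields):

  // Illustrative audit record; field names are assumptions, not a defined schema.
  interface AuditRecord {
    user: string;
    timestamp: string;          // ISO 8601
    action: string;             // e.g. "bulk_edit"
    recordsAffected: string[];  // references to the records touched
    diff?: string;              // preview or diff of the proposed change
    citations?: string[];       // links included in generated content
    usedInternetSearch: boolean;
    allLinksTrusted: boolean;   // true if every link targets a trusted domain
  }

  // Minimal JSON export for audits; CSV would flatten the same fields.
  function exportAuditLog(records: AuditRecord[]): string {
    return JSON.stringify(records, null, 2);
  }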

Example(s)

  • A PM tries to bulk append customer contact emails to features that will appear in public release notes. The assistant blocks the change because it would expose PII on a public surface. It suggests a safe alternative that references account names only, requires Compliance approval to override, and logs the blocked action with the proposed diff.

  • A PM requests release notes that cite performance benchmarks found via external search. The assistant includes citations only from trusted domains and flags that external search was used. Any untrusted links are removed, an approver can add them back if needed, and the audit log lists all candidate links.

  • An admin adds a vendor domain to the trusted list. A PM then asks to export an account list with contact emails to that domain. The assistant allows the export with automatic email masking per policy, routes the file for Data Owner approval, and records the export with a checksum in the audit trail.

