Best Practices for AI Governance

Get the most out of ServiceOps AI by configuring governance the right way from the start and keeping it current as your organization grows.

Effective AI governance is not a one-time setup. It requires deliberate configuration, phased rollout, regular review, and clear incident response procedures. This guide brings together best practices across every area of the Governance section: General Settings, Vector Store, Analytics, and Knowledge Collections.

Phase Your Rollout: Do Not Enable Everything at Once

The most common mistake when enabling AI is turning on all features for all users at the same time. A phased approach reduces risk and gives you time to validate quality before broader exposure.

Recommended rollout order:

  1. Configure all Governance settings: PII Detection, Restricted Topics, Blocked Words, and Tools Access Policy.
  2. Set up Knowledge Collections with your most reliable and up-to-date content.
  3. Configure AI Agents and Teams in AI Studio and attach the relevant Knowledge Collections.
  4. Enable AI Agents Globally and activate the Technician Portal only.
  5. Run internal testing with a small group of technicians for one to two weeks.
  6. Review Analytics for quality, credit usage, and any policy violations.
  7. Extend access to the Support Portal for end users once quality is validated.

Restrict Access to Governance Settings

Governance configuration directly controls what AI can and cannot do across the entire organization. Access to these settings should be limited to senior administrators only.

  • Assign Governance access only to administrators who understand the business impact of each setting.
  • Avoid sharing admin credentials or allowing multiple people to make changes simultaneously.
  • Document every configuration change with a reason and date so the team can audit the history of governance decisions.
  • Review who has admin access to the AI module regularly. See the Governance Review Cadence section for recommended frequency.

Test and Validate Before Go-Live

Before enabling AI for users, validate that governance controls are working as intended. Use the following checklist:

PII Detection:

  • Send a test message containing an email address, phone number, and IP address to the AI and confirm the configured action (Mask or Block) is applied correctly.
  • Test both Input and Output scope separately.
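To generate expected results for this part of the checklist, the Mask action can be approximated with simple regular expressions. The patterns below are illustrative assumptions, not ServiceOps' actual detectors, which are more robust:

```python
import re

# Illustrative patterns only -- used here to produce test expectations.
# Order matters: IP addresses are matched before phone numbers so that a
# digit run like 192.168.0.10 is not mis-tagged as a phone number.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Approximate the Mask action: replace each detected span in place."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{name.upper()} MASKED]", text)
    return text

sample = "Reach jane.doe@example.com at +1 555 010 4477 or 192.168.0.10"
print(mask_pii(sample))
# -> Reach [EMAIL MASKED] at [PHONE MASKED] or [IPV4 MASKED]
```

Running a few such samples through the AI and comparing against these expectations makes the Mask/Block validation repeatable rather than ad hoc.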

Restricted Topics:

  • Type a sample phrase that matches a configured restricted topic and confirm the AI blocks the response.
  • Test with variations of the phrase to confirm detection is not too narrow.

Blocked Words:

  • Submit a query that includes a blocked word and verify the AI response is suppressed.

Ask AI Quality:

  • Ask three to five representative questions your users are likely to ask and review whether the responses are accurate, relevant, and sourced from the correct Knowledge Collection.
  • If responses are inaccurate, review the Knowledge Collection content and regenerate embeddings before go-live.

Smart Suggestions:

  • Create a test ticket and verify that category, group, and resolution suggestions appear and are relevant.

Responsible AI: Layer All Three Controls

The three Responsible AI controls in General Settings work best when used together. Relying on only one leaves gaps that the others would catch.

  • PII Detection: Protects sensitive personal data such as email addresses, phone numbers, credit card numbers, and IP addresses. Enable for both Input and Output. Start with predefined patterns and add custom patterns for organization-specific sensitive data such as employee IDs or internal reference numbers.
  • Restricted Topics: Blocks AI from engaging with specific subjects entirely, such as password sharing, competitor comparisons, or legally sensitive areas. Write clear, specific definitions and include at least three to five sample phrases per topic to improve detection accuracy.
  • Blocked Words: Filters specific words or phrases from AI responses. Use this for terms your organization has explicitly flagged, such as internal code words, competitor names, or inappropriate language. Keep the list focused; an overly broad list degrades AI response quality.

Key principles:

  • Configure all three controls before enabling AI for users.
  • Test each control individually before enabling all three together.
  • Review and update all three controls regularly. See the Governance Review Cadence section for recommended frequency.
  • When updating a restricted topic, add new sample phrases rather than deleting old ones, to maintain detection breadth.
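The layering described above can be sketched as a single guard function that applies the controls in sequence. The rule data, return strings, and the simplified email pattern are all illustrative assumptions; ServiceOps evaluates its configured policies internally:

```python
import re

RESTRICTED_TOPICS = {
    # topic name -> sample phrases (three to five per topic in practice)
    "password sharing": [
        "share your password", "send me your password", "tell me your password",
    ],
}
BLOCKED_WORDS = {"project-falcon"}  # hypothetical internal code word
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # stand-in PII pattern

def guard(text: str) -> str:
    lowered = text.lower()
    # Layer 1: restricted topics block the exchange outright.
    if any(p in lowered for phrases in RESTRICTED_TOPICS.values() for p in phrases):
        return "[BLOCKED: restricted topic]"
    # Layer 2: blocked words also suppress the response.
    if any(word in lowered for word in BLOCKED_WORDS):
        return "[BLOCKED: blocked word]"
    # Layer 3: PII detection masks what the other layers let through.
    return EMAIL.sub("[EMAIL MASKED]", text)

print(guard("Please send me your password"))  # -> [BLOCKED: restricted topic]
print(guard("Status of project-falcon?"))     # -> [BLOCKED: blocked word]
print(guard("Contact ops@example.com"))       # -> Contact [EMAIL MASKED]
```

The three sample calls show why a single control is not enough: each message passes two of the layers and is caught only by the third.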

Control Tool Access by Agent Function

The Tools Access Policy in General Settings controls which actions AI agents are permitted to perform. Agents should only have access to the tools they need for their specific function.

  • Review available tools module by module (for example, Service Catalog and Request) and remove anything outside an agent's intended scope.
  • An agent focused on answering knowledge questions does not need tools to create or update tickets.
  • After restricting tools, test each agent to confirm it still performs its intended function correctly.
  • Revisit tool access whenever an agent's purpose changes or a new module is added to ServiceOps.
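A deny-by-default allowlist captures the intent of this policy. The agent and tool names below are hypothetical examples, not ServiceOps identifiers:

```python
# Each agent is granted only the tools its function requires.
AGENT_TOOLS = {
    "knowledge-agent": {"search_knowledge"},  # answers questions only
    "request-agent": {"search_knowledge", "create_request", "update_request"},
}

def is_allowed(agent: str, tool: str) -> bool:
    """Deny by default: unknown agents and unlisted tools are refused."""
    return tool in AGENT_TOOLS.get(agent, set())

print(is_allowed("knowledge-agent", "create_request"))  # -> False
print(is_allowed("request-agent", "create_request"))    # -> True
```

Note the knowledge agent cannot create tickets even though the tool exists, which is exactly the scoping the second bullet describes.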

Optimize Vector Store for Accurate AI Results

The Vector Store determines how well AI finds relevant records, articles, and resolutions. Poor configuration here affects every AI feature.

Similarity Score:

  • Start at 0.70 for most organizations.
  • Increase toward 0.80 or higher if results are too broad or irrelevant.
  • Decrease toward 0.60 if the AI is missing clearly related records.
  • Never set below 0.50 as this produces unreliable results.

Maximum Number of Results:

  • Start with 5 results.
  • Increase to 10 for high-volume environments where technicians benefit from more options.
  • Avoid setting above 15 as this increases AI processing time and may dilute response quality.
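The interaction between Similarity Score and Maximum Number of Results can be sketched as a filter-then-cap step: the threshold discards weak matches first, then the cap limits how many strong matches are returned. The record IDs and scores below are made-up sample data:

```python
def filter_results(scored, threshold=0.70, max_results=5):
    """Keep hits at or above the similarity threshold, best first, capped."""
    kept = [item for item in scored if item[1] >= threshold]
    kept.sort(key=lambda item: item[1], reverse=True)
    return kept[:max_results]

hits = [("KB-101", 0.91), ("KB-033", 0.55), ("KB-207", 0.74), ("REQ-88", 0.68)]
print(filter_results(hits))
# -> [('KB-101', 0.91), ('KB-207', 0.74)]
```

This also shows why the two settings are tuned together: a lower threshold admits more borderline records, and a higher cap then passes more of them on to the AI.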

Field Selection:

  • Use only free-text fields such as Subject, Description, and free-text custom fields.
  • Do not add numeric, date, status, or dropdown fields as these carry no semantic meaning.
  • Start with Subject and Description only, then add additional fields gradually and observe whether result quality improves.

Maintain an Embedding Schedule

Embeddings must be kept current for AI results to remain accurate. Use the following schedule as a starting point:

  • Refresh: Weekly, or after routine ticket and knowledge article updates.
  • Regenerate: After adding or removing fields in module configuration, after changing the AI model, or after importing large volumes of historical data.
  • Review Embedding Logs: After every Regenerate, and monthly as a health check.

  • Schedule Regenerate during off-peak hours, for example, overnight or on weekends, to minimize impact on users.
  • After Regenerate, check the Embedding Logs for any Failed status entries and resolve them before the next working day.
  • If a module's logs consistently show failures, review the field configuration for that module and verify the AI model is reachable.
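The post-Regenerate log check can be sketched as a simple scan for Failed entries. The log record layout shown here is an assumption for illustration:

```python
# Sample embedding log entries; the module/status field names are assumed.
logs = [
    {"module": "Request", "status": "Completed"},
    {"module": "Knowledge", "status": "Failed"},
    {"module": "Service Catalog", "status": "Completed"},
]

failed = [entry["module"] for entry in logs if entry["status"] == "Failed"]
for module in failed:
    print(f"Check field configuration and model reachability for: {module}")
```

Whether done by eye in the Embedding Logs screen or via an export, the goal is the same: no Failed entries should survive past the next working day.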

Maintain Knowledge Collections as a Governance Responsibility

Knowledge Collections power Ask AI, Smart Suggestions, and the Solution Assistant. Outdated or inaccurate knowledge produces wrong AI answers, which reduces user trust.

  • Assign a named owner for each Knowledge Collection who is responsible for reviewing and updating its content.
  • Remove outdated articles, retired procedures, and deprecated product documentation from Knowledge Collections promptly.
  • Add new knowledge articles to the relevant collection within 48 hours of publishing them in the knowledge base.
  • After updating a Knowledge Collection, follow the embedding refresh schedule in the Maintain an Embedding Schedule section to ensure changes are reflected in AI responses.
  • Review each Knowledge Collection regularly. See the Governance Review Cadence section for recommended frequency.

Control Costs Through Agent Design

Credit consumption is directly tied to how agents are designed and how broadly they are deployed.

  • Keep agent instructions concise and specific. Long, complex instructions increase token consumption per query.
  • Consolidate underused agents into a single agent with a broader scope rather than maintaining many low-traffic agents.
  • Use AI Teams for complex multi-domain queries rather than having multiple agents attempt the same query independently.
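A rough illustration of why instruction length matters, using the common approximation of about four characters per token (a heuristic, not ServiceOps' billing formula). The instruction strings are hypothetical:

```python
def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly one token per four characters."""
    return max(1, len(text) // 4)

concise = "Answer IT questions using the attached Knowledge Collections. Be brief."
verbose = (
    concise
    + " Always greet the user warmly, restate their question in full, apologize"
      " for any inconvenience, and summarize relevant background before answering."
)

# Instruction overhead is paid on every query, so savings scale with volume.
per_query_saving = approx_tokens(verbose) - approx_tokens(concise)
print(f"Approximate tokens saved per query: {per_query_saving}")
print(f"Over 10,000 queries: {per_query_saving * 10_000}")
```

Trimming a few dozen tokens from an instruction looks trivial per query but compounds into a meaningful credit difference at production volumes.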

Incident Response: When AI Behaves Unexpectedly

If AI returns harmful, inaccurate, or policy-violating content, act immediately using the following steps:

  1. Disable AI globally using the Global Enable/Disable toggle in General Settings. This immediately removes Ask AI from all portals without affecting any configuration.
  2. Identify the source using the Analytics dashboard. Check which agent or team was involved in the problematic interaction and at what time.
  3. Review the governance rules that should have caught the issue: Was PII Detection enabled? Was the topic covered by a Restricted Topic? Was the word in the Blocked Words list?
  4. Update the relevant rule: Add the missing restricted topic, PII pattern, or blocked word.
  5. Test the fix using sample inputs that replicate the original issue before re-enabling AI.
  6. Re-enable AI globally once you have confirmed the issue is resolved.
  7. Document the incident: Record what happened, what rule was missing, and what was changed. Use this to inform your next quarterly governance review.

Establish a Regular Governance Review Cadence

AI governance settings should be treated as living policies, not one-time configuration. Establish a regular review schedule:

  • Monthly: Analytics export, Embedding Logs, Knowledge Collection accuracy.
  • Quarterly: PII Detection patterns, Restricted Topics list, Blocked Words list, Tools Access Policy, admin access permissions.
  • After major changes: Whenever new agents are deployed, new modules are added, or organizational policies change.
  • After an incident: Full review of all Responsible AI controls and agent configurations.