AI Governance
Use this baseline to discover unmanaged AI usage, assign accountable owners, and prioritize the exposure patterns most likely to create policy, privacy, or operational risk.
What to inventory
Track LLM endpoints, model-provider accounts, AI plugins, and autonomous agents. Include source signal, environment, repository or account context, and whether the asset can reach the public internet.
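The inventory fields above can be sketched as a simple record type. This is an illustrative schema only, not a standard one; every field name here is an assumption.

```python
from dataclasses import dataclass

# Hypothetical inventory record; field names are illustrative, not a standard schema.
@dataclass
class AIAsset:
    name: str
    asset_type: str         # e.g. "llm_endpoint", "provider_account", "plugin", "agent"
    source_signal: str      # where the asset was discovered, e.g. "code_repo"
    environment: str        # e.g. "prod", "staging", "dev"
    context: str            # repository or account the asset belongs to
    internet_reachable: bool
```

A record like this gives every downstream check (approval, ownership, escalation) one shared shape to operate on.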
Treat unapproved AI usage, public exposure, sensitive-data access, missing ownership, external sharing, and agent-triggered actions as the first baseline checks.
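One way to run those first baseline checks is as simple predicates over an asset record. The dictionary keys below are hypothetical, chosen only to mirror the checks named above.

```python
# Hedged sketch of the first-pass baseline checks; all keys are illustrative.
def baseline_flags(asset: dict) -> list[str]:
    """Return the names of every baseline check the asset fails."""
    checks = {
        "unapproved_usage": not asset.get("approved", False),
        "public_exposure": asset.get("internet_reachable", False),
        "sensitive_data_access": asset.get("handles_sensitive_data", False),
        "missing_ownership": not asset.get("owner"),
        "external_sharing": asset.get("shares_externally", False),
        "agent_triggered_actions": asset.get("takes_autonomous_actions", False),
    }
    return [name for name, failed in checks.items() if failed]
```

Returning the failing check names, rather than a single pass/fail, keeps the result usable as risk flags in the control record.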
Preserve owner mapping, approval status, risk flags, and remediation notes so security, engineering, and audit teams can follow the same control record.
Pull signals from code repositories, SaaS integrations, and cloud inventories on a recurring cadence so unmanaged AI usage appears in the same operating queue as other exposure data.
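A recurring collection pass like the one described can be sketched as merging per-source fetchers into one working queue. The source names and fetcher shape here are assumptions for illustration, not a real connector API.

```python
from typing import Callable

# Illustrative recurring collection: merge discovery signals into one queue.
def collect_assets(sources: dict[str, Callable[[], list[dict]]]) -> list[dict]:
    """Pull from every configured source and tag each asset with its origin."""
    queue: list[dict] = []
    for source_name, fetch in sources.items():
        for asset in fetch():
            # Record where the asset was discovered so triage can weigh the signal.
            queue.append({**asset, "source_signal": source_name})
    return queue
```

Running this on a schedule keeps newly discovered AI usage flowing into the same queue as other exposure data.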
Every asset should have a named team or contact before it is allowlisted. Unknown owners make escalation slower and create blind spots during incidents or legal review.
Escalate AI assets that combine public reachability with sensitive-data access, external sharing, or autonomous external actions. These pairings usually justify a finding and immediate remediation guidance.
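The escalation rule above can be expressed as one predicate: public reachability paired with any high-impact trait. This is a minimal sketch under the same assumed keys as before.

```python
# Hedged sketch: escalate when public reachability pairs with a high-impact trait.
def should_escalate(asset: dict) -> bool:
    high_impact = (
        asset.get("sensitive_data_access", False)
        or asset.get("external_sharing", False)
        or asset.get("autonomous_external_actions", False)
    )
    # Neither condition alone triggers escalation; the pairing does.
    return bool(asset.get("internet_reachable", False) and high_impact)
```

Keeping the rule conjunctive means an internal-only asset with sensitive-data access still surfaces in baseline flags, but only the public pairing drives an immediate finding.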