See what your AI coding agents are actually doing
The Unbound Governance Assessment gives security and engineering leaders a complete picture of their AI coding agent risk surface — and a concrete path to policy — in just two weeks.
No contracts. No commitment. Just clarity on what's running in your environment and what to do about it.
- Full AI agent & MCP server discovery scan
- Risk posture assessment & scoring
- Policy recommendations tailored to your environment
- Executive readout with phased roadmap
Free for qualified engineering teams with 50+ developers
The Blind Spot
Your developers adopted AI coding agents. Your security team has no visibility.
Most organizations have 3–5x more AI coding tools in use than IT or Security knows about. Cursor, Claude Code, GitHub Copilot, Windsurf, Cline, Roo Code — each with its own MCP servers, terminal permissions, and auto-approve configurations. None of it governed. None of it audited.
73%
of engineering orgs have zero visibility into AI agent configurations
3–5x
more AI coding tools in use than IT/Security knows about
89%
of developers have auto-approve enabled for agent actions
What's Included
A complete governance assessment in two weeks
Discovery Scan
Full inventory of AI coding tools (Cursor, Copilot, Claude Code, Cline, Roo Code, Gemini CLI, Codex, and 20+ others), connected MCP servers, sub-agents, agent rules, and extension configurations across your organization.
Risk Posture Assessment
Identification of risky configurations: auto-approve settings, broad write permissions, unsanctioned MCP server connections, permissive agent rules, and shadow tool sprawl.
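As a sketch of the kind of check this assessment performs, the snippet below flags auto-approve and broad write settings in an agent configuration. The config shape and key names are hypothetical illustrations, not any specific tool's schema:

```python
# Sketch: flag risky settings in one AI agent configuration.
# The structure and key names below are hypothetical, shown only
# to illustrate the pattern -- not a real tool's config schema.

def flag_risks(config: dict) -> list[str]:
    """Return human-readable risk findings for one agent config."""
    findings = []
    for name, server in config.get("mcpServers", {}).items():
        if server.get("autoApprove"):
            findings.append(f"MCP server '{name}' auto-approves actions without review")
        if server.get("allowWrite", False):
            findings.append(f"MCP server '{name}' has write access enabled")
    if config.get("terminal", {}).get("autoRun", False):
        findings.append("Terminal commands run without per-command approval")
    return findings

# Example config with all three risk patterns present.
example = {
    "mcpServers": {"internal-db": {"autoApprove": ["query"], "allowWrite": True}},
    "terminal": {"autoRun": True},
}
print(flag_risks(example))
```

In practice the assessment applies checks like these across every tool and user in the inventory, not a single config.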
Policy Recommendations
Initial governance policy framework tailored to your environment: sanctioned vs. unsanctioned tools, recommended guardrails for terminal commands and MCP actions, data protection rules for secrets and PII.
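For illustration, a terminal-command guardrail of the kind such a policy might define could be sketched like this. The rule patterns and action names are hypothetical; real policies would be tool-specific and centrally managed:

```python
import re

# Hypothetical guardrail rules mapping command patterns to policy actions.
# Real deployments would manage these centrally, not hard-code them.
GUARDRAILS = [
    (re.compile(r"\brm\s+-rf\b"), "block"),
    (re.compile(r"\bDROP\s+TABLE\b", re.IGNORECASE), "block"),
    (re.compile(r"\bcurl\b.*\|\s*sh\b"), "require_approval"),
]

def evaluate(command: str) -> str:
    """Return the policy action for a proposed agent terminal command."""
    for pattern, action in GUARDRAILS:
        if pattern.search(command):
            return action
    return "allow"

print(evaluate("git status"))          # allow
print(evaluate("rm -rf /tmp/build"))   # block
```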
Executive Readout
Board-ready presentation of findings with risk prioritization, peer benchmarks, and a phased implementation roadmap for full agent governance.
How It Works
From kickoff to policy roadmap in 14 days
Days 1–5
Discover
Deploy a lightweight discovery scan via your existing MDM (Kandji, Jamf, Intune, JumpCloud). Inventory all AI coding tools, MCP servers, configurations, and user patterns across your engineering org. Zero disruption to developer workflows.
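A minimal sketch of what such a discovery pass might check, assuming typical per-user config locations. The paths listed are illustrative examples, not a complete or authoritative inventory:

```python
from pathlib import Path

# Illustrative per-user config locations for common AI coding tools.
# A real scan would use a vetted, tool-specific inventory of paths.
CANDIDATE_PATHS = [
    ".cursor/mcp.json",
    ".claude.json",
    ".codeium/windsurf/mcp_config.json",
    ".config/github-copilot",
]

def discover(home: Path) -> dict[str, bool]:
    """Map each candidate config path to whether it exists under `home`."""
    return {rel: (home / rel).exists() for rel in CANDIDATE_PATHS}

if __name__ == "__main__":
    # Example: scan the current user's home directory.
    for rel, present in discover(Path.home()).items():
        print(f"{'FOUND ' if present else 'absent'}  ~/{rel}")
```

An MDM would run a check like this per machine and report results centrally, which is why no change to developer workflows is needed.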
Days 5–10
Assess
Analyze risk posture across your environment. Flag auto-approve drift, unsanctioned MCP connections, shadow agents, overly permissive terminal access, and sensitive data exposure paths. Benchmark against peer organizations.
Days 10–14
Recommend
Deliver executive readout with findings, risk prioritization, and a phased policy roadmap. Walk your CISO and VP Eng through exactly what to fix, in what order, and how Unbound automates enforcement going forward.
Your Deliverables
What you get at the end
Data Collected
- Every AI coding tool installed (name, version, user)
- All MCP servers and their connection status
- Agent and sub-agent configurations
- Auto-approve and permission settings per user
- Extension and plugin inventory
- Shadow tool detection (unsanctioned installs)
Report Includes
- Executive risk summary (board-ready)
- Tool-by-tool risk breakdown
- User-level configuration audit
- Policy gap analysis
- Prioritized remediation roadmap
- Peer benchmark comparison
- Recommended Unbound deployment plan
Is This for You?
Built for security and engineering leaders at scale
The Governance Assessment is free for qualified organizations. Here's who gets the most value:
CISO / VP Security
“I have no visibility into what AI coding tools my developers are using, what permissions they have, or what data they're exposing.”
Get a complete risk picture to present to your board.
VP / Director of Engineering
“I want my team to use AI coding tools productively, but I need guardrails before someone's agent drops a production table.”
Understand what's running without slowing anyone down.
Head of Platform / DevSecOps
“I'm responsible for rolling out Cursor and Claude Code to 500 engineers. I have no way to audit what's actually happening.”
Get the configuration baseline before your next rollout.
Qualification criteria
- 50+ software engineers / developers
- Active use of AI coding tools (Cursor, Claude Code, Copilot, or similar)
- Existing MDM deployment (Kandji, Jamf, Intune, JumpCloud, or similar)
Don't meet every criterion? Reach out anyway — we evaluate on a case-by-case basis.
Ready to see what's running in your environment?
Schedule a kickoff call to start your two-week Governance Assessment. No cost. No commitment. Just visibility.