Identity · Observability · AI Agents
AI agents that take real actions need more than a system prompt to be safe.
Giving an agent real-world capabilities is genuinely useful. It's also genuinely risky.
"Are you allowed to do that?"
Identity
Delegated authorization, scoped tokens, user-confirmation flows — ensuring the agent acts within sanctioned boundaries.
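A minimal sketch of that boundary check, assuming a delegated token carries explicit scopes and sensitive actions still require a human yes. The `AgentToken` shape and `authorizeAction` helper are illustrative, not Auth0 APIs.

```typescript
// Illustrative only: a scope check plus a user-confirmation gate that an
// agent runtime could run before any sensitive tool call.
interface AgentToken {
  subject: string;  // the user the agent acts on behalf of
  scopes: string[]; // e.g. "calendar:read", "shop:purchase"
}

type ApprovalFn = (action: string) => Promise<boolean>;

async function authorizeAction(
  token: AgentToken,
  requiredScope: string,
  action: string,
  requireApproval: ApprovalFn
): Promise<boolean> {
  // 1. Scoped token: the delegated grant must cover this action.
  if (!token.scopes.includes(requiredScope)) return false;
  // 2. User confirmation: even in-scope sensitive actions ask the human.
  return requireApproval(action);
}
```

The two checks are deliberately separate: scopes bound what the agent *may* do; confirmation bounds what it does *right now*.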
"What did you actually do?"
Observability
Structured traces of every LLM call, tool invocation, and retrieval — so you can audit, debug, and improve.
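The structured-trace idea can be sketched as a wrapper that opens a span around every LLM call, tool invocation, or retrieval. This `Tracer` is a hypothetical stand-in; a real deployment would export spans via OpenTelemetry to Arize AX.

```typescript
// Illustrative only: one span per operation, capturing kind, I/O, and timing.
type SpanKind = "llm" | "tool" | "retrieval";

interface Span {
  name: string;
  kind: SpanKind;
  input: unknown;
  output?: unknown;
  startMs: number;
  endMs?: number;
}

class Tracer {
  spans: Span[] = [];

  async trace<T>(
    name: string,
    kind: SpanKind,
    input: unknown,
    fn: () => Promise<T>
  ): Promise<T> {
    const span: Span = { name, kind, input, startMs: Date.now() };
    this.spans.push(span);
    try {
      const output = await fn();
      span.output = output; // recorded even if the caller discards it
      return output;
    } finally {
      span.endMs = Date.now(); // closed on success and on error
    }
  }
}
```

Because the span closes in `finally`, failed calls still leave a timed, inspectable record, which is exactly what you need when debugging.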
Without both, you have a powerful agent you can't trust or debug.
A personal AI assistant built on Vercel AI SDK + Next.js. Secured with Auth0 for identity management. Traced end-to-end with Arize AX.
shopOnlineTool
In Arize AX: trace shows the auth interrupt span with timing, wait period, and execution after approval.
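The interrupt-then-resume pattern behind that span might look like the sketch below: execution pauses at the approval boundary, the wait is timed, and the tool body runs only after approval. Field names here are assumptions, not the Arize AX span schema.

```typescript
// Illustrative only: pause, time the wait, execute only on approval.
interface AuthInterruptSpan {
  tool: string;
  interruptedAtMs: number;
  approvedAtMs?: number;
  waitMs?: number; // the "wait period" visible in the trace
  executed: boolean;
}

async function runWithApproval<T>(
  tool: string,
  waitForApproval: () => Promise<boolean>,
  execute: () => Promise<T>
): Promise<{ span: AuthInterruptSpan; result?: T }> {
  const span: AuthInterruptSpan = {
    tool,
    interruptedAtMs: Date.now(),
    executed: false,
  };
  const approved = await waitForApproval(); // execution is suspended here
  span.approvedAtMs = Date.now();
  span.waitMs = span.approvedAtMs - span.interruptedAtMs;
  if (!approved) return { span }; // denial still leaves an auditable span
  const result = await execute(); // execution after approval
  span.executed = true;
  return { span, result };
}
```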
Arize AX: Traces → click trace → expand child spans → inspect I/O tab on each tool span
Key: observability makes retrieval quality visible — not just "it answered," but "what did it find and who was allowed to see it?"
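One way to make "who was allowed to see it" concrete: filter retrieved documents by the requesting user's access and record both lists in the trace. The types below are illustrative assumptions, not a specific retriever's API.

```typescript
// Illustrative only: permission-aware retrieval that logs both what the
// index matched and what this user was actually permitted to see.
interface Doc {
  id: string;
  text: string;
  allowedUsers: string[];
}

interface RetrievalRecord {
  query: string;
  retrieved: string[]; // everything the index matched
  visible: string[];   // the subset this user may see
}

function retrieveForUser(
  userId: string,
  query: string,
  matches: Doc[]
): { docs: Doc[]; record: RetrievalRecord } {
  const visible = matches.filter((d) => d.allowedUsers.includes(userId));
  return {
    docs: visible,
    record: {
      query,
      retrieved: matches.map((d) => d.id),
      visible: visible.map((d) => d.id),
    },
  };
}
```

The gap between `retrieved` and `visible` is itself a signal: a large gap means the index keeps surfacing documents the user can never see.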
getCalendarEventsTool → listRepositories
"Did my agent do the right thing,
and can I prove it?"
This is the baseline for production AI agents.
Let's talk identity, observability, and making AI agents worth trusting.