10. SDK Runtime Integration
If your application is primarily C# or Java and you are not using an official SDK, use the dedicated direct integration guide:
What Automatically Creates Dashboard Activity
You only get a full runtime lifecycle in AgentID when both phases happen:
`guard -> model execution -> ingest`
That means:
- `guard()` creates or updates the preflight side of the lifecycle
- `log()` persists the post-execution telemetry row
- SDK wrappers combine both when you use a supported wrapped surface
If the application only calls guard and never calls ingest, Activity, cost, and downstream graphs can look empty or incomplete.
Supported Automatic Wrapper Surfaces
Current supported official wrapper surfaces:
- JS SDK: `wrapOpenAI(...).chat.completions.create(...)`
- Python SDK: `wrap_openai(...).chat.completions.create(...)`
- Vercel AI SDK wrapper: `generateText()` / `streamText()` with `withAgentId(...)`-wrapped models
Current unsupported surfaces unless you add your own integration:
- `responses.create`
- Assistants API
- arbitrary custom provider methods
- app-local helper functions that never call `agent.log()`
The unsupported list still applies across JS, Python, and Vercel AI SDK flows unless you add an explicit integration path.
JS SDK Semantics
The published agentid-sdk package is designed so that:
- `agent.guard(...)` is awaited inline before the model call proceeds
- `agent.log(...)` returns a promise and can be awaited directly
- `wrapOpenAI()` calls `/guard` before `chat.completions.create`
- on non-streaming completions, the wrapper performs the primary `/ingest` write before the wrapped call resolves
- `agent.guard(...)` can accept plain text, content-part arrays, or message arrays, and will extract text-only input for scanning while shipping attachment metadata to the backend
- wrapped OpenAI calls preserve multimodal content parts such as images/audio/files in the provider request while still scanning the extracted text and logging attachment evidence
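The ordering contract above can be sketched with stubbed guard and ingest calls. Everything here is illustrative stand-ins, not the SDK's real API: the point is only that guard is awaited before the provider is hit, and the primary ingest write completes before the wrapped call resolves.

```typescript
// Records the order of lifecycle phases for inspection.
const calls: string[] = [];

// Hypothetical stand-ins for the SDK's /guard and /ingest calls.
async function guard(_prompt: string): Promise<void> {
  calls.push("guard");
}

async function ingest(_result: string): Promise<void> {
  calls.push("ingest");
}

// Toy "wrapped" completion mirroring wrapOpenAI() semantics on the
// non-streaming path: guard -> provider -> ingest, all awaited in order.
async function wrappedCreate(prompt: string): Promise<string> {
  await guard(prompt);            // preflight before the provider is hit
  calls.push("provider");         // stand-in for the real model call
  const result = "completion text";
  await ingest(result);           // telemetry written before resolution
  return result;
}

const result = await wrappedCreate("hello");
console.log(calls.join(" -> ")); // guard -> provider -> ingest
```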
Practical implication:
If an app logs only `create:start` / `create:ok`, that does not prove AgentID ingest happened. It only proves the application believes the wrapped model call succeeded.
You still need to confirm one of these:
- the call really went through `secured.chat.completions.create(...)`
- or the application explicitly awaited `agent.log(...)`
Vercel AI SDK Wrapper Semantics
The dedicated agentid-vercel-sdk package is designed so that:
- `withAgentId(...)` wraps an existing Vercel AI SDK model
- `generateText()` and `streamText()` stay unchanged at the application callsite
- the wrapper calls `/guard` before the provider request
- denied prompts throw `AgentIdSecurityError` before the provider is billed
- allowed prompts can be rewritten from `transformed_input` before execution
- completion telemetry is written through `/ingest`
- `sdk_ingest_ms` is finalized through `/ingest/finalize`
- multimodal `messages[].content[]` payloads are safe: text parts are scanned, attachments pass through unchanged, and attachment metadata is logged
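The deny-before-billing and rewrite-before-execution behavior can be sketched with a stubbed guard verdict. The `GuardVerdict` shape, the deny condition, and the wrapper function name here are assumptions for illustration; only `transformed_input` and `AgentIdSecurityError` come from the contract above.

```typescript
// Hypothetical guard verdict shape; fields beyond transformed_input are
// assumptions for this sketch.
interface GuardVerdict {
  allowed: boolean;
  transformed_input?: string;
}

class AgentIdSecurityError extends Error {}

// Stub guard: denies prompts containing "DROP TABLE", rewrites others.
async function guardCall(prompt: string): Promise<GuardVerdict> {
  if (prompt.includes("DROP TABLE")) return { allowed: false };
  return { allowed: true, transformed_input: prompt.trim() };
}

let providerCalls = 0;

// Mirrors the withAgentId(...) contract: guard first, throw on deny before
// the provider is invoked, run the (possibly rewritten) prompt otherwise.
async function generateTextWrapped(prompt: string): Promise<string> {
  const verdict = await guardCall(prompt);
  if (!verdict.allowed) {
    throw new AgentIdSecurityError("prompt denied by guard");
  }
  const effectivePrompt = verdict.transformed_input ?? prompt;
  providerCalls++;                 // the provider is only billed here
  return `echo: ${effectivePrompt}`;
}

const ok = await generateTextWrapped("  summarize this  ");
let denied = false;
try {
  await generateTextWrapped("DROP TABLE users");
} catch (e) {
  denied = e instanceof AgentIdSecurityError;
}
console.log(ok, denied, providerCalls); // "echo: summarize this" true 1
```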
Multimodal Safety Contract
Across JS SDK, Python SDK, and the Vercel AI SDK wrapper, the current contract is:
- text parts are extracted and scanned by deterministic rules, local ML, and async forensic audit
- image/audio/file/document parts are not parsed in the guard hot path
- attachment presence is logged through `request_identity.agentid_input_context`
- prompts that contain only attachments do not crash the wrapper; the text scan runs on an empty string and the original multimodal payload still reaches the upstream model
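The extraction half of this contract can be sketched as follows. The content-part shapes loosely follow the OpenAI/Vercel message format, but the field names and the `extractForGuard` helper are illustrative, not the SDK's exact types.

```typescript
// Simplified content-part union; real SDK types carry more fields.
type ContentPart =
  | { type: "text"; text: string }
  | { type: "image"; url: string }
  | { type: "file"; name: string };

// Extract only text parts for scanning; attachments are summarized as
// metadata and never parsed in the guard hot path.
function extractForGuard(parts: ContentPart[]) {
  const text = parts
    .filter((p): p is { type: "text"; text: string } => p.type === "text")
    .map((p) => p.text)
    .join("\n");
  const attachments = parts
    .filter((p) => p.type !== "text")
    .map((p) => p.type);
  return { text, attachments };
}

// A prompt with only attachments yields an empty scan string, not a crash.
console.log(extractForGuard([{ type: "image", url: "a.png" }]));
console.log(
  extractForGuard([
    { type: "text", text: "describe this" },
    { type: "file", name: "report.pdf" },
  ]),
);
```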
Streaming behavior:
- the user-visible stream is not blocked by post-flight telemetry
- the wrapper observes a forked stream branch and finalizes telemetry after the stream completes
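The forked-stream behavior can be approximated in a few lines. This sketch collapses the fork into a buffer for brevity: chunks are delivered to the caller as they arrive, and the finalize call (a stand-in for `/ingest/finalize`) only runs after the stream completes. All names here are illustrative.

```typescript
// Stand-in for a provider token stream.
async function* modelStream(): AsyncGenerator<string> {
  yield "Hel";
  yield "lo";
}

let finalized = "";

// Stand-in for the /ingest/finalize write.
async function observeAndFinalize(chunks: string[]): Promise<void> {
  finalized = chunks.join("");
}

// Deliver chunks to the user while buffering them for telemetry; the
// user-visible branch never waits on the telemetry write per chunk.
async function streamWithTelemetry(): Promise<string> {
  const seen: string[] = [];
  let userVisible = "";
  for await (const chunk of modelStream()) {
    userVisible += chunk;          // delivered without waiting on ingest
    seen.push(chunk);
  }
  await observeAndFinalize(seen);  // post-flight, after the stream ends
  return userVisible;
}

const out = await streamWithTelemetry();
console.log(out, finalized); // Hello Hello
```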
Current provider coverage in this repo:
- `@ai-sdk/openai` non-stream
- `@ai-sdk/openai` stream
- `@ai-sdk/anthropic` non-stream
- `@ai-sdk/anthropic` stream
Common Reasons No AgentID Event Appears
1) The app used an unsupported provider surface
Example:
`await client.responses.create(...)`
If only chat.completions.create is wrapped, this path bypasses AgentID wrapper telemetry.
For Vercel AI SDK apps, the equivalent mistake is calling an unwrapped model directly instead of the result of withAgentId(...).
2) The app only called guard
Guard can allow the prompt and still produce no final complete row if the post-model ingest step never runs.
3) The app returned HTTP 200 before its own background telemetry completed
If the application starts background work after sending the response, the worker/runtime may drop the ingest request depending on the framework and hosting model.
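One way to avoid this failure mode is to await the telemetry write before returning the response. The handler below is a toy sketch with a stubbed `log()`; the shape of the response object is an assumption for illustration.

```typescript
// Tracks whether the ingest write actually completed.
let ingested = false;

// Stand-in for agent.log(...): the awaited post-model telemetry write.
async function log(): Promise<void> {
  ingested = true;
}

// In serverless/worker runtimes, pending work scheduled after the response
// is returned may be dropped; awaiting log() before responding avoids that.
async function handleRequest(): Promise<{ status: number; body: string }> {
  const completion = "model output"; // stand-in for the provider call
  await log();                       // ingest completes before the 200
  return { status: 200, body: completion };
}

const res = await handleRequest();
console.log(res.status, ingested); // 200 true
```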
4) The wrong system or key was used
Always confirm:
- `AGENTID_API_KEY`
- `AGENTID_SYSTEM_ID`
- `baseUrl`
match the target AgentID environment.
Recommended Verification Pattern
When debugging a client integration, verify in this order:
1. `GET /api/v1/agent/config` returns `200`
2. `POST /api/v1/guard` returns `200` or `403` with the expected `client_event_id`
3. `POST /api/v1/ingest` returns `200` with `success: true`
4. the same `client_event_id` appears in `ai_events`
If step 2 works but steps 3 and 4 do not, the bug is in the post-model telemetry path, not the guard engine.
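The ordered check can be automated with the HTTP layer injected, which keeps the sketch self-contained and testable. The endpoint paths follow the verification list above; the `Check` signature, the step-numbering logic, and the stub transport are all assumptions for illustration.

```typescript
// Injected HTTP layer so the sketch needs no live AgentID environment.
type Check = (path: string) => Promise<{ status: number; body: any }>;

// Returns the first failing step (1-4), or 0 if all pass.
async function verifyIntegration(http: Check): Promise<number> {
  const cfg = await http("GET /api/v1/agent/config");
  if (cfg.status !== 200) return 1;
  const guard = await http("POST /api/v1/guard");
  if (guard.status !== 200 && guard.status !== 403) return 2;
  const ingest = await http("POST /api/v1/ingest");
  if (ingest.status !== 200 || ingest.body.success !== true) return 3;
  const events = await http("GET ai_events");
  const match = events.body.client_event_id === guard.body.client_event_id;
  return match ? 0 : 4;
}

// Stub transport simulating a broken post-model telemetry path:
// config and guard succeed, ingest fails.
const stub: Check = async (path) => {
  if (path.includes("ingest")) return { status: 500, body: {} };
  return { status: 200, body: { client_event_id: "evt_1", success: true } };
};

console.log(await verifyIntegration(stub)); // 3 -> bug is after guard
```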
Minimal Explicit Integration Pattern
If you are not using a supported automatic wrapper surface, do it explicitly:
1. `await agent.guard(...)`
2. call the model/provider
3. `await agent.log(...)`
This is the most reliable integration pattern for custom app architectures.
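The three steps above can be sketched end to end with a stubbed agent and provider. The method names follow the pattern above, but the verdict and log-entry shapes are assumptions for this sketch.

```typescript
// Records lifecycle phases so the ordering can be inspected.
const trace: string[] = [];

// Stubbed agent standing in for the real SDK client.
const agent = {
  async guard(_input: string) {
    trace.push("guard");
    return { allowed: true };      // verdict shape is an assumption
  },
  async log(_entry: { input: string; output: string }) {
    trace.push("log");
  },
};

// Explicit lifecycle: guard -> provider -> log, each step awaited.
async function runPrompt(input: string): Promise<string> {
  const verdict = await agent.guard(input);       // 1. preflight
  if (!verdict.allowed) throw new Error("denied");
  const output = `model says: ${input}`;          // 2. provider stand-in
  await agent.log({ input, output });             // 3. awaited telemetry
  return output;
}

const answer = await runPrompt("ping");
console.log(trace.join(","), answer); // guard,log model says: ping
```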
If the application already uses Vercel AI SDK and does not need custom manual orchestration, prefer agentid-vercel-sdk instead of rebuilding this lifecycle by hand.
For C# and Java teams building this explicit lifecycle over raw HTTP instead of an SDK wrapper, see: