10. SDK Runtime Integration

If your application is primarily C# or Java and you are not using an official SDK, use the dedicated direct integration guide:

What Automatically Creates Dashboard Activity

You only get a full runtime lifecycle in AgentID when both phases happen:

guard -> model execution -> ingest

That means:

  • guard() creates or updates the preflight side of the lifecycle
  • log() persists the post-execution telemetry row
  • SDK wrappers combine both when you use a supported wrapped surface

If the application only calls guard and never calls ingest, the Activity view, cost tracking, and downstream graphs can look empty or incomplete.

Supported Automatic Wrapper Surfaces

Current supported official wrapper surfaces:

  • JS SDK: wrapOpenAI(...).chat.completions.create(...)
  • Python SDK: wrap_openai(...).chat.completions.create(...)
  • Vercel AI SDK wrapper: generateText() / streamText() with withAgentId(...) wrapped models

Current unsupported surfaces unless you add your own integration:

  • responses.create
  • Assistants API
  • arbitrary custom provider methods
  • app-local helper functions that never call agent.log()

The unsupported list still applies across JS, Python, and Vercel AI SDK flows unless you add an explicit integration path.
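To make the supported/unsupported distinction concrete, here is a minimal stand-in for a wrapOpenAI-style wrapper. Every name and shape below is an illustrative stub, not the real SDK: the point is that only the wrapped method path performs the ingest write, so any other client surface bypasses telemetry entirely.

```typescript
// Counts how many times the (stubbed) ingest write runs.
let telemetryWrites = 0;

interface FakeClient {
  chat: { completions: { create: (prompt: string) => Promise<string> } };
  responses: { create: (prompt: string) => Promise<string> };
}

// A fake provider client with two surfaces.
const client: FakeClient = {
  chat: { completions: { create: async (p) => `chat:${p}` } },
  responses: { create: async (p) => `responses:${p}` },
};

// Stub of a wrapOpenAI-style wrapper: it intercepts only
// chat.completions.create and leaves responses.create untouched.
function wrapClient(c: FakeClient): FakeClient {
  const inner = c.chat.completions.create;
  return {
    ...c,
    chat: {
      completions: {
        create: async (prompt) => {
          // (guard preflight would run here)
          const out = await inner(prompt);
          telemetryWrites += 1; // ingest happens only on this wrapped path
          return out;
        },
      },
    },
  };
}

const secured = wrapClient(client);
```

Calling `secured.responses.create(...)` still works as a provider call, but never increments the telemetry counter — which is exactly the silent-bypass failure mode described above.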

JS SDK Semantics

The published agentid-sdk package is designed so that:

  • agent.guard(...) is asynchronous and must be awaited
  • agent.log(...) returns a promise and can be awaited directly
  • wrapOpenAI() calls /guard before chat.completions.create
  • on non-streaming completions, the wrapper performs the primary /ingest write before the wrapped call resolves
  • agent.guard(...) can accept plain text, content-part arrays, or message arrays and will extract text-only input for scanning while shipping attachment metadata to the backend
  • wrapped OpenAI calls preserve multimodal content parts such as images/audio/files in the provider request while still scanning the extracted text and logging attachment evidence

Practical implication:

If an app logs only create:start / create:ok, that does not prove AgentID ingest happened. It only proves the application believes the wrapped model call succeeded.

You still need to confirm one of these:

  • the call was really secured.chat.completions.create(...)
  • or the application explicitly awaited agent.log(...)

Vercel AI SDK Wrapper Semantics

The dedicated agentid-vercel-sdk package is designed so that:

  • withAgentId(...) wraps an existing Vercel AI SDK model
  • generateText() and streamText() stay unchanged at the application callsite
  • the wrapper calls /guard before the provider request
  • denied prompts throw AgentIdSecurityError before the provider is billed
  • allowed prompts can be rewritten from transformed_input before execution
  • completion telemetry is written through /ingest
  • sdk_ingest_ms is finalized through /ingest/finalize
  • multimodal messages[].content[] payloads are safe: text parts are scanned, attachments pass through unchanged, and attachment metadata is logged
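The deny-before-billing and rewrite-before-execution behavior can be sketched with a small stand-in. Everything here (the guard policy, the model shape, the wrapper) is an assumed stub modeled on the bullets above, not the real agentid-vercel-sdk API:

```typescript
// Stand-in for the security error the wrapper throws on a denied prompt.
class AgentIdSecurityError extends Error {}

interface Model {
  generate(prompt: string): Promise<string>;
}

let providerCalls = 0;
const provider: Model = {
  generate: async (p) => {
    providerCalls += 1; // billing happens here
    return `out:${p}`;
  },
};

// Toy guard policy: deny prompts containing "attack",
// otherwise optionally rewrite the input.
async function guard(prompt: string) {
  if (prompt.includes("attack")) return { allowed: false as const };
  return {
    allowed: true as const,
    transformed_input: prompt.replace("plz", "please"),
  };
}

// Stub of the withAgentId(...) flow: guard before the provider request,
// throw before billing on deny, execute the rewritten input on allow.
function withAgentIdStub(model: Model): Model {
  return {
    generate: async (prompt) => {
      const verdict = await guard(prompt);
      if (!verdict.allowed) throw new AgentIdSecurityError("denied");
      const out = await model.generate(verdict.transformed_input);
      // (completion telemetry would be written through /ingest here)
      return out;
    },
  };
}

const wrapped = withAgentIdStub(provider);
```

The key property to verify in any real integration is the one the stub enforces: a denied prompt throws before `provider.generate` runs, so the provider is never billed.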

Multimodal Safety Contract

Across JS SDK, Python SDK, and the Vercel AI SDK wrapper, the current contract is:

  • text parts are extracted and scanned by deterministic rules, local ML, and async forensic audit
  • image/audio/file/document parts are not parsed in the guard hot path
  • attachment presence is logged through request_identity.agentid_input_context
  • prompts that contain only attachments do not crash the wrapper; the text scan runs on an empty string and the original multimodal payload still reaches the upstream model
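The text-extraction side of this contract can be sketched as follows. The content-part shapes are assumptions for illustration: text parts are concatenated for scanning, attachment parts are only counted as metadata, and an attachment-only prompt yields an empty scan string rather than an error.

```typescript
// Assumed content-part shapes, modeled on common multimodal message formats.
type ContentPart =
  | { type: "text"; text: string }
  | { type: "image" | "audio" | "file"; url: string };

// Extracts text-only input for scanning; attachments are counted
// (logged as metadata) but never parsed in the guard hot path.
function extractForScan(parts: ContentPart[]): {
  scanText: string;
  attachments: number;
} {
  const texts: string[] = [];
  let attachments = 0;
  for (const part of parts) {
    if (part.type === "text") texts.push(part.text);
    else attachments += 1;
  }
  return { scanText: texts.join("\n"), attachments };
}
```

An attachment-only payload produces `scanText === ""`, so the scan still runs (on an empty string) and the original multimodal payload can pass through to the provider unchanged.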

Streaming behavior:

  • the user-visible stream is not blocked by post-flight telemetry
  • the wrapper observes a forked stream branch and finalizes telemetry after the stream completes

Current provider coverage in this repo:

  • @ai-sdk/openai non-stream
  • @ai-sdk/openai stream
  • @ai-sdk/anthropic non-stream
  • @ai-sdk/anthropic stream

Common Reasons No AgentID Event Appears

1) The app used an unsupported provider surface

Example:

await client.responses.create(...)

If only chat.completions.create is wrapped, this path bypasses AgentID wrapper telemetry.

For Vercel AI SDK apps, the equivalent mistake is calling an unwrapped model directly instead of the result of withAgentId(...).

2) The app only called guard

Guard can allow the prompt, yet no completed event row is produced if the post-model ingest step never runs.

3) The app returned HTTP 200 before its own background telemetry completed

If the application starts background work after sending the response, the worker/runtime may drop the ingest request depending on the framework and hosting model.
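This failure mode can be demonstrated with two stub handlers (names and latency are illustrative): a fire-and-forget ingest leaves the write still pending at the moment the response is returned, which is exactly the window in which a serverless runtime may freeze or kill the worker.

```typescript
// Flag standing in for whether the telemetry write reached the backend.
let ingested = false;

// Simulated ingest call with network latency.
async function ingest(): Promise<void> {
  await new Promise<void>((resolve) => setTimeout(resolve, 10));
  ingested = true;
}

// Risky: the ingest write is started but not awaited, so the response
// is sent while telemetry is still in flight.
async function handlerFireAndForget(): Promise<string> {
  void ingest();
  return "200 OK";
}

// Safe: the ingest write completes before the response is returned.
async function handlerAwaited(): Promise<string> {
  await ingest();
  return "200 OK";
}
```

In the fire-and-forget version, whether the row ever lands depends entirely on the framework and hosting model keeping the worker alive after the response.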

4) The wrong system or key was used

Always confirm:

  • AGENTID_API_KEY
  • AGENTID_SYSTEM_ID
  • baseUrl

match the target AgentID environment.

When debugging a client integration, verify in this order:

  1. GET /api/v1/agent/config returns 200
  2. POST /api/v1/guard returns 200 or 403 with the expected client_event_id
  3. POST /api/v1/ingest returns 200 with success: true
  4. the same client_event_id appears in ai_events

If step 2 works but steps 3 and 4 do not, the bug is in the post-model telemetry path, not the guard engine.
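The debugging order above can be encoded as a small triage helper. This is a hypothetical illustration (field names are assumptions): given the observed outcome of each step, it returns the first step that points at the bug.

```typescript
// Observed outcomes of the four verification steps.
interface Checks {
  configStatus: number; // GET /api/v1/agent/config
  guardStatus: number;  // POST /api/v1/guard (200 or 403 are both healthy)
  ingestStatus: number; // POST /api/v1/ingest
  eventFound: boolean;  // client_event_id visible in ai_events
}

// Returns the first failing step, or "none" if the pipeline looks healthy.
function firstFailingStep(c: Checks): string {
  if (c.configStatus !== 200) return "config";
  if (c.guardStatus !== 200 && c.guardStatus !== 403) return "guard";
  if (c.ingestStatus !== 200) return "ingest";
  if (!c.eventFound) return "ai_events";
  return "none";
}
```

Note that a guard 403 is a deny verdict, not a transport failure, so the helper treats it as healthy — matching step 2 above.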

Minimal Explicit Integration Pattern

If you are not using a supported automatic wrapper surface, run the lifecycle explicitly:

1. await agent.guard(...)
2. call the model/provider
3. await agent.log(...)

This is the most reliable integration pattern for custom app architectures.
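The three steps above can be sketched as follows, with the SDK and provider stubbed out. The `agent.guard` / `agent.log` shapes here are assumptions based on this guide, not the published API; the point is the ordering and that every step is awaited:

```typescript
// Records the order in which the lifecycle phases run.
const order: string[] = [];

// Stub agent standing in for the real SDK client.
const agent = {
  guard: async (_input: string) => {
    order.push("guard"); // step 1: preflight
    return { allowed: true };
  },
  log: async (_event: { input: string; output: string }) => {
    order.push("log");   // step 3: awaited ingest
  },
};

// Stub provider call.
async function callProvider(input: string): Promise<string> {
  order.push("model");   // step 2: model execution
  return `echo:${input}`;
}

// The explicit lifecycle: guard -> model execution -> ingest.
async function run(input: string): Promise<string> {
  const verdict = await agent.guard(input);
  if (!verdict.allowed) throw new Error("denied");
  const output = await callProvider(input);
  await agent.log({ input, output });
  return output;
}
```

Because `agent.log(...)` is awaited before `run` resolves, the telemetry row is guaranteed to be written before the application considers the request finished.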

If the application already uses Vercel AI SDK and does not need custom manual orchestration, prefer agentid-vercel-sdk instead of rebuilding this lifecycle by hand.

For C# and Java teams building this explicit lifecycle over raw HTTP instead of an SDK wrapper, see: