11. Vercel AI SDK Wrapper

Purpose

agentid-vercel-sdk is the official AgentID wrapper for the Vercel AI SDK.

Use it when the application already relies on:

  • generateText()
  • streamText()
  • @ai-sdk/openai
  • @ai-sdk/anthropic

The goal is to keep the client callsite unchanged while still enforcing AgentID guardrails and telemetry.

```typescript
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { withAgentId } from "agentid-vercel-sdk";

const secureModel = withAgentId(openai("gpt-4o"), {
  systemId: process.env.AGENTID_SYSTEM_ID!,
  apiKey: process.env.AGENTID_API_KEY!,
});

const result = await generateText({
  model: secureModel,
  prompt: "Write a short refund confirmation.",
});
```

Runtime Flow

The wrapper keeps the same backend-first contract as the main JS SDK:

  1. extract the user prompt from the Vercel AI SDK request
  2. call AgentID /guard
  3. block early when guard denies
  4. optionally apply transformed_input before provider execution
  5. call the wrapped provider model
  6. persist /ingest telemetry
  7. finalize SDK timing with /ingest/finalize

This means denied prompts are blocked before the provider is billed.
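The steps above can be sketched as a single guarded call. This is a simplified illustration, not the wrapper's actual internals: `guard`, `callProvider`, and `ingest` are hypothetical stand-ins for the AgentID `/guard`, provider, and `/ingest` calls.

```typescript
// Hypothetical sketch of the wrapper's backend-first runtime flow.
type GuardDecision = {
  allowed: boolean;
  transformed_input?: string;
};

async function guardedGenerate(
  prompt: string,
  callProvider: (input: string) => Promise<string>,
  guard: (input: string) => Promise<GuardDecision>,
  ingest: (event: object) => Promise<void>,
): Promise<string> {
  // 1-2. extract the prompt and call /guard
  const decision = await guard(prompt);

  // 3. block early when guard denies -- the provider is never called, so it is never billed
  if (!decision.allowed) {
    throw new Error("AgentID guard denied this prompt");
  }

  // 4. optionally apply transformed_input before provider execution
  const effectiveInput = decision.transformed_input ?? prompt;

  // 5. call the wrapped provider model
  const output = await callProvider(effectiveInput);

  // 6-7. persist telemetry and finalize timing
  await ingest({ input: effectiveInput, output });
  return output;
}
```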

Multimodal Prompt Support

The wrapper now supports multimodal Vercel AI SDK payloads:

  • text parts are extracted for /guard
  • image/audio/file parts remain untouched and continue to the provider
  • attachment metadata is attached to the guard request so Activity and log detail can show has_attachments, attachment_count, and attachment_types

This applies to both mixed text+attachment prompts and attachment-only prompts.
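The extraction split can be sketched as follows. The part shapes below are assumptions modeled loosely on Vercel AI SDK message parts, not the wrapper's actual types:

```typescript
// Hypothetical sketch: separate text parts (for /guard) from attachment
// parts (which pass through untouched to the provider).
type Part =
  | { type: "text"; text: string }
  | { type: "image"; image: string }
  | { type: "file"; mimeType: string; data: string };

function extractForGuard(parts: Part[]) {
  const textParts = parts.filter(
    (p): p is Extract<Part, { type: "text" }> => p.type === "text",
  );
  const attachments = parts.filter((p) => p.type !== "text");
  return {
    // joined text is what the guard hot path scans
    text: textParts.map((p) => p.text).join("\n"),
    // attachment metadata surfaces in Activity and log detail
    has_attachments: attachments.length > 0,
    attachment_count: attachments.length,
    attachment_types: attachments.map((p) => p.type),
  };
}
```

For an attachment-only prompt, `text` is simply empty while the metadata fields still describe the attachments.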

Failure Behavior

Default behavior is backend-first and fail-open unless the effective system policy says otherwise.

  • /guard is the primary authority
  • clientFastFail is optional and disabled by default
  • strictMode or failureMode: "fail_close" activates fail-close behavior
  • if backend guard is temporarily unreachable and the effective mode is fail-close, the wrapper can apply the same local deterministic fallback contract as agentid-sdk

This keeps the wrapper aligned with the main JS SDK instead of inventing a separate security model.
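The resolution order described above can be sketched like this. The option names mirror this page, but the resolution logic itself is an illustrative assumption, not the SDK's implementation:

```typescript
// Hypothetical sketch of resolving the effective failure mode and the
// behavior when the backend guard is unreachable.
type FailureMode = "fail_open" | "fail_close";

interface GuardOptions {
  strictMode?: boolean;
  failureMode?: FailureMode;
}

function effectiveFailureMode(
  opts: GuardOptions,
  policyMode?: FailureMode, // mode delivered by the effective system policy
): FailureMode {
  // strictMode or an explicit fail_close request always tightens the mode
  if (opts.strictMode || opts.failureMode === "fail_close") return "fail_close";
  // otherwise the system policy decides; the default remains fail-open
  return policyMode ?? "fail_open";
}

function onGuardUnreachable(mode: FailureMode): "allow" | "local_fallback" {
  // fail-open lets the request proceed; fail-close applies the local
  // deterministic fallback contract instead of silently allowing
  return mode === "fail_open" ? "allow" : "local_fallback";
}
```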

Edge Runtime Compatibility

The wrapper is designed for Vercel AI SDK and Edge-safe server runtimes.

  • no fs
  • no native Node crypto dependency in the wrapper path
  • streaming telemetry is observed on a forked ReadableStream branch so the response stream stays non-blocking for the caller
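The forked-branch pattern can be sketched with the standard `ReadableStream.tee()`; `recordTelemetry` is a hypothetical stand-in for the wrapper's ingest call, and this is an illustration of the technique rather than the wrapper's code:

```typescript
// Sketch: observe a stream on a forked branch while the caller's branch
// stays non-blocking.
function observeStream(
  source: ReadableStream<string>,
  recordTelemetry: (chunks: string[]) => void,
): ReadableStream<string> {
  const [toCaller, toTelemetry] = source.tee();

  // Drain the telemetry branch in the background; the caller's branch is
  // returned immediately and never waits on telemetry work.
  (async () => {
    const chunks: string[] = [];
    const reader = toTelemetry.getReader();
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      chunks.push(value);
    }
    recordTelemetry(chunks);
  })();

  return toCaller;
}
```

`tee()` is available in Edge runtimes and modern Node.js, which is what makes this pattern viable without any Node-specific stream APIs.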

Provider Coverage

The package is provider-agnostic at the Vercel AI SDK layer, but this repo validates concrete provider behavior with integration tests for:

  • @ai-sdk/openai non-stream
  • @ai-sdk/openai stream
  • @ai-sdk/anthropic non-stream
  • @ai-sdk/anthropic stream

Per-request Overrides

Request-scoped identity can be passed through providerOptions.agentid.

```typescript
const result = await generateText({
  model: secureModel,
  prompt: "Summarize this customer ticket.",
  providerOptions: {
    agentid: {
      userId: "customer-123",
      requestIdentity: {
        tenantId: "acme",
        sessionId: "sess-42",
      },
      expectedLanguages: ["en"],
      clientEventId: "11111111-1111-4111-8111-111111111111",
    },
  },
});
```

Supported request-level overrides:

  • userId
  • requestIdentity
  • expectedLanguages
  • clientEventId
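Conceptually, request-scoped values shadow constructor-level defaults field by field. The merge below is an illustrative assumption, not the wrapper's actual resolution code:

```typescript
// Hypothetical sketch: providerOptions.agentid overrides take precedence
// over defaults configured in withAgentId().
interface AgentIdOverrides {
  userId?: string;
  requestIdentity?: Record<string, string>;
  expectedLanguages?: string[];
  clientEventId?: string;
}

function resolveOverrides(
  defaults: AgentIdOverrides,
  providerOptions?: { agentid?: AgentIdOverrides },
): AgentIdOverrides {
  // shallow merge: any field present on the request wins for that request only
  return { ...defaults, ...(providerOptions?.agentid ?? {}) };
}
```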

Operational Notes

  • The wrapper is multimodal-safe. File-only or mixed text+attachment prompts no longer fail prompt extraction.
  • The current guard hot path scans extracted text only. Binary attachment contents are not OCR- or vision-scanned yet.
  • The wrapper delegates capability fetch, retry behavior, guard correlation, and finalize semantics to agentid-sdk.
  • If you are not using Vercel AI SDK, prefer the base JS SDK (agentid-sdk) or call /guard + /ingest explicitly.
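For the explicit-call route, the request can be assembled by hand. The `/guard` path comes from this page, but the base URL, header scheme, and payload field names below are illustrative assumptions; check the API reference for the real shapes:

```typescript
// Hypothetical sketch of building a direct /guard request without the wrapper.
function buildGuardRequest(
  baseUrl: string,
  apiKey: string,
  systemId: string,
  input: string,
): { url: string; init: RequestInit } {
  return {
    url: `${baseUrl}/guard`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // assumed auth scheme
      },
      body: JSON.stringify({ system_id: systemId, input }), // assumed field names
    },
  };
}
```

Usage would be `const { url, init } = buildGuardRequest(...); const res = await fetch(url, init);`, followed by an explicit `/ingest` call after the provider responds.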