11. Vercel AI SDK Wrapper
Purpose
agentid-vercel-sdk is the official AgentID wrapper for the Vercel AI SDK.
Use it when the application already relies on:
- generateText() / streamText()
- @ai-sdk/openai
- @ai-sdk/anthropic
The goal is to keep the client callsite unchanged while still enforcing AgentID guardrails and telemetry.
```typescript
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { withAgentId } from "agentid-vercel-sdk";

const secureModel = withAgentId(openai("gpt-4o"), {
  systemId: process.env.AGENTID_SYSTEM_ID!,
  apiKey: process.env.AGENTID_API_KEY!,
});

const result = await generateText({
  model: secureModel,
  prompt: "Write a short refund confirmation.",
});
```
Runtime Flow
The wrapper keeps the same backend-first contract as the main JS SDK:
- extract the user prompt from the Vercel AI SDK request
- call AgentID /guard
- block early when guard denies
- optionally apply transformed_input before provider execution
- call the wrapped provider model
- persist /ingest telemetry
- finalize SDK timing with /ingest/finalize
This means denied prompts are blocked before the provider is billed.
Multimodal Prompt Support
The wrapper now supports multimodal Vercel AI SDK payloads:
- text parts are extracted for /guard
- image/audio/file parts remain untouched and continue to the provider
- attachment metadata is attached to the guard request so Activity and log detail can show has_attachments, attachment_count, and attachment_types
This applies to both mixed text+attachment prompts and attachment-only prompts.
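The extraction step described above can be sketched as follows. The Part union and extractForGuard are illustrative stand-ins for the Vercel AI SDK content-part shapes and the wrapper's internal logic, not the package's real types.

```typescript
// Sketch: text parts go to /guard, non-text parts pass through untouched,
// and attachment metadata is summarized. Names here are illustrative.
type Part =
  | { type: "text"; text: string }
  | { type: "image"; image: Uint8Array }
  | { type: "file"; data: Uint8Array; mimeType: string };

function extractForGuard(parts: Part[]) {
  const textParts = parts.filter(
    (p): p is Extract<Part, { type: "text" }> => p.type === "text",
  );
  const attachments = parts.filter((p) => p.type !== "text");
  return {
    guardText: textParts.map((p) => p.text).join("\n"), // scanned by /guard
    has_attachments: attachments.length > 0,
    attachment_count: attachments.length,
    attachment_types: attachments.map((p) => p.type),
  };
}
```

An attachment-only prompt simply yields empty guard text plus metadata, which is why such prompts no longer fail extraction.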
Failure Behavior
Default behavior is backend-first and fail-open unless the effective system policy says otherwise.
- /guard is the primary authority
- clientFastFail is optional and disabled by default
- strictMode or failureMode: "fail_close" activates fail-close behavior
- if backend guard is temporarily unreachable and the effective mode is fail-close, the wrapper can apply the same local deterministic fallback contract as agentid-sdk
This keeps the wrapper aligned with the main JS SDK instead of inventing a separate security model.
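One way to read that outage contract, sketched with illustrative names (onGuardUnreachable and localFallbackAllows are not the SDK's API):

```typescript
// Hypothetical sketch of the decision when /guard is unreachable.
// Fail-open allows traffic through; fail-close defers to a local
// deterministic fallback instead of silently allowing everything.
type FailureMode = "fail_open" | "fail_close";

function onGuardUnreachable(
  mode: FailureMode,
  localFallbackAllows: (prompt: string) => boolean,
  prompt: string,
): "allow" | "block" {
  if (mode === "fail_open") return "allow"; // default: do not break the app
  return localFallbackAllows(prompt) ? "allow" : "block";
}
```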
Edge Runtime Compatibility
The wrapper is designed for Vercel AI SDK and Edge-safe server runtimes.
- no fs
- no native Node crypto dependency in the wrapper path
- streaming telemetry is observed on a forked ReadableStream branch so the response stream stays non-blocking for the caller
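The forked-stream pattern relies on the standard ReadableStream.tee(). A minimal sketch, independent of the wrapper itself, of observing one branch for telemetry while the caller consumes the other:

```typescript
// Sketch of the tee() pattern: one source stream, two independent branches.
// The telemetry branch is consumed separately and never blocks the caller.
async function collect(stream: ReadableStream<string>): Promise<string> {
  let out = "";
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    out += value;
  }
  return out;
}

const source = new ReadableStream<string>({
  start(controller) {
    for (const chunk of ["Hel", "lo"]) controller.enqueue(chunk);
    controller.close();
  },
});

const [callerBranch, telemetryBranch] = source.tee();
const telemetryPromise = collect(telemetryBranch); // observed in the background
const callerText = await collect(callerBranch);    // caller reads normally
```

ReadableStream and tee() are Web Streams APIs available in Edge runtimes and modern Node, so no Node-only modules are needed on this path.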
Provider Coverage
The package is provider-agnostic at the Vercel AI SDK layer, but this repo validates concrete provider behavior with integration tests for:
- @ai-sdk/openai non-stream
- @ai-sdk/openai stream
- @ai-sdk/anthropic non-stream
- @ai-sdk/anthropic stream
Per-request Overrides
Request-scoped identity can be passed through providerOptions.agentid.
```typescript
const result = await generateText({
  model: secureModel,
  prompt: "Summarize this customer ticket.",
  providerOptions: {
    agentid: {
      userId: "customer-123",
      requestIdentity: {
        tenantId: "acme",
        sessionId: "sess-42",
      },
      expectedLanguages: ["en"],
      clientEventId: "11111111-1111-4111-8111-111111111111",
    },
  },
});
```
Supported request-level overrides:
- userId
- requestIdentity
- expectedLanguages
- clientEventId
Operational Notes
- The wrapper is multimodal-safe. File-only or mixed text+attachment prompts no longer fail prompt extraction.
- The current guard hot path scans extracted text only. Binary attachment contents are not OCR- or vision-scanned yet.
- The wrapper delegates capability fetch, retry behavior, guard correlation, and finalize semantics to agentid-sdk.
- If you are not using the Vercel AI SDK, prefer the base JS SDK (agentid-sdk) or call /guard + /ingest explicitly.