Working with Items
Understanding the items-based streaming paradigm for callModel
The Items Paradigm
callModel is built on OpenRouter’s Responses API, which uses an items-based model rather than the messages-based model of the OpenAI Chat Completions API or the Vercel AI SDK.
The key insight: items are emitted multiple times with the same ID but progressively updated content. You replace the entire item by ID rather than accumulating stream chunks.
Messages vs Items
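At a glance, the two models differ roughly as follows:

| | Messages-based streaming | Items-based streaming |
| --- | --- | --- |
| Unit of streaming | Delta chunks to append | Complete items to replace |
| State you maintain | An accumulated string per message | A Map of items keyed by ID |
| Concurrent outputs | One assistant message at a time | Reasoning, function calls, and text as separate items |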
Item Types
getItemsStream() yields these item types:
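The following union is a rough sketch based on the Responses API’s item kinds; the field shapes and type names here are assumptions, so consult the SDK’s exported types for the authoritative definitions:

```typescript
// A rough sketch of the streamed item union. Field shapes follow the
// Responses API's item kinds and may differ from the SDK's real types.
type StreamedItem =
  | {
      type: "message"; // assistant output text
      id: string;
      role: "assistant";
      content: Array<{ type: "output_text"; text: string }>;
    }
  | {
      type: "reasoning"; // the model's thinking, when emitted
      id: string;
      content: Array<{ text: string }>;
    }
  | {
      type: "function_call"; // a tool invocation requested by the model
      id: string;
      name: string;
      arguments: string; // JSON string that grows as the stream progresses
    };
```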
How Streaming Works
Each iteration yields a complete item with the same ID but updated content:
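A minimal sketch; the callModel option names and model slug are placeholders:

```typescript
// callModel and its options are placeholders for the SDK's actual API.
const result = callModel({ model: "openai/gpt-4o", input: "Say hello" });

for await (const item of result.getItemsStream()) {
  // The same message item arrives several times, complete on each pass:
  //   { type: "message", id: "msg_1", content: [{ ..., text: "Hel" }] }
  //   { type: "message", id: "msg_1", content: [{ ..., text: "Hello, wor" }] }
  //   { type: "message", id: "msg_1", content: [{ ..., text: "Hello, world!" }] }
  console.log(item.id, item);
}
```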
The same pattern applies to function calls:
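The arguments string grows across emissions of the same function_call item (again a sketch, reusing `result` from above):

```typescript
for await (const item of result.getItemsStream()) {
  if (item.type === "function_call") {
    // Each emission carries the full arguments string accumulated so far:
    //   { type: "function_call", id: "fc_1", name: "get_weather", arguments: '{"loc' }
    //   { type: "function_call", id: "fc_1", name: "get_weather", arguments: '{"location":"Paris"}' }
    console.log(item.name, item.arguments);
  }
}
```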
React Integration
The items paradigm eliminates manual chunk accumulation. Use a Map keyed by item ID and let React’s reconciliation handle updates:
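A minimal component sketch; `Item`, `ItemView`, and the callModel options are placeholders for the SDK’s real types and your own renderer:

```tsx
import { useEffect, useState } from "react";

// Placeholders: substitute the SDK's exported item type, your own
// renderer, and the real callModel import.
type Item = { id: string; type: string };
declare function callModel(opts: { model: string; input: string }): {
  getItemsStream(): AsyncIterable<Item>;
};
declare function ItemView(props: { item: Item }): JSX.Element;

function ModelOutput() {
  const [items, setItems] = useState<Map<string, Item>>(new Map());

  useEffect(() => {
    let cancelled = false;
    (async () => {
      const result = callModel({ model: "openai/gpt-4o", input: "Say hello" });
      for await (const item of result.getItemsStream()) {
        if (cancelled) break;
        // Replace the item by ID; a fresh Map triggers a re-render.
        setItems((prev) => new Map(prev).set(item.id, item));
      }
    })();
    return () => {
      cancelled = true;
    };
  }, []);

  return (
    <div>
      {[...items.values()].map((item) => (
        <ItemView key={item.id} item={item} />
      ))}
    </div>
  );
}
```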
Benefits
- No chunk accumulation - Each item emission is complete
- Natural React updates - Setting state triggers re-render automatically
- Concurrent item handling - Function calls and messages stream in parallel
- Works with React 18+ - Compatible with concurrent features and Suspense
- Type-safe - Full TypeScript inference for all item types
Comparison with Chunk Accumulation
Traditional streaming requires manual accumulation:
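For contrast, a sketch of the chunk-based pattern; the stream shape follows the Chat Completions delta format, and `chunkStream` and `render` are placeholders:

```typescript
declare const chunkStream: AsyncIterable<{
  choices: Array<{ delta?: { content?: string } }>;
}>;
declare function render(text: string): void;

let text = "";
for await (const chunk of chunkStream) {
  // Each chunk carries only a fragment; you must accumulate it yourself.
  text += chunk.choices[0]?.delta?.content ?? "";
  render(text);
}
```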
With items, each emission replaces the previous:
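With getItemsStream() there is nothing to concatenate (`result` and `StreamedItem` as sketched above):

```typescript
const itemsById = new Map<string, StreamedItem>();

for await (const item of result.getItemsStream()) {
  // Each emission is the complete item, so overwriting by ID is enough.
  itemsById.set(item.id, item);
}
```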
The items approach is especially powerful when the model produces multiple outputs simultaneously (e.g., thinking + tool calls + text).
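For illustration, one possible emission order for a turn that thinks, calls a tool, and answers; the IDs and ordering here are illustrative, not guaranteed:

```typescript
// A possible interleaved emission order for a single turn:
//   { type: "reasoning",     id: "rs_1",  ... }  // partial
//   { type: "function_call", id: "fc_1",  ... }  // partial
//   { type: "reasoning",     id: "rs_1",  ... }  // updated, replaces rs_1
//   { type: "function_call", id: "fc_1",  ... }  // updated, replaces fc_1
//   { type: "message",       id: "msg_1", ... }  // partial
//   { type: "message",       id: "msg_1", ... }  // final, replaces msg_1
// Keying by ID keeps every output current without tracking which is "active".
```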
Migrating from getNewMessagesStream()
getNewMessagesStream() is deprecated in favor of getItemsStream(). The
migration is straightforward:
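A before/after sketch, assuming `handleMessage` is your existing handler:

```typescript
// Before (deprecated):
for await (const message of result.getNewMessagesStream()) {
  handleMessage(message);
}

// After: filter on item.type if you only want messages.
for await (const item of result.getItemsStream()) {
  if (item.type === "message") {
    handleMessage(item);
  }
}
```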
The key difference: getItemsStream() includes all item types (reasoning,
function calls, etc.), not just messages.