MCP Apps is the official standard for interactive widgets in the Model Context Protocol ecosystem. It provides a standardized way to create rich, interactive user interfaces that work across MCP-compatible clients.
mcp-use Advantage: Write your widgets once using type: "mcpApps" and they'll work with both MCP Apps clients AND ChatGPT automatically. This dual-protocol support means maximum compatibility with minimal effort.
Why MCP Apps?
MCP Apps is the recommended widget protocol for MCP servers because it:
MCP-native: Built specifically for the Model Context Protocol ecosystem
Open Standard: Ensures long-term compatibility across the MCP ecosystem
Secure: Double-iframe sandbox architecture with granular CSP control
Feature-rich: JSON-RPC 2.0 communication with full MCP integration
Future-proof: Supported by a growing ecosystem of MCP clients
Quick Start
Start with the MCP Apps template:
npx create-mcp-use-app my-mcp-server --template mcp-apps
cd my-mcp-server
npm install
npm run dev
This creates a project with dual-protocol support out of the box.
MCP Apps vs ChatGPT Apps SDK
While both protocols enable interactive widgets, they have key differences:
| Feature | MCP Apps (Standard) | ChatGPT Apps SDK |
| --- | --- | --- |
| Protocol | JSON-RPC 2.0 over postMessage | window.openai global API |
| MIME Type | text/html;profile=mcp-app | text/html+skybridge |
| Specification | MCP Apps (open standard) | OpenAI proprietary |
| Architecture | Double-iframe sandbox | Single iframe |
| CSP Format | camelCase (connectDomains) | snake_case (connect_domains) |
| Client Support | MCP Apps clients (Claude, Goose, etc.) | ChatGPT |
| mcp-use Support | ✅ Full support | ✅ Full support |
Best of Both Worlds: With mcp-use, you don't have to choose! Using type: "mcpApps" generates metadata for both protocols automatically, so your widgets work everywhere.
Widgets can receive partial tool arguments in real time (partialToolInput / isStreaming) when the host streams them, enabling live previews as the LLM generates the tool call. See useWidget — Streaming tool arguments for details.
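As a sketch of that live-preview pattern (the field names follow the useWidget description above; the helper itself is hypothetical):

```typescript
// Hypothetical helper: decide what a widget should render while the host
// streams tool arguments. Prefer the partial arguments mid-stream so the
// user sees a live preview; fall back to the final arguments once complete.
function pickLiveInput<T>(
  isStreaming: boolean,
  partialToolInput: Partial<T> | undefined,
  toolInput: T | undefined
): Partial<T> | T | undefined {
  return isStreaming ? partialToolInput : toolInput;
}

// e.g. while the model is still emitting { query: "mango" }:
const preview = pickLiveInput(true, { query: "ma" }, undefined);
```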
Using Dual-Protocol Type (Recommended)
The mcpApps type enables your widgets to work with both MCP Apps clients and ChatGPT:
import { MCPServer } from "mcp-use/server";

const server = new MCPServer({
  name: "my-server",
  version: "1.0.0",
  baseUrl: process.env.MCP_URL || "http://localhost:3000",
});
// Register a dual-protocol widget
// Register a dual-protocol widget
server.uiResource({
  type: "mcpApps", // 👈 Works with BOTH MCP Apps AND ChatGPT
  name: "weather-display",
  htmlTemplate: `
    <!DOCTYPE html>
    <html>
      <head>
        <meta charset="UTF-8">
        <title>Weather Display</title>
      </head>
      <body>
        <div id="root"></div>
        <script type="module" src="/resources/weather-display.js"></script>
      </body>
    </html>
  `,
  metadata: {
    // Unified CSP configuration (works for both protocols)
    csp: {
      connectDomains: ["https://api.weather.com"],
      resourceDomains: ["https://cdn.weather.com"],
    },
    prefersBorder: true,
    // Invocation status text, shown in the inspector and ChatGPT
    // while the tool runs / after completion
    invoking: "Fetching weather...", // auto-default: "Loading weather-display..."
    invoked: "Weather loaded", // auto-default: "weather-display ready"
    // ChatGPT-specific metadata (optional)
    widgetDescription: "Displays current weather conditions",
  },
});
This single configuration automatically generates metadata for both protocols :
For MCP Apps Clients:
{
  mimeType: "text/html;profile=mcp-app",
  _meta: {
    ui: {
      resourceUri: "ui://widget/weather-display.html",
      csp: {
        connectDomains: ["https://api.weather.com"],
        resourceDomains: ["https://cdn.weather.com"]
      },
      prefersBorder: true
    }
  }
}
For ChatGPT (Apps SDK):
{
  mimeType: "text/html+skybridge",
  _meta: {
    "openai/outputTemplate": "ui://widget/weather-display.html",
    "openai/widgetCSP": {
      connect_domains: ["https://api.weather.com"],
      resource_domains: ["https://cdn.weather.com"]
    },
    "openai/widgetPrefersBorder": true,
    "openai/widgetDescription": "Displays current weather conditions"
  }
}
Per the MCP Apps specification, a tool result has three fields with different visibility:
| Field | LLM sees it? | Widget sees it? | Purpose |
| --- | --- | --- | --- |
| content | Yes | Yes | Text summary for the model's context |
| structuredContent | No | Yes (as props) | Rendering data for the widget |
| _meta | No | Yes (as metadata) | Protocol + custom metadata |
This separation enables a powerful pattern: the server computes data for the widget (like search results, computed visualizations, etc.) without polluting the model’s context window.
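To make the three-field split concrete, here is an illustrative tool-result object (the values are made up, and the exact wire shape may differ from what the widget() helper emits):

```typescript
// Illustrative tool result for a product-search tool.
const toolResult = {
  // Text summary: goes into the model's context
  content: [{ type: "text", text: 'Found 16 products matching "mango"' }],
  // Rendering data: delivered to the widget as props, hidden from the model
  structuredContent: { query: "mango", results: [] as unknown[] },
  // Protocol + custom metadata: delivered to the widget as metadata
  _meta: { totalCount: 16 },
};
```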
Server: Returning data
server.tool({
  name: "search-products",
  schema: z.object({ query: z.string() }),
  widget: { name: "product-search-result" },
}, async ({ query }) => {
  const results = await db.search(query);
  return widget({
    // Widget rendering data (the LLM does NOT see this)
    props: { query, results },
    // Text summary (the LLM sees this)
    output: text(`Found ${results.length} products matching "${query}"`),
  });
});
Widget: Reading data
const { props, toolInput, isPending } = useWidget<ProductSearchProps>();
// props = structuredContent from tool result (computed by server)
// toolInput = tool call arguments (what the model sent, e.g. { query: "mango" })
The props and toolInput are intentionally separate:
toolInput is what the model decided to pass (e.g. { query: "mango" })
props is what the server computed and returned (e.g. { query: "mango", results: [...16 items] })
Display Modes
Widgets can request different display modes from the host:
const { displayMode, requestDisplayMode } = useWidget();

// Request fullscreen
await requestDisplayMode("fullscreen");

// Request picture-in-picture
await requestDisplayMode("pip");

// Return to inline
await requestDisplayMode("inline");
The host may decline the request — always check displayMode for the actual state. Common pattern for a toolbar:
const isFullscreen = displayMode === "fullscreen";
const isPip = displayMode === "pip";

return (
  <div>
    {!isFullscreen && !isPip && (
      <>
        <button onClick={() => requestDisplayMode("pip")}>PiP</button>
        <button onClick={() => requestDisplayMode("fullscreen")}>Fullscreen</button>
      </>
    )}
    {(isFullscreen || isPip) && (
      <button onClick={() => requestDisplayMode("inline")}>Exit</button>
    )}
  </div>
);
Follow-Up Messages
Widgets can send messages to the conversation on behalf of the user using sendFollowUpMessage. This triggers a new LLM turn:
const { sendFollowUpMessage } = useWidget();

<button onClick={() => sendFollowUpMessage("Tell me more about mangoes")}>
  Ask the AI
</button>
Per the MCP Apps specification, this maps to the ui/message JSON-RPC request. The host adds the message to the conversation and triggers the model to respond.
Host Context: Locale, Theme, and More
useWidget exposes context fields provided by the host:
const { theme, locale, timeZone, safeArea, userAgent, maxHeight } = useWidget();
| Field | Type | Description |
| --- | --- | --- |
| theme | "light" \| "dark" | Current host theme |
| locale | string | BCP 47 locale (e.g. "en-US", "fr-FR") |
| timeZone | string | IANA timezone (e.g. "America/New_York") |
| safeArea | { insets: { top, right, bottom, left } } | Safe area for notched devices |
| userAgent | { device: { type }, capabilities: { hover, touch } } | Device info |
| maxHeight | number | Max available height in pixels |
| maxWidth | number \| undefined | Max available width (MCP Apps only) |
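A common use of these fields is date formatting. A minimal sketch, assuming locale and timeZone come from useWidget() as shown above:

```typescript
// Format a timestamp with the host-provided locale and time zone,
// so dates in the widget match what the user sees elsewhere in the client.
function formatForHost(ts: number, locale: string, timeZone: string): string {
  return new Intl.DateTimeFormat(locale, {
    timeZone,
    dateStyle: "medium",
    timeStyle: "short",
  }).format(new Date(ts));
}
```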
Use the Inspector’s playground controls to test different locales, devices, and themes during development.
The tool result’s _meta field is available as metadata in useWidget. Use the metadata option in the widget() helper to pass extra data that doesn’t belong in structuredContent:
// Server side
return widget({
  props: { items: filteredItems },
  metadata: { totalCount: 1000, nextCursor: "abc123" },
  output: text(`Showing ${filteredItems.length} of 1000 results`),
});

// Widget side
const { props, metadata } = useWidget();
// props = { items: [...] } (from structuredContent)
// metadata = { totalCount, nextCursor } (from _meta)
This works across both MCP Apps (ui/notifications/tool-result → params._meta) and ChatGPT (window.openai.toolResponseMetadata).
Your widget code is identical regardless of which protocol the host uses:
// resources/weather-display.tsx
import React from "react";
import { useWidget, McpUseProvider, type WidgetMetadata } from "mcp-use/react";
import { z } from "zod";

const propSchema = z.object({
  city: z.string(),
  temperature: z.number(),
  conditions: z.string(),
});

export const widgetMetadata: WidgetMetadata = {
  description: "Display weather information",
  props: propSchema,
};

const WeatherDisplay: React.FC = () => {
  const { props, isPending } = useWidget<z.infer<typeof propSchema>>();

  if (isPending) {
    return <div>Loading weather...</div>;
  }

  return (
    <McpUseProvider autoSize>
      <div className="weather-card">
        <h2>{props.city}</h2>
        <div className="temperature">{props.temperature}°C</div>
        <div className="conditions">{props.conditions}</div>
      </div>
    </McpUseProvider>
  );
};

export default WeatherDisplay;
The same widget code works with:
✅ Claude Desktop (MCP Apps protocol)
✅ ChatGPT (Apps SDK protocol)
✅ Goose (MCP Apps protocol)
✅ Any MCP Apps-compatible client
✅ Future clients supporting either protocol
Widgets can call MCP tools using the useCallTool hook for interactive functionality:
import { useWidget, useCallTool } from "mcp-use/react";

const InteractiveWidget = () => {
  const { props, isPending: propsPending } = useWidget();
  const { callTool, data, isPending } = useCallTool("fetch-data");

  if (propsPending) {
    return <div>Loading...</div>;
  }

  return (
    <div>
      <button onClick={() => callTool({ id: props.itemId })}>
        {isPending ? "Loading..." : "Fetch Data"}
      </button>
      {data && (
        <div>{JSON.stringify(data.structuredContent)}</div>
      )}
    </div>
  );
};
Benefits:
✅ Type-safe: Automatic TypeScript inference for tool inputs and outputs
✅ State management: Built-in loading, success, and error states
✅ Cross-protocol: Works identically in MCP Apps and ChatGPT
See useCallTool() for complete documentation.
State Management
State in an MCP Apps widget falls into three categories:
| State type | Where it lives | Lifetime | Example |
| --- | --- | --- | --- |
| Business data | MCP server / backend | Long-lived | Tasks, search results, documents |
| UI state | Widget instance | Per-widget | Selected row, expanded panel, filter |
| Cross-session | Your backend | Across conversations | Saved preferences, workspace settings |
Business Data (Server-Owned)
Business data is the source of truth. The widget receives it via props (from structuredContent) and can request updates by calling server tools:
const { props } = useWidget<{ tasks: Task[] }>();
const { callTool, data } = useCallTool("update-task");

const handleToggle = (taskId: string) => {
  callTool({ taskId, done: true });
  // Server returns the updated task list in structuredContent;
  // the widget re-renders with new props on the next tool call
};
UI state describes how data is being viewed, not the data itself. Use setState from useWidget:
type MyState = { favorites: string[] };

const { props, state, setState } = useWidget<Props, Output, Meta, MyState>();

const favorites = state?.favorites ?? [];

const toggleFavorite = (id: string) => {
  const current = state?.favorites ?? [];
  const next = current.includes(id)
    ? current.filter(f => f !== id)
    : [...current, id];
  setState({ favorites: next });
};
What happens under the hood:
ChatGPT: Calls window.openai.setWidgetState() for host-managed persistence across message reloads.
MCP Apps: Updates local React state and sends ui/update-model-context to the host, so the model can reason about UI state on future turns.
ui/update-model-context is the MCP Apps equivalent of ChatGPT’s
widget state persistence. Each call overwrites the previous context. The host
includes this data in the model’s context on the next user message.
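A rough sketch of that per-protocol dispatch (the names here are illustrative; the real logic lives inside mcp-use):

```typescript
// Illustrative only: how a dual-protocol setState might route persistence.
type Protocol = "mcpApps" | "appsSdk";

interface HostBridge {
  setWidgetState?: (state: unknown) => void;     // ChatGPT: window.openai.setWidgetState
  updateModelContext?: (state: unknown) => void; // MCP Apps: ui/update-model-context
}

function persistWidgetState(protocol: Protocol, state: unknown, host: HostBridge): void {
  if (protocol === "appsSdk") {
    host.setWidgetState?.(state);
  } else {
    // Each call overwrites the previously sent model context.
    host.updateModelContext?.(state);
  }
}
```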
Cross-Session State (Backend-Owned)
For state that must persist across conversations, store it on your backend. Use callTool from the widget to read/write preferences via your server’s tools — don’t rely on localStorage.
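A minimal sketch of that pattern, assuming a hypothetical "save-preferences" tool on your server (`callTool` stands in for the function returned by useCallTool):

```typescript
// Route preference writes through a server tool instead of localStorage.
type CallTool = (args: Record<string, unknown>) => Promise<unknown>;

function makePreferenceWriter(callTool: CallTool) {
  return (prefs: Record<string, unknown>) =>
    // The server persists this in its own storage, so the preferences
    // survive across conversations and devices.
    callTool({ prefs });
}
```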
MCP Apps Bridge API
For widgets that need direct protocol access, mcp-use provides the MCP Apps bridge:
import { getMcpAppsBridge } from "mcp-use/react";

// The bridge connects automatically; all methods return promises,
// so call them from an async function rather than a component body.
async function runBridgeCalls() {
  const bridge = getMcpAppsBridge();

  // Call an MCP tool
  const result = await bridge.callTool("search", { query: "hello" });

  // Read an MCP resource
  const data = await bridge.readResource("file:///data.json");

  // Send a message to the host
  await bridge.sendMessage({ type: "info", text: "Processing..." });

  // Open a link
  await bridge.openLink("https://example.com");

  // Request a display mode change
  await bridge.requestDisplayMode("fullscreen");
}
Most widgets won’t need the bridge directly. The useWidget() hook provides a
simplified API that works across both protocols automatically.
While mcp-use handles protocol differences automatically, you can provide protocol-specific metadata when needed:
server.uiResource({
  type: "mcpApps",
  name: "my-widget",
  htmlTemplate: `...`,
  metadata: {
    // Shared metadata (used by both)
    csp: { connectDomains: ["https://api.example.com"] },
    prefersBorder: true,
    // MCP Apps specific (ignored by ChatGPT)
    autoResize: true,
    supportsLocalStorage: true,
    // ChatGPT specific (ignored by MCP Apps clients)
    widgetDescription: "Special description for ChatGPT",
    widgetDomain: "https://chatgpt.com",
  },
});
Migration from Apps SDK
If you have existing widgets using type: "appsSdk", you can migrate to dual-protocol support. Note that this requires updating both the type and the metadata format.
Before (ChatGPT only):
server.uiResource({
  type: "appsSdk", // Only works with ChatGPT
  name: "my-widget",
  htmlTemplate: `...`,
  appsSdkMetadata: {
    "openai/widgetCSP": {
      connect_domains: ["https://api.example.com"],
      resource_domains: ["https://cdn.example.com"],
    },
    "openai/widgetPrefersBorder": true,
    "openai/widgetDescription": "My widget description",
  },
});
After (Universal compatibility):
server.uiResource({
  type: "mcpApps", // Works with ChatGPT AND MCP Apps clients
  name: "my-widget",
  htmlTemplate: `...`,
  metadata: {
    csp: {
      connectDomains: ["https://api.example.com"],
      resourceDomains: ["https://cdn.example.com"],
    },
    prefersBorder: true,
    widgetDescription: "My widget description",
  },
});
Key changes:
type: "appsSdk" → type: "mcpApps"
appsSdkMetadata → metadata (field name change)
"openai/widgetCSP" → csp (remove openai/ prefix)
connect_domains → connectDomains (snake_case → camelCase)
resource_domains → resourceDomains (snake_case → camelCase)
"openai/widgetPrefersBorder" → prefersBorder (remove openai/ prefix)
Your widget code requires no changes; only the server registration changes.
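The snake_case to camelCase rule for CSP domain keys can be sketched as a one-line transform (mcp-use applies this for you; the helper is purely illustrative):

```typescript
// Convert a snake_case CSP key to the camelCase form used by `metadata.csp`.
function snakeToCamel(key: string): string {
  return key.replace(/_([a-z])/g, (_match, ch: string) => ch.toUpperCase());
}
```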
Migration Options
You have three options when migrating:
Option 1: Full Migration (Recommended)
Migrate completely to the new metadata format for maximum clarity:
server.uiResource({
  type: "mcpApps",
  name: "my-widget",
  htmlTemplate: `...`,
  metadata: {
    csp: { connectDomains: ["https://api.example.com"] },
    prefersBorder: true,
  },
});
Option 2: Backward Compatible Migration
Keep both formats if you need ChatGPT-specific overrides:
server.uiResource({
  type: "mcpApps",
  name: "my-widget",
  htmlTemplate: `...`,
  // New unified format (used by both protocols)
  metadata: {
    csp: { connectDomains: ["https://api.example.com"] },
    prefersBorder: true,
  },
  // ChatGPT-specific overrides (optional)
  appsSdkMetadata: {
    "openai/widgetDescription": "ChatGPT-specific description",
  },
});
Option 3: Stay on Apps SDK
If you only need ChatGPT support, you can stay on type: "appsSdk":
server.uiResource({
  type: "appsSdk", // ChatGPT only
  name: "my-widget",
  htmlTemplate: `...`,
  appsSdkMetadata: {
    "openai/widgetCSP": {
      connect_domains: ["https://api.example.com"],
    },
  },
});
Field Mapping Reference
Complete mapping from Apps SDK to MCP Apps metadata:
| Apps SDK (appsSdkMetadata) | MCP Apps (metadata) | Notes |
| --- | --- | --- |
| "openai/widgetCSP" | csp | CSP configuration object |
| connect_domains | connectDomains | Array of connection domains |
| resource_domains | resourceDomains | Array of resource domains |
| frame_domains | frameDomains | Array of frame domains |
| redirect_domains | redirectDomains | Array of redirect domains (ChatGPT-specific) |
| script_directives | scriptDirectives | Array of script CSP directives |
| style_directives | styleDirectives | Array of style CSP directives |
| "openai/widgetPrefersBorder" | prefersBorder | Boolean |
| "openai/widgetDomain" | domain | String (custom domain) |
| "openai/widgetDescription" | widgetDescription | String (widget description) |
| "openai/widgetAccessible" | widgetAccessible | Boolean (ChatGPT-specific, can stay in appsSdkMetadata) |
| "openai/locale" | locale | String (ChatGPT-specific, can stay in appsSdkMetadata) |
| "openai/toolInvocation/invoking" | invoking | String: status text while tool runs (auto-defaulted) |
| "openai/toolInvocation/invoked" | invoked | String: status text after tool completes (auto-defaulted) |
See Content Security Policy for full CSP configuration and field details.
Migration Checklist
Follow these steps to migrate:
✅ Change the widget type:
type: "appsSdk" → type: "mcpApps"
✅ Rename the metadata field:
appsSdkMetadata: { ... } → metadata: { ... }
✅ Transform CSP configuration:
// Before
"openai/widgetCSP": {
  connect_domains: ["..."],
  resource_domains: ["..."]
}

// After
csp: {
  connectDomains: ["..."],
  resourceDomains: ["..."]
}
✅ Transform other metadata fields:
Remove the "openai/" prefix from all keys
Convert remaining snake_case to camelCase
✅ Test in both environments:
Test in ChatGPT (Apps SDK protocol)
Test in MCP Inspector with protocol toggle
Test in MCP Apps-compatible client (Claude, Goose, etc.)
✅ Verify widget behavior:
Props received correctly
Theme syncing works
Tool calls function properly
CSP allows required resources (see Content Security Policy )
Comparison with MCP UI
MCP Apps is designed for interactive widgets, while MCP UI is better for simpler, static content:
| Feature | MCP Apps | MCP UI |
| --- | --- | --- |
| Use Case | Interactive widgets with state | Static/simple content |
| Interactivity | Full React components, tool calls | Limited (mostly display) |
| State Management | setState + ui/update-model-context | No state |
| Tool Calls | ✅ Can call other MCP tools | ❌ No tool access |
| ChatGPT Support | ✅ Via dual-protocol | ❌ Not supported |
| Complexity | Higher (full React app) | Lower (simple HTML) |
When to use MCP Apps:
Interactive dashboards
Forms and data entry
Multi-step workflows
Real-time updates
Complex visualizations
When to use MCP UI:
Simple content display
Read-only views
Lightweight embeds
See MCP UI Resources for more on the alternative approach.
Using the Inspector
The MCP Inspector fully supports both MCP Apps and ChatGPT Apps SDK protocols:
Start your server: npm run dev
Open the Inspector: http://localhost:3000/inspector
Protocol Toggle: Switch between MCP Apps and ChatGPT protocols
Debug Controls: Test different devices, locales, and CSP modes
Display Modes: Test inline, picture-in-picture, and fullscreen
See Debugging Widgets for the complete guide.
Testing in Production Clients
ChatGPT:
Enable Developer Mode in Settings → Connectors → Advanced
Add your MCP server URL
Start a conversation and trigger your tools
Claude Desktop (upcoming MCP Apps support):
Add your server to Claude’s MCP configuration
Widgets render automatically when tools return them
Goose:
Configure your MCP server in Goose
Call tools that return widgets
Protocol Adapters (Advanced)
Behind the scenes, mcp-use uses protocol adapters to transform your widget configuration:
import { McpAppsAdapter, AppsSdkAdapter } from "mcp-use/server";

// These are created automatically when you use type: "mcpApps"
const mcpAppsAdapter = new McpAppsAdapter();
const appsSdkAdapter = new AppsSdkAdapter();

// Transform your unified metadata to each protocol
const mcpAppsMetadata = mcpAppsAdapter.transformMetadata(yourMetadata);
const appsSdkMetadata = appsSdkAdapter.transformMetadata(yourMetadata);
You rarely need to interact with adapters directly, but they’re available for advanced use cases.
Next Steps