What Is an MCP App? Architecture, Hosts, and How to Build One (April 2026)
If you’ve worked with MCP (Model Context Protocol), you know it lets AI models call tools, read resources, and interact with external systems. MCP Apps take that a step further: they add interactive UI to those interactions, rendered directly inside the AI conversation.
TL;DR: An MCP App is an interactive UI that runs inside ChatGPT, Claude, and other AI hosts. Instead of returning plain text from a tool call, MCP Apps render HTML (usually React components) in a sandboxed iframe, so users get forms, dashboards, charts, and other real interfaces inside their chat. The MCP App standard defines how this works across hosts, and sunpeak is an open-source MCP App framework for building them.
MCP Apps vs. MCP Servers
A regular MCP server exposes tools that return text or structured data. When a model calls get_weather, it gets back {"temp": 72, "condition": "sunny"}, and the model formats that into a chat message. The user sees text.
An MCP App attaches UI to that same tool. When get_weather runs, the host renders an interactive weather card inside the conversation, with icons, a forecast graph, and a location selector the user can click. The model still sees the structured data, but the user gets an actual interface.
Here’s the practical difference:
- MCP server tool: returns data → model writes a text response → user reads it
- MCP App tool: returns data and renders UI → user interacts with the interface → model sees those interactions and can respond
MCP Apps don’t replace MCP servers. They extend them. Your MCP server still defines the tools and handles the logic. The app adds a visual layer on top. For a deeper comparison of agent UI approaches, see MCP Apps vs A2UI.
How MCP Apps Work
The architecture has three parts: the MCP server, the host, and the app UI.
1. The MCP server defines tools and points them at UI resources. When a tool declares a ui field in its metadata, it tells the host: “when this tool runs, render this resource.” Resources use the ui:// URI scheme defined in the ext-apps specification. (See registering tools and registering resources in the docs.)
2. The host (ChatGPT, Claude, etc.) pre-loads declared resource bundles into sandboxed iframes when your MCP App is first registered. When the host calls a tool and receives a result, it injects the tool output, display mode (inline, fullscreen, picture-in-picture), and host context into the already-loaded iframe.
3. The app UI is a bundled web application (HTML, CSS, JavaScript) that renders inside that iframe. It communicates with the host through JSON-RPC 2.0 messages sent over window.postMessage. The app can read tool data, respond to display mode changes, and call additional tools on the MCP server through the host bridge. For more on building interactive two-way UIs, see Interactive MCP Apps with useAppState.
The iframe sandbox is the security boundary. The app can’t access the host page, can’t read the conversation, and can’t make arbitrary network requests. It only sees what the host explicitly sends it. You can configure CSP domains if your app needs to call external APIs.
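To make the bridge concrete, here is a sketch of the JSON-RPC 2.0 envelope an app might post to its host. The tools/call method and params shape follow core MCP conventions, but the exact ext-apps wire format may differ; treat this as illustrative rather than the spec.

```typescript
// Minimal sketch of a JSON-RPC 2.0 request envelope. The "tools/call"
// method name and params shape are assumptions borrowed from core MCP,
// not a verbatim copy of the ext-apps wire format.
type JsonRpcRequest = {
  jsonrpc: '2.0';
  id: number;
  method: string;
  params?: Record<string, unknown>;
};

let nextId = 0;

function buildToolCall(name: string, args: Record<string, unknown>): JsonRpcRequest {
  return {
    jsonrpc: '2.0',
    id: ++nextId, // each request needs a unique id for response matching
    method: 'tools/call',
    params: { name, arguments: args },
  };
}

// Inside the iframe, the app would hand this envelope to the host:
//   window.parent.postMessage(buildToolCall('get_weather', { city: 'Oslo' }), '*');
```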
Where MCP Apps Run
As of April 2026, MCP Apps are supported by:
- ChatGPT — OpenAI contributed elements of their original Apps SDK to the MCP App standard and now supports both. ChatGPT renders MCP Apps in inline, picture-in-picture, and fullscreen display modes. The ChatGPT App Directory has 300+ published integrations.
- Claude — Anthropic’s web and desktop clients render MCP Apps natively. Launch partners include Figma, Canva, Asana, Slack, Amplitude, and Salesforce. Within two weeks of launch, over 75 apps were available. For building Claude-specific connectors, see the Claude Connectors tutorial.
- VS Code — GitHub Copilot Chat renders MCP Apps in the chat sidebar, first in VS Code Insiders and now in the stable release.
- Goose — Block’s open-source AI agent supports MCP Apps as a reference implementation.
JetBrains IDEs are publicly exploring MCP App integration, and several other hosts have indicated interest.
The ext-apps specification is maintained under the Linux Foundation alongside the core MCP spec. It’s the first official MCP extension, and it reached stable status on January 26, 2026. The SDK (@modelcontextprotocol/ext-apps) is at v1.6.0 with 2,100+ GitHub stars and 29 releases.
Because the spec is open, any AI host can implement it. Apps you build today will work on hosts that ship MCP App support tomorrow, as long as you build against the standard rather than a single host’s proprietary API. For a detailed comparison of available frameworks, see How to Choose an MCP App Framework.
What You Can Build
MCP Apps are web applications, so you can build anything you’d build with HTML and JavaScript. Some patterns that work well inside AI conversations:
- Data dashboards that visualize tool output (charts, tables, metrics)
- Forms and workflows that collect user input and feed it back to the model
- Interactive editors for code, documents, or structured data
- Maps and visualizations that present spatial or complex data
- Multi-step wizards that guide users through a process
- Collaborative tools like Figma’s inline FigJam diagram builder or Canva’s presentation editor
The model stays in the loop throughout. When a user interacts with your app (clicks a button, submits a form, selects an option), the model sees that interaction and can respond. This creates a conversation where the user alternates between chatting with the model and using your UI.
Building an MCP App
An MCP App has two pieces: a Resource (the UI component) and a Tool (the API action that triggers it).
With sunpeak, you define both using conventions that the framework discovers automatically. Here’s a minimal example, a resource that displays data from a tool:
// src/resources/greeting/greeting.tsx
import { useToolData } from 'sunpeak';
import type { ResourceConfig } from 'sunpeak';

export const resource: ResourceConfig = {
  description: 'Display a personalized greeting',
};

export default function GreetingResource() {
  const { output } = useToolData<{ name: string; message: string }>();
  return (
    <div>
      <h1>Hello, {output.name}</h1>
      <p>{output.message}</p>
    </div>
  );
}
The resource export tells sunpeak this is an MCP resource; the name comes from the directory convention and the description from the config. The component uses useToolData to access the output from the tool that triggered the render.
The Tool that triggers this resource lives in src/tools/:
// src/tools/get-greeting.ts
import { z } from 'zod';
import type { AppToolConfig, ToolHandlerExtra } from 'sunpeak/mcp';

export const tool: AppToolConfig = {
  resource: 'greeting',
  title: 'Get Greeting',
  description: 'Generate a personalized greeting',
  annotations: { readOnlyHint: true },
};

export const schema = {
  name: z.string().describe('Name of the person to greet'),
};

export default async function (args: Record<string, unknown>, _extra: ToolHandlerExtra) {
  return {
    structuredContent: {
      name: args.name,
      message: `Welcome back, ${args.name}!`,
    },
  };
}
sunpeak auto-discovers resources from src/resources/{name}/{name}.tsx and tools from src/tools/*.ts. The tool references the resource by name string (e.g. resource: 'greeting'), which tells sunpeak to render that resource when the tool is called. For a full walkthrough, see the MCP App tutorial or the step-by-step ChatGPT App tutorial.
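Put together, those conventions imply a layout like this for the greeting example:

```
src/
  resources/
    greeting/
      greeting.tsx   # Resource: UI component + resource config
  tools/
    get-greeting.ts  # Tool: config, schema, and handler
```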
To run it locally:
npx sunpeak new sunpeak-app
cd sunpeak-app && pnpm dev
The inspector at localhost:3000 renders your app exactly as it would appear in ChatGPT or Claude. You can test different display modes, themes, and tool inputs without connecting to a live host. For more on the inspector, see Claude Simulator for MCP Apps.
Why the Standard Matters
Before MCP Apps, ChatGPT had its own proprietary Apps SDK. If you built a ChatGPT App, it only worked in ChatGPT. Claude, Goose, and VS Code each had their own approaches (or none at all).
The MCP App standard changed that. ChatGPT, Claude, VS Code, and Goose now implement the same rendering model, iframe sandbox, and communication protocol. An app built against the standard works everywhere. And the ecosystem has grown fast: over 10,000 public MCP servers are available, with 97 million combined monthly SDK downloads across MCP client libraries.
This matters for two reasons. First, your addressable user base multiplies with each new host that adopts the standard. Second, you avoid the risk of building on a proprietary API that could change or disappear.
The practical way to get this portability is to use a framework that targets the standard. sunpeak’s core APIs (useToolData, useHostContext, useDisplayMode) are built around the MCP App interface. Host-specific runtime features are available through optional imports (sunpeak/chatgpt), so your base app code stays portable without giving up platform-specific capabilities. See Building One MCP App for ChatGPT and Claude for a hands-on guide to writing cross-host code.
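The pattern behind those optional imports can be sketched in plain TypeScript. The loadOptional helper below is illustrative, not a sunpeak API: it tries to load a host-specific module and falls back to the portable code path when the module isn't available.

```typescript
// Illustrative helper (not part of sunpeak): try to load an optional
// host-specific module, returning null when it isn't available so the
// portable code path still runs.
async function loadOptional<T = unknown>(specifier: string): Promise<T | null> {
  try {
    return (await import(specifier)) as T;
  } catch {
    return null; // module absent on this host; stick to the standard APIs
  }
}

// Usage sketch: enable ChatGPT-only capabilities when the import resolves.
async function setupHostFeatures(): Promise<void> {
  const chatgpt = await loadOptional('sunpeak/chatgpt');
  if (chatgpt) {
    // wire up ChatGPT-specific features here
  }
  // base MCP App behavior continues either way
}
```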
Testing MCP Apps
MCP Apps introduce testing challenges that regular web apps don’t have. Your UI depends on tool data pushed by a host, renders in a sandboxed iframe, and needs to work across display modes and themes. You can’t test this by opening index.html in a browser.
sunpeak solves this with simulation files: JSON files that define deterministic states for your app. (See the testing framework guide for setup details.)
{
  "tool": "get_greeting",
  "toolInput": { "name": "Alice" },
  "toolResult": { "name": "Alice", "message": "Welcome back!" },
  "userMessage": "Greet Alice"
}
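Because a simulation is plain JSON, you can sanity-check its shape in your own scripts. The validator below is illustrative and only checks the keys shown in the example above; sunpeak's actual schema may require more fields.

```typescript
// Illustrative validator for the simulation shape shown above;
// sunpeak's real schema may differ.
type Simulation = {
  tool: string;
  toolInput: Record<string, unknown>;
  toolResult: Record<string, unknown>;
  userMessage?: string;
};

function isSimulation(value: unknown): value is Simulation {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.tool === 'string' &&
    typeof v.toolInput === 'object' && v.toolInput !== null &&
    typeof v.toolResult === 'object' && v.toolResult !== null
  );
}
```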
These simulations feed into sunpeak’s testing framework, which covers multiple layers:
- Unit and E2E tests (pnpm test) run against the local inspector with Playwright
- Visual regression tests (pnpm test:visual) catch unintended UI changes with screenshot comparison
- Live host tests (pnpm test:live) validate against real ChatGPT and Claude
- Multi-model evals (pnpm test:eval) test tool calling across GPT-4o, Claude, Gemini, and other LLMs
All tests except live host tests run in CI without a live AI host, so you get reliable automation on every push without burning any credits. For detailed guides, see the complete guide to testing MCP Apps, snapshot testing, visual regression testing, and CI/CD with GitHub Actions.
Get Started
npx sunpeak new
Further Reading
- MCP App Tutorial - build and test your first MCP App
- How to Build an MCP App - architecture for cross-host interactive UI
- MCP Concepts Explained - tools, resources, and how MCP Apps use them
- Complete Guide to Testing MCP Apps - E2E, visual, and live host testing
- How to Choose an MCP App Framework - framework evaluation guide
- MCP App Framework - cross-host portability features
- ChatGPT App Framework - ChatGPT-specific capabilities
- Claude Connector Framework - Claude Connector capabilities
- Documentation - guides, API reference, and tutorials
- MCP Apps Introduction - core MCP App concepts and architecture
- MCP App Specification - ext-apps open standard on GitHub
- GitHub - source code and issue tracker
Frequently Asked Questions
What is an MCP App?
An MCP App is an interactive application built on the Model Context Protocol Apps extension that runs inside AI hosts like ChatGPT and Claude. MCP Apps consist of Resources (UI views rendered in sandboxed iframes) and Tools (API actions), enabling rich interfaces like forms, charts, and dashboards directly in chat conversations. The MCP App standard (ext-apps) defines how hosts render these UIs, and frameworks like sunpeak help you build them.
What is the difference between an MCP App and an MCP server?
An MCP server exposes tools, resources, and prompts that return text or structured data to an AI model. An MCP App extends this by attaching interactive UI to those tools. When a tool runs, the host renders the app UI in a sandboxed iframe inside the conversation, so users can interact with forms, charts, and other components instead of reading plain text.
Which AI hosts support MCP Apps?
As of April 2026, MCP Apps run in ChatGPT, Claude (web and desktop), VS Code (via GitHub Copilot Chat), and Goose. JetBrains IDEs are exploring integration. The MCP App standard is maintained under the Linux Foundation, so more hosts are expected to adopt it. The ext-apps SDK is at v1.6.0 with 2,100+ GitHub stars.
How do MCP Apps communicate with the AI host?
MCP Apps render in sandboxed iframes and communicate with the host using JSON-RPC 2.0 messages sent over window.postMessage. The host pushes tool data and context to the app, and the app can call tools on the MCP server through the host bridge. Resources use the ui:// URI scheme defined in the ext-apps specification.
Do I need a paid ChatGPT or Claude account to build MCP Apps?
No. sunpeak includes a local inspector at localhost:3000 that replicates the MCP App runtime for both ChatGPT and Claude. You can build, test, and iterate on your app entirely offline without any paid subscriptions. See the guide on building a ChatGPT App without a paid account for details.
What framework should I use to build MCP Apps?
sunpeak is an open-source (MIT) MCP App framework that includes a local inspector, 20+ typed React hooks, pre-built UI components, and a built-in testing framework with E2E, visual regression, live host, and multi-model eval support. It targets the MCP App standard so your code works across ChatGPT, Claude, and other hosts.
Can one MCP App run on both ChatGPT and Claude?
Yes. The MCP App standard defines a common rendering model, communication protocol, and iframe sandbox that all hosts implement. If you build your app against the standard using a portable framework like sunpeak, it runs on every host without changes. Host-specific features are added through optional imports like sunpeak/chatgpt.
How do I test MCP Apps?
sunpeak provides simulation files that define deterministic UI states and a built-in testing framework. Run pnpm test to execute both unit and e2e tests, or use pnpm test:unit and pnpm test:e2e to run them separately. Add pnpm test:visual for visual regression tests, pnpm test:live for live host tests, and pnpm test:eval for multi-model evals. Tests run against the local inspector in CI without requiring a live AI host.