MCP Concepts Explained: Tools and Resources, and How MCP Apps Use Them
You’ll run into two terms constantly when building MCP Apps: Tools and Resources. They’re the actual building blocks MCP defines, and MCP Apps are built from them. Once you understand what each one does, how MCP Apps work clicks into place.
TL;DR: MCP Apps are built from two MCP primitives: Tools (callable actions) and Resources (readable data). An MCP App links a Tool to a Resource that contains an HTML interface. When the model calls the Tool, the host renders that Resource as an interactive UI in the conversation. sunpeak is an open-source MCP App framework that wires this up so you can focus on the UI.
What MCP Is
MCP (Model Context Protocol) is an open standard for connecting AI applications to external systems. It defines a common language that AI hosts and external servers use to talk to each other, so a model can call your tools, read your data, and take actions without a custom integration for each host.
The official docs describe it as USB-C for AI: before USB-C you needed a different cable for each device, and before MCP you needed a custom integration for each AI host. MCP standardizes the connection.
Hosts and Servers
MCP has two sides.
A host is the AI application the user talks to. ChatGPT, Claude, Goose, and VS Code are all MCP hosts. The host connects to external servers on the user’s behalf, routes model requests to them, and displays the results.
A server is the backend you build. It exposes your data and actions through the Model Context Protocol (MCP). The host discovers what your server can do and makes those capabilities available to the model.
For MCP Apps specifically, the host controls the conversation and the rendering environment, while your server provides the logic and the UI bundle. They work through the protocol but neither can reach into the other’s internals.
The MCP Primitives That Matter for Apps
MCP servers can expose three primitives: Tools, Resources, and Prompts. MCP Apps only use the first two. Prompts are for model behavior, not UI.
Tools
A Tool is a callable action. You define it on your server with a name, description, and input schema. When the model decides your Tool is relevant to what the user asked, it calls it with structured arguments and gets back structured output.
{
  "name": "get_weather",
  "description": "Get the current weather for a city",
  "inputSchema": {
    "type": "object",
    "properties": {
      "city": { "type": "string" }
    }
  }
}
The model reads the description and decides when to invoke a Tool; you don't trigger Tools yourself. Orchestration is the model's job, which works well most of the time, but it matters when you need a specific call sequence: the model, not your code, decides the order.
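To make the shape concrete, here is a minimal sketch of a Tool definition and an argument check in TypeScript. This is illustrative only, not the official MCP SDK API: the `Tool` type and `validateArgs` helper are assumptions for this example, and the check naively equates JSON Schema type names with JavaScript `typeof` results.

```typescript
// Illustrative only: a minimal Tool shape and argument check,
// NOT the official MCP SDK API.
type Tool = {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, { type: string }>;
    required?: string[];
  };
};

const getWeather: Tool = {
  name: "get_weather",
  description: "Get the current weather for a city",
  inputSchema: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
};

// Check that the model's structured arguments match the declared schema.
// Naive: maps JSON Schema "string"/"number"/"boolean" onto typeof.
function validateArgs(tool: Tool, args: Record<string, unknown>): boolean {
  for (const key of tool.inputSchema.required ?? []) {
    if (!(key in args)) return false;
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = tool.inputSchema.properties[key];
    if (!prop || typeof value !== prop.type) return false;
  }
  return true;
}
```

The point is the contract: the schema is what lets the model construct well-formed arguments, and the server can reject anything that doesn't match before running the action.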
Resources
A Resource is a readable data source. It has a URI and a MIME type, and the host or model can fetch it directly. Standard Resources return text or binary data: documents, files, configuration.
uri: file:///path/to/report.pdf
mimeType: application/pdf
An MCP App Resource returns something different: an HTML/JavaScript bundle, the actual UI code.
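The contrast is easiest to see side by side. A sketch of the two kinds of resource contents, using illustrative shapes rather than the exact MCP SDK types (the `ResourceContents` type and the example URIs below are assumptions for this example):

```typescript
// Illustrative shape, not the exact MCP SDK type.
type ResourceContents = { uri: string; mimeType: string; text: string };

// A standard Resource: plain readable data.
const notes: ResourceContents = {
  uri: "file:///path/to/notes.txt",
  mimeType: "text/plain",
  text: "Quarterly notes...",
};

// An MCP App Resource: the UI bundle itself, served as HTML.
const weatherUi: ResourceContents = {
  uri: "ui://get-weather/app.html",
  mimeType: "text/html",
  text: '<!doctype html><html><body><div id="root"></div><script>/* app bundle */</script></body></html>',
};
```

Same primitive, same read mechanism; only the `ui://` scheme and the HTML payload signal that this Resource is meant to be rendered rather than read.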
How MCP Apps Combine Tools and Resources
MCP Apps don’t add new primitives. They change how Tools and Resources relate to each other.
The MCP Apps specification describes the pattern this way: a Tool declares a reference to a UI Resource in its description, and that Resource contains an interactive HTML interface. When the model calls the Tool, the host renders the Resource.
Here’s what that looks like at the protocol level. A Tool that supports an MCP App has an extra field in its metadata:
{
  "name": "get_weather",
  "description": "Get the current weather for a city",
  "inputSchema": {
    "type": "object",
    "properties": {
      "city": { "type": "string" }
    }
  },
  "_meta": {
    "ui": {
      "resourceUri": "ui://get-weather/app.html"
    }
  }
}
The _meta.ui.resourceUri field points to a Resource with a ui:// URI. That Resource contains the app bundle: HTML, CSS, and JavaScript packed into a single file.
When the host sees a Tool with this field, it:
- Fetches the Resource at the declared URI and preloads it before the Tool is ever called
- Injects the tool output into the preloaded iframe when the Tool runs
- Renders the interactive UI inside the conversation alongside the tool result
Because the host fetches the app bundle when your server is first connected rather than on-demand, there’s no loading delay. The UI is already there when the model calls the Tool.
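The host-side check behind this flow is a simple metadata lookup. A sketch, assuming the `_meta.ui.resourceUri` shape described above (the `ToolMeta` type and `uiResourceFor` helper are names invented for this example):

```typescript
// Sketch of the host-side check for a UI-capable Tool, assuming the
// _meta.ui.resourceUri field shape described in the article.
type ToolMeta = { name: string; _meta?: { ui?: { resourceUri?: string } } };

// Returns the UI resource to preload, or undefined for a plain Tool.
function uiResourceFor(tool: ToolMeta): string | undefined {
  return tool._meta?.ui?.resourceUri;
}

const plainTool: ToolMeta = { name: "get_time" };
const appTool: ToolMeta = {
  name: "get_weather",
  _meta: { ui: { resourceUri: "ui://get-weather/app.html" } },
};
```

A host walks the Tool list at connection time, preloads every URI this lookup returns, and falls back to plain text rendering for Tools where it returns nothing.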
The Communication Protocol
An MCP App UI runs in a sandboxed iframe, so it can’t reach arbitrary URLs or touch the host page. All communication goes through window.postMessage.
The messages use JSON-RPC 2.0, forming a dialect of MCP specific to the app context. Methods use a ui/ prefix to distinguish them from standard MCP calls:
- ui/initialize: the host sends this when the app iframe is ready, with tool data and host context
- ui/toolResult: the host sends updated tool data as it streams in
- tools/call: the app sends this to call a Tool on your MCP server through the host bridge
The flow is bidirectional: the app receives data from the host and can call back to your server through the host’s channel. The app never contacts your server directly. Everything goes through the host, which enforces the user’s permissions and consent.
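On the wire these are ordinary JSON-RPC 2.0 envelopes. A sketch of the two directions, where the method names follow the article but the exact parameter payloads are assumptions for illustration:

```typescript
// Illustrative JSON-RPC 2.0 envelopes for the ui/ dialect; method
// names follow the article, exact params are assumed for this sketch.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id?: number;
  method: string;
  params?: Record<string, unknown>;
};

// Host -> app: deliver tool output when the iframe is ready.
function uiInitialize(toolOutput: unknown): JsonRpcRequest {
  return { jsonrpc: "2.0", method: "ui/initialize", params: { toolOutput } };
}

// App -> host: ask the host bridge to call a Tool on the server.
function toolsCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): JsonRpcRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Inside the iframe, an envelope travels over postMessage, e.g.:
// window.parent.postMessage(toolsCall(1, "get_weather", { city: "Oslo" }), "*");
```

Because everything is a message through the host, the host can inspect, permission-gate, or refuse any call before it reaches your server.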
The Security Model
The iframe sandbox exists so a host can render your app inside a user’s conversation without fully trusting you as a server author.
What an MCP App cannot do:
- Access the host page’s DOM
- Read the host’s cookies or local storage
- Navigate the parent frame
- Execute scripts in the parent context
- Make network requests to origins not listed in _meta.ui.csp
What it can do:
- Render any UI that works in a browser (React, charts, maps, video, forms)
- Call Tools on your MCP server through the host bridge
- Send messages that appear in the conversation
- Read tool output, host context, display mode, and user locale that the host pushes in
The _meta.ui.csp field is where you declare which external origins your app loads resources from. The host enforces this as a Content Security Policy.
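As a rough mental model, a host can translate the declared origins into a CSP header string. The directives below are assumptions about how a host might do this, not what any particular host actually emits:

```typescript
// One way a host MIGHT turn declared origins into a CSP header.
// The directive set here is an assumption for illustration only.
function buildCsp(allowedOrigins: string[]): string {
  const origins = allowedOrigins.join(" ");
  return [
    "default-src 'none'",          // deny everything by default
    "script-src 'self'",           // the bundle's own scripts
    `connect-src ${origins}`,      // fetch/XHR only to declared origins
    `img-src ${origins}`,          // images only from declared origins
  ].join("; ");
}
```

The important property is the default-deny stance: anything your app doesn't declare up front, the browser blocks at the network layer, with no trust required in your server's behavior at runtime.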
Display Modes
Every MCP App host supports multiple display modes that control how much screen space your UI gets:
- Inline: the app appears in the chat stream, compact
- Fullscreen: the app expands to fill the viewport
- Picture-in-picture: the app floats over the conversation
Your app reads the current mode and can request a change. A summary card in inline mode might have an “Expand” button that requests fullscreen. The host can grant or deny based on its own policies.
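A mode-change request is just another message through the host channel. A sketch, where the method name `ui/requestDisplayMode` is an assumption for illustration (the article names the modes but not the request method):

```typescript
// Sketch of an app-side display-mode request over the ui/ dialect.
// The method name ui/requestDisplayMode is an ASSUMPTION for this example.
type DisplayMode = "inline" | "fullscreen" | "pip";

function requestDisplayMode(id: number, mode: DisplayMode) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "ui/requestDisplayMode",
    params: { mode },
  };
}

// An inline card's Expand button handler would send something like:
// window.parent.postMessage(requestDisplayMode(2, "fullscreen"), "*");
```

The host replies with the mode it actually granted, so your app should re-read the current mode from the host's response rather than assume the request succeeded.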
This is standardized across ChatGPT, Claude, Goose, and VS Code, so the same API works on all of them.
Building an MCP App
You can implement all of this directly: write an HTML file, handle the postMessage events, speak JSON-RPC 2.0 to the host. The spec is public.
Or use sunpeak, an open-source MCP App framework that handles resource registration, bundle generation, and host communication. A minimal Resource component looks like this:
// src/resources/weather-resource.tsx
import { useToolData, useDisplayMode } from 'sunpeak';
import type { ResourceConfig } from 'sunpeak';

export const resource: ResourceConfig = {
  name: 'weather',
  description: 'Show current weather conditions',
};

export default function WeatherResource() {
  const { output } = useToolData<{ city: string; temp: number; condition: string }>();
  const { displayMode } = useDisplayMode();

  return (
    <div>
      <h1>{output.city}</h1>
      <p>{output.temp}° - {output.condition}</p>
      {displayMode === 'inline' && <p>Click expand for the full forecast.</p>}
    </div>
  );
}
useToolData reads the tool output the host injected, and useDisplayMode reads the current display mode. sunpeak auto-registers the resource, generates the bundle, and wires it to the corresponding Tool.
To run it:
pnpm add -g sunpeak && sunpeak new
sunpeak dev
The local simulator at localhost:3000 runs the full MCP App runtime: the same iframe sandbox, postMessage protocol, and display mode system that ChatGPT, Claude, and other hosts implement. You can build and test without connecting to a live host, or develop against a live host with HMR. For automated testing, sunpeak's testing utilities let you programmatically set any app state, so you can exhaustively test end-to-end with Vitest and Playwright.
Where MCP Apps Run
As of February 2026, MCP Apps run in ChatGPT (web), Claude (web and desktop), Goose, and VS Code Insiders. The MCP App specification is under the Linux Foundation alongside core MCP, and more hosts are implementing it.
Because it’s an open standard, an app built against it runs on every host that implements it: ChatGPT, Claude, and whatever comes next. sunpeak targets this standard, so your app works across all of them without changes.
Get Started
sunpeak is open source (MIT) and free:
pnpm add -g sunpeak && sunpeak new
- Documentation: API reference, guides, and tutorials
- GitHub: source code and issues
- What Is an MCP App?: architecture deep-dive
- How to Build an MCP App: cross-host build guide
- MCP App Framework: sunpeak capabilities overview
- ChatGPT App Framework: ChatGPT-specific features
Frequently Asked Questions
What are the core primitives of the Model Context Protocol?
MCP defines three primitives that servers can expose to AI hosts: Tools (callable actions), Resources (readable data sources), and Prompts (conversation templates). MCP Apps only use Tools and Resources. A Tool declares a UI resource in its metadata, and that Resource contains the interactive HTML/JavaScript bundle the host renders.
What is the difference between an MCP host and an MCP server?
An MCP host is the AI application the user interacts with, like ChatGPT, Claude, Goose, or VS Code. An MCP server is the backend you build that exposes Tools and Resources through the Model Context Protocol (MCP). The host connects to your server, discovers its capabilities, and routes model requests to it.
How do MCP Apps use Tools and Resources together?
An MCP App adds a _meta.ui.resourceUri field to a Tool description, pointing at a ui:// Resource that contains an HTML/JavaScript bundle. When the model calls that Tool, the host renders the Resource in a sandboxed iframe alongside the tool result. The Tool triggers execution; the Resource handles the visual interface.
What is a ui:// resource URI in MCP?
A ui:// URI is the URI scheme used by MCP App resources. When a Tool declares a _meta.ui.resourceUri pointing to a ui:// URI, it tells the host that an interactive UI resource exists for that Tool. The host fetches the resource (an HTML/JavaScript bundle), preloads it, and renders it in a sandboxed iframe when the Tool is called.
How does communication work between an MCP App and its host?
MCP Apps run in sandboxed iframes and communicate with the host using JSON-RPC 2.0 messages over the postMessage API. The host pushes tool data and context into the app. The app can call server Tools through the host bridge, send messages, and update the model context, all through this secure channel.
What can an MCP App not do due to the iframe sandbox?
An MCP App in a sandboxed iframe cannot access the host page's DOM, read the host's cookies or local storage, navigate the parent frame, or run scripts in the parent context. All communication goes through the postMessage channel the host controls. This lets hosts render third-party apps safely without fully trusting the server author.
How do I build an MCP App with sunpeak?
sunpeak is an open-source MCP App framework that handles resource registration, bundle generation, and host communication. Install it with "pnpm add -g sunpeak && sunpeak new", then define your Resource components in src/resources/. The local simulator at localhost:3000 replicates the MCP App runtime so you can build and test without a live AI host.
What MCP primitives do MCP Apps use?
MCP Apps use two primitives: Tools and Resources. A Tool triggers the interaction: the model calls it based on the user request. A Resource holds the UI: an HTML/JavaScript bundle the host renders in a sandboxed iframe when that Tool runs. MCP also defines a third primitive, Prompts, but MCP Apps do not use it.