Build Your Own ChatGPT App (Run It Locally)
Source: Dev.to
Introduction
ChatGPT recently opened submissions for its Apps feature, allowing developers to embed interactive widgets from third‑party apps directly into conversations. This creates new opportunities to enhance user experience beyond plain text interactions.
In this article I share my experience exploring this feature and provide a step‑by‑step tutorial for building and running your own ChatGPT App locally for quick prototyping.
What Is a ChatGPT App?
A ChatGPT App is an interactive widget that appears within the chat, either triggered automatically from the conversation context or manually by the user. It consists of three main components:
MCP Server
The backend that defines tools (functions the model can call) and resources (UI templates that ChatGPT renders).
Widget
HTML that runs in a sandboxed iframe with its own state, event handling, and direct communication with your backend through the window.openai object injected by ChatGPT.
Key methods include:
- window.openai.callTool() – invoke a tool on the MCP server.
- window.openai.sendFollowUpMessage() – send a message back into the chat to keep the conversational loop going.
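Widget code might use these methods roughly as follows. This is a sketch: the find_houses tool name matches the example later in this article, while the result shape and the sendFollowUpMessage payload are assumptions, not confirmed API details.

```javascript
// Widget-side sketch using the window.openai bridge injected by ChatGPT.
// The "find_houses" tool matches the definition shown later; the result
// shape and the sendFollowUpMessage payload are assumptions.
async function refreshListings(location, rooms) {
  // Ask the MCP server for fresh data without leaving the widget.
  return window.openai.callTool("find_houses", {
    location,
    number_rooms: rooms,
  });
}

function notifyChat(text) {
  // Push a message back into the conversation to keep the loop going.
  return window.openai.sendFollowUpMessage({ prompt: text });
}
```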
ChatGPT Host
The ChatGPT interface that orchestrates tool calls, renders the widget, and maintains the overall conversation state.
Tool Definition Example
Below is a simple example of a tool definition that could be registered on the MCP server.
{
"tool": "find_houses",
"description": "Finds houses by location and number of rooms",
"inputs": {
"location": { "type": "string", "required": true },
"number_rooms": { "type": "integer", "required": false }
},
"output": "ui://widget/list.html"
}
- Description – used by ChatGPT to decide when to trigger the tool.
- Inputs – schema outlining required and optional parameters.
- Output – a resource URL that tells ChatGPT which UI template to render with the returned data.
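To make the required/optional distinction concrete, here is a hand-rolled sketch of how a server might check incoming arguments against this schema before running the tool. Real MCP servers usually declare schemas with a validation library instead; this is purely illustrative.

```javascript
// Minimal validation of arguments against the "find_houses" inputs above.
// Hand-rolled for illustration only; a real server would use a schema library.
const findHousesInputs = {
  location: { type: "string", required: true },
  number_rooms: { type: "integer", required: false },
};

function validateInputs(schema, args) {
  const errors = [];
  for (const [name, spec] of Object.entries(schema)) {
    const value = args[name];
    if (value === undefined) {
      // Optional inputs may be omitted; required ones may not.
      if (spec.required) errors.push(`missing required input: ${name}`);
      continue;
    }
    const ok =
      spec.type === "integer"
        ? Number.isInteger(value)
        : typeof value === spec.type;
    if (!ok) errors.push(`${name} must be of type ${spec.type}`);
  }
  return errors;
}
```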
Visualizing the Flow
The overall flow can be visualized as:
- ChatGPT decides a tool is needed based on the conversation.
- ChatGPT invokes the tool on the MCP server (the widget can later trigger further calls itself via window.openai.callTool()).
- The MCP server executes the tool logic and returns data.
- ChatGPT renders the specified UI resource (e.g., list.html) inside the widget iframe.
- The widget may send follow‑up messages or further tool calls as needed.
Example App Repository Structure
The sample app used for local testing follows this simple layout:
chatgpt-app/
├─ public/
│ └─ widget.html
├─ server.js
├─ package.json
└─ README.md
- public/widget.html – the iframe UI.
- server.js – MCP server that registers tools and serves resources.
- package.json – project metadata and scripts.
You can clone the open‑source repository (link in the original post) and adapt the files to suit your own logic.
Create the App
- Initialize the project and install dependencies:
npm install
- Start the local MCP server (development mode):
npm run dev
Expose the Local Server via HTTPS
ChatGPT requires an HTTPS URL to reach your server. For local development, use ngrok:
ngrok http 8787
Copy the generated URL (e.g., https://abcd-1234.ngrok.app) and append /mcp. This full URL will be used when registering the app in ChatGPT.
Connect the App to ChatGPT
- Open Settings → Apps → Advanced settings → Developer mode and toggle it on.
- Go to Settings → Apps → Create App.
- In the creation form, set the Connector URL to the ngrok address with /mcp appended, e.g.: https://abcd-1234.ngrok.app/mcp
Save the connector. ChatGPT will now be able to communicate with your MCP server.
Trigger the App
Once the connector is established, you can invoke the app from a conversation (e.g., by asking a question that matches the tool’s description). The widget will appear inline, and you can interact with it as designed.
Learnings & Considerations
- Caching – During development I often found it faster to delete the connector and recreate it rather than trying to clear cached UI or metadata.
- State Management – Relying heavily on tool calls for state updates caused unnecessary re‑renders and made debugging harder. Keeping most UI state locally inside the widget resulted in a more reliable experience.
- Production Readiness – This tutorial focuses on local prototyping. Before going live, address:
- Security and authentication
- Robust error handling
- Idempotent calls, rate limiting, and privacy concerns
Refer to the official OpenAI documentation for best practices.
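The state‑management point above can be sketched in hypothetical widget code. The split is: purely cosmetic changes stay local, and only a genuine data change triggers a tool call. The tool result shape is an assumption.

```javascript
// Keep transient UI state (sorting, selection) local to the widget;
// only call the server when the underlying data actually changes.
const state = { listings: [], sortBy: "price" };

function setSort(field) {
  // Pure local update: no tool call, no server round trip, no host re-render.
  state.sortBy = field;
  state.listings.sort((a, b) => (a[field] > b[field] ? 1 : -1));
}

async function setLocation(location) {
  // Data really changed: this is the only case worth a callTool round trip.
  // (Assumes the tool returns an array of listing objects.)
  state.listings = await window.openai.callTool("find_houses", { location });
}
```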
Conclusion
With over 800 million weekly active users on ChatGPT, ChatGPT Apps present a significant opportunity for developers. Building a local prototype is straightforward, and the concepts demonstrated here can serve as a foundation for more complex, production‑grade integrations.
What do you think? Is this just another feature, or do you see potential in developing your own ChatGPT App?