A technical deep dive into the Metaplay Portal MCP – how we gave AI assistants secure, scoped access to live game environments, logs, and metrics.
Most AI integrations stop at documentation. Your AI can read about how your system works, but when something breaks at 6pm on a Friday, you still end up clicking through dashboards and writing LogQL queries yourself.
The Metaplay Portal MCP changes that. It gives AI assistants – Claude Code, Codex, ChatGPT, whatever you use – direct access to your live Metaplay environments. Production logs, metrics, role management, environment configuration. The same things you'd normally click through in the portal, your AI can now query in conversation.
This post is a technical walkthrough of what we built, why we built it this way, and what it means for how you operate your game.
## The problem: cognitive overhead
Here's a scenario every game developer knows. Something is off in production. You open the portal, navigate to the right environment, switch to the logs tab, figure out the right LogQL query, scan the results, cross-reference with metrics in another tab, and try to piece together what happened.
Each step is simple. But strung together, the cognitive load adds up fast. You need to know which environment to check, how to write the query, what the error codes mean, which service handles that flow. And you're doing all of this before you even start thinking about the fix.
The Portal MCP compresses that entire process into a conversation. You ask "what errors showed up in production in the last hour?" and the AI resolves which project and environment you mean, runs the log query, summarizes the results, and suggests next steps. The information is the same. The effort to get there isn't.
In raw time, it's maybe 2-3x faster. But it feels like a hundred times better, because you're spending your energy on the problem instead of on the mechanics of finding information.
## What it exposes
The Portal MCP exposes your Metaplay portal through a set of MCP tools that AI assistants can call. Here's what's available.
**Read operations:**

- `listOrganizations`, `getOrganization` – your org structure
- `listProjects`, `getProject` – projects within your org
- `listEnvironments`, `getEnvironment` – environments per project
- `listMembers`, `listMachineUsers` – team members and service accounts
- `getEnvironmentDau` – daily active users per environment
**Observability:**

- `queryEnvironmentLogs` – search production logs using LogQL
- `queryEnvironmentMetrics` – query metrics using PromQL
**Mutations:**

- `editOrganization`, `editProject`, `editEnvironment` – update metadata and descriptions
- `createMachineUser`, `deleteMachineUser` – manage service accounts
- `setOrgRole`, `setProjectRole`, `setEnvironmentRole` – manage access across the hierarchy
Every tool has semantic annotations that tell the AI whether an operation is read-only, mutating, or destructive. The AI knows which actions need confirmation before executing.
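In the MCP spec these semantic annotations are behavioral hints attached to the tool declaration. A minimal TypeScript sketch of how a host might act on them – the two tool names come from the list above, and the fields mirror the spec's `readOnlyHint`/`destructiveHint`/`idempotentHint` annotations:

```typescript
// MCP tool annotations: behavioral hints a host can use to decide
// whether to ask the user before executing a call.
interface ToolAnnotations {
  readOnlyHint?: boolean;    // tool does not modify state
  destructiveHint?: boolean; // tool may perform irreversible changes
  idempotentHint?: boolean;  // repeated calls with same args have no extra effect
}

interface ToolDecl {
  name: string;
  annotations: ToolAnnotations;
}

// Illustrative declarations for two tools mentioned in this post.
const tools: ToolDecl[] = [
  { name: "listEnvironments", annotations: { readOnlyHint: true } },
  { name: "deleteMachineUser", annotations: { readOnlyHint: false, destructiveHint: true } },
];

// A simple host-side policy: confirm anything that is not read-only.
function needsConfirmation(tool: ToolDecl): boolean {
  return !tool.annotations.readOnlyHint;
}
```

The hints are advisory – the host decides the confirmation policy, which is why flagging destructive operations accurately matters.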
## Security: same access, nothing more
This was non-negotiable from day one. The AI gets your permissions. Not more, not less.
The Portal MCP uses OAuth 2.0 with token binding to prevent confused-deputy attacks. When you authenticate, the AI gets a token that's scoped to your exact permissions. If you can't delete an environment in the portal, neither can the AI. If you're a viewer on a project, the AI can read but not write.
Permissions follow Metaplay's existing hierarchy: organization → project → environment. Roles inherit downward. A project admin is automatically an admin on all environments in that project. The MCP respects this exactly.
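Downward inheritance means the effective role at any level is the strongest role granted at that level or above it. A sketch, assuming a simple `none < viewer < admin` ordering (the actual role set may be richer):

```typescript
type Role = "none" | "viewer" | "admin";

// Numeric ranking so roles can be compared; an assumed ordering.
const rank: Record<Role, number> = { none: 0, viewer: 1, admin: 2 };

interface Grants {
  org?: Role;
  project?: Role;
  environment?: Role;
}

// Roles inherit downward: a project admin is automatically an admin
// on every environment in that project.
function effectiveEnvironmentRole(g: Grants): Role {
  const levels: Role[] = [g.org ?? "none", g.project ?? "none", g.environment ?? "none"];
  return levels.reduce((a, b) => (rank[b] > rank[a] ? b : a));
}
```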
Every mutation is audit logged. You can see what the AI did, when, and on whose behalf. There's no hidden elevation, no backdoor access, no special AI permissions. The same security standard we hold human users to applies to AI agents.
## How it works under the hood
The MCP server runs on Nuxt.js with H3 and TypeScript, using the mcp-toolkit library. The portal backend sits on Supabase (PostgreSQL), with Ory Hydra handling OAuth 2.0 authorization and Ory Kratos managing identity.
For observability, we pipe into the same systems that power the portal dashboard: Grafana Loki for logs and Prometheus for metrics. When the AI queries your production logs, it's hitting the same data source you'd query manually – just through a programmatic interface instead of a dashboard.
One design decision worth mentioning: we built a structured error taxonomy with eight error kinds, each carrying retryability flags. When a tool call fails, the AI knows whether it should retry, ask for different input, or give up. This matters because AI agents can get stuck in retry loops if they don't understand why something failed. Structured errors give them the same kind of "this is a permission problem, not a transient failure" signal that a developer would pick up instinctively.
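The post doesn't enumerate the eight kinds, so the names below are hypothetical; the point is the shape – every error carries a kind plus a retryability flag the agent can branch on instead of guessing:

```typescript
// Hypothetical error kinds -- the actual taxonomy has eight, but these
// names are illustrative, not the real ones.
type ErrorKind =
  | "unauthorized" | "forbidden" | "not_found" | "invalid_input"
  | "conflict" | "rate_limited" | "upstream_unavailable" | "internal";

interface ToolError {
  kind: ErrorKind;
  retryable: boolean; // the signal that prevents pointless retry loops
  message: string;
}

// An agent-side policy derived from the structured error: retry only
// transient failures, surface fixable ones to the user, stop otherwise.
function nextAction(err: ToolError): "retry" | "ask_user" | "give_up" {
  if (err.retryable) return "retry";
  if (err.kind === "invalid_input" || err.kind === "forbidden") return "ask_user";
  return "give_up";
}
```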
We also added output schemas on tools – not just input validation, but structured output contracts. This gives AI assistants predictable response shapes they can reason about reliably, rather than having to parse free-form text.
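An output schema is roughly JSON Schema attached to the tool declaration, so the host can validate structured results instead of parsing prose. A sketch with illustrative field names (the real `getEnvironmentDau` contract may differ):

```typescript
// A tool declaration with an output schema. The field names here are
// assumptions for illustration, not the actual contract.
const getEnvironmentDau = {
  name: "getEnvironmentDau",
  outputSchema: {
    type: "object",
    properties: {
      environmentId: { type: "string" },
      dau: { type: "integer" },
      date: { type: "string", format: "date" },
    },
    required: ["environmentId", "dau"],
  },
};

// Minimal shape check: are all required fields present in the result?
function matchesRequired(result: Record<string, unknown>): boolean {
  return getEnvironmentDau.outputSchema.required.every((k) => k in result);
}
```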
## Real workflows
Here's what it actually looks like in practice.
### Investigating a production issue

"What errors appeared in production in the last 30 minutes?"

You ask from Claude Code, ChatGPT, or whatever MCP client you use.

**Resolves context automatically.** The AI calls `listProjects` to find your game, `listEnvironments` to identify the production environment, and `queryEnvironmentLogs` with the right time range and error filter.

**Summarizes and links back.** Returns a summary of the errors grouped by type, with a deep link to the Grafana dashboard if you want to dig deeper.
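Concretely, the request the AI assembles might look like this. The parameter names and the `env` Loki label are assumptions for illustration, not the actual tool contract:

```typescript
// Hypothetical shape of a queryEnvironmentLogs call.
interface LogQuery {
  environmentId: string;
  logql: string;
  start: string; // ISO 8601
  end: string;
}

// Build an errors-only query for the trailing N minutes.
function errorsInLastMinutes(environmentId: string, minutes: number, now = new Date()): LogQuery {
  const end = now.toISOString();
  const start = new Date(now.getTime() - minutes * 60_000).toISOString();
  return {
    environmentId,
    // LogQL: select the environment's stream, parse logfmt fields,
    // keep only error-level lines. Label names are illustrative.
    logql: `{env="${environmentId}"} | logfmt | level="error"`,
    start,
    end,
  };
}
```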
### Managing access at scale

"Add bob@studio.com as a viewer on all environments in project Starfall"

A new team member joins. Instead of clicking through each environment in the portal, you say it once.

**Chains the right calls.** The AI calls `listEnvironments` to enumerate all environments in the project, then `setEnvironmentRole` for each one. Ten seconds instead of ten clicks.
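The fan-out is a simple list-then-mutate loop. A sketch where `PortalClient` stands in for whatever issues the MCP tool calls; the method names mirror the tool names above, but the signatures are assumptions:

```typescript
interface Env { id: string; name: string }

// Hypothetical client interface wrapping the two MCP tools involved.
interface PortalClient {
  listEnvironments(projectId: string): Promise<Env[]>;
  setEnvironmentRole(envId: string, email: string, role: string): Promise<void>;
}

// Grant one role across every environment in a project; returns how
// many environments were updated.
async function grantOnAllEnvironments(
  client: PortalClient, projectId: string, email: string, role: string,
): Promise<number> {
  const envs = await client.listEnvironments(projectId);
  for (const env of envs) {
    await client.setEnvironmentRole(env.id, email, role);
  }
  return envs.length;
}
```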
### Auditing your infrastructure

"List environments missing descriptions and suggest good ones"

You want to clean up your environment metadata without clicking through every edit form.

**Enumerates and proposes.** The AI scans every environment, identifies which ones have empty description fields, and proposes descriptions based on names and configuration. You review, adjust, and confirm.
### Post-deploy verification

"Check CPU and memory metrics around today's deployment"

You just shipped a feature and want to make sure nothing blew up.

**Queries and reports.** Runs `queryEnvironmentMetrics` with PromQL for the relevant time window and tells you whether resource usage changed. Deployment confidence without leaving your editor.
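The PromQL itself can be as simple as an averaged rate over the deploy window. The metric (`container_cpu_usage_seconds_total` is a standard cAdvisor metric) and the `env` label are illustrative; your environment's names may differ:

```typescript
// Build a hypothetical post-deploy CPU check: average per-second CPU
// usage across the environment's containers over the given window.
function cpuQuery(environmentId: string, windowMinutes: number): string {
  return `avg(rate(container_cpu_usage_seconds_total{env="${environmentId}"}[${windowMinutes}m]))`;
}
```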
## The bleeding edge
I'll be honest about where we are. MCP and the ChatGPT app ecosystem are months old. This is genuinely new territory.
We originally designed the Portal MCP for ChatGPT Apps, then MCP Apps emerged as a separate standard. They're not 1:1 compatible. We had to support multiple host environments, test against both ChatGPT and Claude, and build compatibility layers for the differences. It's a lot like early 2000s web development – you're targeting different browsers with different quirks.
The development tooling isn't fully there yet either. We built custom preview tooling for the MCP widget, testing light and dark modes, simulating mobile and desktop contexts. When the ecosystem is this young, you end up building your own infrastructure alongside the product.
But that's also why this is exciting. The shift from advisory AI (an AI that tells you about things) to operational AI (an AI that does things on your behalf, with proper authorization) is happening right now. The combination of reasoning capability, secure scoped access, and cross-tool orchestration didn't exist a year ago.
## Setup
The MCP server URL is:

```
https://portal.metaplay.dev/api/mcp
```
**Claude Code:**

```shell
claude mcp add --transport http --scope project metaplay-portal https://portal.metaplay.dev/api/mcp
```
Or in your `.mcp.json`:

```json
{
  "mcpServers": {
    "metaplay-portal": {
      "type": "http",
      "url": "https://portal.metaplay.dev/api/mcp"
    }
  }
}
```
**Codex:**

```shell
codex mcp add metaplay-portal --url https://portal.metaplay.dev/api/mcp
```
**Claude (web, mobile, desktop):** Go to Settings → Connectors, add the URL, authenticate once.
You'll need a Metaplay portal account. The first connection opens an OAuth consent screen – approve it once and you're set.
## Where this goes
Today the Portal MCP covers org structure, environments, logs, metrics, and role management. That's version one.
The trajectory is clear. Yesterday, AI had access to our docs and source code. Today, it can see your org structure, projects, environments, production logs, and performance metrics. Tomorrow, coding agents combine all of that – docs, portal context, live production state – to assist in feature development, debugging, and operations as one continuous workflow.
We're shipping this now, ahead of GDC. Not because it's finished, but because the foundation is solid and the teams using it are already getting real value. Infrastructure-level capabilities like this are what make agentic workflows possible.
## Get started
- Set up Portal MCP – Setup guide
- Set up Docs MCP – Setup guide
- Read the announcement – Announcing Metaplay AI
- Practical guide to all AI tools – Get to Know Metaplay's AI Assistants
- Technical backstory – Building an AI That Actually Knows Your Docs
- Talk to us – Get a demo or join our Discord
## FAQ
### What can the Portal MCP actually access?
Your organizations, projects, environments, team members, machine users, production logs (via LogQL), and infrastructure metrics (via PromQL). It can also modify metadata, manage roles, and create or delete machine users – subject to your permissions.
### Can the AI break something in production?
The AI can only perform actions you have permission for. All mutations require confirmation, and destructive operations are flagged as such. Every action is audit logged. The AI cannot modify game configs, player data, or server settings.
### Do I need to be a Metaplay customer?
Yes. The Portal MCP connects to your Metaplay portal account and your live game environments. You need an active account with the appropriate permissions.
### Does it work with ChatGPT too?
Yes. You can add the Portal MCP as a connector in the Claude apps (web, mobile, desktop) and in ChatGPT. It's not limited to coding tools.
### What's the difference between the Docs MCP and the Portal MCP?
The Docs MCP gives your AI access to Metaplay's documentation, SDK source code, and sample projects – it knows how Metaplay works in general. The Portal MCP gives your AI access to your specific game infrastructure – it knows how your game is running right now. They complement each other.