DebuggAI · MCP Docs
🧪 Official MCP Server · 21 Tools

The MCP AI Server That Gives Your Agents Eyes In The Browser

Let your AI coding agent verify its own work in a real browser before opening the PR.

Get Free API Key Watch Demo

The only MCP AI server with auto-tunneled localhost, write-only credentials, and full project control. 21 tools. Point it at http://localhost:3000 or any URL.

npx -y @debugg-ai/debugg-ai-mcp · works with Claude Desktop, Claude Code, Cursor, and any MCP client

Why developers pick this MCP AI server

Most browser MCPs are thin wrappers around a driver. This is the whole testing platform, built for AI coding agents.

Test your localhost

Point the agent at http://localhost:3000 and it just works. A secure tunnel is created automatically. No ngrok, no port forwarding, no config.

Passwords never leak

Store a credential once and agents use it by role (like 'admin' or 'guest'). The raw password never comes back out of any tool, even when you rotate it.

Works with every MCP client

Claude Desktop, Claude Code, Cursor, LangChain, or your own client over stdio. One API key, no SDK lock-in, no per-client setup.

The whole platform, as tools

Projects, environments, credentials, and run history are all first-class tools. Your agent can set up a new project, run a test, and inspect the result in one session.

Cancel runaway runs

If an agent goes sideways, call cancel_execution and the browser session stops cleanly. No burned tokens, no stuck processes.

Free to start

Grab an API key at app.debugg.ai, npx the server, and go. No credit card, no trial clock, no seat minimums.

Zero-config localhost

Test your dev server from an AI agent. No ngrok, no yak-shaving.

Most browser automation MCPs only reach public URLs. To test localhost, you manually set up ngrok, copy-paste tunnel URLs, and hope the agent doesn't echo them back into logs.

DebuggAI's MCP AI server creates the tunnel for you on every check_app_in_browser call, and the tunnel URL never leaks back to the agent.

  • Pass any localhost URL, any port
  • A secure tunnel is created per run
  • Tunnel URLs never returned to the agent
  • Concurrent runs stay isolated

# You run your dev server
$ npm run dev
# Listening on http://localhost:3000

# Agent calls check_app_in_browser
url: "http://localhost:3000"
description: "Sign up with a new email, confirm redirect to /onboarding"

✓ Tunnel opened automatically
✓ Remote browser reached your dev server
✓ Agent signed up and was redirected
✓ Status: pass · Screenshot captured

The headline tool

check_app_in_browser

Give an AI agent eyes on a live website or app. The agent browses it, interacts with it, and tells you whether a given task or check passed.

Input parameters
Works on localhost or any URL. The only required params are description + url.
| Name           | Type   | Required | Purpose                                        |
|----------------|--------|----------|------------------------------------------------|
| description    | string | ✅       | Natural language. What to test or evaluate.    |
| url            | string | ✅       | Public URL or localhost (auto-tunneled).       |
| environmentId  | string | No       | UUID of a saved environment.                   |
| credentialId   | string | No       | UUID of a saved credential.                    |
| credentialRole | string | No       | Pick a credential by role (e.g. admin, guest). |
| username       | string | No       | Ephemeral login, not persisted server-side.    |
| password       | string | No       | Ephemeral login, not persisted, not logged.    |
| repoName       | string | No       | Override the auto-detected git repo name.      |
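A minimal call needs only the two required params; here is a sketch of the arguments object using a saved-credential role (the environmentId value is illustrative):

```json
{
  "description": "Log in as admin and verify the /dashboard page loads",
  "url": "http://localhost:3000",
  "environmentId": "env-abc-123",
  "credentialRole": "admin"
}
```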

21 tools, grouped by what they do

Not just a browser. The whole testing platform as tools your agent can call.

check_app_in_browser
headline

Run an AI browser agent against your app. Navigates, interacts, and reports pass/fail with screenshots.

description, url (+ optional environmentId, credentialId, credentialRole, username, password, repoName)

Credentials

Logged-in flows without leaking passwords

You want an AI agent to test authenticated flows. You do not want the password showing up in a transcript, a prompt cache, or an error message.

DebuggAI stores credentials server-side. Agents reference them by UUID or role, so they never see the password itself.

  • Passwords go in, not out

    The raw password never appears in any tool response, even when an agent rotates it.

  • Pick credentials by role

    Pass credentialRole="admin" and the agent logs in without ever touching the credential directly.

  • Or skip the vault

    Pass username + password directly for a one-off run. Used once, never stored.

Create โ†’ use flow
// 1. Create the credential (password goes in)
create_credential({
  environmentId: "env-abc-123",
  label: "Admin test user",
  username: "admin@example.com",
  password: "supersecret",
  role: "admin"
})
// → { uuid: "cred-xyz", username, role }
//   password? nope, never echoed

// 2. Agent runs a test by role
check_app_in_browser({
  url: "http://localhost:3000",
  description: "Log in and open /admin",
  environmentId: "env-abc-123",
  credentialRole: "admin"
})
// → agent resolves the credential
//   server-side, never sees the password

// 3. Rotate. Still no echo
update_credential({
  uuid: "cred-xyz",
  environmentId: "env-abc-123",
  password: "new-supersecret"
})
// → { uuid, username, role }. No password

Execution history

Every run tracked. Every run cancellable.

Browser tests run async, so history and cancel are first-class tools. Your agents can look back at what failed, and you can pull the plug on a run that's going sideways.

list_executions

Scroll back through every run your agents have made. Filter by status to find the failures.

get_execution

See exactly what the agent did step by step, so you (or another agent) can reason about why a test failed.

cancel_execution

Stop a runaway run before it burns more time or tokens. One call and the browser session closes cleanly.
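Because runs are async, a common pattern is to poll get_execution until the run leaves the running state, then fall back to cancel_execution past a deadline. A minimal sketch in plain JavaScript; `getExecution` here is a stand-in for however your MCP client invokes the get_execution tool:

```javascript
// Poll an async browser run until it finishes, with a timeout.
// `getExecution` is a stand-in for your MCP client's call to the
// get_execution tool; it should resolve to an object with a `status`.
async function waitForExecution(getExecution, uuid, { intervalMs = 2000, timeoutMs = 120_000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const execution = await getExecution({ uuid });
    if (execution.status !== "running") return execution; // finished: pass, fail, or cancelled
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  // Past the deadline: the caller can now invoke cancel_execution({ uuid }).
  throw new Error(`execution ${uuid} still running after ${timeoutMs}ms`);
}
```

On timeout the sketch throws rather than cancelling, leaving the cancel_execution call to the caller, which keeps the helper side-effect free.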

Composed workflow

21 tools, one natural-language session

An AI agent setting up a new DebuggAI project from scratch and running its first test. Every step is a tool call.

// Agent: discover a team and a GitHub-linked repo
const { teams } = await list_teams({ q: "acme" })
const { repos } = await list_repos({ q: "checkout-service" })

// Create the project
const project = await create_project({
  name: "Checkout E2E",
  platform: "web",
  teamUuid: teams[0].uuid,
  repoUuid: repos[0].uuid,
})

// Add a staging environment + admin credential
const env = await create_environment({
  projectUuid: project.uuid,
  name: "staging",
  url: "https://staging.checkout.acme.dev",
})
await create_credential({
  environmentId: env.uuid,
  label: "Admin",
  username: "admin@acme.dev",
  password: process.env.ADMIN_PW,  // goes in, never comes back out
  role: "admin",
})

// Run a browser test by role, not by password
await check_app_in_browser({
  url: "http://localhost:3000",      // localhost auto-tunneled
  description: "Add item to cart, apply coupon CART20, checkout, verify receipt page",
  environmentId: env.uuid,
  credentialRole: "admin",
})

// Later: inspect or cancel from history
const { executions } = await list_executions({ status: "running", limit: 5 })
if (executions[0].durationMs > 120_000) {
  await cancel_execution({ uuid: executions[0].uuid })
}

How DebuggAI's MCP AI compares to other browser MCPs

Spot-check these claims against the linked repos before relying on them. They're moving targets.

| Feature                                      | DebuggAI MCP AI | playwright-mcp | puppeteer-mcp |
|----------------------------------------------|-----------------|----------------|---------------|
| Tool count                                   | 21              | ~20            | ~10           |
| Localhost auto-tunneling                     | ✓               | ✗              | ✗             |
| Saved credentials (passwords never returned) | ✓               | ✗              | ✗             |
| Run history and mid-run cancel               | ✓               | ✗              | ✗             |
| Project and environment management           | ✓               | ✗              | ✗             |
| Managed remote browsers (no local Chrome)    | ✓               | ✗              | ✗             |
| Open source                                  | Apache-2.0      | Apache-2.0     | Apache-2.0    |

Quickstart

Grab a free API key at app.debugg.ai, then wire up the MCP AI server in your client of choice.

npx
One command. Works with any MCP client over stdio.
DEBUGGAI_API_KEY=your_api_key npx -y @debugg-ai/debugg-ai-mcp
Config

One env var. That's it.

DEBUGGAI_API_KEY=your_api_key

Repo, branch, and file context are auto-detected from the cwd. No DEBUGGAI_LOCAL_* boilerplate required.
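For clients that read a JSON config (Claude Desktop's claude_desktop_config.json, Cursor's .cursor/mcp.json), a typical entry looks like this sketch; the "debugg-ai" key name is arbitrary, and your_api_key is a placeholder:

```json
{
  "mcpServers": {
    "debugg-ai": {
      "command": "npx",
      "args": ["-y", "@debugg-ai/debugg-ai-mcp"],
      "env": { "DEBUGGAI_API_KEY": "your_api_key" }
    }
  }
}
```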


Try the MCP AI server in five minutes

Free API key. Zero config beyond one env var. Works with every MCP client you already have.

Get Free API Key View on GitHub

Made with 🩸, 💦, and 😭 in San Francisco

DebuggAI

AI that reviews your code, tests your app in the browser, and catches the frustrating UI issues that unit tests miss

Product

  • Surfs.dev - Build Browser Agents
  • Text-Based Tests
  • Playwright Testing

Resources

  • Surfs.dev Platform
  • Tools Directory
  • Blog & Resources
  • Documentation
  • AI PR Reviews Guide
  • VS Code Extension
  • MCP Server
  • YC Companies Directory
  • What is MCP?

Company

  • Discord Community
  • Privacy Policy
  • Terms of Service

© 2026 DebuggAI. All rights reserved.

Cookie Policy · Sitemap