Save dashboard MCP tool - persist agent work to project

Problem

The agent workflow has a gap between exploration and persistence. Today's tools are all stateless:

  1. render_dashboard takes YAML content, returns HTML — but the YAML vanishes after the call.
  2. execute_query returns data — but the query isn't saved anywhere.
  3. review_dashboard checks YAML — but has no way to commit the reviewed result.

An agent can build a perfect dashboard through 10 iterations of render/review, but when the conversation ends, the work is gone. The user has to manually copy the YAML from the chat, create a file, and put it in the right directory.

This matters because the natural analyst workflow is explore first, save later. You try different queries, chart types, layouts — most of it throwaway. When something works, you want to say "save this" and have it land in the project as a proper .yml file in faces/, ready to serve.

Without a save tool, the agent can never close the loop from "idea" to "artifact in the repo."

Context

Current tool surface (all read/render, no write):

  • render_dashboard — validate + render YAML → HTML
  • execute_query — run SQL → rows
  • catalog — browse schema
  • search_dashboards — find existing dashboards
  • review_dashboard — check against design heuristics
  • list_sources — discover data sources

Project conventions:

  • Dashboards live in the faces/ directory as .yml files (e.g., faces/sales_dashboard.yml)
  • Partials live in faces/partials/, prefixed with _ (e.g., faces/partials/_header.yml)
  • DatafaceAIContext has dashboards_directory for scoping path resolution and resolve_dashboard_path() with path-traversal protection (dataface/ai/context.py)
  • Tool schemas live in dataface/ai/tool_schemas.py (canonical source of truth)
  • Tool dispatch: dispatch_tool_call() in dataface/ai/tools.py routes to implementations
  • MCP server wiring lives in dataface/ai/mcp/server.py

Security consideration: The MCP server runs with filesystem access. A save tool that writes arbitrary paths is a security risk. We need path scoping (must be within dashboards directory) and validation (must be valid YAML that compiles).
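The scoping requirement can be sketched as a pure path check. This is an illustration only: the real check lives in DatafaceAIContext.resolve_dashboard_path() in dataface/ai/context.py, and resolve_scoped_path here is a hypothetical name.

```python
from pathlib import Path


def resolve_scoped_path(dashboards_dir: str, relative: str) -> Path:
    """Resolve `relative` inside `dashboards_dir`, rejecting escapes.

    Illustrative sketch only; the project's actual implementation is
    DatafaceAIContext.resolve_dashboard_path() in dataface/ai/context.py.
    """
    base = Path(dashboards_dir).resolve()
    candidate = Path(relative)
    if candidate.is_absolute():
        raise ValueError("Absolute paths are rejected")
    resolved = (base / candidate).resolve()
    # resolve() collapses ".." segments, so a traversal attempt shows up here
    if not resolved.is_relative_to(base):
        raise ValueError("Path must be within faces/ directory")
    return resolved
```

Note that the check runs on the fully resolved path, so `reports/../../x.yml` is caught even though each segment looks relative.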

Possible Solutions

Option A: Single save_dashboard Tool

One tool that takes YAML content plus a path, validates, and writes:

SAVE_DASHBOARD = {
    "name": "save_dashboard",
    "description": (
        "Save a dashboard YAML file to the project. Validates the YAML "
        "first — returns errors if invalid, so fix before re-saving. "
        "Path is relative to the faces/ directory. Use this after "
        "iterating with render_dashboard to persist the final version. "
        "Will not overwrite existing files unless overwrite=true."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "yaml_content": {
                "type": "string",
                "description": "Dashboard YAML content to save",
            },
            "path": {
                "type": "string",
                "description": (
                    "File path relative to faces/ directory "
                    "(e.g., 'revenue.yml', 'reports/monthly.yml')"
                ),
            },
            "overwrite": {
                "type": "boolean",
                "description": "Overwrite if file already exists (default false)",
            },
            "commit": {
                "type": "boolean",
                "description": "Git add + commit after saving (default from context config)",
            },
        },
        "required": ["yaml_content", "path"],
    },
}

Agent workflow:

> Build me a revenue dashboard by region

[agent iterates with render_dashboard, tweaks layout, reviews...]

> This looks good, save it

Saving to faces/revenue-by-region.yml...
  ✓ YAML validates
  ✓ Written to faces/revenue-by-region.yml
  View: http://localhost:9876/faces/revenue-by-region/

Trade-offs: Simple and single-purpose. The agent already has the YAML from prior render_dashboard calls, so no extra plumbing is needed.

Option B: save_dashboard + update_dashboard

Separate tools for creating new vs editing existing dashboards.

Trade-offs: More explicit, but adds tool surface. The overwrite flag on a single tool covers this without two tools.

Option C: File-Level Write Tool (Generic)

A generic write_file tool that can write any file, not just dashboards. Like Claude Code's Write tool.

Trade-offs: More flexible (could write dbt models, queries, etc. later). But too generic — loses the ability to validate as a dashboard. Security is harder to scope. Shouldn't need this for M1 since the dft agent use case is dashboard-focused.

Plan

Single save_dashboard tool, validate-before-write, scoped to dashboards directory.

Implementation Steps

Files to modify:

  • dataface/ai/tool_schemas.py — add the SAVE_DASHBOARD schema
  • dataface/ai/mcp/tools.py — add the save_dashboard() implementation
  • dataface/ai/tools.py — add a dispatch case for save_dashboard
  • dataface/ai/mcp/server.py — register the tool in handle_list_tools
  • dataface/ai/skills/building-dataface-dashboards/SKILL.md — document the save workflow

Implementation (save_dashboard()):

  1. Resolve path — use DatafaceAIContext.resolve_dashboard_path(), which already handles:
     • relative path resolution (relative to dashboards_directory / faces/)
     • path-traversal protection (rejects paths that escape the scoped directory)
     • absolute-path rejection

  2. Validate YAML — compile the YAML to catch errors before writing:
     • parse the YAML
     • run it through the compiler
     • if validation fails, return structured errors (same format as render_dashboard)
     • do NOT write invalid YAML

  3. Check for conflicts — if the file exists and overwrite is not true:
     • return an error with the existing file's content (so the agent can diff)
     • suggest overwrite: true if intentional

  4. Write file — create parent directories if needed, then write the .yml file

  5. Return confirmation — path written, serve URL, validation summary
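Putting the steps together, a minimal sketch of the tool body. PyYAML is assumed; compile_dashboard is a hypothetical stand-in for the real compiler hook, and ctx.resolve_dashboard_path() mirrors the DatafaceAIContext API named above.

```python
from pathlib import Path

import yaml  # PyYAML, assumed available in the project


def compile_dashboard(parsed) -> list:
    """Hypothetical stand-in for the real dashboard compiler.

    Returns a list of error strings; empty means the YAML compiles.
    """
    return [] if isinstance(parsed, dict) else ["dashboard YAML must be a mapping"]


def save_dashboard(ctx, yaml_content: str, path: str, overwrite: bool = False) -> dict:
    """Sketch of the save flow; everything beyond resolve_dashboard_path is illustrative."""
    # 1. Resolve path -- scoping and traversal protection live in the context
    target: Path = ctx.resolve_dashboard_path(path)

    # 2. Validate before writing -- never persist YAML that doesn't compile
    try:
        parsed = yaml.safe_load(yaml_content)
    except yaml.YAMLError as exc:
        return {"status": "error", "reason": "validation_failed", "errors": [str(exc)]}
    errors = compile_dashboard(parsed)
    if errors:
        return {"status": "error", "reason": "validation_failed", "errors": errors}

    # 3. Conflict check -- return the existing content so the agent can diff
    if target.exists() and not overwrite:
        return {
            "status": "error",
            "reason": "file_exists",
            "path": str(target),
            "existing_content": target.read_text(),
            "hint": "Use overwrite=true to replace",
        }

    # 4. Write, creating parent directories for nested paths
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(yaml_content)

    # 5. Confirmation
    return {"status": "saved", "path": str(target)}
```

The ordering matters: validation happens before the conflict check and the write, so an invalid payload never touches disk even with overwrite=true.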

Return schema:

{
    "status": "saved",
    "path": "faces/revenue-by-region.yml",
    "absolute_path": "/Users/.../faces/revenue-by-region.yml",
    "url": "http://localhost:9876/faces/revenue-by-region/",
    "validation": {"errors": 0, "warnings": 0},
}

Error cases:

# Invalid YAML
{"status": "error", "reason": "validation_failed", "errors": [...]}

# File exists
{"status": "error", "reason": "file_exists", "path": "...",
 "existing_content": "...", "hint": "Use overwrite=true to replace"}

# Path traversal
{"status": "error", "reason": "path_rejected",
 "message": "Path must be within faces/ directory"}

Tests

  1. Save valid YAML → file created, content matches
  2. Save invalid YAML → error returned, no file written
  3. Save to existing path without overwrite → conflict error
  4. Save to existing path with overwrite → file replaced
  5. Path traversal attempt (../../etc/passwd) → rejected
  6. Nested path (reports/monthly/revenue.yml) → directories created
  7. Path without .yml extension → auto-appended or error

Relationship to Cloud Chat Embeddable Dashboards

The Cloud chat task (embeddable-dashboards-in-chat-inline-preview-modal-expand-and-save-to-repo.md) builds a "Save Dashboard" button in the web UI. That task's save flow should call this MCP tool rather than wiring directly to the Cloud-specific Django functions (write_dashboard_yaml, GitService.commit).

Layering:

┌─────────────────────────────────────────────────┐
│ Surfaces (consumers of save_dashboard)          │
│  ├── dft agent (terminal)                       │
│  ├── Cursor / Claude Code / Codex (via MCP)     │
│  ├── Cloud chat UI (via tool dispatch)          │
│  └── Playground (future)                        │
├─────────────────────────────────────────────────┤
│ MCP Tool: save_dashboard                        │
│  → validate YAML                                │
│  → resolve path (scoped)                        │
│  → write .yml file                              │
│  → return confirmation                          │
├─────────────────────────────────────────────────┤
│ Cloud-specific post-save hooks (Cloud only)     │
│  → update_dashboard_cache() (Django model)      │
│  → GitService.commit() (git add + commit)       │
│  → DashboardSnapshot (thumbnail)                │
└─────────────────────────────────────────────────┘

The MCP tool handles the universal part (validate + write file). The Cloud app adds its own post-save hooks (Django cache, git commit, snapshots) on top. Non-Cloud consumers (dft agent, IDE agents) get the file write without the Django overhead.

This task is the foundation — build the tool first, then the Cloud chat task wires its "Save" button to call dispatch_tool_call("save_dashboard", ...) and adds Cloud-specific post-processing.
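A sketch of that Cloud-side wiring. dispatch_tool_call, update_dashboard_cache, and GitService are named in this document, but their exact signatures are assumed; the stub bodies exist only to make the sketch self-contained.

```python
# --- stubs standing in for real project functions (shapes assumed) ---
def dispatch_tool_call(name: str, arguments: dict, context=None) -> dict:
    """Stand-in for the shared dispatch in dataface/ai/tools.py."""
    return {"status": "saved", "path": f"faces/{arguments['path']}"}


def update_dashboard_cache(path: str) -> None:
    """Stand-in for the Cloud-only Django cache update."""


class GitService:
    @staticmethod
    def commit(paths, message: str) -> None:
        """Stand-in for the Cloud-only git add + commit."""


def cloud_save_handler(yaml_content: str, path: str, context=None) -> dict:
    """Cloud 'Save' button flow: call the universal tool, then layer the
    Cloud-specific post-save hooks on top only if the save succeeded."""
    result = dispatch_tool_call(
        "save_dashboard",
        {"yaml_content": yaml_content, "path": path},
        context,
    )
    if result.get("status") == "saved":
        update_dashboard_cache(result["path"])
        GitService.commit([result["path"]], message=f"Save dashboard {path}")
    return result
```

The key design point is that the hooks run after, and conditionally on, the universal save, so a validation failure in the tool never triggers a cache update or commit.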

Git Commit Behavior

The save_dashboard tool has an optional commit parameter that controls whether a git commit is created after writing the file:

"commit": {
    "type": "boolean",
    "description": "Git add + commit after saving (default from config)",
}

The default is configurable globally via DatafaceAIContext (or dataface.yml / environment):

  • Cloud (Suite): Default commit=True — the Cloud app manages the git repo; saving a dashboard should commit it so it appears in the project history. The Cloud app may also add its own post-save hooks (Django cache update, snapshot generation) on top.
  • IDE / MCP server (Cursor, Claude Code, Codex): Default commit=False — the user manages their own git workflow. The tool writes the file; the user decides when to commit.
  • Terminal (dft agent): Default commit=False — same as IDE. The agent writes files to the project; the user commits when ready.

The per-call commit parameter overrides the default. If not provided, the context default applies.

Implementation: DatafaceAIContext gets an auto_commit_saves: bool = False field. The Cloud app sets this to True when constructing the context; the MCP server and CLI agent leave it as False. save_dashboard() checks the commit param and falls back to context.auto_commit_saves when it is not provided.
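The override logic is small; a sketch using the field name from this plan (resolve_commit_flag is a hypothetical helper, and the real DatafaceAIContext carries many more fields):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DatafaceAIContext:
    """Only the field relevant here; the real context has much more."""
    auto_commit_saves: bool = False


def resolve_commit_flag(commit: Optional[bool], ctx: DatafaceAIContext) -> bool:
    """Per-call `commit` wins; None means the caller didn't pass it."""
    return ctx.auto_commit_saves if commit is None else commit
```

Using Optional[bool] rather than a plain bool default keeps "not provided" distinguishable from an explicit commit=False, which is what lets a Cloud caller opt out of auto-commit.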

Future Considerations (Not M1)

  • delete_dashboard — remove a saved dashboard
  • rename_dashboard — move/rename
  • save_query — persist a tested SQL query as a reusable partial
  • Undo — track saves in session so the agent can revert

Implementation Progress

  • Added save_dashboard to the canonical tool schemas, OpenAI wrapper surface, MCP server registration, and shared dispatch layer.
  • Added DatafaceAIContext.auto_commit_saves and implemented commit override behavior in the tool (commit param overrides context default).
  • Implemented save_dashboard() with path scoping via resolve_dashboard_path(), compile-before-write validation, overwrite protection, nested directory creation, and optional single-file git commit.
  • Added the 7 plan tests plus commit-behavior coverage:
     • valid save creates file
     • invalid YAML does not write
     • conflict without overwrite
     • overwrite replaces file
     • path traversal rejected
     • nested directories created
     • missing .yml extension rejected
     • context auto-commit default honored
     • explicit commit: false overrides context default
  • Updated the dashboard-building skill doc to include the save workflow.

QA Exploration

  • QA exploration completed (or N/A for non-UI tasks)

Review Feedback

  • Focused tests pass: uv run pytest tests/core/test_mcp.py tests/core/test_ai_tools.py tests/ai/test_tool_contracts.py -q
  • just ci was run twice. Both full runs failed in unrelated existing tests that pass in isolation:
     • tests/core/test_inspect_server.py::TestServeExampleRouting::test_csv_example_uses_examples_root_for_assets
     • tests/docs/test_master_plans_build.py::test_master_plans_mkdocs_builds_in_strict_mode
     • tests/faketran/test_application_models.py::test_fake_companies_populate_application_database_models[fake_companies.pied_piper-240-23]
  • Because just ci is not green, PR creation is currently blocked by repo PR workflow guardrails.
  • cbox review found two required fixes:
     • unscoped default path handling for save_dashboard
     • commit-failure response incorrectly marking the entire save as failed
  • Review cleared