Save dashboard MCP tool - persist agent work to project¶
Problem¶
The agent workflow has a gap between exploration and persistence. Today's tools are all stateless:
- render_dashboard takes YAML content, returns HTML — but the YAML vanishes after the call.
- execute_query returns data — but the query isn't saved anywhere.
- review_dashboard checks YAML — but has no way to commit the reviewed result.
An agent can build a perfect dashboard through 10 iterations of render/review, but when the conversation ends, the work is gone. The user has to manually copy the YAML from the chat, create a file, and put it in the right directory.
This matters because the natural analyst workflow is explore first, save later. You try different queries, chart types, layouts — most of it throwaway. When something works, you want to say "save this" and have it land in the project as a proper .yml file in faces/, ready to serve.
Without a save tool, the agent can never close the loop from "idea" to "artifact in the repo."
Context¶
Current tool surface (all read/render, no write):
- render_dashboard — validate + render YAML → HTML
- execute_query — run SQL → rows
- catalog — browse schema
- search_dashboards — find existing dashboards
- review_dashboard — check against design heuristics
- list_sources — discover data sources
Project conventions:
- Dashboards live in faces/ directory as .yml files (e.g., faces/sales_dashboard.yml)
- Partials live in faces/partials/ prefixed with _ (e.g., faces/partials/_header.yml)
- The DatafaceAIContext has dashboards_directory for scoping path resolution and resolve_dashboard_path() with path-traversal protection (dataface/ai/context.py)
- Tool schemas live in dataface/ai/tool_schemas.py (canonical source of truth)
- Tool dispatch in dataface/ai/tools.py — dispatch_tool_call() routes to implementations
- MCP server wiring in dataface/ai/mcp/server.py
Security consideration: The MCP server runs with filesystem access. A save tool that writes arbitrary paths is a security risk. We need path scoping (must be within dashboards directory) and validation (must be valid YAML that compiles).
Possible Solutions¶
Option A: Single save_dashboard Tool [Recommended]¶
One tool that takes YAML content + a path, validates, and writes:
SAVE_DASHBOARD = {
"name": "save_dashboard",
"description": (
"Save a dashboard YAML file to the project. Validates the YAML "
"first — returns errors if invalid, so fix before re-saving. "
"Path is relative to the faces/ directory. Use this after "
"iterating with render_dashboard to persist the final version. "
"Will not overwrite existing files unless overwrite=true."
),
"input_schema": {
"type": "object",
"properties": {
"yaml_content": {
"type": "string",
"description": "Dashboard YAML content to save",
},
"path": {
"type": "string",
"description": (
"File path relative to faces/ directory "
"(e.g., 'revenue.yml', 'reports/monthly.yml')"
),
},
"overwrite": {
"type": "boolean",
"description": "Overwrite if file already exists (default false)",
},
"commit": {
"type": "boolean",
"description": "Git add + commit after saving (default from context config)",
},
},
"required": ["yaml_content", "path"],
},
}
Agent workflow:
> Build me a revenue dashboard by region
[agent iterates with render_dashboard, tweaks layout, reviews...]
> This looks good, save it
Saving to faces/revenue-by-region.yml...
✓ YAML validates
✓ Written to faces/revenue-by-region.yml
View: http://localhost:9876/faces/revenue-by-region/
Trade-offs: Simple, single-purpose. Does one thing. Agent already has the YAML from prior render_dashboard calls.
Option B: save_dashboard + update_dashboard¶
Separate tools for creating new vs editing existing dashboards.
Trade-offs: More explicit, but adds tool surface. The overwrite flag on a single tool covers this without two tools.
Option C: File-Level Write Tool (Generic)¶
A generic write_file tool that can write any file, not just dashboards. Like Claude Code's Write tool.
Trade-offs: More flexible (could write dbt models, queries, etc. later). But too generic — loses the ability to validate as a dashboard. Security is harder to scope. Shouldn't need this for M1 since the dft agent use case is dashboard-focused.
Plan¶
Single save_dashboard tool, validate-before-write, scoped to dashboards directory.
Implementation Steps¶
Files to modify:
- dataface/ai/tool_schemas.py — add SAVE_DASHBOARD schema
- dataface/ai/mcp/tools.py — add save_dashboard() implementation
- dataface/ai/tools.py — add dispatch case for save_dashboard
- dataface/ai/mcp/server.py — register tool in handle_list_tools
- dataface/ai/skills/building-dataface-dashboards/SKILL.md — document the save workflow
Implementation (save_dashboard()):
1. Resolve path — use DatafaceAIContext.resolve_dashboard_path(), which already handles:
    - Relative path resolution (relative to dashboards_directory/faces/)
    - Path traversal protection (rejects paths that escape the scoped directory)
    - Absolute path rejection
2. Validate YAML — compile the YAML to catch errors before writing:
    - Parse YAML
    - Run through the compiler
    - If validation fails, return structured errors (same format as render_dashboard)
    - Do NOT write invalid YAML
3. Check for conflicts — if the file exists and overwrite is not true:
    - Return an error with the existing file's content (so the agent can diff)
    - Suggest overwrite: true if intentional
4. Write file — create parent directories if needed, write the .yml file
5. Return confirmation — path written, serve URL, validation summary
Return schema:
{
"status": "saved",
"path": "faces/revenue-by-region.yml",
"absolute_path": "/Users/.../faces/revenue-by-region.yml",
"url": "http://localhost:9876/faces/revenue-by-region/",
"validation": {"errors": 0, "warnings": 0},
}
Error cases:
# Invalid YAML
{"status": "error", "reason": "validation_failed", "errors": [...]}
# File exists
{"status": "error", "reason": "file_exists", "path": "...",
"existing_content": "...", "hint": "Use overwrite=true to replace"}
# Path traversal
{"status": "error", "reason": "path_rejected",
"message": "Path must be within faces/ directory"}
Tests¶
- Save valid YAML → file created, content matches
- Save invalid YAML → error returned, no file written
- Save to existing path without overwrite → conflict error
- Save to existing path with overwrite → file replaced
- Path traversal attempt (../../etc/passwd) → rejected
- Nested path (reports/monthly/revenue.yml) → directories created
- Path without .yml extension → auto-appended or error
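Two of these cases might take the following shape. The in-memory make_saver helper below is purely illustrative so the test shape stands on its own; real tests would exercise the actual tool against a temporary directory:

```python
def make_saver():
    # Minimal in-memory stand-in for the tool, keyed by relative path.
    store: dict[str, str] = {}

    def save(path: str, content: str, overwrite: bool = False) -> dict:
        if ".." in path:
            return {"status": "error", "reason": "path_rejected"}
        if path in store and not overwrite:
            return {"status": "error", "reason": "file_exists"}
        store[path] = content
        return {"status": "saved", "path": path}

    return save


def test_path_traversal_rejected():
    save = make_saver()
    assert save("../../etc/passwd", "x: 1")["reason"] == "path_rejected"


def test_conflict_without_overwrite():
    save = make_saver()
    save("revenue.yml", "title: v1")
    assert save("revenue.yml", "title: v2")["reason"] == "file_exists"
```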
Relationship to Cloud Chat Embeddable Dashboards¶
The Cloud chat task (embeddable-dashboards-in-chat-inline-preview-modal-expand-and-save-to-repo.md) builds a "Save Dashboard" button in the web UI. That task's save flow should call this MCP tool rather than wiring directly to the Cloud-specific Django functions (write_dashboard_yaml, GitService.commit).
Layering:
┌─────────────────────────────────────────────────┐
│ Surfaces (consumers of save_dashboard) │
│ ├── dft agent (terminal) │
│ ├── Cursor / Claude Code / Codex (via MCP) │
│ ├── Cloud chat UI (via tool dispatch) │
│ └── Playground (future) │
├─────────────────────────────────────────────────┤
│ MCP Tool: save_dashboard │
│ → validate YAML │
│ → resolve path (scoped) │
│ → write .yml file │
│ → return confirmation │
├─────────────────────────────────────────────────┤
│ Cloud-specific post-save hooks (Cloud only) │
│ → update_dashboard_cache() (Django model) │
│ → GitService.commit() (git add + commit) │
│ → DashboardSnapshot (thumbnail) │
└─────────────────────────────────────────────────┘
The MCP tool handles the universal part (validate + write file). The Cloud app adds its own post-save hooks (Django cache, git commit, snapshots) on top. Non-Cloud consumers (dft agent, IDE agents) get the file write without the Django overhead.
This task is the foundation — build the tool first, then the Cloud chat task wires its "Save" button to call dispatch_tool_call("save_dashboard", ...) and adds Cloud-specific post-processing.
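The layering can be expressed as a thin Cloud-side wrapper. Everything here is a sketch — cloud_save and the hook signature are hypothetical names; only the call through the shared dispatch layer is prescribed by this plan:

```python
def cloud_save(dispatch, post_save_hooks, arguments: dict) -> dict:
    # Universal part: the MCP tool validates and writes the file.
    result = dispatch("save_dashboard", arguments)
    if result.get("status") != "saved":
        return result  # validation/conflict errors propagate unchanged

    # Cloud-only part: cache update, git commit, snapshot, etc.
    for hook in post_save_hooks:
        hook(result)
    return result
```

Because hooks run only on a successful save, a validation failure never triggers a commit or cache update.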
Git Commit Behavior¶
The save_dashboard tool has an optional commit parameter that controls whether a git commit is created after writing the file:
"commit": {
"type": "boolean",
"description": "Git add + commit after saving (default from config)",
}
The default is configurable globally via DatafaceAIContext (or dataface.yml / environment):
- Cloud (Suite): Default commit=True — the Cloud app manages the git repo; saving a dashboard should commit it so it appears in the project history. The Cloud app may also add its own post-save hooks (Django cache update, snapshot generation) on top.
- IDE / MCP server (Cursor, Claude Code, Codex): Default commit=False — the user manages their own git workflow. The tool writes the file; the user decides when to commit.
- Terminal (dft agent): Default commit=False — same as IDE. The agent writes files to the project; the user commits when ready.
The per-call commit parameter overrides the default. If not provided, the context default applies.
Implementation: DatafaceAIContext gets an auto_commit_saves: bool = False field. The Cloud app sets this to True when constructing the context. The MCP server and CLI agent leave it as False. save_dashboard() checks the commit param and falls back to context.auto_commit_saves.
Future Considerations (Not M1)¶
- delete_dashboard — remove a saved dashboard
- rename_dashboard — move/rename
- save_query — persist a tested SQL query as a reusable partial
- Undo — track saves in session so the agent can revert
Implementation Progress¶
- Added save_dashboard to the canonical tool schemas, OpenAI wrapper surface, MCP server registration, and shared dispatch layer.
- Added DatafaceAIContext.auto_commit_saves and implemented commit override behavior in the tool (commit param overrides context default).
- Implemented save_dashboard() with path scoping via resolve_dashboard_path(), compile-before-write validation, overwrite protection, nested directory creation, and optional single-file git commit.
- Added the 7 plan tests plus commit-behavior coverage:
    - valid save creates file
    - invalid YAML does not write
    - conflict without overwrite
    - overwrite replaces file
    - path traversal rejected
    - nested directories created
    - missing .yml extension rejected
    - context auto-commit default honored
    - explicit commit: false overrides context default
- Updated the dashboard-building skill doc to include the save workflow.
QA Exploration¶
- QA exploration completed (or N/A for non-UI tasks)
Review Feedback¶
- Focused tests pass: uv run pytest tests/core/test_mcp.py tests/core/test_ai_tools.py tests/ai/test_tool_contracts.py -q
- just ci was run twice. Both full runs failed in unrelated existing tests that pass in isolation:
    - tests/core/test_inspect_server.py::TestServeExampleRouting::test_csv_example_uses_examples_root_for_assets
    - tests/docs/test_master_plans_build.py::test_master_plans_mkdocs_builds_in_strict_mode
    - tests/faketran/test_application_models.py::test_fake_companies_populate_application_database_models[fake_companies.pied_piper-240-23]
- Because just ci is not green, PR creation is currently blocked by repo PR workflow guardrails.
- cbox review found two required fixes:
    - unscoped default path handling for save_dashboard
    - commit-failure response incorrectly marking the entire save as failed
- Review cleared