Generate Dashboards in Microsoft Fabric
Introduction
Microsoft Fabric brings data engineering, science, real-time analytics, governance, and BI into a single surface: OneLake, Lakehouses/Warehouses, Dataflows Gen2, Notebooks, Semantic Models, and Power BI. That consolidation makes it practical to give an AI agent the end-to-end job of turning raw data into a governed, explainable dashboard. This article describes how to design an agent that proposes a semantic model, defines trustworthy measures, composes a Power BI report, and publishes a dashboard with the right security, refresh, and collaboration defaults.
Why an agent for dashboard generation
Most dashboards stall on three friction points: stitching data sources, agreeing on metric definitions, and packaging the result in a repeatable, governed artifact. An agent reduces this friction by negotiating explicit contracts at each step. It pulls metadata from Fabric’s catalog, drafts a semantic model aligned to business terms, proposes DAX measures with citations back to definitions, chooses visuals that reflect the metric’s grain and distribution, and ships the dashboard with guardrails (RLS/OLS, labels, endorsements, refresh, subscriptions). Crucially, the agent never claims success without receipts: dataset created, model updated, report published, dashboard pinned, and subscription scheduled.
Reference architecture inside Fabric
The agent lives alongside Fabric workspaces and uses the platform’s primitives rather than bespoke glue. It inspects OneLake items and lineage to find trustworthy sources, prefers certified or promoted datasets, and reads Semantic Model schemas (tables, relationships, perspectives). When data modeling is required, it generates a Dataflow Gen2 or a Lakehouse SQL view, then materializes a clean star schema. Report authoring occurs against the Semantic Model, not the raw tables, so the agent’s choices are transparent and portable. Publishing runs through Power BI’s REST APIs with deployment pipelines for dev → test → prod, and Purview/tenant policies govern sharing, labels, and cross-tenant behavior.
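As a hedged sketch of that publish path: the fragment below assumes an Azure AD bearer token with Power BI API scopes and uses the Power BI `imports` and deployment-pipeline `deployAll` REST endpoints; all ids, file paths, and stage numbers are placeholders, not a definitive implementation.

```python
# Minimal sketch of publishing plus pipeline promotion via the Power BI REST API.
import requests

API = "https://api.powerbi.com/v1.0/myorg"

def publish_pbix(token: str, workspace_id: str, pbix_path: str, name: str) -> str:
    """Import a .pbix into a workspace; returns the import id."""
    url = f"{API}/groups/{workspace_id}/imports?datasetDisplayName={name}"
    with open(pbix_path, "rb") as f:
        resp = requests.post(url,
                             headers={"Authorization": f"Bearer {token}"},
                             files={"file": f})
    resp.raise_for_status()
    return resp.json()["id"]

def deploy_to_next_stage(token: str, pipeline_id: str, source_stage: int) -> None:
    """Promote all items one stage along a deployment pipeline (dev → test → prod)."""
    resp = requests.post(f"{API}/pipelines/{pipeline_id}/deployAll",
                         headers={"Authorization": f"Bearer {token}"},
                         json={"sourceStageOrder": source_stage})
    resp.raise_for_status()
```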
What the agent actually does
A well-behaved dashboard agent follows a disciplined loop. It begins with a business intent (“show weekly active users by product and region with churn risk”) and translates that into a data contract: dimensions, facts, keys, and freshness targets. It drafts DAX measures (e.g., `WAU`, `ChurnRate`, `RevenueYTD`) and attaches short descriptions sourced from the model’s documentation. It then assembles visuals chosen for the measure type—trends for rates, bars for categorical comparisons, decomposition trees for drill—while applying a readable layout, color-blind-safe defaults, and compact number formats. Before publishing, it validates row-level security roles for the intended audience and propagates the highest sensitivity label present in the model. Finally, it produces an explainer note that names the measures, filters, and last refresh, and pins key visuals to a team dashboard.
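A minimal sketch of what such a data contract could look like in code; the class and field names are illustrative, not a Fabric API:

```python
# One way to make the intent-to-contract translation explicit and testable.
from dataclasses import dataclass, field

@dataclass
class DataContract:
    dimensions: list[str]           # e.g. ["Date", "Product", "Region"]
    facts: list[str]                # e.g. ["Sessions", "Activations"]
    keys: dict[str, str]            # fact -> join key, e.g. {"Sessions": "UserId"}
    freshness_minutes: int          # max tolerated staleness at render time
    filters: dict[str, str] = field(default_factory=dict)

weekly_active = DataContract(
    dimensions=["Date", "Product", "Region"],
    facts=["Sessions"],
    keys={"Sessions": "UserId"},
    freshness_minutes=24 * 60,
    filters={"Region": "NA"},
)
```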
Governance first: definitions, security, lineage
The agent treats the Semantic Model as the contract. It proposes, but does not silently invent, business logic: every calculated column or measure includes a description and a pointer to the canonical definition. If the audience requires role-based scoping, the agent ensures RLS roles exist and can impersonate the recipient to test the experience. Sensitivity labels flow from source to report; when labels are Confidential or higher, the agent disables export and watermarks subscriptions by default. Lineage must be healthy: the agent refuses to publish from broken or uncertified chains unless a reason code is recorded and the data owner is mentioned in the audit note.
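To test the recipient’s experience, the agent can run a DAX probe under impersonation. A sketch against the Power BI `executeQueries` endpoint, which accepts an `impersonatedUserName` so RLS rules apply as that user; the query and ids are placeholders:

```python
# Preview what a specific recipient would see, rather than the agent's identity.
import requests

API = "https://api.powerbi.com/v1.0/myorg"

def preview_as_recipient(token: str, dataset_id: str, recipient_upn: str) -> list:
    body = {
        "queries": [{"query": "EVALUATE TOPN(5, VALUES(Regions[Region]))"}],
        "impersonatedUserName": recipient_upn,  # RLS evaluated as this user
    }
    resp = requests.post(f"{API}/datasets/{dataset_id}/executeQueries",
                         headers={"Authorization": f"Bearer {token}"},
                         json=body)
    resp.raise_for_status()
    return resp.json()["results"][0]["tables"][0]["rows"]
```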
A practical workflow in Fabric
A product team asks for a “North America onboarding funnel dashboard with weekly refresh.” The agent searches the workspace for a certified “Product Analytics” model, finds tables for sessions, signups, activations, and regions, and verifies last refresh within the agreed SLA. It proposes measures—`SignupRate`, `ActivationWithin7Days`, `TimeToFirstValue`—and shows sample results filtered to NA. After a quick approval, it generates a Power BI report with funnel and cohort visuals, creates a bookmark for “North America, last 12 weeks,” publishes to the team workspace, and pins KPIs to the “Growth Hub” dashboard. It schedules Sunday 02:00 refresh, adds a Monday 08:05 email subscription to the Growth channel, and logs receipts: dataset ID, report ID, dashboard ID, subscription ID, RLS roles validated, and label applied.
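The Sunday 02:00 refresh in this example maps to a single call against the dataset’s `refreshSchedule` endpoint. A hedged sketch, with placeholder ids and time zone:

```python
# Set the weekly refresh described above via the Power BI REST API.
import requests

API = "https://api.powerbi.com/v1.0/myorg"

def schedule_weekly_refresh(token: str, group_id: str, dataset_id: str) -> None:
    schedule = {
        "value": {
            "enabled": True,
            "days": ["Sunday"],
            "times": ["02:00"],
            "localTimeZoneId": "UTC",  # substitute the team's agreed zone
        }
    }
    resp = requests.patch(
        f"{API}/groups/{group_id}/datasets/{dataset_id}/refreshSchedule",
        headers={"Authorization": f"Bearer {token}"},
        json=schedule,
    )
    resp.raise_for_status()
```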
The agent’s contract (concise and testable)
role: "FabricDashboardAgent"
scope: >
Propose semantic measures and generate a Power BI report + dashboard from Microsoft Fabric assets.
Prefer certified/promoted models; enforce RLS and sensitivity labels; never expose data the recipient cannot query.
output:
type: object
required: [summary, model_plan, measures, visuals, publishing, governance, receipts]
properties:
summary: {type: string, maxWords: 80}
model_plan:
type: object
required: [dataset_id, lineage_ok, freshness_minutes]
measures:
type: array
items: {type: object, required: [name, dax, description, source_claim]}
visuals:
type: array
items: {type: object, required: [type, bound_measures, filters, rationale]}
publishing:
type: object
required: [workspace_id, report_name, dashboard_name, bookmarks[]]
governance:
type: object
required: [rls_roles_validated, sensitivity_label, export_disabled, watermark]
receipts:
type: array # dataset/report/dashboard/subscription ids
policy: "Certified-first. If uncertified, require owner mention and reason code; record in audit log."
citation_rule: "Cite the source measure description or definition for each DAX measure."
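Because the contract is schema-shaped, it can be enforced mechanically. A minimal sketch using the `jsonschema` package against a trimmed subset of the schema above; the sample output values are illustrative:

```python
# Reject any agent output that omits a required section or measure field.
from jsonschema import validate

OUTPUT_SCHEMA = {
    "type": "object",
    "required": ["summary", "model_plan", "measures", "visuals",
                 "publishing", "governance", "receipts"],
    "properties": {
        "measures": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["name", "dax", "description", "source_claim"],
            },
        },
    },
}

validate(
    instance={
        "summary": "WAU dashboard for NA",
        "model_plan": {"dataset_id": "d-123", "lineage_ok": True, "freshness_minutes": 45},
        "measures": [{"name": "WAU", "dax": "WAU := ...",
                      "description": "Distinct active users in last 7 days",
                      "source_claim": "doc:product_analytics#wau"}],
        "visuals": [], "publishing": {}, "governance": {}, "receipts": [],
    },
    schema=OUTPUT_SCHEMA,
)
```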
Minimal implementation sketch (Fabric/Power BI concepts)
Below is a conceptual Python outline that uses the Power BI REST and Fabric item APIs. In production, add robust error handling, idempotency keys, and CI tests with golden traces.
```python
# Pseudocode-ish; wire to official SDKs/APIs. Helper calls such as
# search_catalog, create_or_update_report, and add_visual are placeholders.

ENDORSEMENT_RANK = {"certified": 2, "promoted": 1, None: 0}

def choose_semantic_model(intent, catalog):
    candidates = search_catalog(intent, catalog)
    # Rank: certified > promoted > unendorsed; fresher wins ties.
    return max(candidates, key=lambda c: (ENDORSEMENT_RANK.get(c["endorsement"], 0),
                                          -c["staleness_minutes"]))

def propose_measures(model, intents):
    defs = []
    for i in intents:
        if i == "weekly active users":
            # Example translation: "weekly active users" → rolling 7-day WAU.
            dax = ("WAU := CALCULATE(DISTINCTCOUNT(Users[UserId]), "
                   "DATESINPERIOD('Date'[Date], MAX('Date'[Date]), -7, DAY))")
            defs.append({"name": "WAU", "dax": dax,
                         "description": "Distinct active users in last 7 days",
                         "source_claim": "doc:product_analytics#wau"})
    return defs

def author_report(dataset_id, measures):
    report_id = create_or_update_report(dataset_id, layout="12-column-grid",
                                        theme="accessible")
    for m in measures:
        add_measure(dataset_id, m["name"], m["dax"], m["description"])
    add_visual(report_id, kind="line", fields=["Date", "WAU"], filters=["Region=NA"])
    add_visual(report_id, kind="bar", fields=["Region", "ActivationWithin7Days"],
               filters=["Date last 12 weeks"])
    return report_id

def publish_dashboard(report_id, workspace_id):
    dash_id = create_dashboard(workspace_id, name="Growth Hub")
    pin_tile(dash_id, report_id, visual="WAU_trend", title="WAU (NA, 12 weeks)")
    bookmark(report_id, name="NA-12w", state={"Region": "NA", "Date": "12w"})
    sub_id = create_subscription(report_id, group="Growth", time="Mon 08:05",
                                 export_disabled=True, watermark=True)
    return {"dashboard_id": dash_id, "subscription_id": sub_id}

def enforce_governance(dataset_id, report_id, audience_groups):
    rls_ok = validate_rls(dataset_id, audience_groups)
    label = get_sensitivity_label(dataset_id)
    export_disabled = label in {"Confidential", "Highly Confidential"}
    if export_disabled:
        set_export(report_id, allow=False)
    return {"rls_roles_validated": rls_ok, "sensitivity_label": label,
            "export_disabled": export_disabled, "watermark": True}
```
Design notes that save real time
Treat the model as an API. The agent should fail fast if the semantic model is missing keys, relationships, or descriptions. It is better to propose a Dataflow Gen2 or Lakehouse view to fix the schema than to hide complexity inside DAX. Prefer perspectives to keep the authoring surface small. Use named bookmarks as part of the publishing contract so viewers always land on the intended slice. For performance, the agent should switch to aggregations or DirectQuery for Power BI datasets when volumes demand it, but only after measuring query folding and refresh times against the agreed SLA.
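One way to measure refresh times against the SLA before switching storage modes is the dataset’s refresh history. A hedged sketch assuming the Power BI `refreshes` endpoint; the SLA threshold, window, and ids are placeholders:

```python
# Gate storage-mode decisions on observed refresh durations, not guesses.
from datetime import datetime
import requests

API = "https://api.powerbi.com/v1.0/myorg"

def refresh_within_sla(token: str, group_id: str, dataset_id: str,
                       sla_minutes: int, window: int = 10) -> bool:
    resp = requests.get(
        f"{API}/groups/{group_id}/datasets/{dataset_id}/refreshes?$top={window}",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    for r in resp.json()["value"]:
        if r.get("status") != "Completed":
            return False  # failed or still-running refreshes break the SLA
        start = datetime.fromisoformat(r["startTime"].rstrip("Z"))
        end = datetime.fromisoformat(r["endTime"].rstrip("Z"))
        if (end - start).total_seconds() > sla_minutes * 60:
            return False
    return True
```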
Common pitfalls and how to avoid them
Agents that “hallucinate” metrics erode trust. Every measure must carry a description sourced from the model or documentation; new measures require owner approval. Over-eager visuals hurt comprehension: the agent should justify each choice with a brief rationale tied to the measure’s shape and audience. Security regressions often come from previews rendered as the agent’s identity; always impersonate the recipient for visual tests and refuse to render if impersonation is not possible. Finally, keep a clean audit trail: dataset/report/dashboard IDs, endorsements, sensitivity, RLS role tested, refresh schedule, and the exact filter state used in all examples.
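One possible shape for that audit trail, mirroring the receipts listed above; the record type and log format are illustrative, not a Fabric API:

```python
# Append-only audit log: one JSON line per published artifact.
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class AuditReceipt:
    dataset_id: str
    report_id: str
    dashboard_id: str
    subscription_id: str
    endorsement: str          # "certified" | "promoted" | "none"
    sensitivity_label: str
    rls_role_tested: str
    refresh_schedule: str     # e.g. "Sunday 02:00 UTC"
    filter_state: dict        # exact slice used in previews and examples

def log_receipt(receipt: AuditReceipt, path: str = "audit.jsonl") -> None:
    with open(path, "a") as f:
        f.write(json.dumps(asdict(receipt)) + "\n")
```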
Conclusion
Fabric’s integrated stack allows an AI agent to turn intent into a governed dashboard: select or shape a semantic model, codify measures, compose an explainable report, and publish a shareable dashboard—complete with security, labels, and refresh. If you encode those steps as a contract with receipts and treat the semantic model as the source of truth, you’ll get faster cycles, higher trust, and dashboards that teams actually use.