DP-600 Fabric Analytics Engineer – Structured Study Notes
Source: Dev.to
Overview
The DP‑600 exam focuses on designing, building, governing, and optimizing analytics solutions in Microsoft Fabric.
Key responsibilities include:
- Maintaining a data analytics solution
- Preparing data
- Implementing and managing semantic models
Governance, Administration, and Lifecycle
Security & Governance
- Layers of security:
  - Workspace‑level roles
  - Item‑level permissions
  - Data‑level security (RLS / CLS / OLS)
  - File‑level security
  - Sensitivity labels
  - Endorsement
- Row‑Level Security (RLS) – restricts the rows a user can see (e.g., a “Bangladesh” role sees only rows where Country = 'BD').
- Column‑Level Security (CLS) – hides columns or whole tables from specific roles (e.g., hide Salary, SSN).
- File‑system security – separate permissions for browsing the OneLake file area versus querying tables via the semantic model.
- Sensitivity labels – classify data (Public, General, Confidential, Highly Confidential – No Export) and can enforce:
  - Block export to Excel/CSV
  - Block publish to the web
  - Restrict external sharing
- Endorsement – signals trustworthiness of items:
  - Promoted – team‑level confidence
  - Certified – organization‑level single source of truth (restricted to data stewards/admins)
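How RLS and CLS combine to shape what a role sees can be sketched in plain Python. This is illustrative only: the role names, table, and rules below are made up, and in Fabric they would be defined on the semantic model (RLS as a DAX filter such as `[Country] = "BD"`, CLS/OLS via tools like Tabular Editor), not in application code.

```python
# Illustrative simulation of RLS (row filtering) and CLS (column hiding).
# All names here are hypothetical examples, not real Fabric objects.

ROWS = [
    {"Country": "BD", "Employee": "Ana", "Salary": 50000},
    {"Country": "US", "Employee": "Ben", "Salary": 90000},
    {"Country": "BD", "Employee": "Rik", "Salary": 55000},
]

ROLES = {
    # RLS: a row predicate; CLS: columns hidden from the role.
    "Bangladesh": {
        "row_filter": lambda r: r["Country"] == "BD",
        "hidden_columns": {"Salary"},
    },
    "Admin": {"row_filter": lambda r: True, "hidden_columns": set()},
}

def visible_rows(role_name):
    """Return the rows and columns this role is allowed to see."""
    role = ROLES[role_name]
    return [
        {k: v for k, v in row.items() if k not in role["hidden_columns"]}
        for row in ROWS
        if role["row_filter"](row)
    ]

print(visible_rows("Bangladesh"))
```

The key point for the exam: RLS and CLS apply per role on the semantic model, so the same report can show different data to different users.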
Workspace Roles
| Role | Capabilities |
|---|---|
| Admin | Full control: change settings, add/remove users, delete items. |
| Member / Contributor | Create and edit items (Lakehouse, Dataflow, Reports, Pipelines, Notebooks). Cannot change workspace‑level admin settings (e.g., capacity). |
| Viewer | Read‑only: view reports, dashboards, semantic models, etc. Cannot create, edit, or publish new items. |
Exam tip: A user who can open a report but cannot edit it or create a new dataflow is likely a Viewer; they need at least Contributor rights for those actions.
Item‑Level Permissions
Each item inside a workspace (Lakehouse, Warehouse, Semantic model, Report, Notebook, Dataflow) has its own permission set.
A user may be a Contributor at the workspace level but still lack:
- Build permission on a semantic model → cannot create new reports or use “Analyze in Excel”.
Exam tip: “User can view a report but cannot use ‘Analyze in Excel’ or create a new report from the dataset.” → Missing Build permission on the semantic model.
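The interaction between workspace roles and item‑level Build permission can be sketched as a small decision model. This is a simplified illustration of the exam scenario above, not the actual Fabric authorization logic:

```python
# Simplified model: workspace role controls create/edit of items, while
# "Analyze in Excel" and building new reports on a dataset require Build
# permission on the semantic model, regardless of report-view access.

WORKSPACE_ROLE_CAN_EDIT = {
    "Admin": True,
    "Member": True,
    "Contributor": True,
    "Viewer": False,
}

def can_create_dataflow(workspace_role):
    """Creating items like a Dataflow needs at least Contributor rights."""
    return WORKSPACE_ROLE_CAN_EDIT[workspace_role]

def can_analyze_in_excel(workspace_role, has_build_permission):
    """Viewing a report does not grant Build; the item permission decides."""
    return has_build_permission

# A Viewer without Build permission can open shared reports, but:
print(can_create_dataflow("Viewer"))          # cannot create a dataflow
print(can_analyze_in_excel("Viewer", False))  # cannot use Analyze in Excel
```

This mirrors the exam pattern: fixing the dataflow problem means raising the workspace role, while fixing “Analyze in Excel” means granting Build on the semantic model.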
Tenant, Capacity, and Workspace Settings
| Scope | Primary Controls |
|---|---|
| Tenant | Organization‑wide policies: Fabric enablement, export policies, guest/external sharing, sensitivity‑label integration, trial activation. |
| Capacity | Compute configuration (e.g., F64, F128, Premium), region, workload settings (Spark, Dataflows, Pipelines, DirectLake), pause/resume, concurrency limits, memory. |
| Workspace | Team/project scope; assigned capacity (shared vs. Fabric/Premium); stores Lakehouses, Warehouses, Dataflows, Pipelines, Semantic models, Reports. |
Exam tip: “A workspace does not show the option to create a Lakehouse or Dataflow Gen2, but users can still create reports.” → The workspace is on a shared capacity or the tenant has disabled Fabric item creation.
Common Capacity‑Related Issues
- Dataflows stuck in queue → capacity under pressure / workload disabled
- Notebooks fail to start → Spark capacity exhausted
- DirectLake slowdown → overloaded capacity, cache eviction
Development Lifecycle
Git Integration
- Connect a Fabric workspace to a Git repository.
- Store:
  - Reports as PBIP / PBIR (text‑based)
  - Notebooks (.ipynb or scripts)
  - SQL scripts
  - Pipeline definitions (code)
Benefits: collaboration, PR‑based review, history/rollback, CI/CD integration.
PBIX vs. PBIP / PBIR
| Format | Characteristics |
|---|---|
| PBIX | Binary, traditional Power BI file; harder to source‑control. |
| PBIP / PBIR | Text‑based project structure; separates metadata, model, and layout into files/folders; Git‑friendly. |
Exam focus: Understand why PBIP/PBIR is preferred for version control and automated deployments.
Deployment Pipelines
- Stages: Development → Test → Production
- Deployable items: semantic models (datasets), reports, dashboards, some Fabric items (via scripts).
- Stage‑specific rules/parameters (e.g., different data‑source connections per environment).
Exam tip: “Promote changes from Dev to Prod while pointing to different databases.” → Use deployment pipeline parameters for data‑source bindings.
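The idea behind stage‑specific parameters can be sketched as a mapping from pipeline stage to data‑source binding. The server and database names below are placeholders; in Fabric these bindings are configured as deployment rules in the pipeline, not in code:

```python
# Illustrative sketch of per-stage data-source bindings, as applied by
# deployment pipeline rules. All connection details are hypothetical.

STAGE_BINDINGS = {
    "Development": {"server": "dev-sql.example.com",  "database": "SalesDev"},
    "Test":        {"server": "test-sql.example.com", "database": "SalesTest"},
    "Production":  {"server": "prod-sql.example.com", "database": "Sales"},
}

def connection_for(stage):
    """Resolve the connection string the semantic model uses in a stage."""
    b = STAGE_BINDINGS[stage]
    return f"Server={b['server']};Database={b['database']}"

# The same report/model artifact is promoted unchanged; only the binding
# resolved at each stage differs.
print(connection_for("Production"))
```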
XMLA Endpoint
Provides enterprise‑level management of semantic models via external tools such as:
- Tabular Editor
- SQL Server Management Studio (SSMS)
- Custom scripts for deployment and partition management
Typical actions include partitioning large tables, refreshing models, and applying role‑based security programmatically.
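Operations against the XMLA endpoint are commonly expressed as TMSL (Tabular Model Scripting Language) JSON, which SSMS can execute. A minimal sketch of building a TMSL refresh command follows; the database and table names are placeholders:

```python
import json

def tmsl_refresh(database, table=None, refresh_type="full"):
    """Build a TMSL 'refresh' command targeting a database or one table."""
    obj = {"database": database}
    if table:
        obj["table"] = table
    return json.dumps(
        {"refresh": {"type": refresh_type, "objects": [obj]}},
        indent=2,
    )

# Hypothetical model and table names for illustration:
print(tmsl_refresh("SalesModel", table="FactSales"))
```

Scripting commands like this is what makes programmatic partition management and scheduled refresh via the XMLA endpoint possible.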
Monitoring & Impact Analysis
- Lineage visualizations show data flow from source (Lakehouse, Warehouse) through dataflows, pipelines, and semantic models to reports.
- Impact analysis helps assess downstream effects of schema changes or security updates.
Summary of Key Exam Patterns
| Scenario | Underlying Concept |
|---|---|
| User can view a report but cannot edit or create a dataflow | Viewer role vs. needed Contributor rights |
| User cannot use “Analyze in Excel” on a dataset | Missing Build permission on the semantic model |
| User can query a Lakehouse table via SQL but cannot open Files area | Lack of OneLake file‑system permission |
| Export blocked after applying a label | Highly Confidential – No Export sensitivity label |
| Central BI team wants single source of truth endorsement | Use Certification endorsement |
| Workspace missing Lakehouse/Dataflow creation options | Workspace on shared capacity or tenant‑level Fabric item creation disabled |
| Promote Dev → Prod with different DB connections | Deployment pipeline with environment‑specific parameters |