Performance Optimization
A fast report isn't just better UX — it's cheaper to run. Every query consumes capacity units (CUs) from your Fabric capacity. Optimize once, save capacity forever.
Optimization Priority Order
Fix these in order — each builds on the previous:
- Reduce visual count (biggest impact, easiest fix)
- Optimize DAX measures (medium effort, high impact)
- Hide unused columns (easy, DirectLake-specific)
- Ensure query folding (Power Query, one-time fix)
- Tune refresh scheduling (operational, ongoing)
1. Reduce Visual Count
Every visual on a page generates at least one query. A page with 30 visuals generates 30+ queries on every page load and every slicer change.
| Visual Count | Typical Page Load | CU Impact |
|---|---|---|
| 5–10 | < 2 seconds | Low |
| 10–20 | 2–5 seconds | Medium |
| 20–30 | 5–15 seconds | High |
| 30+ | 15+ seconds or timeout | Excessive |
Fixes:
- Combine visuals: Two cards showing "Total" and "Average" can be a single card driven by a measure toggle
- Use bookmarks for views: Instead of showing everything at once, use bookmarks to switch between views
- Move detail to drillthrough: Keep the summary page light; put detailed tables on a drillthrough page that loads on demand
- Remove decorative visuals: Shapes, images, and text boxes that don't show data still contribute to render time
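One way to combine the "Total" and "Average" cards is a disconnected toggle table driving a single measure. A sketch, assuming a hypothetical disconnected 'Metric Toggle' table (used as a slicer) and a hypothetical [Average Storage GB] measure alongside the document's [Total Storage GB]:

```dax
-- Sketch: 'Metric Toggle' is a hypothetical disconnected table whose [Metric]
-- column contains "Total" and "Average"; [Average Storage GB] is a hypothetical measure.
Selected Metric =
SWITCH(
    SELECTEDVALUE('Metric Toggle'[Metric], "Total"),  -- default to Total when no selection
    "Total", [Total Storage GB],
    "Average", [Average Storage GB]
)
```

One card bound to [Selected Metric] then replaces two cards — one query instead of two on every page load.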
2. Optimize DAX Measures
Use Variables (VAR/RETURN)
```dax
-- BAD: inline expressions — harder to read, and sub-results can't be reused or inspected
YoY Change =
    [Total Storage GB] -
        CALCULATE([Total Storage GB], SAMEPERIODLASTYEAR('Date'[Date]))
```

```dax
-- GOOD: each value computed once, named, and easy to inspect in DAX Studio
YoY Change =
VAR Current = [Total Storage GB]
VAR PriorYear = CALCULATE([Total Storage GB], SAMEPERIODLASTYEAR('Date'[Date]))
RETURN
    Current - PriorYear
```
Avoid FILTER(ALL(...)) — Use REMOVEFILTERS
```dax
-- BAD: materializes the entire Person table, then keeps every row
% of Total = DIVIDE(
    [Total Storage GB],
    CALCULATE([Total Storage GB], FILTER(ALL(Person), TRUE()))
)
```

```dax
-- GOOD: removes the filter without materializing the table
% of Total = DIVIDE(
    [Total Storage GB],
    CALCULATE([Total Storage GB], REMOVEFILTERS(Person))
)
```
Avoid Iterators on Large Tables
SUMX, AVERAGEX, MAXX iterate row-by-row. On a 10-million-row table, that's 10 million iterations per visual per query.
```dax
-- BAD: row-by-row iteration over the whole table
Total GB = SUMX(EXO_Storage, EXO_Storage[Storage_Used_Bytes] / 1073741824)
```

```dax
-- GOOD: aggregate first, then convert once (1073741824 = 1024^3 bytes per GB)
Total GB = DIVIDE(SUM(EXO_Storage[Storage_Used_Bytes]), 1073741824)
```
3. Hide Unused Columns (DirectLake)
DirectLake loads all columns in every referenced table into the VertiPaq engine. If your EXO_Storage table has 50 columns but your report uses 8, the other 42 columns are loaded for nothing.
Fix:
- In Model view, right-click unused columns → Hide in report view
- Better: In the semantic model (TMDL), set `isHidden: true` on unused columns
- Best: Create a Silver/Gold view that only includes the columns you need
Impact: Reducing from 50 to 8 columns can cut model size by 80%, keeping you within your DirectLake tier limit.
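In TMDL, the hidden-column fix looks like the sketch below. The column name is illustrative, and TMDL serialization typically emits boolean properties that are true as the bare property name (`isHidden`):

```tmdl
table EXO_Storage

	/// Hypothetical unused column — hidden so report authors never reference it in visuals
	column Mailbox_Internal_Notes
		dataType: string
		isHidden
		sourceColumn: Mailbox_Internal_Notes
```

Hiding keeps the column out of the field list; moving it out of the Gold view entirely (the "Best" option above) removes it from the model altogether.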
4. Ensure Query Folding (Power Query)
Query folding means Power Query translates your M transformations into SQL and pushes them to the source. If folding breaks, Power Query downloads the entire table into memory and transforms locally — which is vastly slower and more expensive.
How to Check
In Power Query Editor:
- Right-click a step → View Native Query
- If you see SQL → folding is working
- If it says "This step can't be folded" → you broke folding at this step
What Breaks Folding
| Operation | Folds? | Alternative |
|---|---|---|
| Column renames | Yes | — |
| Type conversions | Yes | — |
| Filters, sorts | Yes | — |
| Group By | Yes | — |
| Custom M functions | No | Inline the logic |
| Table.AddColumn with M expression | Sometimes | Test with "View Native Query" |
| Merging two folded queries | Sometimes | Ensure both sources fold |
| Text.Combine, complex string ops | No | Do in DAX instead |
The Rule
Do as much as possible in steps that fold. Move non-folding operations to the end of the query chain. Once folding breaks at a step, every step after it also doesn't fold.
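The rule in M terms — a sketch with hypothetical server, database, and column names. The early steps fold into a single SQL query at the source; the one non-folding step is deliberately last:

```m
let
    // These steps fold: Power Query pushes them to the source as SQL
    Source  = Sql.Database("sql-server-01", "AggieDB"),  // hypothetical server/database
    Storage = Source{[Schema = "dbo", Item = "EXO_Storage"]}[Data],
    Active  = Table.SelectRows(Storage, each [Storage_Used_Bytes] > 0),
    Slim    = Table.SelectColumns(Active, {"Mailbox", "Storage_Used_Bytes"}),

    // This step does not fold (complex string op) — placed last so every step
    // above still executes at the source, and only this one runs locally
    Labeled = Table.AddColumn(Slim, "Label", each Text.Combine({"mbx-", [Mailbox]}))
in
    Labeled
```

If `Labeled` were moved before the filter and column selection, folding would break there and the source would ship the full table across the wire.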
5. Tune Refresh Scheduling
See Database Decision Matrix: Refresh Architecture for detailed scheduling guidance.
Key points:
- Stagger refreshes by 15–30 minutes
- Run heavy refreshes off-peak (overnight)
- Use incremental refresh for large tables
- Monitor with the Capacity Metrics App
Diagnostic Tools
| Tool | What It Shows | When to Use |
|---|---|---|
| Performance Analyzer (in Desktop or Service) | Per-visual query time, DAX vs render time | Diagnosing slow visuals |
| DAX Studio | Query plans, server timings, storage engine vs formula engine | Deep DAX optimization |
| Capacity Metrics App | CU consumption per workspace, per operation | Capacity right-sizing |
| Power Query "View Native Query" | Whether folding is active | Checking ETL efficiency |
Related Documentation
- CU smoothing, spike diagnosis, refresh scheduling
- Optimized DAX patterns for AggieDB