The counter-intuitive finding you can bring to the board
How the number breaks down
Methodology summary
Four views, two axes
Every practice is scored on Revenue drivers (demand, capture, conversion) and EBITDA drivers (expense ratios, throughput, waste), drawing on two independent sources (PBI financials, lagging; Dental Intel operations, leading).
Lever math
Revenue levers: captured revenue × marginal margin (avg+25pp, cap 65%) × 30% realization, capped at 8% of TTM.
Expense levers: expense-ratio gap × revenue base × 30% realization (flows 1:1 to EBITDA).
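The two lever formulas above can be sketched as follows. This is a minimal illustration; the function names, signatures, and example inputs are assumptions, not taken from the actual pipeline.

```python
def revenue_lever_ebitda(captured_revenue, avg_margin, ttm_revenue,
                         realization=0.30, margin_bump=0.25,
                         margin_cap=0.65, ttm_cap=0.08):
    """Revenue lever: captured revenue flows through marginal margin,
    discounted by the 30% realization factor, capped at 8% of TTM."""
    marginal_margin = min(avg_margin + margin_bump, margin_cap)
    impact = captured_revenue * marginal_margin * realization
    return min(impact, ttm_cap * ttm_revenue)


def expense_lever_ebitda(revenue_base, current_ratio, target_ratio,
                         realization=0.30):
    """Expense lever: each point of ratio gap flows 1:1 to EBITDA."""
    return revenue_base * (current_ratio - target_ratio) * realization


# e.g. $1M of captured revenue at a 20% average margin on a $10M-TTM
# practice: min(0.20 + 0.25, 0.65) = 0.45 marginal margin,
# 1,000,000 * 0.45 * 0.30 = $135K, well under the $800K (8% of TTM) cap.
```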
Empirical validation
24 months of DI history × 13 metrics tested for within-practice predictive power. Strongest leading indicator: grossHygieneProduction (|r|=0.62 @ lag 3). Methodology tab has full matrix.
PBI/DI stage disagreement — rows worth auditing
For each maturity curve we have two candidate scores (Power BI and Dental Intel). When they disagree by 2+ stages on the same practice, it's a data-quality flag — usually a metric-definition difference, period misalignment, or a partner brand whose mapping isn't right. Audit these rows first.
| Practice | ROD | PBI Rev | DI Rev | PBI EBITDA | DI EBITDA | Why we flagged it |
|---|---|---|---|---|---|---|
Top 10 $ opportunities
| # | Practice · ROD | TTM Rev | Margin | Revenue Driver | Rev $ | Expense Driver | Exp $ | Combined $ |
|---|---|---|---|---|---|---|---|---|
By Regional Ops Director
| ROD | Practices | TTM Revenue | TTM EBITDA | Margin | Top-lever $ (12mo) |
|---|---|---|---|---|---|
Coverage disclosure
What's next
- Monthly refresh — move from snapshot to automated month-end pipeline, feeding both DI + Gen4 PBI
- Complete INTACCT P&L coverage — currently 84 practices; adding the remaining ~180 unlocks practice-level expense-ratio levers network-wide
- Correlation re-run — re-running with 36+ months of DI history and richer features (YoY change, trend z-scores) will strengthen the empirical signal
- ROD drill-down dashboards — ROD-specific rollup with their own top-10 leverage list, exportable
Comparison Grid
Every in-scope practice on a single sortable surface. Revenue Maturity and EBITDA Maturity stages are the consolidated PBI-primary scores; the PBI/DI column flags rows where the two sources disagree by 2+ stages and need an audit.
| Practice · ROD | Revenue Maturity | EBITDA Maturity | TTM Rev | TTM EBITDA | Margin | PBI/DI | Top Lever $ |
|---|---|---|---|---|---|---|---|
EBITDA Leverboard
Every practice ranked by top-lever 12-month EBITDA upside, with projected margin delta after execution.
| Practice · ROD | TTM Rev | Margin | Top Revenue Lever | Revenue $ (12mo) | Top EBITDA Lever | EBITDA $ (12mo) | Combined $ |
|---|---|---|---|---|---|---|---|
1 · Driver classification — literature vs our data
2 · Multivariate combos — is 1 driver enough?
| Rank | Drivers | Adj R² | n obs |
|---|---|---|---|
3 · Segment-specific leading drivers
4 · EBITDA margin cross-section (n=52-59)
| Driver / ratio | r | p | Interpretation |
|---|---|---|---|
5 · Dashboard recommendations (empirically-grounded)
Two consolidated maturity curves
Every practice is scored on two independent axes — Revenue Maturity (top-line growth posture) and EBITDA Maturity (margin and cost-discipline posture). Each axis uses Power BI financial data as the primary source and Dental Intel operational data as the fallback when PBI is missing. Behind the scenes we still compute four sub-scores (PBI-Revenue, PBI-EBITDA, DI-Revenue, DI-EBITDA) for audit; when PBI and DI disagree by 2+ stages on the same axis we flag the practice for review on the Executive Brief tab.
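The PBI-primary / DI-fallback consolidation and the 2+ stage audit flag described above can be sketched as follows, assuming stages are integers 1–5; the function and parameter names are illustrative, not the pipeline's own.

```python
def consolidate_stage(pbi_stage, di_stage):
    """One axis (Revenue or EBITDA): PBI is primary, DI is the fallback
    when PBI is missing; a 2+ stage gap between the two candidate
    scores flags the practice for audit on the Executive Brief tab."""
    stage = pbi_stage if pbi_stage is not None else di_stage
    flagged = (pbi_stage is not None and di_stage is not None
               and abs(pbi_stage - di_stage) >= 2)
    return stage, flagged
```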
The five stages — what each one really means
Stages apply identically to both axes. A practice can be Revenue S4 while EBITDA S2 (growing fast, leaking profit) — the lever engine targets exactly that gap.
S1 — The practice is too new or too small for its trajectory to be readable. Production base is low and margin is fragile — one bad month can flip the picture entirely. Plays here are about getting on the curve at all, not optimizing it.
S2 — Momentum is real and the practice is no longer just spinning up — there's a rising trajectory you can see in the numbers. But repeatability hasn't been proven yet; one ROD churn or hygiene departure can erase 3 months of progress. Levers here focus on locking in the systems behind the growth.
S3 — The practice is operationally real — capacity is filling, hygiene is producing, the front office runs without daily intervention. The constraint shifts from "can we run this?" to "can we run this at higher volume without breaking it?" Levers here are about scaling systems, not proving demand.
S4 — Fundamentals are strong, demand is steady, the team is mature. The next dollar of EBITDA doesn't come from more patients — it comes from doing the same work with tighter cost ratios, fewer broken appointments, and better case acceptance. This is where the EBITDA Levers earn their keep.
S5 — The practice is at or near its production ceiling — adding patients doesn't add output because the chairs and providers are full. Optimization plays plateau here too: the discipline is already in place. Future growth requires capacity expansion (more providers, more chairs, more days) rather than performance improvement.
Scoring weights (v2)
Revenue drivers vs expense drivers
Revenue S-Curve is scored on metrics that drive top-line: demand, capture, conversion, recall. EBITDA S-Curve is scored on expense drivers: cost ratios, labor waste, throughput efficiency. They move independently — a practice can be strong on one and weak on the other, and the levers for each are different.
Lever $ math — how every $ number is computed
Revenue levers flow through marginal margin:
EBITDA_impact = revenue_captured × marginal_margin × realization_factor
marginal_margin = min(avg_margin + 25pp, 65%)
realization_factor = 30% (12-month realistic capture)
cap = 8% of annual revenue base (sanity check)
avg_margin uses real Gen4 TTM EBITDA margin when available
Expense levers flow 1:1 (reducing an expense ratio by 1pp on $10M revenue = $100K direct EBITDA):
EBITDA_impact = revenue_base × (current_ratio − target_ratio) × realization_factor
target_ratio = portfolio top-quartile (p25 of the expense-ratio distribution)
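A sketch of deriving that target ratio — note that for an expense ratio, "top quartile" means the low end of the distribution, i.e. the 25th percentile. The portfolio numbers below are illustrative, not real data.

```python
import numpy as np

def target_expense_ratio(portfolio_ratios, q=25):
    """Top-quartile expense ratio = 25th percentile (lower is better)."""
    return float(np.percentile(portfolio_ratios, q))

# Illustrative: a practice at a 0.31 ratio vs a 0.27 p25 target on a
# $10M base yields 10,000,000 * (0.31 - 0.27) * 0.30 = $120K of EBITDA.
```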
Correlation validation — empirical proof (24-mo within-practice)
24 months of DI history tested against grossProduction at lag 0 / +3 / +6 months. Within-practice correlation removes the practice-size confound and reveals whether changes in a metric predict changes in production.
How to read: within-practice correlations are much harder to surface than cross-practice correlations because they reflect month-over-month volatility at a single location. A |r| ≥ 0.30 at a forward lag is a real leading indicator; 0.10–0.30 is directional but noisy; <0.10 is not a statistically meaningful leading signal at this window. The lever math itself remains valid regardless — it rests on deterministic benchmark gaps — so this table tells you which signals deserve the most trust, not which levers to keep.
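The within-practice test can be sketched as below, assuming long-format data with `practice_id`, `month`, the candidate metric, and `grossProduction` columns (the column names are assumptions). Demeaning both series within each practice before pooling is what removes the practice-size confound.

```python
import pandas as pd

def within_practice_lag_corr(df, metric, target="grossProduction", lag=3):
    """Correlate metric at month t with target at t+lag, pooling
    practice-demeaned series so size differences drop out."""
    d = df.sort_values(["practice_id", "month"]).copy()
    # shift the target backward so row t pairs metric(t) with target(t+lag)
    d["target_fwd"] = d.groupby("practice_id")[target].shift(-lag)
    d = d.dropna(subset=[metric, "target_fwd"])
    # demean within each practice to remove the practice-size confound
    for col in (metric, "target_fwd"):
        d[col] = d[col] - d.groupby("practice_id")[col].transform("mean")
    return d[metric].corr(d["target_fwd"])
```

On a toy frame where the metric perfectly leads production by 3 months at two very different practice sizes, this returns r = 1.0, whereas a naive pooled correlation would be dominated by the size gap.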
Data sources
v3 upgrade path
- Correlation validation — 24-mo DI history pulled; correlation against PBI forward production in progress
- Full INTACCT coverage — currently 84 practices with P&L decomp; needs full network
- Monthly refresh automation — currently snapshot; pipeline should run month-end
- Region/ROD rollup dashboards — mapping attached; rollup views next