SGA
Practice S-Curve
Revenue + EBITDA lifecycle · dual-source · per-practice leverage
S-Curve Leverage Analysis · IPO Prep
Finding $— in EBITDA from the practice network
Target: $2M · Window: 12-month realistic capture
Top-lever EBITDA identified (12 mo) · Practices with a lever · All applicable levers · Network EBITDA (Gen4, TTM)

The counter-intuitive finding you can bring to the board

How the number breaks down

Methodology summary

Four views, two axes

Every practice is scored on Revenue drivers (demand, capture, conversion) and EBITDA drivers (expense ratios, throughput, waste), using two independent sources: PBI financials (lagging) and Dental Intel operations (leading).

Lever math

Revenue levers: captured revenue × marginal margin (avg+25pp, cap 65%) × 30% realization, capped at 8% of TTM.
Expense levers: expense-ratio gap × revenue base × 30% realization (flows 1:1 to EBITDA).

Empirical validation

24 months of DI history × 13 metrics were tested for within-practice predictive power. Strongest leading indicator: grossHygieneProduction (|r| = 0.62 at lag 3). The Methodology tab has the full matrix.

PBI/DI stage disagreement — rows worth auditing

For each maturity curve we have two candidate scores (Power BI and Dental Intel). When they disagree by 2+ stages on the same practice, it's a data-quality flag — usually a metric-definition difference, period misalignment, or a partner brand whose mapping isn't right. Audit these rows first.

Practice · ROD PBI Rev DI Rev PBI EBITDA DI EBITDA Why we flagged it
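The audit flag described above is a simple threshold rule; a minimal sketch (function and argument names are illustrative, not from the actual pipeline):

```python
def flag_stage_disagreement(pbi_stage, di_stage, threshold=2):
    """Flag a practice for audit when the PBI and DI stage scores for the
    same axis disagree by `threshold` or more stages.

    Stages are integers 1..5 (S1 Launch .. S5 Mature); a score may be
    None when that source is missing for the practice.
    """
    if pbi_stage is None or di_stage is None:
        return False  # nothing to compare; source-fallback logic applies instead
    return abs(pbi_stage - di_stage) >= threshold
```

A practice scored S4 by PBI but S1 by DI lands on the audit list; a one-stage gap does not.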

Top 10 $ opportunities

# Practice · ROD TTM Rev Margin Revenue Driver Rev $ Expense Driver Exp $ Combined $

By Regional Ops Director

ROD Practices TTM Revenue TTM EBITDA Margin Top-lever $ (12mo)

Coverage disclosure

What's next

  • Monthly refresh — move from snapshot to automated month-end pipeline, feeding both DI + Gen4 PBI
  • Complete INTACCT P&L coverage — currently 84 practices; adding the remaining ~180 unlocks practice-level expense-ratio levers network-wide
  • Correlation re-run — re-running with 36+ months of DI history and richer features (YoY change, trend z-scores) will strengthen the empirical signal
  • ROD drill-down dashboards — ROD-specific rollup with their own top-10 leverage list, exportable
Counter-intuitive findings from SGA's own 24-month data
Where industry wisdom and our data disagree
These are drivers where the standard DSO playbook doesn't hold for our portfolio. Each one is a non-obvious lever that requires a different response than the textbook answer. Built from within-practice fixed-effects analysis of 218 practices × 24 months.
Hero practice for this walkthrough
Revenue Maturity · Consolidated
Per-practice stage score combining PBI revenue signals (primary) with Dental Intel where PBI is missing. Color = stage. Size = TTM revenue.

Comparison Grid

Every in-scope practice on a single sortable surface. Revenue Maturity and EBITDA Maturity stages are the consolidated PBI-primary scores; the PBI/DI column flags rows where the two sources disagree by 2+ stages and need an audit.

Practice · ROD Revenue Maturity EBITDA Maturity TTM Rev TTM EBITDA Margin PBI/DI Top Lever $

EBITDA Leverboard

Every practice ranked by top-lever 12-month EBITDA upside, with projected margin delta after execution.

Practice · ROD TTM Rev Margin Top Revenue Lever Revenue $ (12mo) Top EBITDA Lever EBITDA $ (12mo) Combined $
Overnight research · DSO lead-indicator analysis
Which drivers actually predict revenue and EBITDA
v2 weights applied 2026-04-23 — based on 3,400-practice Planet DDS benchmarks, Henry Schein One Catalyst Index, Dental Intelligence performance data, Double Your Production research, and SGA's own 218-practice 24-month history
Empirical Quant Analysis · 24-month within-practice correlations
What the data actually says vs what the literature says
1,911 correlation pairs tested across every DI driver × 7 transforms × 7 lags × 3 outcome-transforms. Within-practice fixed-effects demeaning (the right lens for "does driver X predict outcome Y within the same practice?"). Headline finding: only grossHygieneProduction is a strong empirical lead. Most industry-cited metrics show weak-to-noise within-practice predictive power. Stage-specific and segment-specific leads are real and should drive per-segment playbooks.
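The within-practice fixed-effects correlation described above can be sketched as follows — a simplified version assuming a long panel with `practice_id`, `month`, driver, and outcome columns (column and function names are illustrative):

```python
import numpy as np
import pandas as pd

def within_practice_lag_corr(df, driver, outcome, lag):
    """Within-practice (fixed-effects) correlation between a driver and a
    forward outcome: demean both series by practice, then correlate the
    driver at month t against the outcome at month t + lag."""
    df = df.sort_values(["practice_id", "month"]).copy()
    # shift the outcome backwards so row t pairs driver_t with outcome_{t+lag}
    df["fwd_outcome"] = df.groupby("practice_id")[outcome].shift(-lag)
    df = df.dropna(subset=[driver, "fwd_outcome"])
    # fixed-effects demeaning: subtract each practice's own mean,
    # which removes the practice-size confound
    x = df.groupby("practice_id")[driver].transform(lambda s: s - s.mean())
    y = df.groupby("practice_id")["fwd_outcome"].transform(lambda s: s - s.mean())
    return float(np.corrcoef(x, y)[0, 1])
```

Because each practice is demeaned against itself, a large practice with permanently high production contributes nothing to the correlation unless its month-over-month *changes* line up with later outcome changes.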

1 · Driver classification — literature vs our data

Each driver's LITERATURE strength (from DSO research benchmarks) compared to its EMPIRICAL strength (from SGA's own 24-month within-practice data). Divergences flag where industry best-practice and our reality differ.

2 · Multivariate combos — is 1 driver enough?

Top driver combinations by adjusted R² in within-practice regression of forward gross production (lag 3 months).
Rank Drivers Adj R² n obs
Interpretation: grossHygieneProduction alone captures 38.8% of within-practice forward variance. Adding a second or third driver yields <0.001 additional R². Net: hygiene production is the dominant forward signal in our panel.
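The adjusted R² comparison behind this table can be reproduced with ordinary least squares; a minimal sketch (not the production regression code):

```python
import numpy as np

def adjusted_r2(X, y):
    """OLS adjusted R^2 of outcome y on driver matrix X (n x k),
    with an intercept added. Penalizes extra drivers via (n-1)/(n-k-1)."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])          # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
    resid = y - A @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
```

Because the adjustment charges (n−1)/(n−k−1) for every added driver, a second driver that contributes <0.001 raw R² leaves adjusted R² flat or lower — exactly the pattern in the table.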

3 · Segment-specific leading drivers

The pooled view obscures meaningful segment-level dynamics. Below: the driver with the strongest forward correlation inside each segment. Playbooks should be segment-aware.

4 · EBITDA margin cross-section (n=52-59)

EBITDA margin history isn't monthly, so we analyze it cross-sectionally: aggregated drivers vs TTM margin across 52–59 practices. Expense ratios dominate; operational DI metrics are weak cross-sectionally.
Driver / ratio r p Interpretation

5 · Dashboard recommendations (empirically grounded)

Two consolidated maturity curves

Every practice is scored on two independent axes — Revenue Maturity (top-line growth posture) and EBITDA Maturity (margin and cost-discipline posture). Each axis uses Power BI financial data as the primary source and Dental Intel operational data as the fallback when PBI is missing. Behind the scenes we still compute four sub-scores (PBI-Revenue, PBI-EBITDA, DI-Revenue, DI-EBITDA) for audit; when PBI and DI disagree by 2+ stages on the same axis we flag the practice for review on the Executive Brief tab.
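The PBI-primary / DI-fallback consolidation is a simple precedence rule per axis; a hypothetical sketch (field names are not from the actual pipeline):

```python
def consolidated_stage(pbi_stage, di_stage):
    """Consolidate one maturity axis: the Power BI score is primary,
    and Dental Intel fills in only when PBI is missing.
    Returns (stage, source) so the dashboard can show provenance."""
    if pbi_stage is not None:
        return pbi_stage, "PBI"
    if di_stage is not None:
        return di_stage, "DI"
    return None, "unscored"
```

Both sub-scores are retained either way, which is what makes the 2+ stage disagreement audit on the Executive Brief tab possible.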

Revenue Maturity
TTM scale · YoY growth · 3-mo trend · MTD/YTD budget attainment
EBITDA Maturity
Margin · AR>90 · GC rate · Clinical Comp % · Supplies+Lab % · Non-Clinical Comp % · G&A % · Occupancy %

The five stages — what each one really means

Stages apply identically to both axes. A practice can be Revenue S4 while EBITDA S2 (growing fast, leaking profit) — the lever engine targets exactly that gap.

S1
Launch
The practice is too new or too small for its trajectory to be readable. Production base is low and margin is fragile — one bad month can flip the picture entirely. Plays here are about getting on the curve at all, not optimizing it.
S2
Build
Momentum is real and the practice is no longer just spinning up — there's a rising trajectory you can see in the numbers. But repeatability hasn't been proven yet; one ROD churn or hygiene departure can erase 3 months of progress. Levers here focus on locking in the systems behind the growth.
S3
Scale
The practice is operationally real — capacity is filling, hygiene is producing, the front office runs without daily intervention. The constraint shifts from "can we run this?" to "can we run this at higher volume without breaking it?" Levers here are about scaling systems, not proving demand.
S4
Optimize
Fundamentals are strong, demand is steady, the team is mature. The next dollar of EBITDA doesn't come from more patients — it comes from doing the same work with tighter cost ratios, fewer broken appointments, and better case acceptance. This is where the EBITDA Levers earn their keep.
S5
Mature
The practice is at or near its production ceiling — adding patients doesn't add output because the chairs and providers are full. Optimization plays plateau here too: the discipline is already in place. Future growth requires capacity expansion (more providers, more chairs, more days) rather than performance improvement.

Scoring weights (v2)

Revenue drivers vs expense drivers

Revenue S-Curve is scored on metrics that drive top-line: demand, capture, conversion, recall. EBITDA S-Curve is scored on expense drivers: cost ratios, labor waste, throughput efficiency. They move independently — a practice can be strong on one and weak on the other, and the levers for each are different.

Lever $ math — how every $ number is computed

Revenue levers flow through marginal margin:

EBITDA_impact = revenue_captured × marginal_margin × realization_factor
marginal_margin = min(avg_margin + 25pp, 65%)
realization_factor = 30% (12-month realistic capture)
cap = 8% of annual revenue base (sanity)
avg_margin uses REAL Gen4 TTM EBITDA margin when available

Expense levers flow 1:1 (reducing an expense ratio by 1pp on $10M revenue = $100K direct EBITDA):

EBITDA_impact = revenue_base × (current_ratio − target_ratio) × realization_factor
target_ratio = portfolio top-quartile (p25 of expense-ratio distribution)
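Both lever formulas translate directly into code. A sketch using the constants above (function names are illustrative):

```python
def revenue_lever_ebitda(captured_revenue, avg_margin, ttm_revenue):
    """Revenue lever: captured revenue x marginal margin x realization,
    capped at 8% of the TTM revenue base (sanity cap)."""
    marginal_margin = min(avg_margin + 0.25, 0.65)  # avg + 25pp, capped at 65%
    realization = 0.30                              # 12-month realistic capture
    impact = captured_revenue * marginal_margin * realization
    return min(impact, 0.08 * ttm_revenue)

def expense_lever_ebitda(revenue_base, current_ratio, target_ratio):
    """Expense lever: ratio gap x revenue base x realization;
    the gap itself flows 1:1 to EBITDA."""
    return revenue_base * (current_ratio - target_ratio) * 0.30
```

Worked example: at a 20% average margin, $1M of captured revenue flows through a 45% marginal margin and the 30% realization factor to $135K of 12-month EBITDA, well under the 8% cap on a $10M base.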

Correlation validation — empirical proof (24-mo within-practice)

24 months of DI history tested against grossProduction at lag 0 / +3 / +6 months. Within-practice correlation removes the practice-size confound and reveals whether changes in a metric predict changes in production.


How to read: within-practice correlations are much harder to surface than cross-practice because they reflect month-over-month volatility at one location. A |r| ≥ 0.30 at a forward lag is a real leading indicator; 0.10–0.30 is directional but noisy; <0.10 is not a statistically meaningful leading signal at this window. All levers still hold their math validity (deterministic benchmark gaps) regardless — this table tells you which signals deserve the most trust.
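The reading guide above amounts to a three-way bucketing of |r|; as a sketch:

```python
def signal_strength(r):
    """Bucket a within-practice forward correlation per the thresholds
    above: |r| >= 0.30 real lead, 0.10-0.30 directional, < 0.10 noise."""
    a = abs(r)
    if a >= 0.30:
        return "leading indicator"
    if a >= 0.10:
        return "directional but noisy"
    return "not meaningful"
```

Under this rule, grossHygieneProduction at |r| = 0.62 is the only driver that clears the leading-indicator bar.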

Data sources

    v3 upgrade path

    • Correlation validation — 24-mo DI history pulled; correlation against PBI forward production in progress
    • Full INTACCT coverage — currently 84 practices with P&L decomp; needs full network
    • Monthly refresh automation — currently snapshot; pipeline should run month-end
    • Region/ROD rollup dashboards — mapping attached; rollup views next