Sources & Methodology
Published notes on how MigrateForce scores readiness, frames scenarios, and ranks interventions. This page is meant to make the logic inspectable, not decorative.
- Scoring dimensions: 6
- Deterministic engines: 8
- Scenario outputs: 3
- Methodology review: Quarterly
Primary data sources
We prefer auditable external references for market structure, labor economics, and peer context. No single source determines a recommendation on its own; the point is to triangulate inputs and make the reasoning legible.
IBISWorld Industry Reports
Market sizing, establishment counts, and growth-rate context used to anchor sector-level baselines.
- Advertising and marketing industry size benchmarks
- Establishment counts and segment composition
- Historical growth rates for the 2019-2024 period
Bureau of Labor Statistics
Employment, wage, and occupation-level data used when estimating execution capacity and labor leverage.
- Employment by function and geography
- Role-level wage reference points
- Occupational distribution for operating-model assumptions
Ad Age Agency Report
Revenue benchmarks and margin context used for comparative ranges in agency and services businesses.
- Top-agency revenue baselines
- Margin ranges and category-level economics
- Reference points for scaled-service operators
ANA, PitchBook, Statista, and peer research
Transaction context, market activity, and industry-adoption signals used to keep scenario ranges grounded.
- Market activity and deal-flow context
- Adoption and benchmark studies from operator-facing research
- Supplementary peer data used for triangulation rather than single-source claims
How the methodology works
The methodology is built around continuity from discovery through execution. We are not trying to produce an impressive number in isolation; we are trying to produce a number that can survive governance, financing, and delivery review.
Size the operating opportunity
We start from market structure, labor mix, and operating constraints rather than generic AI optimism. The goal is to identify where execution capacity and economics can change materially.
Score readiness across deterministic dimensions
Each dimension is evaluated through explicit scoring logic so the result can be inspected, challenged, and updated as new evidence arrives.
Model conservative, base, and upside scenarios
Recommendations never rely on single-point estimates. Outputs are framed as scenario bands with assumptions that can be reviewed by operators, finance, and technical stakeholders.
Rank interventions by impact and plausibility
The final priority order reflects financial contribution, execution feasibility, and evidence confidence instead of intuition or internal politics.
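The four steps above can be sketched as a small pipeline. This is a hypothetical illustration, not MigrateForce's actual engine: the dimension names, weights, band multipliers, and the `score_readiness`, `scenario_band`, and `rank_interventions` helpers are all invented for this sketch.

```python
# Hypothetical sketch of the score -> scenario -> rank flow.
# All names, weights, and multipliers are illustrative assumptions.

def score_readiness(dimension_scores, weights):
    """Deterministic weighted score over explicit, inspectable dimensions."""
    total_weight = sum(weights.values())
    return sum(dimension_scores[d] * w for d, w in weights.items()) / total_weight

def scenario_band(base_estimate, downside=0.85, upside=1.15):
    """Frame a point estimate as conservative / base / upside values."""
    return {
        "conservative": base_estimate * downside,
        "base": base_estimate,
        "upside": base_estimate * upside,
    }

def rank_interventions(interventions):
    """Order by financial impact x execution feasibility x evidence confidence."""
    return sorted(
        interventions,
        key=lambda i: i["impact"] * i["feasibility"] * i["confidence"],
        reverse=True,
    )

# Example run with invented dimensions and integer weights.
weights = {"data": 3, "workflow": 4, "governance": 3}
scores = {"data": 70, "workflow": 55, "governance": 80}
readiness = score_readiness(scores, weights)  # 67.0
```

Because the scoring logic is an explicit function rather than a black box, each weight and input can be challenged and updated as new evidence arrives, which is the property the methodology is optimizing for.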
Validation and confidence
We express confidence as ranges because enterprise transformation work contains real execution risk. Precision theatre is worse than honest uncertainty.
Valuation and scenario ranges: ±15%
Used for comparable-based estimates and scenario envelopes where market ranges are the right level of precision.
Growth projections: ±20%
Applied to forward-looking growth estimates that rely on adoption speed, operating execution, and market conditions.
Market sizing estimates: ±10%
Applied when multiple market sources can be triangulated and refreshed against current public benchmarks.
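Translating a published tolerance into a reported range is simple arithmetic. The sketch below mirrors the band values above (±15/20/10%); the `CONFIDENCE_BANDS` keys and the `with_range` helper are hypothetical names, not platform API.

```python
# Hypothetical sketch: turning a published tolerance band into a (low, point, high) range.
# Band values mirror the page; the key names and helper are illustrative.

CONFIDENCE_BANDS = {
    "valuation": 0.15,       # comparable-based estimates and scenario envelopes
    "growth": 0.20,          # forward-looking growth projections
    "market_sizing": 0.10,   # triangulated market-sizing estimates
}

def with_range(estimate, kind):
    """Return (low, point, high) for an estimate under its published band."""
    band = CONFIDENCE_BANDS[kind]
    return (estimate * (1 - band), estimate, estimate * (1 + band))

# e.g. a $40M market-sizing estimate reported as roughly $36M-$44M.
low, point, high = with_range(40_000_000, "market_sizing")
```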
Calculation conventions
The platform uses explicit conventions when translating evidence into scenario outputs. The purpose is repeatability, not false certainty.
Revenue and EBITDA reference bands
We use market-based ranges instead of presenting hard-coded “true” multiples. The range shifts by business model, revenue quality, and operating maturity.
- Small service businesses often cluster at the low end of the band
- Recurring revenue, stronger margins, and workflow leverage move estimates upward
- AI-enabled operating leverage is treated as a scenario factor, not assumed by default
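A band-shifting convention like the one above can be sketched as follows. The 4x-8x EBITDA reference band, the 50% recurring-revenue threshold, the 20% margin threshold, and the shift amounts are all invented for illustration; real reference bands are market-based and model-specific.

```python
# Hypothetical sketch of a market-based multiple band shifted by revenue quality.
# The band endpoints, thresholds, and shift sizes are invented for illustration.

def ebitda_multiple_band(recurring_share, margin, band=(4.0, 8.0)):
    """Shift a reference EBITDA-multiple band by revenue quality and margins."""
    low, high = band
    shift = 0.0
    if recurring_share >= 0.5:   # recurring revenue moves estimates upward
        shift += 1.0
    if margin >= 0.20:           # stronger margins do the same
        shift += 0.5
    return (low + shift, high + shift)
```

Note that AI-enabled operating leverage deliberately does not appear as an input here; per the convention above, it belongs in the scenario layer rather than the default band.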
AI impact modeling
Productivity lift, cost reduction, and margin expansion are modeled separately so scenario math stays auditable.
- Productivity gains are assessed by workflow, not by headline vendor claims
- Cost reduction assumptions are discounted when change-management or integration work is high
- Margin expansion only counts when the operating model can actually capture it
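Keeping the three effects separate might look like the sketch below. The `ai_impact` helper and every parameter name (capture rate, change-management discount) are hypothetical; the point is only that each assumption is a distinct, auditable input rather than one blended "AI uplift" factor.

```python
# Hypothetical sketch: model productivity, cost, and capture effects separately
# so each assumption can be inspected and discounted on its own.

def ai_impact(cost_base, productivity_lift, cost_reduction,
              capture_rate, change_mgmt_discount=1.0):
    """Return each AI effect independently, plus their auditable sum."""
    # Margin expansion only counts to the extent the operating model captures it.
    productivity_value = cost_base * productivity_lift * capture_rate
    # Cost-reduction assumptions are discounted when change-management is heavy.
    cost_savings = cost_base * cost_reduction * change_mgmt_discount
    return {
        "productivity_value": productivity_value,
        "cost_savings": cost_savings,
        "total": productivity_value + cost_savings,
    }
```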
Operational benchmarks
Utilization, staffing, and approval-path constraints are factored into the recommended intervention order.
- Capacity bottlenecks are treated differently from purely technical bottlenecks
- Workflow friction can cap value even when a model performs well
- Governance readiness affects how quickly a plan can move into execution
Refresh cadence
Not every input updates at the same speed. We refresh based on source behavior rather than forcing a single arbitrary cadence.
- Live market and benchmark inputs can update continuously
- Deal-flow and peer signals are reviewed weekly or monthly
- Methodology assumptions and scoring logic are reviewed quarterly
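The cadence split above amounts to a per-source-class schedule. This config is a hypothetical illustration; the class names and interval labels are invented, though the intervals themselves mirror the list above.

```python
# Hypothetical refresh-cadence config; source-class names are illustrative,
# intervals mirror the cadence described on this page.
REFRESH_CADENCE = {
    "live_market_benchmarks": "continuous",
    "deal_flow_and_peer_signals": "weekly-to-monthly",
    "methodology_and_scoring_logic": "quarterly",
}
```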
Limitations and disclaimers
These outputs are decision-support artifacts, not investment advice. Methodology improves the quality of judgment; it does not remove judgment from the process.
Important limitations
- Historical performance and comparable ranges do not guarantee future outcomes.
- Source data can lag real operating conditions, especially in fast-moving markets.
- Actual results depend on execution quality, operating discipline, and governance readiness.
- Illustrative scenarios should be challenged against company-specific evidence before commitment.
Need a deeper methodology review?
Use this page as the public baseline. For implementation-specific diligence, pair it with the product workflow and the corresponding assessment outputs.