Not every economic impact study is rigorous. Some are advocacy documents dressed up in spreadsheet language. If you sit on a board, work in government affairs, or are about to commission your first impact study, you need a way to tell the difference. Here are the questions we recommend asking — and yes, they apply to our work too.
1. Is the Methodology Documented?
A credible study has a methodology section that describes which model was used, what data went in, what assumptions were made, and how the geography was defined. If you cannot find this, the study is not credible. If the methodology section is one paragraph that says "we used IMPLAN," that is a yellow flag — every analysis using IMPLAN involves dozens of decisions about how to specify shocks, define regions, handle leakage, and treat substitution. A serious study walks through those decisions. A less serious one waves at IMPLAN as if it were a black box.
2. Does It Distinguish Net New From Substitution?
This is the single biggest source of inflated impact figures. If a festival study counts every dollar spent at the event, including dollars spent by local residents who would have eaten dinner somewhere else that night anyway, the impact figures will be wildly overstated. Net new spending — money brought in from outside the region by people who would not otherwise have come — is the only spending that represents a true economic impact for the host region. A rigorous study identifies and excludes substitution. A less rigorous study glosses over it or counts everything.
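The arithmetic behind this distinction is simple enough to sketch. The figures and the share parameters below are hypothetical, chosen only to illustrate how quickly substitution erodes a gross number; real studies estimate these shares from visitor surveys.

```python
# Illustrative sketch (hypothetical figures): separating net new
# spending from substitution in an event impact estimate.

def net_new_spending(total_spending: float,
                     local_share: float,
                     casual_visitor_share: float = 0.0) -> float:
    """Return the portion of event spending that is net new to the region.

    local_share: fraction spent by residents who would have spent
        locally anyway (pure substitution).
    casual_visitor_share: fraction spent by visitors who were already
        in the region for other reasons and merely time-shifted spending.
    """
    excluded = local_share + casual_visitor_share
    assert 0.0 <= excluded <= 1.0, "excluded shares must sum to at most 1"
    return total_spending * (1.0 - excluded)

gross = 20_000_000  # every dollar spent at the event, counted naively
net = net_new_spending(gross, local_share=0.70, casual_visitor_share=0.10)
print(f"Gross: ${gross:,.0f}  Net new: ${net:,.0f}")
# With 70% local substitution and 10% casual visitors,
# only about $4,000,000 of the $20,000,000 gross is net new.
```

A rigorous report makes these shares explicit and sourced; a weak one simply reports the gross figure.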
3. Are the Multipliers Reasonable?
Multipliers should rarely exceed 2.5x for output and are usually in the range of 1.5x to 2.0x for most industries in most regions. If a study reports a multiplier of 4 or 5, ask why. Sometimes there is a legitimate explanation (very deep regional supply chains), but more often it indicates something is wrong — double-counting, mis-specified industry codes, or a model run on a region too large to capture leakage realistically.
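You can run this check yourself from a study's headline numbers. The sketch below uses hypothetical figures; the threshold values come from the ranges above and are rules of thumb, not hard laws.

```python
# Illustrative sketch (hypothetical figures): back out the implied
# output multiplier from a study's headline numbers and sanity-check it.

TYPICAL_HIGH = 2.0   # common upper bound for output multipliers (rule of thumb)
HARD_CEILING = 2.5   # rarely exceeded for regional output (rule of thumb)

def implied_multiplier(direct_output: float, total_output: float) -> float:
    """Total claimed impact divided by direct spending."""
    return total_output / direct_output

def multiplier_flag(m: float) -> str:
    if m <= TYPICAL_HIGH:
        return "typical"
    if m <= HARD_CEILING:
        return "high but plausible -- check supply-chain depth"
    return "suspect -- look for double-counting or mis-specified geography"

# A hypothetical study claims $10M of direct spending produced $45M in total output.
m = implied_multiplier(10_000_000, 45_000_000)
print(f"implied multiplier: {m:.1f}x -> {multiplier_flag(m)}")
# 4.5x is well above the usual ceiling: ask why before citing the number.
```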
4. Is the Geography Defensible?
The geographic scope of an analysis dramatically affects the results. A study modeled at the state level will show much larger multipliers than the same study modeled at the county level, because state-level models capture more of the supply chain inside the region. Both can be defensible — but the choice should match the question being asked. A study defending a city's tax decision should use city or county geography. A study supporting state-level policy advocacy can reasonably use a statewide model. A study using statewide multipliers to argue for a single-city decision is overstating its case.
5. Are the Inputs Plausible?
Garbage in, garbage out. The model is only as good as the inputs you feed it. Look for:
- Documented data sources. Where did the spending figures come from? Audited financials? Vendor reports? Survey data?
- Realistic operating assumptions. Does the headcount match the payroll? Does the visitor count match historical attendance?
- Honest treatment of capital versus operating spending. Construction is one-time. Operations are recurring. Mixing them produces misleading "annual" figures.
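The capital-versus-operating point is worth making concrete. The figures below are hypothetical, but the arithmetic shows how folding a one-time construction spend into a recurring "annual" claim distorts a multi-year total:

```python
# Illustrative sketch (hypothetical figures): why one-time capital
# spending must not be folded into a recurring "annual impact" figure.

construction = 30_000_000   # one-time capital spend (occurs in year 1 only)
annual_ops = 5_000_000      # recurring operating spend, every year
years = 10

# Honest: report the two streams separately, then sum over the horizon.
honest_total = construction + annual_ops * years          # $80M over a decade

# Misleading: treat year-1 totals as if they recur every year.
misleading_annual = construction + annual_ops             # "$35M per year"
misleading_total = misleading_annual * years              # $350M over a decade

print(f"Honest 10-year total:      ${honest_total:,.0f}")
print(f"Misleading 10-year total:  ${misleading_total:,.0f}")
```

If a report's "annual impact" figure includes construction, ask which year it describes.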
6. Who Wrote It, and What Are Their Incentives?
The structural question every reader should ask: does the analyst have a stake in the outcome? An advocacy organization commissioning a study from its in-house economists will not produce the same kind of report as an independent firm with no client interest in the result. Neither is automatically wrong, but the reader should weight the findings accordingly. Independent firms have a strong incentive to keep their methodology defensible because their reputation is the product.
7. Does the Executive Summary Match the Report?
Executive summaries get press releases. Detailed methodology sections do not. The most common deception in impact studies is the gap between what the report carefully says and what the executive summary loudly claims. Read both. If the report says "out-of-region visitors generated approximately $4 million in net new economic activity" and the executive summary says "this festival generated $20 million in economic impact," the second number is the gross figure including all in-region spending. The report has been honest; the summary is spin.
8. Will the Analyst Defend the Numbers in Public?
A rigorous study has an author who is willing to present it to a board, take questions from a journalist, and defend the methodology under cross-examination. A study where the author refuses to engage publicly or whose name does not appear on the report should be treated with suspicion. We sign every study we produce and are willing to present and defend our work to any audience.
A Final Thought
An economic impact study is a tool for communication. The numbers matter, but so does the trust that comes from a transparent, well-documented, defensible analysis. The next time someone hands you a 60-page impact report, run through this checklist before you cite a single figure from it. If the study survives the questions, you can use it with confidence. If it does not, you have learned something important.