The National Telecommunications and Information Administration (NTIA) recently released a study evaluating its implementation of the $4.7 billion broadband stimulus program (the Broadband Technology Opportunities Program, or BTOP). The NTIA paid ASR Analytics $5 million to do the study, which it touts as "independent."

The study's key finding is that counties that hosted at least one BTOP grant saw broadband penetration increase by 2 percentage points more than matched counties that had no BTOP grants. ASR Analytics then combined this result with other research estimating the economic benefits of broadband. Lo and behold, that 2-percentage-point difference could then be extrapolated into large economic benefits, such as increased economic output of $5.7 billion to $21 billion per year.

A careful read of the report, however, shows that the most important part of the study was not at all independent: the choice of which BTOP awards to study.

Here's the problem. The report evaluated only a sample of counties that hosted BTOP projects. That would be fine if the sample were random, but it wasn't. Instead, as the report says twice, "NTIA selected these projects for inclusion in the evaluation study at the beginning of the study." Not the independent evaluator, but the NTIA — the very entity supposedly being evaluated.

And how did the NTIA choose which of its projects to include? According to an earlier report outlining the study design, to be included, a grantee must be "a willing participant that will be able to engage in a meaningful conversation about the impacts [of the grant]" and there must be "some high level of confidence that [the project] will be completed without significant technical or financial obstacles. ... [S]elected locations can represent grants of varying quality, but extremely troubled or returned grants should not be included."

That approach is bad enough when selecting case studies — how can they be representative when poorly performing projects cannot be considered? But it becomes even worse when it forms the basis of the empirical methodology. As ASR Analytics itself wrote in an interim report, "The selection of grants was purposeful and not meant to yield a statistical sample." Yet those very grants formed the basis of the counties the firm studied statistically.
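The statistical problem can be seen in a tiny simulation. This sketch is purely illustrative: the effect sizes, the screening threshold, and the assumption of a normal distribution of project outcomes are invented for the example, not drawn from the report.

```python
import random

random.seed(0)

# Hypothetical model: each funded county's true gain in broadband
# penetration (percentage points) varies around a modest average.
N = 10_000
true_effects = [random.gauss(1.0, 2.0) for _ in range(N)]

# Mimic the selection rule quoted above: "extremely troubled or
# returned grants should not be included" -- i.e., drop the projects
# at the bottom of the distribution before evaluating.
viable = [e for e in true_effects if e > -1.0]

random_sample_mean = sum(true_effects) / len(true_effects)
selected_sample_mean = sum(viable) / len(viable)

print(round(random_sample_mean, 2))    # close to the true average effect
print(round(selected_sample_mean, 2))  # biased upward by the screening
```

Under these invented numbers, the screened sample's average effect is meaningfully larger than the true average, even though no individual measurement was wrong. That is the sense in which a "purposeful" sample cannot support the statistical comparison the report makes.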

As a result, it is not at all surprising that the analysis found positive effects for the handpicked cases relative to a control group; given that the fix was in, it is perhaps surprising that the measured effect was so small.

It didn't have to be this way.

First, the NTIA should not have been charged with evaluating itself. The misaligned incentives in such an arrangement are obvious. What were the chances that it would have found any serious fault with how it ran its own program?

Second, $5 million is a huge amount of money for such a study. Instead of hiring a single company, the NTIA could have made grants and data available to, say, graduate students and gotten hundreds of studies instead of one. Or it could have hired two or three independent researchers, provided them with data and allowed them to conduct evaluations without active NTIA involvement. Imagine what we could have learned not just about BTOP, but about program effectiveness and evaluation, with multiple reports from a variety of disciplines.

The underlying problem is that BTOP was not implemented with cost-effectiveness or evaluation in mind. Prior to giving out the grants, 71 economists (including this one) wrote a letter to the NTIA describing a systematic way to choose projects based on expected cost-effectiveness. The NTIA instead appeared to choose projects in an ad hoc manner. Indeed, Gregory Rosston of Stanford University and I found, in a paper published last year, that the expected cost-effectiveness of grants varied by a factor of 100. It strains credulity to believe that this spread reflected the results of a systematic, coherent process.

All of that said, the NTIA is a paragon of transparency and responsibility compared to the Rural Utilities Service (RUS), which distributed an additional $2.5 billion in broadband stimulus funds through the Broadband Initiative Program (BIP). The RUS's approach to evaluation appears to be publishing quarterly reports that detail how much money has been distributed and highlighting one or two success stories.

In short, we still have little rigorous, empirical information about the effectiveness of the $7.2 billion the stimulus allocated to broadband. But we do have further evidence that self-evaluation is unlikely to yield a truly independent review, even if you call it "independent" in a press release.

Wallsten is vice president for research and senior fellow at the Technology Policy Institute.