The article offers scant evidence, and perhaps tellingly invites readers to skip to the results, or simply to read the title. The most critical flaws, however, are these: 1) the study examines data for only five areas, and 2) it evaluates differences in supplier estimates only for 2001. First, a sample of five areas is inadequate, especially when two of the five are counties, where suppliers use similar sources and similarity among estimates has already been documented. More importantly, one would expect uncommonly high similarity among 2001 population estimates, which were based primarily on new census counts rather than on the suppliers' usual estimation methods. Relevant comparisons would be based on estimates for 2000, or for other years when estimates would reflect the methods unique to each supplier. One could hardly pick a less representative year for this analysis than 2001.
In 1998, Claritas reported a more thorough investigation of this question. In contrast to the article's limited sample, Claritas compared the estimates produced by four suppliers for all counties, tracts, block groups, and ZIP Codes nationwide. The results, presented in a professional paper at the meeting of the Population Association of America, showed a mix of similarities and differences. As expected, county population estimates were similar across suppliers, with differences rarely exceeding five percent. In contrast, tract estimates showed similarities in many areas but differences of 15 percent and higher in many others. Block group differences were greater still, and for ZIP Code estimates, differences across suppliers often exceeded 40 percent. In addition, differences among income and age estimates were greater than those for total population.
It is no revelation that supplier estimates are similar in many areas, but assertions that they are interchangeable are refuted by serious analyses, and by the many users who report significant differences. If the article's conclusion were true, it would be difficult to find areas where estimates differed substantially. Readers, however, need look no further than the article's Scottsdale site. Comparing the suppliers' 2000 population estimates for this area, users had a choice among estimates of 77,425, 64,345, 59,455, and 27,945.
But more important than the differences among estimates is their relative accuracy. When evaluated against the area's 2000 census count of 93,202, these 2000 estimates translate to errors of 17 percent, 31 percent, 36 percent, and 70 percent. These four estimates are hardly "created equal," and if more than a few areas were reviewed, other substantial differences would be easy to find.
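For readers who wish to verify the error figures, the calculation is straightforward: each supplier's estimate is compared with the census count, and the absolute difference is expressed as a percentage of that count. A minimal sketch, using only the figures quoted above:

```python
# Percent error of each supplier's 2000 estimate for the Scottsdale site,
# measured against the 2000 census count. All figures come from the text.
CENSUS_2000 = 93_202
estimates = [77_425, 64_345, 59_455, 27_945]

def pct_error(estimate: int, actual: int) -> float:
    """Absolute error as a percentage of the actual count."""
    return abs(actual - estimate) / actual * 100

errors = [round(pct_error(e, CENSUS_2000)) for e in estimates]
print(errors)  # [17, 31, 36, 70]
```

Note that the denominator is the census count, not the estimate; dividing by the estimate instead would understate the error for low estimates.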
To conclude, there are, in fact, significant differences among the estimates provided by the data suppliers, but more importantly, there are also significant differences in the accuracy achieved by these estimates. Users do need to be concerned with the quality of the data they select, and should expect their suppliers to provide thorough and candid evaluations of their estimation accuracy.