EWG Response to 'A Review of the Science, Methods of Risk Communication & Policy Recommendations in Tap Water Blues'
Setting the Record Straight: Quality of the Science
Point 1. Baker argues that we overestimate risk because we have not used a time-weighted average. He cites one example in one year, Honey Creek in 1986, from which he implies that we have overestimated risk in the entire report by a factor of two. He argues that we have misanalyzed USGS data, and claims that he offered to help us with his data.
All risk and exposure estimates in Tap Water Blues are based on seasonally adjusted averages that correct for any seasonal sampling biases in the data. This methodology was also used by the American Water Works Association in their 1994 report, Seasonal Pesticide Variations in Surface Water Systems (see Note 1). The results of this AWWA report indicate that EWG may have significantly underestimated the extent of herbicide contamination. More importantly, our methodology and overall data quality criteria yielded estimates of exposure and risk 29 percent lower than comparable data published by Baker or data published by Ciba using Baker's methodology. The methodology used in Tap Water Blues was reviewed by Don Goolsby, Chief of the USGS Midcontinent Herbicide Program, and by George Hallberg of the University of Iowa Hygienic Laboratory.
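The seasonal adjustment described above can be sketched in a few lines. The sample values and quarterly grouping below are hypothetical, chosen only to show how giving each season equal weight corrects for oversampling during spring runoff:

```python
from statistics import mean

# Hypothetical atrazine samples (ppb) keyed by calendar quarter.
# Spring (Q2) is oversampled, as is typical during runoff season.
samples = {
    "Q1": [0.1, 0.2],
    "Q2": [3.0, 2.5, 2.8, 3.2, 2.9],   # many runoff-season samples
    "Q3": [0.9, 0.7],
    "Q4": [0.2, 0.1],
}

# A simple average over all samples is pulled upward by the
# extra springtime observations.
all_values = [v for vals in samples.values() for v in vals]
simple_avg = mean(all_values)

# A seasonally adjusted average gives each quarter equal weight,
# correcting for the seasonal sampling bias.
adjusted_avg = mean(mean(vals) for vals in samples.values())

print(f"simple average:   {simple_avg:.2f} ppb")
print(f"adjusted average: {adjusted_avg:.2f} ppb")
```

In this sketch the seasonally adjusted average is lower than the naive average, which is why a seasonal correction protects against overstating exposure when sampling is concentrated in the high-contamination months.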
In collaboration with Ciba, the manufacturer of atrazine, Baker has published extensive analyses of atrazine exposure through drinking water in Iowa, Ohio, and Louisiana (see Notes 2 and 3). We have compared the data used in Tap Water Blues with comparable Baker/Ciba data for 18 cities or drinking water sources in three states serving a population of over 3 million people. These data show the problem to be even greater than EWG estimates. For these 18 cities or drinking water sources, Baker/Ciba estimates that the average atrazine concentration is 1.15 ppb, while the average atrazine concentration used for these same cities and sources in Tap Water Blues is 0.82 ppb, nearly one third lower than the average Baker/Ciba estimate (Table 1). In terms of the exposed population, Tap Water Blues estimates are lower for 13 of the 18 cities, accounting for 84 percent of the affected population. In two of the remaining five communities where our analysis shows higher results than Baker, our analysis was based on more accurate and recent data, either collected by the water utility (Columbus) or the United States Geological Survey (Alum Creek, OH).
Finally, Baker has neither offered to help us with his data nor made these data publicly available to us in any form. Some of these data, which were collected for the EPA, have been placed in STORET. The remainder have not been released except in summary published form. The Environmental Working Group requested these data via telephone in late 1993 and again in early 1994. Both requests were denied.
Point 2. Baker claims that the "statewide" average risks presented in Tap Water Blues are too high because we used simple averages and did not weight the calculations by population. Baker then accuses us of "...intentionally introducing high biases into the statewide averages."
We did not, as Baker asserts, intentionally, or through any oversight, introduce high biases into statewide averages in this report. In fact, as will be discussed below, we intentionally understated the actual levels of contamination and risk in order to protect ourselves against criticisms that we were biased in favor of protection of the public health. Although Baker correctly observes that some statewide averages would be lowered slightly if population weights were used in the calculations, he fails to point out that in other cases and for other states they would be raised. For example, the simple statewide average lifetime risk for the state of Kansas reported in Tap Water Blues was 8.1 x 10^-6. Were we to compute the statewide average using Baker's population-weighted average, the result would be a higher estimate -- 9.6 x 10^-6.
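The difference between the two averaging methods can be illustrated with a short sketch. The system names, risks, and populations below are hypothetical, not the actual Kansas data; they are chosen so that weighting by population raises, rather than lowers, the statewide figure:

```python
# Hypothetical lifetime cancer risks and populations for three systems.
systems = [
    {"name": "System A", "risk": 4.0e-6,  "population": 50_000},
    {"name": "System B", "risk": 8.0e-6,  "population": 20_000},
    {"name": "System C", "risk": 12.0e-6, "population": 300_000},
]

# Simple average: each system counts equally.
simple_avg = sum(s["risk"] for s in systems) / len(systems)

# Population-weighted average: each system counts in proportion
# to the number of people it serves.
total_pop = sum(s["population"] for s in systems)
weighted_avg = sum(s["risk"] * s["population"] for s in systems) / total_pop

print(f"simple average risk:   {simple_avg:.1e}")
print(f"weighted average risk: {weighted_avg:.1e}")
```

When the largest systems are the most contaminated, as in this sketch, population weighting pushes the statewide average up, not down.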
In other states, because of data availability, our averages were based on data sets which clearly underestimated risks for much of the population. For example, many small and large towns in Louisiana use the Mississippi River for drinking water. Most smaller systems do not use any sort of carbon filtration to reduce contaminant levels. Our statewide risk assessments for Louisiana, however, were based on finished water samples from larger systems such as East Jefferson Parish and New Orleans, which use Powdered Activated Carbon (PAC) to reduce herbicide levels in delivered tap water. As a result, for the state as a whole, our estimate of risks is clearly low. The same is true for Kentucky and Nebraska, where we based the statewide average lifetime cancer risks on data from large systems treating their water with PAC.
The important issue, of course, is not which method produces slightly higher or slightly lower statewide average exposure numbers; the important issue is the risk that people actually face, and for purposes of this review, which methodology provides the most meaningful estimate of real world risks for people living in communities with contaminated water. To answer this question risk estimates must be presented at the water system level, as they were in Tap Water Blues.
Contrary to Baker's assertion that we intentionally biased our exposure calculations to produce inflated risk estimates, virtually all of the estimates of cancer risk in Tap Water Blues are likely to underestimate the true risks faced by the population drinking herbicide contaminated water because:
- For the vast majority of the exposed population, chlorinated herbicide metabolites were not included in our exposure calculations due to the absence of data, even though these metabolites are certainly present and have toxic properties similar to those of the parent compounds (see Note 4);
- All herbicide detections over 50 ppb were excluded when calculating averages, even if the detections were in drinking water source water; this amounted to the elimination of some 600 detections out of the 15,000 used from STORET;
- We made no compensation for tests with poor limits of detection. For example, 49 percent of atrazine non-detections in STORET were from samples where the limit of detection was 1 ppb. In contrast, the U.S. Geological Survey currently uses methods with a limit of detection of 0.05 ppb. In cases where no detections were reported because of high detection limits, we assumed that no herbicides were present even when we had substantial evidence indicating that herbicides were present;
- We made no compensation in risk assessments for peak periods of exposure when contamination exceeds the MCL, for the sensitivity of children, or for the potential synergistic effects of the common mixtures of herbicides in drinking water throughout the Midwest.
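The detection-limit point above can be illustrated with hypothetical numbers: treating every non-detect at a coarse 1 ppb limit as a true zero pulls the average down relative to a common alternative convention of substituting half the limit of detection.

```python
# Hypothetical sample set: some quantified detections plus non-detects
# reported against a coarse 1 ppb limit of detection (LOD).
detections = [0.4, 0.6, 1.5]   # ppb
n_nondetects = 3
lod = 1.0                       # ppb

# The report's conservative convention: treat each non-detect as zero.
values_as_zero = detections + [0.0] * n_nondetects
avg_as_zero = sum(values_as_zero) / len(values_as_zero)

# A common alternative: substitute half the LOD for each non-detect.
values_half_lod = detections + [lod / 2] * n_nondetects
avg_half_lod = sum(values_half_lod) / len(values_half_lod)

print(f"non-detects as zero:  {avg_as_zero:.2f} ppb")
print(f"non-detects as LOD/2: {avg_half_lod:.2f} ppb")
```

The zero convention always yields the lower, and thus more conservative, average concentration, which is the direction of bias the list above describes.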
Point 3. Baker claims that when calculating "statewide" average risks we were ambiguous about the populations represented and used only the most contaminated communities in the state.
Baker's assertion that we were ambiguous about the populations used when calculating statewide averages is simply wrong. As Baker himself admits, "...they cannot be accused of failing to document their methods." In each of the ten states analyzed, we went to great lengths to describe how the statewide risk estimates were calculated, making clear that they only apply to the population whose drinking water is actually contaminated with herbicides. Groundwater drinkers, those supplied by the Great Lakes, and communities for which we had no data were clearly excluded from the analysis. At no point did we imply that these statewide averages applied to every individual in that state -- we clearly indicated which communities were affected and what the risks were in those communities.
Baker believes we should average those people who have no herbicides in their drinking water into the statewide cancer risk estimates, as though the absence of exposure for these individuals would somehow lower the risk for those who are exposed. In our view, it is far more useful and relevant to describe the size of the exposed population and then to describe the risks to this population, as we did in the report.
It would be unfortunate if reporters, scientists, or policymakers were "confused" by these estimates; however, we have no evidence that this confusion actually occurred.
Point 4. Baker argues that we fabricate a one-in-one million risk standard, resulting in inflated perceptions of risk. He then continues by arguing that the proper basis of comparison for the five herbicides would be the risk of five in one million.
See pages 7 and 8.
Point 5. Baker argues that data presentation generally precludes calculation of estimated cancer occurrences for towns, cities, and states.
The data presented in Tap Water Blues have not precluded Baker or any other researcher from estimating cancer occurrence for towns, cities, and states. The data are presented in terms that an individual can understand: my increased risk of getting cancer is X in a million. In fact, it is Baker's analyses, which combine exposed and non-exposed populations into one "average" risk, that completely preclude any meaningful understanding of risks or exposure at the individual or local level.
Further, Baker's assertion that we have attempted to fool the media with this approach does not stand up to the facts. We have received over 1,000 copies of articles from the popular media and have not seen a single example of where the implications of risk estimates determined by EWG were misunderstood or misused as implied in Baker's review.
Point 6. Baker argues that we present the data in forms lacking toxicological context.
Baker argues that simply giving the percentage of positive herbicide detections has no meaning, and no toxicological significance. Besides being scientifically inaccurate, this line of reasoning presumes that people have no right to know how often their drinking water is contaminated with herbicides, a line of reasoning we reject.
As Baker would no doubt agree, there are at least two characteristics of exposure that influence the human health risks posed by a substance -- the dose, and the number of times (or length of time) that the dose is received. Presenting information on the frequency of contamination is clearly relevant in this context. Moreover, the EPA's standard cancer risk assessment models (being used for the triazines in the ongoing Special Review) assume a linear cancer model -- that is, that any exposure poses some level of increased risk. Although the risk of low level contamination may be small, it may be a risk that people choose to avoid. They can only make that choice, however, if they know whether and how often their water is contaminated.
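A linear no-threshold calculation of the kind described can be sketched as follows. The slope factor and exposure assumptions below are illustrative placeholders, not actual regulatory values; the point is only that under a linear model, risk scales directly with dose, so any exposure implies some nonzero increment of risk:

```python
# Sketch of a linear (no-threshold) lifetime cancer risk estimate in the
# general style of drinking-water risk assessments. All values are
# illustrative assumptions, not actual EPA parameters.
concentration_ppb = 3.0      # herbicide in tap water (1 ppb = 1 ug/L)
intake_l_per_day = 2.0       # assumed adult daily water intake
body_weight_kg = 70.0        # assumed adult body weight
slope_factor = 2.2e-4        # illustrative potency, (mg/kg-day)^-1

# Chronic daily dose in mg/kg-day (1 ug/L = 0.001 mg/L).
dose = concentration_ppb * 0.001 * intake_l_per_day / body_weight_kg

# Linear model: risk is proportional to dose, with no threshold
# below which the risk drops to zero.
lifetime_risk = slope_factor * dose

print(f"lifetime excess cancer risk: {lifetime_risk:.1e}")
```

Because the model is linear, halving either the concentration or the frequency of exposure halves the estimated risk, which is why frequency-of-detection information is toxicologically relevant.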
Point 7. General failure to provide context for data.
Baker claims that the report is about cancer, but that we do not estimate cancer occurrences.
First, Baker is mistaken in his analysis that the report is "about cancer". The report is about exposure to herbicides, and the numerous health risks -- birth defects, cancer, disruption of the endocrine and hormonal systems -- attributable to pesticide exposure. As noted in response to point 5, when we presented risk information, we presented it in the most meaningful format for those assessing risks in the affected areas.
Baker complains that we do not mention background levels of cancer, and that we do not document the population of the corn belt.
True. Instead, we use the more meaningful number -- the total risk to individuals of exposure to these chemicals. This is an improvement over approaches like Baker's, which obscure these risks (which are avoidable through sound public policies) in a blizzard of irrelevant statistics. Simply put, the number of individuals getting lung cancer has no bearing on the risks from exposure to pesticides in drinking water. Again, the media has had no problems placing this risk in the proper context.
Point 8. Implication of greater significance of the risks than actually warranted.
There are no scientific issues raised in this point.
Point 9. Inappropriate comparison to drinking water standard and health advisories.
Again, Baker has not grasped the overall thrust of the report, which is that current drinking water standards are inadequate because they are based on suspect interpretations of the toxicity of the triazines, and because monitoring and enforcement ignore peak periods of exposure. We, like the EPA in its recent Special Review, do not believe that the acute effects of the triazines are a health issue. Thus, Tap Water Blues contains no discussion of these effects nor any comparison with the short-term health standards that Baker presents in his critique.
We argue two points. First, lifetime standards (the MCL) should be based on the most sensitive chronic effects (cancer), and they are not. Because they are not, the current standards are too weak and allow excessive risk. Second, the entire way in which chronic risk is assessed and compliance with this standard is enforced is suspect, because the standard makes no allowance for sustained peak periods of exposure that exceed the lifetime health standard (MCL or LHA) year after year.
We know of no other federal contaminant standard where sustained and repeated exposures at levels in excess of the supposed standard are condoned. We consider this a major flaw in the way that standards are conceived, and intended Tap Water Blues to illustrate the frequency and scope of exposures that violate MCLs for extended periods of time, but are nonetheless considered legal.
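The compliance flaw described above can be illustrated with a short sketch using hypothetical quarterly concentrations. Because compliance with the atrazine MCL (3 ppb) is judged against a running annual average, a system can sustain a springtime peak several times the standard and still be deemed legal:

```python
# Hypothetical quarterly atrazine concentrations (ppb); 3 ppb is the MCL.
MCL = 3.0
quarterly_ppb = [0.2, 9.0, 1.5, 0.3]   # spring quarter triples the MCL

# Compliance is judged against the annual average, not the peak.
annual_average = sum(quarterly_ppb) / len(quarterly_ppb)
peak = max(quarterly_ppb)

status = "violation" if annual_average > MCL else "in compliance"
print(f"annual average: {annual_average:.2f} ppb -> {status}")
print(f"peak quarter:   {peak:.1f} ppb ({peak / MCL:.0f}x the MCL)")
```

In this sketch the annual average (2.75 ppb) sits below the MCL even though one quarter runs at three times the standard, which is precisely the pattern of legal but sustained peak exposure the report documents.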