

PFCs: Global Contaminants: Reform of federal law

April 3, 2003

Imagine a regulatory system designed in theory to protect hundreds of millions of people from the potential harm of tens of thousands of chemicals in products they use every day. Imagine that this system did not require any health or safety studies prior to the marketing and sale of a chemical; did not require any monitoring of chemicals once they were in use; allowed producers to claim virtually all information related to a chemical as confidential business information and thus forever shield it from public view; and did not allow the public any right to sue or otherwise force testing or monitoring when independent scientists confirmed that significant contamination or hazards may exist.

That is the reality of the Toxic Substances Control Act, the nation’s chief regulatory statute for commercial chemicals. TSCA, as it is known, is famous for the lack of authority it provides the Environmental Protection Agency. Under TSCA a chemical company is under no legal obligation to understand how its products might harm human health. In fact, only after scientists have amassed a body of evidence linking a chemical to human harm can the federal government ban it or negotiate a phase-out. A string of Congressional hearings and reports from the General Accounting Office have thoroughly documented this fact. With no statutory power to request data on a chemical prior to proving harm, which it typically cannot prove without the data it is seeking, the EPA has essentially given up trying to use TSCA to better understand the potential hazards of the tens of thousands of chemicals in use today.

More than 63,000 chemicals were granted blanket approval for use in consumer and industrial products with the passage of TSCA in 1976. The federal government reviews the safety of chemicals invented since that time through an application process that does not require health and safety test data and that discourages voluntary testing. Companies submit basic toxicity data with fewer than half of all applications to manufacture new chemicals; the government approves 80 percent of these with no restrictions and no requests for tests. Eight of 10 new chemicals win approval in less than three weeks, at an average rate of seven a day.

Companies can volunteer any studies they may have performed to files and dockets maintained by the Environmental Protection Agency, but in the absence of any voluntary submissions, EPA is forced to rely on computer models to estimate if an industrial chemical might be toxic to humans.

In 1998 EPA found that chemical manufacturers had failed to volunteer even the most basic information on chemical properties or toxicity for an estimated 43 percent of the 2,800 chemicals produced in the highest quantities in the U.S. (EPA 1998b). A voluntary testing program grew out of this finding. Under this program, called the High Production Volume chemical testing program, or HPV program, participating companies submit their interpretation (but not the data) of eighteen basic screening tests, only one-third of which are directly relevant to human health and none of which include even a standard two-year cancer study or tests for birth defects at low doses.

The group that leads the federal government’s efforts to assess testing needs on the health effects of industrial chemicals, the Interagency Testing Committee, or ITC, recently identified through the use of computer models 392 industrial chemicals expected to build up in the human body for which EPA lacks basic data from the manufacturers on chemical properties, uses, and toxicity. Among these are chemicals used in fragrances, dyes and pigments, polyurethane foam, and pesticides. There is no plan to study the presence of these chemicals in humans.

In effect, the nation has no regulatory system for chemicals that are not directly added to food (pesticides and food additives). Instead, we have a shell of a program that by law has weak authority to study, much less restrict, the use of chemicals in commerce.

This statutory void has produced:

  • Widespread, pervasive contamination of the human population with hundreds of chemicals at low dose mixtures that have never been examined for any of their potential health effects.
  • An industry that has no legal obligation to conduct safety tests or monitor for the presence of its chemicals in the environment or the human population – and a significant financial incentive not to do so.
  • A federal research establishment that is completely unequipped, both technically and financially, to monitor the human population for commercial chemicals or to study their health effects.
  • An ever increasing load of chemical contamination in the human population and global environment that is composed almost entirely of poorly studied chemicals that have never before been encountered in evolutionary history.

The chemical industry and its supporters argue that the suite of industrial chemicals found in an individual’s bloodstream is safe and accounts for negligible increased health risk. The doses, they say, are too low to cause harm.

But there is no science to support this assertion.

The truth is that nobody knows the effects of the low dose mixtures of chemicals identified in this study, and the hundreds of other chemicals that are certain to be present in the body, but for which we could not test. Federal law imposes few health and safety testing requirements on the chemical industry, and sets few public health goals for chemical exposure or use.

Instead, industry decides what tests are done, when they are done, what the results mean, and who gets to see them. Overall, this system has left a void of scientific knowledge on the health and environmental hazards of nearly all chemicals found in consumer products and in people.

Safety margins erode further: new chemicals are invented daily.

The chemical industry gains permission to put more than 2,000 new chemicals into the biosphere each year, with no knowledge of the health impacts on the exposed human population. People are given no warning of this exposure, nor do they have the option to avoid it.

The predictable outcome of this arrangement is that the dangers of chemicals are discovered only after widespread exposure and harm has occurred. The more recently a chemical has been introduced into commerce, the less scientists understand its toxicity, and the less likely it is that scientists will know how to test for it in people and the environment. New chemicals enter the marketplace with no, or only a handful of, toxicity studies. The few chemicals or chemical families that have been well-studied are those for which scientists uncovered, often accidentally, catastrophes or widespread contamination. For instance, the earnest study of DDT toxicity did not start until the discovery that the chemical was driving into extinction a number of bird species, including bald eagles. Intense research on the toxicity of perfluorinated chemicals is beginning only now, after 3M discovered that these Scotchgard ingredients, in use for 50 years, have broadly contaminated humans and are more toxic than previously believed.

And even for the best-studied chemicals, scientists have yet to gain a full understanding of health effects. When setting safety standards for electrical insulators called PCBs, banned in the U.S. since the 1970s, the World Health Organization reviewed 1,200 studies on PCBs’ harmful effects and properties, but found only 60 that were relevant. In a similar review of PCBs the U.S. government enumerated 14 categories of uncertainty encompassing every step from human exposure to manifestation of health effects (EPA 1996). PCBs are among the best-studied chemicals in the world.

Chemical companies are not required to develop or divulge methods to test for the presence of their chemicals in the environment or the human body. Typically, only after a compound has been on the market for decades and contaminated a significant portion of the environment do independent scientists learn how to detect and quantify it. At that point, the Centers for Disease Control and Prevention (CDC) may choose to test for it in the general population, but even then there is no guarantee that the manufacturer will provide CDC with the methodology to detect it, or that the methods will be reliable. For instance, three years after 3M announced that it was removing the principal perfluorinated compound, PFOS (Scotchgard), from the market, chiefly because it had contaminated the bloodstream of the entire human race, the CDC still does not have a test method that it considers reliable to find the chemical in human blood.

Ignorance by Design

Detailed analyses by the U.S. EPA (EPA 1998b) and Environmental Defense (ED 1998) make clear how few health effects studies are available even for chemicals produced in the highest volumes. In a review of all publicly available toxicity and environmental fate studies, they found no information, not a single test, for 43 percent of the 2,600 chemicals produced in the highest volumes in the U.S., with yearly production volumes of more than 1,000,000 pounds. Our study offers stark confirmation: for 55 compounds found in the nine individuals tested (one-third of the chemicals identified), there is no information available on chemical uses or health effects in any of the eight standard industry and government references used for this analysis.

The work by the EPA and ED was important in establishing a baseline measure of data availability. But the tallies in the EPA and ED reports are only as meaningful as the studies they are counting. Both analyses focus on a limited universe of toxicity screens that themselves are not detailed enough to support regulation, and are not targeted toward the most meaningful and relevant health effects. But far worse than the numbers is the policy outcome that these analyses produced: a voluntary program for industry to conduct hundreds more of these same toxicity screens.

Launched with much fanfare in 1999, the so-called high production volume chemical screening, or HPV program, has not yielded data for EPA to review. Instead, chemical manufacturers are submitting summaries of the screening studies, leaving EPA and the public at the mercy of industry’s interpretations of the data, which are not subject to independent peer review. The program is voluntary, and the EPA is powerless to demand any additional information. At the same time, the HPV program provides invaluable public relations cover for the chemical manufacturers in the form of thousands of “studies” being conducted “voluntarily” at “great expense.”

And even if the actual screening study data were submitted, much of it would be of limited use. Consider the so-called cancer screens. In reality, what industry calls a cancer screen for public relations purposes under the HPV program is nothing but a mutagenicity assay in a lab dish that both industry and regulators routinely dismiss as inconclusive in the absence of two-year animal studies confirming a carcinogenic effect.

Scientists often study the wrong thing

The nature of our ignorance of chemical exposure is more complicated than tallies of study numbers can convey. There are fundamental problems with even the best regulatory study methodologies when they are applied to the body burden of chemicals identified in this study. The vast majority of toxicity tests required by government regulators have limited relevance to the exposures that are occurring in the human population.

In a typical animal study required by the EPA, scientists test a single chemical in adult animals at high doses. The outcomes analyzed can include increases in the occurrence of tumors, changes in organ weight, or visible birth defects. Scientists don’t typically look for functional changes, such as altered brain development, following developmental exposure. Required developmental toxicity studies do not evaluate development after birth and tend to be less sensitive than studies that do assess postnatal function. A 1998 EPA draft report titled “A Retrospective Analysis of Twelve Developmental Neurotoxicity Studies Submitted to the USEPA Office of Prevention, Pesticides and Toxic Substances (OPPTS)” found that the developmental neurotoxicity study resulted in a lower “no observed effect level” for 10 of 12 chemicals compared to the required developmental rat studies that did not look at brain development (Makris, et al. 1998). The developmental neurotoxicity test is not a required test; EPA has requested it for only a small number of chemicals.

In contrast to high dose regulatory studies, people are exposed to multiple chemicals, from conception to death, at relatively low doses. The effects that occur can be subtle, detected across the general population as slight drops in IQ or fertility, or increases in specific types of cancer.

Some scientists, particularly those employed by the chemical industry, argue that “the mere presence” of small amounts of hundreds of chemicals in your bloodstream is biologically insignificant. High dose animal studies are typically offered as proof of this assertion. The truth, however, is that high dose animal studies cannot prove or disprove the safety of chemical exposures at lower doses, particularly when these studies are conducted primarily on adult animals, do not look for health endpoints relevant to low dose exposures, and do not account for interactions with other chemicals to which people are routinely exposed.

Industry’s dogmatic allegiance to the high dose theory of toxicology can be traced to the 16th century philosopher Paracelsus, whose philosophy is summarized in the well-known adage “The dose makes the poison.” The scientific and regulatory infrastructure in the US is based on studies that feed animals high doses of chemicals in the belief that a high dose will elicit any and all toxic effects that a compound can produce. In practice, if a high dose doesn’t elicit a readily measured toxic effect, then industry argues and regulators assume that the substance is not toxic. We now know that this is not true.

Science has advanced in the past 500 years, and outside of regulatory toxicology it is generally accepted that other factors besides dose — most notably the timing of the dose — are as important in determining the toxic effect.

The most obvious example is fetal exposure, where exposure in utero can produce long lasting adverse effects at amounts that produce no observable effects in adults. This outcome is documented in the scientific literature for lead, mercury and PCBs, where exposures in the parts per billion range in the womb or during infancy can lower IQs or alter behavior, while the same dose produces no observable effects in an adult. Dioxin is another case in point. Men with 80 parts per trillion of dioxin in their blood father nearly twice as many girls as boys. This effect would not have been predicted based on studies of adults.

Policy Recommendations

TSCA reform

Seven chemicals or chemical classes have been regulated or banned under the Toxic Substances Control Act (TSCA). When compared to the 75,000 chemicals registered for commercial use, the impact of TSCA is nearly imperceptible in the overall context of human chemical exposure. It is little wonder that the chemical industry considers TSCA the only truly workable federal environmental law.

Under TSCA, chemicals are assumed safe until they are proven hazardous. At the same time, the law does not require that manufacturers conduct health and safety studies, nor does it impose a duty on manufacturers to monitor how their products are used or where they end up in the environment.

As a starting point for a major environmental statute, this is problematic.

TSCA puts the burden of proving a chemical’s hazards squarely on the shoulders of the EPA (section 4(a)(1)(A)). The statute then prohibits the EPA from requiring safety tests unless the agency can prove that the chemical presents an unreasonable risk – which it can almost never prove because it cannot require the studies needed to make that finding. If the agency assembles enough data to require industry to conduct safety studies, it must go through the lengthy process of promulgating a test rule, very similar to a regulatory rulemaking, to mandate even one test for one chemical. When the data are generated, industry can claim the tests as confidential business information or trade secrets, and thus shield the tests from independent peer review or public scrutiny.

This law is so fundamentally broken that the statute needs to be rewritten. Revisions to the nation’s toxic substance laws must include the following provisions:

  • For chemicals currently manufactured and used commercially, the chemical industry must submit to EPA all internal studies on the properties, environmental fate, potential human exposure pathways and exposure levels, concentrations in workers and the general population, levels in the environment, worker and community health, measured effects in wildlife, toxicity, mechanisms of action and any other information relevant to human exposures and potential health effects. These studies must be made available to the public.
  • Industry must be required to prove the safety of a new chemical before it is put on the market.
  • The EPA must have the unencumbered authority to request any and all new data on a chemical that is already on the market.
  • The EPA must have the clear authority to suspend a chemical’s production and sale if the data requested are not generated, or if they show that the chemical, as used, is not safe for the most sensitive portion of the exposed population.
  • Chemicals that persist in the environment or bioaccumulate in the food chain must be banned. Currently EPA cannot demand the data needed to make this determination, and industry is not volunteering it.
  • Chemicals found in humans, in products to which children might be exposed, or in drinking water, food, or indoor air must be thoroughly tested for their health effects in low dose, womb-to-tomb, multi-generational studies that focus on known target organs and include sensitive endpoints like organ function and cognitive development. Studies to define mechanisms of action (how a chemical harms the body) must be conducted.
  • The chemical industry must develop and make public analytical methods to detect its chemicals in the human body, and conduct biomonitoring studies to find the levels of its chemicals in the general population.
  • Chemical manufacturers must fully disclose the ingredients of their products to the public.