When the White House Office of Management and Budget (OMB) introduced its Program Assessment Rating Tool (PART) in 2003, it had what sounded like a worthwhile goal: get federal agencies to evaluate how well they do their jobs, in order to assure that taxpayer money is used efficiently. Like so much that comes out of the Bush White House, though, PART consumes too much agency time to produce something of questionable utility.
An "ineffective" rating can have serious adverse consequences for agencies and programs, so when the Environmental Protection Agency had particular difficulty demonstrating that it met PART's definition of efficiency, it asked the National Research Council for guidance. I'm sure none of our readers will be surprised to hear that the NRC determined that PART's one-size-fits-all approach to measuring agency progress is itself not very efficient or useful.
The press release accompanying NRC's report recommends four changes to the way the federal government assesses efficiency at EPA and other agencies. Here's one of the sections about how PART doesn't work well for EPA:
EPA’s difficulties in answering PART’s questions about efficiency have grown out of OMB’s insistence that the agency find ways to measure the efficiency of its research based on outcomes, rather than outputs. Measuring research efficiency based on what the committee describes as “ultimate outcomes” — for example, whether a program eventually results in cleaner air or fewer deaths — is neither achievable nor valid, because such outcomes occur far in the future and are highly dependent upon actions taken by many other people who may or may not use the research findings. The committee’s review of practices across government R&D agencies revealed that no agency has found a way to demonstrate efficiency based on ultimate outcomes.
From the report itself, here's a section on the hurdles to efficiency that are outside of agency control:
Many forces outside the control of the researcher, the research manager, or OMB can reduce the efficiency of research, often in unexpected ways. Because these other forces can appreciably reduce the value of efficiency as a criterion by which to measure the results or operation of a research program, they are relevant here. For example,
• The efficiency of a research program is almost always adversely affected by reductions in funding. A program is designed in anticipation of a funding schedule. If funding is reduced after substantial funds are spent but before results are obtained, activities cannot be completed, and outputs will be lower than planned.
• When personnel ceilings are lowered, research agencies must hire contractors for research, and this is generally more expensive than in-house research.
• Infrastructure support consumes a large portion of the EPA Office of Research and Development (ORD) budget. Because the size and number of laboratories and other entities are often controlled by political forces outside the agency, ORD may be unable to manage infrastructure efficiently.
• Inefficiencies may be introduced when large portions of the budget are consumed by congressional earmarks. That almost always constitutes a budget reduction because the earmarks are taken out of the budget that the agency had intended to use to support its strategic and multi-year plans at a particular level.

Still other factors may confound attempts to achieve and evaluate efficiency by formal, quantitative means. For example, the most efficient strategy in some situations is to spend more money, not less; a familiar example is the purchase of more expensive, faster computers. Or a research program may begin a search for an environmental hazard with little clue about its identity, and by luck a scientist may discover the compound immediately; does this raise the program's efficiency? Such examples seem to support the argument that an experienced and flexible research manager who makes use of quantitative tools as appropriate is the best "mechanism" for efficiently producing new knowledge, products, or techniques.
In short, agencies are being asked to do more with less, and what they're being told to measure isn't really a good indication of whether they're making good use of taxpayer dollars in carrying out their missions. Efficiency in government agencies is desirable, but not when the quest for it hobbles important work like improving air and water quality. The NRC's suggestions are good ones; let's hope the administration heeds them.
Many forces outside the control of the researcher, the research manager, or OMB can reduce the efficiency of research…
For example, if a researcher’s research is quashed before it is released to the public, it’s certainly not going to be very effective or “efficient”….
Exactly!
Also, if agency officials completely ignore the advice of their scientific advisory committees, that doesn’t look like a very efficient use of the experts’ time.