What Others Are Saying About PART

OMB Watch is not alone in criticizing the White House's Program Assessment Rating Tool. See what others have had to say recently about this flawed measure.

PART Punishes Programs for Following the Law

Clay Johnson, OMB deputy director, when asked in a congressional hearing, "[I]s it possible for a program to get a poor rating simply because it does what's required by statute and not necessarily what OMB might like for that program to do?":

Yes.

--Accountability and Results in Federal Budgeting: Hearing Before the Subcomm. on Federal Financial Management, Government Information & Int'l Security of the Senate Comm. on Homeland Security & Gov't Affairs, 109th Cong. (2005), 2005 WL 1409975 (F.D.C.H.) (colloquy between Sen. Carper and Clay Johnson III).

PART Is Divorced from Reality

From ThinkProgress:

1) "Federal Emergency Management Agency: Disaster Recovery": The Department of Homeland Security's Recovery program ensures that individuals and communities affected by disastes [sic] of all sizes, including catastrophic and terrorist events, are able to return to normal function with minimal suffering and disruption of services.

PERFORMING: Adequate (one star)

Reality--Reuters: With no clear recovery plan in sight five months after Hurricane Katrina, many victims are simply hanging on, waiting anxiously for signs that their neighborhoods are either reviving or turning into permanent ghost towns.

2) "Preparedness--Grants and Training Office National Exercise Program": Prepare Federal, state, and local responders to prevent, respond to, and recover from acts of terrorism by providing the tools to plan, conduct, and evaluate exercises.

PERFORMING: Effective (three stars)

Reality--GAO: Although the [National Response Plan] framework envisions a proactive national response in the event of a catastrophe, the nation does not yet have the types of detailed plans needed to better delineate capabilities that might be required and how such assistance will be provided and coordinated.

3) "Federal Emergency Management Agency: Disaster Response": The Department of Homeland Security's Response program is designed to quickly, efficiently and effectively provide support to State, Tribal, and local governments, and Federal response teams in the event of a natural or manmade disaster, emergency or terrorist event.

PERFORMING: Adequate (one star)

Reality--Washington Post: Four years after the Sept. 11, 2001, attacks, administration officials did not establish a clear chain of command for the domestic emergency; disregarded early warnings of a Category 5 hurricane inundating New Orleans and southeast Louisiana; and did not ensure that cities and states had adequate plans and training before the Aug. 29 storm, according to the Government Accountability Office.

PART Continues the Bush War on Science

Statement of Dr. Genevieve Matanoski, EPA Science Advisory Board, to the Subcommittee on Environment, Technology, and Standards, House Committee on Science, March 11, 2004, available on Westlaw at 2004 WL 506081 (by subscription only):

[A]fter evaluating PART summaries for several research programs, our conclusion is that PART may, at this time, have a limited capacity to inform budget decisions on research programs. The Board is concerned with the manner in which the weighting formula in PART seems to influence the full analysis and thus favor programs with short-run results over those having long-term results.
There is also concern that an evaluator's subjective considerations might be able to bias those weights and the rating itself. Specifically, it appears that the weighting formula in the PART favors programs with near-term benefits at the expense of programs with long-term benefits. Since research inevitably involves more long-term benefits and fewer short-term benefits, PART ratings serve to bias the decision-making process against programs such as STAR ecosystem research, global climate change research, and other important subjects.

The PART seems to be intended as a formula for predictions about likely program success. However, the weights that the PART assigns to different program characteristics do not seem to have been validated systematically against the contribution of each program characteristic to any independent objective measure of program success. If the weights in the tool are arbitrarily assigned, the PART may have characteristics that could lead to biases in evaluation that are related to the subjective judgments of its designers. We believe that the tool should be reviewed to determine its adequacy for its use in supporting budget decisions.

As the Board observed significant decreases in science and research funding, it also noted a substantial resource increase in the State and Tribal Assistance Grant account (STAG) for an initiative for retrofitting school buses. The Board does not challenge the worthiness of this program; rather, it notes that it has no information on the science supporting this initiative. The Board trusts that the benefits of this program have been rigorously reviewed.

The real issue here is how research programs (and others) are to be evaluated and whether a different metric is necessary for basic vs. applied research programs. Also of interest is whether research results should be evaluated separately from the outcomes of the programs they are intended to support. Although the Board did not directly evaluate the PART itself, it is obviously difficult to conceive of a simple quantitative metric that could be applied across the broad areas of ecosystem quality, human health effects, endocrine effects, and technology development. The question is even more complex when you consider that some research is intended to develop limited data in the short run to fill a specific knowledge gap and other research is intended to provide an understanding of whole systems in the long term.

Research program measurement is even more difficult because the knowledge and methods developed by EPA, especially ORD's researchers, are not usually directly applied by ORD; rather, they are often used by others to support decisions on a broad suite of diverse statutory mandates. Thus, we believe that evaluations of the performance of research programs will need to consider the specific factors of each program that the research is intended to support. Further, it is unlikely that simple formulas will be able to handle this task well. It is more likely that realistic research program performance assessment will need to be a combination of quantitative metrics and other information and analyses, which is then evaluated by groups of experts with relevant knowledge.

I note that the NAS, in its review of STAR, also had concerns with quantitative routines used in performance assessments and noted that "The Committee judges that expert review by a group of people with appropriate expertise is the best method of evaluating broad research programs, such as the STAR program."