Experts optimistic about updated performance bill

Thursday - 2/10/2011, 6:44pm EST

By Meg Beasley
Reporter
Federal News Radio

For too long, the Government Performance and Results Act asked the wrong questions, and supported evaluations that were too slow and too expensive. And previous attempts to fix these problems haven't been very successful.

But former federal officials believe the Government Performance and Results Modernization Act -- or GPRA 2.0 -- will be different.

Jonathan Bruel, executive director of the IBM Center for The Business of Government, was one of several speakers at a recent forum hosted by the Center for American Progress.

He said he is optimistic about GPRA 2.0's chances of success because lawmakers won't be starting from square one. He also said record-low government approval ratings will increase the pressure on Congress to make real changes.

"We now have 18 years of experience and there is a capability that was not present," Bruel said. "If there was a value in the original statute, it was a reflection that the House and the Senate, the Rs and the Ds, and the Congress and presidents of both parties could get together around this question of performance. One of the most important things in this new bill is that everyone has signed up again."

Last month President Obama signed GPRA 2.0, the first major overhaul of federal performance-management law since the original act passed nearly two decades ago.

The law requires agencies to measure program performance and report successes -- and failures -- to the Office of Management and Budget. GPRA 2.0 builds on past efforts to measure program effectiveness with a stronger cross-agency focus and a detailed description of the role Congress will play.

Bruel said one of the biggest challenges to government reform has been an unwillingness to eliminate ineffective programs. He said politics play a big role in keeping unsuccessful programs alive.

"Programs are constantly feeling threatened," Bruel said. "So in a sense you get a series of special pleaders who come to defend their programs. And in many cases the evidence for those arguments are what we call 'faith based' - the arguments weren't based on the merits but you knew they were strong supporters of the program because that was their livelihood and what they knew."

Bruel said OMB needs an objective way to measure, and cut, unsuccessful programs.

"The difficulty is having some defensible basis to make choices and give policy officials the ability to make the inevitable choices they have to, to go in one direction or another," Bruel said.

Lawmakers have tried to use standard measures to evaluate programs in the past.

In 2002, OMB developed the Program Assessment Rating Tool (PART). The Bush administration wanted PART to give agency heads a way to assess and improve programs as well as inform budget decisions.

Robert Shea, former associate director of OMB, said the White House developed "common measures" but never actually used them because agencies refused to cooperate.

"It is difficult to get a sovereign program to agree to delegate its authority by agreeing to a measure set by someone else," Shea said. "If there is the slightest disagreement, there's no real forcing mechanism to get them to agree to this common way of measuring their performance."

Shea said that even when the administration forced agencies to adopt the measures, they often found another official who let them report something else.

Beth Blauer, director of Maryland's StateStat program, said metrics and goals need to be clear, open and consistent with agency purposes. She said it is important that goals are endorsed by the executive as well as program employees.

"The goals have to be meaningful to the practitioner," Blauer said. "Whether you are implementing on the front line or from a management level, the goals need to be meaningful. If you have goals that are unrelated to your practice areas or unrelated to your implementation areas, then they're absolutely going to be meaningless in implementation."

Her office has successfully used data to rate programs and steer services to meet Maryland's needs. However, she said it was a learning process.

"We definitely articulated goals that were meaningless," Blauer said. "And we learned very quickly that we can't find data that will support those goals, you can't find data that will help you attain those goals, and you can't find people that actually want to achieve those goals because they don't mean anything except a very high policy disconnect."

Blauer said leaders often get caught up trying to define the perfect measuring tools when the best way to find the right goals and metrics is to learn as you go.