Advisory Committee on Data for Evidence Building
Todd M. Richardson, General Deputy Assistant Secretary for Policy Development and Research.
The Foundations for Evidence-Based Policymaking Act of 2018 (the Evidence Act) requires each federal agency to designate an evaluation officer. The evaluation officer has several responsibilities, including developing a Learning Agenda and an Annual Evaluation Plan. The evidence team at the Office of Management and Budget (OMB) has been convening regular meetings of the evaluation officers. I am HUD’s evaluation officer.
The OMB staff asked me and three other evaluation officers to also serve on a federal advisory committee to OMB on how to better use federal and state data for evidence building. This advisory committee includes people from federal agencies (evaluation officers, chief data officers, and statistical officials, among others), academia, state agencies, and other public and private sector experts. Federal advisory committee meetings are public meetings. The notes and handouts from the meetings we have held to date are here: https://www.bea.gov/evidence.
At each meeting, different experts have presented on their areas of interest. I have really enjoyed all of the presentations because the committee has many experts whom I admire. Each of them is generously opening up the hood to show how they achieved something that is very hard, and all are being honest about the barriers they encounter in trying to improve our ability to use data for evidence.
At the most recent meeting, held on Friday, January 22, the committee heard two presentations, one from the evaluation officers and one from the statistical officials from select agencies.
For the presentation from evaluation officers, I presented along with my fellow evaluation officers from the U.S. Department of Labor (Christina Yancey), U.S. Department of Education (Matthew Soldner), and the U.S. Department of Commerce (Christine Heflin, who also was representing performance officers).
This was a fun presentation to put together because it required the four of us to collaborate. We learned from each other as we worked to shape a presentation that communicated our common challenges to the larger group.
I began our presentation with a couple of examples showing how getting data from other agencies to support evaluation is often a matter of luck. Christina provided our “North Star,” a vision of what we hope the future looks like. Matt identified both the low-hanging fruit, bureaucratic barriers the committee should be able to solve to reach that North Star, and the bigger technological challenges that will be harder. Christine showed how performance officers look at data needs.
Our concluding summation slide made these key points:
- We have used administrative data successfully.
- Shared data can help us answer critical questions that original data alone cannot, and it can answer those questions faster and cheaper.
- Many of the barriers are administrative; that is the low-hanging fruit for this group. Once those are solved, some technological barriers remain to be resolved.
- This opens up the data for more people to answer questions around programs; the data need to be more of a public good to accelerate learning.
- Solving the problems will help us steer more accurately toward the impacts we are trying to create and result in more cost-effective impacts.
This generated a very active discussion among the committee members. We discussed how sharing data across agencies for research involves many people reviewing and coming to agreement in order to safely share and appropriately use data. We discussed how every program in every agency seems to have its own policy regarding data sharing; there is no common policy. We discussed how standard application processes and standard forms can facilitate data sharing across agencies. Some members cautioned that a standard process should not be more cumbersome than the current process. We seemed to agree that a standard process for data sharing should be simple and, once established, repeatable with relative ease.
Technological barriers include differences in data standards across federal agencies. Metadata can be inconsistent across agencies and difficult for researchers to analyze. Data must be high quality and linkable. Longitudinal data are preferred for measuring the impact of programs on target populations. Standardizing data structure will allow researchers and evaluators to better understand and compare impacts and outcomes, particularly in long-term studies.
From this conversation we went on to an equally interesting conversation based on a presentation from the statistical officials. If you, like me, believe that administrative data are underused for research and we need a better way to make these powerful data answer critical questions in a timely manner, please feel free to listen in on our next conversation on February 19th. More information here: https://www.bea.gov/evidence.