Fundamental Science and Federal Management Reform

Eileen Collins*
Senior Assessment Studies Coordinator and Manager
Division of Science Resources Studies
National Science Foundation

D-Lib Magazine, September 1997

ISSN 1082-9873


As Americans approach the 21st century, they are experiencing a loss of faith in each other and in their institutions. See, for example, "Poll Finds Wide Pessimism About Direction of Nation" (Morin 1997) and "Americans Losing Trust in Each Other and Institutions" (Morin and Balz 1996).

In an effort to rebuild trust, organizations throughout the United States are taking steps to improve management and accountability. The Federal government is no exception. Bipartisan efforts, spanning Congress and the White House, have produced a new set of mandates for Federal reform, including the Government Performance and Results Act of 1993 (GPRA).

This paper summarizes key features of GPRA [1], the methodological challenge it poses for science agencies, and a strategy for applying GPRA to fundamental science. A concluding comment considers the respective roles of Federal agencies and the larger community.

GPRA Key Features

GPRA is a critical element in the Clinton Administration's National Performance Review, and its implementation is being closely watched by Congress.

The Act seeks to transfer best management practices from the private sector to the Federal sector, and it introduces new reporting requirements for each agency, centering on strategic plans, annual performance plans, and annual performance reports.

Initially, some viewed the new reform mandates as "powder puff" initiatives that would be easily circumvented and soon forgotten. But the authors of this latest set of reforms put teeth into the new requirements.

GPRA ties agency reporting requirements to the annual budget cycle and is quite specific about the concepts which should anchor planning and assessment. Since documents for the various acts must present consistent information and data, the significance of GPRA concepts extends beyond the application of the Act itself.

Budget discussions can no longer focus solely on how requested funds will be spent (e.g., on salaries, utilities, rents) or what sorts of activities will be supported (e.g., writing, teaching, publishing). GPRA requires use of concepts that emphasize results, such as outputs and outcomes.

In addition, the Office of Management and Budget (OMB) has issued guidance for implementing the Act.

There is a clear preference in GPRA for the use of measures in order to encourage clear, crisp statements of goals and results. Some might interpret this to mean that simple planning and evaluation models appropriate to well-defined manufacturing processes must be applied to each and every Federal activity. However, the framers of GPRA recognized the limitations of the simple manufacturing template and built flexibility into the Act's requirements in order to accommodate more complex programs.

GPRA provides that, if an agency determines that it is not feasible to express performance goals in an objective, quantifiable, measurable form, the agency can request use of an alternative form. The alternative form should be appropriate to the mission and goals of the agency and provide clear criteria for determining whether agency programs are successful or not, or minimally effective or not.

Methodological Challenge

Even with the flexibility provided by GPRA, applying the Act to science (and other Federal) programs is a methodological challenge. [2]

Science agencies often employ indirect strategies for pursuing their goals. For example, an agency might provide funds to state or local governments or to private universities or laboratories to conduct training or research aimed at particular national objectives. The recipients of these funds need to be mindful of the new climate of reform and accountability in Washington and to provide agencies with information about the results of their work. But it is the agencies themselves that must meet GPRA requirements, even though they lack direct control over production.

In science, there can be long lags before the eventual outcomes and impacts of a particular program become apparent. For example, English mathematician Alan Turing's theoretical construct, commonly known as a "Turing machine," and related scientific insights laid the groundwork for today's explosion in applications of information technology. But these applications, much less their potential size and scope, were not foreseen either by Turing or by his research contemporaries. They were not even considered feasible until the semiconductor was independently developed for other purposes.

Further, the intended outcomes and impacts of science are often contingent on the performance of other actors and the presence of an inter-connected set of social and institutional conditions. For example, when an entrepreneur builds on the findings of fundamental science to create a new invention, its subsequent development, adoption, and diffusion throughout the economy and society depend on the supply of venture capital, the regulatory and legal environment, accommodating market structures, and the training, experience, and attitudes of the workers who will produce and apply it and the customers who will use it.

Addressing the Challenge

The Federal fundamental science agencies produced a report, Assessing Fundamental Science, to address this challenge. In this report, the definition of goals for fundamental science begins with the over-arching national goals of improved health and environment, prosperity, national security, and quality of life. All Federal programs are intended to contribute directly or indirectly to these broad national goals. Science agencies contribute to them by providing intermediate inputs or enabling ingredients necessary to their attainment. The intermediate inputs or enabling ingredients can be translated into the goals of science agencies themselves, and they represent intermediate or enabling goals for the nation.

Assessing Fundamental Science defines the critical intermediate or enabling goal for fundamental science as "leadership across the frontiers of scientific knowledge." This does not mean that agencies or programs should seek to be numero uno in simplistic numerical rankings. Rather, it means that individual scientific research and education programs, whatever their field and purpose, can best advance progress toward over-arching national goals by performing world-class work. The task is not running a race or winning a medal. It is delivering cutting-edge results.

Assessing Fundamental Science identifies four additional intermediate goals for fundamental science. These are to (1) enhance connections between fundamental research and national goals, (2) stimulate partnerships that promote investments in fundamental science and engineering and effective use of physical, human, and financial resources, (3) produce the finest scientists and engineers for the 21st century, and (4) raise the scientific and technological literacy of all Americans.

The way in which any individual agency contributes to these various goals will depend on the specifics of its mission and strategic plan. The techniques that the agency applies to performance planning and assessment must be tailored to its particular organization and goals. One size does not fit all.

Assessing Fundamental Science eschews a rigid cookie-cutter approach, mechanical algorithms, and assessments based purely on quantitative indicators. It presents general principles for balanced assessment of fundamental science in the highly diverse Federal science enterprise.

Concluding Comment

A new climate of management reform and accountability for results pervades Washington and the nation. GPRA and related reform mandates are here to stay.

Initial experience with GPRA suggests that the Act provides a strategic management tool which can be used to enhance efficiency and effectiveness in a wide range of agencies, provided it is appropriately applied. Science agencies are now struggling with the major conceptual issues that need to be addressed so that GPRA can be used to improve management, facilitate the creative processes of science and technology and their dynamic interactions, and enhance short-run and long-run contributions to over-arching national goals.

The science agencies themselves are responsible for responding to GPRA. Although they seek to avoid burden on the larger community, the community should be sensitive to the new reform climate and responsive to agency requests for inputs to associated planning and assessment efforts.


*Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the author and do not necessarily reflect the views of the National Science Foundation. Thanks are due the editor for insightful comments and suggestions, S.T. Hill for helpful discussion, and J. Griffith and A. Tupek for providing a productive work environment.

[1] For a more detailed picture of GPRA, related acts, and associated standards and guidelines for implementation, see Collins 1997.

[2] See, for example, the US General Accounting Office reports of March 1997 and May 1997.


Carnegie Commission on Science, Technology, and Government. Enabling the Future: Linking Science and Technology to Societal Goals (Carnegie Commission: New York, NY, 1992).

Chubin, Daryl E. "Meeting the Challenges of Performance Assessment" in AAAS Science and Technology Policy Yearbook 1994 edited by A. H. Teich, S. D. Nelson, C. McEnaney (American Association for the Advancement of Science: Washington, DC, 1994).

Collins, Eileen L. Performance Reporting in Federal Management Reform, National Science Foundation, March 14, 1997.

The Congressional Research Institute Web site.

Cozzens, Susan E. "Evaluation of Fundamental Research Programs: A Review of the Issues," Report of the Practitioners' Working Group to OSTP (Washington, DC, 1994).

Kostoff, Ronald N. "Peer Review: The Appropriate GPRA Metric for Research," Science, Vol. 277, No. 5326 (August 1, 1997), pp. 651-652.

Morin, Richard. "Poll Finds Wide Pessimism About Direction of Nation," The Washington Post, August 27, 1997, pp. A1 and A28.

Morin, Richard and Dan Balz. "Americans Losing Trust in Each Other and Institutions," The Washington Post, January 28, 1996, pp. A1 and A6-A7.

National Academy of Sciences Commission on Physical Sciences, Mathematics, and Applications. Quantitative Assessments of the Physical and Mathematical Sciences: A Summary of Lessons Learned (National Academy Press: Washington, DC, 1994).

US General Accounting Office. Measuring Performance: Strengths and Limitations of Research Indicators (GAO/RCED-97-91, March 1997).

US General Accounting Office. Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 1997).

US National Science and Technology Council. Assessing Fundamental Science (Office of Science and Technology Policy: Washington, DC, July 1996).

US Office of Management and Budget. "Update on OMB-Wide Dialogue," Memorandum for OMB Staff, M-94-50 (Washington, DC, September 23, 1994).

US Office of Management and Budget. "Spring Review on Program Performance," Memorandum for the Heads of Executive Departments and Agencies, M-95-04 (Washington, DC, March 3, 1995).

US Office of Management and Budget. "Information on Performance Aspects for Fall Review," Memorandum for the Heads of Executive Departments and Agencies, M-96-22, Supplement 2 (Washington, DC, September 9, 1996).

US Office of Management and Budget. "Preparation and Submission of Strategic Plans and Annual Performance Plans," Circular No. A-11, Part 2 (Washington, DC, May 1997).

US Senate, Committee on Governmental Affairs. Report to Accompany S. 20, the Government Performance and Results Act of 1993 (US Government Printing Office, Washington, DC, 1993).

The White House National Performance Review Web site.
