Evaluation and Program Planning, Vol. 9, pp. 63-72, 1986. 0149-7189/86 $3.00 + .00. Printed in the USA. All rights reserved. Copyright © 1986 Pergamon Journals Ltd.

EVALUATING EMPLOYMENT AND TRAINING PROGRAMS

BURT S. BARNOW
ICF Incorporated

ABSTRACT

This paper provides an overview of the methods and problems in evaluating employment and training programs. Impact evaluations are intended to determine the effect such programs have on earnings and other outcomes of interest, and process studies are used to assess the ways in which programs are implemented. Impact evaluations are shown to sometimes produce biased estimates because of selection bias problems, and process studies are subject to charges of bias because their conclusions and interpretations rest on the judgments of the observers rather than on quantitative estimates. Federal employment and training programs serve the economically disadvantaged and dislocated workers through the Job Training Partnership Act (JTPA). The Department of Labor has funded a process study of JTPA and an impact evaluation.

Over the past two decades, employment and training programs have grown substantially. Obligations in 1964 under the Manpower Development and Training Act (MDTA) for classroom and on-the-job training were $98 million. By 1984 that figure had grown to $1,886 million under the Job Training Partnership Act (JTPA), an increase of more than 1,800%. As expenditures have grown, so has interest in how well the programs have accomplished their goals. Because JTPA became effective in October 1983 and differs in many ways from its predecessor, the Comprehensive Employment and Training Act (CETA), there is interest in how the program is being implemented and in the reasons for the patterns of implementation, as well as in the program's impact on earnings, employment, and reduction in welfare dependency.
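The selection bias problem named above can be made concrete with a small simulation. The sketch below is not from the paper; it is a hypothetical illustration in which an unobserved trait (labeled "motivation" here, an assumed variable) raises both earnings and the probability of enrolling, so a naive comparison of participants' and nonparticipants' earnings overstates the program's true effect.

```python
import random

random.seed(1)

# Assumed true earnings gain from the program, in dollars (illustrative only).
TRUE_PROGRAM_EFFECT = 500

participants, nonparticipants = [], []
for _ in range(10_000):
    # Unobserved trait that drives both earnings and enrollment.
    motivation = random.gauss(0, 1)
    base_earnings = 10_000 + 2_000 * motivation + random.gauss(0, 500)
    # Self-selection: more motivated workers are more likely to enroll.
    enrolls = motivation + random.gauss(0, 1) > 0
    if enrolls:
        participants.append(base_earnings + TRUE_PROGRAM_EFFECT)
    else:
        nonparticipants.append(base_earnings)

# Naive estimator: difference in mean earnings between the two groups.
naive_estimate = (sum(participants) / len(participants)
                  - sum(nonparticipants) / len(nonparticipants))
print(f"true effect:     {TRUE_PROGRAM_EFFECT}")
print(f"naive estimate:  {naive_estimate:.0f}")
```

Because the enrolled group is more motivated on average, the naive difference in means attributes part of the motivation-driven earnings gap to the program, which is the bias that careful impact evaluation designs attempt to remove.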
This paper addresses the subject of evaluating employment and training programs funded under the Job Training Partnership Act. As used here, evaluation refers to systematic efforts to collect and analyze data, both qualitative and quantitative, on the programs, their participants, and their outcomes, to better understand what the program is accomplishing and how it is being accomplished. Monitoring will be the term used to describe efforts to determine whether statutory requirements, such as participant eligibility and fiscal limitations, are being met, whereas the term evaluation will be used for studies of longer-term goals and more analytical examinations of short-term performance.

The next section of the paper presents some general principles for evaluating employment and training programs. The third section briefly describes JTPA and the current evaluation system established by the Employment and Training Administration. Section IV discusses how the current evaluation system might be improved and suggests areas where additional research is needed.

PRINCIPLES FOR EVALUATING EMPLOYMENT AND TRAINING PROGRAMS

Whenever the federal government supports an activity such as employment and training, there are a number of interested parties who value information on how the program was implemented, whom it served, whether or not the program was "effective," and how it could be improved. Program reporting and monitoring can provide the answers to some questions, such as who was served by the program and what happened to the participants when they left the program, but frequently there is a need for analytical studies to understand how the program was implemented and what the net effects of the program were. We shall refer to the former type

The author gratefully acknowledges the following individuals for their advice and comments in the preparation of this article: John Wallace, Greta Tate, Carol Romero, Fred Romero, and Tim Sullivan.
Requests for reprints should be sent to Burt S. Barnow, Senior Economist, ICF, 1850 K Street NW, Washington, DC 20006.